id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2303.17953 | Superconducting topological Dirac semimetals: $P6/m$-Si$_6$ and
$P6/m$-NaSi$_6$ | We theoretically propose that hexagonal silicon-based crystals, $P6/m$-Si$_6$
and $P6/m$-NaSi$_6$, are topological Dirac semimetals with superconducting
critical temperatures of 12 K and 13 K, respectively, at ambient pressure. Band
inversion occurs with the Fu-Kane topological invariant $\mathbb{Z}_2=1$, even
in the absence of spin-orbit coupling. The Dirac nodes protected by $C_6$
crystal rotational symmetry remain gapless with spin-orbit coupling. Using
first-principles calculations, we find pressure-induced topological phase
transitions for $P6/m$-Si$_6$ and $P6/m$-NaSi$_6$ with critical external
pressures of 11.5 GPa and 14.9 GPa, respectively. Above the critical pressures,
the Dirac bands are gapped with $\mathbb{Z}_2=0$, while the superconducting
states and the crystal symmetries are retained. Our results may shed light on the search for silicon-based topological materials with superconductivity. | Alex Takyung Lee, Kyungwha Park, In-Ho Lee | 2023-03-31T10:31:09Z | http://arxiv.org/abs/2303.17953v1 | # Superconducting topological Dirac semimetals: \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\)
###### Abstract
We theoretically propose that hexagonal silicon-based crystals, \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\), are topological Dirac semimetals with superconducting critical temperatures of 12 K and 13 K, respectively, at ambient pressure. Band inversion occurs with the Fu-Kane topological invariant \(\mathbb{Z}_{2}=1\), even in the absence of spin-orbit coupling. The Dirac nodes protected by \(C_{6}\) crystal rotational symmetry remain gapless with spin-orbit coupling. Using first-principles calculations, we find pressure-induced topological phase transitions for \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) with critical external pressures of 11.5 GPa and 14.9 GPa, respectively. Above the critical pressures, the Dirac bands are gapped with \(\mathbb{Z}_{2}=0\), while the superconducting states and the crystal symmetries are retained. Our results may shed light on the search for silicon-based topological materials with superconductivity.
## I Introduction
Semiconducting silicon has become indispensable in electronics due to its versatile features such as the ease of electron or hole doping over a wide range, high-temperature stability, nontoxicity, and natural abundance. It is not uncommon to modify phases of solids by varying crystal structures and/or applying external stimuli. There has been great effort in fabricating silicon in different condensed matter phases, especially a superconducting phase. Some metallic silicon phases were proposed to be superconducting under high pressures [1; 2; 3; 4; 5]. For doped silicon clathrates and boron-doped cubic silicon, superconductivity was observed at ambient pressure [6; 7; 8]. Recently, hexagonal silicon-based crystals \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\), which can be synthesized similarly to Ref. [9], were predicted to show superconductivity (transition temperatures of 12 K and 13 K, respectively) at ambient pressure [10].
Topology in the reciprocal space plays an important role in the properties of quantum materials due to topological protection. Based on symmetries, gapped and gapless quantum phases, including a superconducting phase, can be topologically classified [11]. For example, three-dimensional Dirac semimetals, which can exist in the presence of time-reversal and inversion symmetries, are categorized into two classes [12]. In the first class, band inversion occurs with a nontrivial Fu-Kane \(\mathbb{Z}_{2}\) topological invariant [13] and associated Fermi arc surface states, and Dirac nodes are protected by crystal rotational symmetry [15; 44]. This class is henceforth referred to as a topological Dirac semimetal phase, following Ref. [12]. In the second class, Dirac nodes are protected by a nonsymmorphic space group symmetry [16] without band inversion.
Most topological materials are not based on silicon. A topological Dirac semimetal phase was discovered in a silicon-containing compound, CaAl\({}_{2}\)Si\({}_{2}\) (space group No. 164, point group \(D_{3d}\)) [17; 18] in the presence of spin-orbit coupling (SOC). Silicon-based crystals \(Cmcm\)-AHT-Si\({}_{24}\) and \(Cmcm\)-VFI-Si\({}_{36}\) have been proposed to be topological nodal line semimetals _only_ when SOC is excluded [19]. With SOC, the nodal lines are gapped with a small energy gap of 1 meV [19].
Recently, superconductivity was observed in three-dimensional topological Dirac semimetals such as Cd\({}_{3}\)As\({}_{2}\)[20; 21; 22], Au\({}_{2}\)Pb[23; 24], KZnBi[25], PdTe\({}_{2}\)[26; 27], and BaSn\({}_{3}\)[28]. To the best of our knowledge, silicon-based topological Dirac semimetals with superconductivity have not been proposed or synthesized yet. Superconductivity in topological Dirac semimetals is appealing considering the possibility of creating odd-parity or spin-triplet Cooper pairing by doping and/or breaking time-reversal symmetry [29; 30; 31]. Furthermore, superconductors with a nontrivial Fu-Kane \(\mathbb{Z}_{2}\) topological invariant were proposed to be platforms for the realization of Majorana zero modes [32], whose braiding can be used for topological quantum computation [33; 34]. Families of superconductors with a nontrivial \(\mathbb{Z}_{2}\) invariant showed promising experimental signatures of Majorana zero modes at the ends of magnetic vortices [35; 36].
In this work, we propose that \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) are topological Dirac semimetals with superconductivity at ambient pressure even when SOC is included. We find that the silicon-based crystals undergo a topological phase transition driven by hydrostatic pressure without any structural changes. Above slightly different critical pressures, both silicon-based crystals become topologically trivial in the superconducting state.
## II Methods
We employ density functional theory (DFT) with projector augmented wave (PAW) [37] pseudopotentials and the Perdew-Burke-Ernzerhof generalized gradient approximation (PBE-GGA) [38] for the exchange-correlation functional, as implemented in the VASP software [39]. A plane-wave basis set with a kinetic energy cutoff of 500 eV is used. We use \(\Gamma\)-centered \(\mathbf{k}\)-point meshes of \(8\times 8\times 20\) for bulk \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) and \(8\times 8\times 1\) for finite slabs. Starting with the geometries from Ref. [10], we further optimize the atomic coordinates and lattice constants until the residual forces are less than 0.01 eV/Å with and without pressure. The optimized bulk hexagonal lattice parameters for \(P6/m\)-Si\({}_{6}\) (\(P6/m\)-NaSi\({}_{6}\)) are \(a=6.81\) (6.76) Å and \(c=2.50\) (2.44) Å without pressure, which are close to those in Ref. [10]. The relaxed atomic coordinates can be found in Appendix A.
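The relaxation settings above can be collected into a short driver script. The sketch below is only an illustration, assuming ASE's VASP interface, a POSCAR file holding the \(P6/m\) geometry, and a working VASP installation; the lowercase keywords are ASE's names for the corresponding INCAR tags, and the values simply restate the parameters quoted in this section rather than a verified input of this work.

```python
# Minimal sketch of the bulk relaxation described in Sec. II (assumptions:
# ASE is installed, VASP is configured, and POSCAR contains the geometry).
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("POSCAR")                     # P6/m-Si6 or P6/m-NaSi6 geometry
atoms.calc = Vasp(
    xc="pbe",                              # PBE-GGA exchange-correlation
    encut=500,                             # plane-wave cutoff in eV
    kpts=(8, 8, 20), gamma=True,           # Gamma-centered bulk k-mesh
    ibrion=2, isif=3, nsw=100,             # relax ions and lattice constants
    ediffg=-0.01,                          # force convergence, eV/Angstrom
)
energy = atoms.get_potential_energy()      # triggers the relaxation run
```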
We compute the Fu-Kane \(\mathbb{Z}_{2}\) topological invariant [13] from the parity of all valence or occupied bands at the time-reversal invariant momentum (TRIM) points as well as by using the Wilson loop method [40; 41]. To apply the Wilson loop method and to investigate the surface states, we first construct a tight-binding Hamiltonian for the Si \(s+p\) bands by generating 24 (33) maximally localized Wannier functions for \(P6/m\)-Si\({}_{6}\) (\(P6/m\)-NaSi\({}_{6}\)), using the WANNIER90 code [42; 56] (see Appendix B). Then, we calculate the surface Green's function of the semi-infinite system based on the Wannier-function tight-binding model, using the WannierTools code [41]. Irreducible representations of bands at the high-symmetry \(\mathbf{k}\) points and along the \(\Gamma\)-A axis are obtained using Quantum Espresso [43].
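The parity criterion itself is a one-line product. The sketch below evaluates the Fu-Kane invariant from inversion eigenvalues collected at the eight TRIM points; the function name and the example values are illustrative only, with the per-point parity products taken from the numbers quoted for \(P6/m\)-NaSi\({}_{6}\) in Sec. III, not from the DFT output itself.

```python
import numpy as np

def fu_kane_z2(parities_at_trim):
    """Fu-Kane Z2 invariant from inversion (parity) eigenvalues.

    parities_at_trim maps each of the 8 TRIM points to a list of +/-1
    parity eigenvalues, one per occupied Kramers pair (each doubly
    degenerate pair counted once).  (-1)**Z2 equals the product of the
    per-point products delta_i.
    """
    total = 1
    for parities in parities_at_trim.values():
        total *= int(np.prod(parities))
    return 0 if total == +1 else 1

# Per-point parity products quoted in the text for P6/m-NaSi6 at 0 GPa:
# positive at the three L points, negative at the other five TRIM points.
deltas = {"Gamma": [-1], "A": [-1], "M1": [-1], "M2": [-1], "M3": [-1],
          "L1": [+1], "L2": [+1], "L3": [+1]}
print("Z2 =", fu_kane_z2(deltas))   # -> Z2 = 1
```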
## III Results and discussion
The atomic structure of \(P6/m\)-NaSi\({}_{6}\) (space group No. 175, point group \(C_{6h}\)) is shown in Fig. 1(a), where the stacking along the \(c\) axis is identical for all atomic layers. The structure of \(P6/m\)-Si\({}_{6}\) is obtained by simply removing the Na atoms from that of \(P6/m\)-NaSi\({}_{6}\) (see Appendix A). The Na atoms can be easily removed by a degassing process, since the migration barrier of Na atoms along the cylindrical holes is only 0.48 eV [10]. This barrier is 0.17 eV lower than that for the \(Cmcm\) phase [9].
Let us first discuss electronic and topological properties without pressure and then with pressure. Fig. 1(c) shows the electronic structure of \(P6/m\)-Si\({}_{6}\) along high-symmetry \(\mathbf{k}\) directions with SOC in the absence of pressure. The \(p\)-orbital character is dominant near the Fermi level. However, \(s\)-orbital character is locally dominant in the \(k_{z}=\pi/c\) plane, _i.e._, along the A-L-H-A lines. At the L point, bands at \(-0.86\) eV and \(-1.04\) eV (relative to the Fermi level) have strong \(s\)-orbital and \(p_{z}\)-orbital characters, respectively, while bands at \(-1.36\) eV have characters of mixed \(s\) and \(p_{x}+p_{y}\) orbitals (see Appendix C). For \(P6/m\)-NaSi\({}_{6}\), strong Si \(p\)-orbital character also appears in most of the \(\mathbf{k}\) space except for the \(k_{z}=\pi/c\) plane, as presented in Fig. 2.
For \(P6/m\)-Si\({}_{6}\) at 0 GPa, the parity analysis at the TRIM points gives \(\mathbb{Z}_{2}=1\) (see Appendix B), which results from the band inversion at the L points, as shown in Fig. 1(c). (Note that a unit cell of \(P6/m\)-Si\({}_{6}\) has 24 valence electrons.) At the L points, the 23rd and 24th (25th and 26th) bands at \(-1.36\) eV (\(-1.04\) eV) have \(-\) (+) parity, as depicted in Fig. 1(c). The band inversion is not induced by SOC because \(\mathbb{Z}_{2}=1\) even without SOC.
Figure 1: (a) Top view of the atomic structure of \(P6/m\)-NaSi\({}_{6}\) with a unit cell. The isostructural \(P6/m\)-Si\({}_{6}\) can be obtained by removing Na atoms from \(P6/m\)-NaSi\({}_{6}\). (b) First Brillouin zone for \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\). SOC-included DFT band structures of \(P6/m\)-Si\({}_{6}\) at (c) 0 GPa and (d) 15 GPa, where the Fermi level is set to zero. All bands are at least doubly degenerate. Relative proportions of Si \(s\) and \(p\)-orbital characters are encoded in colors. The Dirac points are indicated by arrows, and the parities of a few occupied bands are shown.
The analysis of irreducible representations at the L point indicates that the opposite parity of the two band pairs originates from the horizontal mirror symmetry \(\sigma_{h}\). The irreducible representations and eigenvalues of \(\sigma_{h}\) of the bands at the L point are listed in Table 1. The 23rd/24th bands belong to the irreducible representation \(B_{u}\) (whose eigenvalue of the horizontal mirror reflection \(\sigma_{h}\) is \(+1\)), while the 25th/26th bands belong to the irreducible representation \(B_{g}\) (whose \(\sigma_{h}\) eigenvalue is \(-1\)). We also confirm that \(\mathbb{Z}_{2}=1\) using the Wilson loop method (see Appendix B). Even when the parity of all occupied bands below the Fermi level is counted, we find that \(\mathbb{Z}_{2}=1\) persists.
For \(P6/m\)-Si\({}_{6}\) with SOC in the absence of pressure, along the \(\Gamma\)-A direction, we find several possible band crossing points in the vicinity of the Fermi level. To identify gapless Dirac nodes, we zoom in on the bands near the crossing points with a numerical accuracy of 10 \(\mu\)eV, as shown in Fig. 3. The bands at \(i\), \(ii\), and \(iv\) are gapped by 0.07-28.9 meV, while the crossing points \(iii\) and \(v\) are gapless within numerical accuracy. We also analyze irreducible representations \(\Lambda_{i}\) of the bands near the crossing points, considering that the double group of \(C_{6}\) applies to the bands along the \(\Gamma\)-A direction. This symmetry analysis [Figs. 3(b)-(e)] agrees with the numerical result. The Dirac nodes along the \(k_{z}\) axis are protected by the crystal sixfold rotational symmetry \(C_{6}\). The gapless protected Dirac nodes, in conjunction with our analysis of the \(\mathbb{Z}_{2}\) invariant, suggest that \(P6/m\)-Si\({}_{6}\) is a topological Dirac semimetal.
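A minimal sketch of this numerical check is given below: for a pair of bands tabulated along \(\Gamma\)-A, it locates the point of closest approach and flags it as gapless if the residual splitting lies below the quoted 10 \(\mu\)eV accuracy. The dispersions in the example are fabricated for illustration and do not reproduce the DFT bands.

```python
import numpy as np

def find_gapless_crossing(kz, bands, tol_ev=1e-5):
    """Classify a candidate crossing between two bands along Gamma-A.

    kz     : (N,) array of k-points along the Gamma-A line
    bands  : (N, 2) array with the energies (eV) of the two approaching bands
    tol_ev : numerical accuracy (10 micro-eV, as in the text) below which the
             crossing is treated as gapless

    Returns the index of closest approach and a gapless flag.
    """
    gap = np.abs(bands[:, 1] - bands[:, 0])
    i_min = int(np.argmin(gap))
    return i_min, bool(gap[i_min] < tol_ev)

# Toy illustration: two fabricated linear bands that cross at kz = 0.25.
kz = np.linspace(0.0, 0.5, 2001)
band_a = 0.10 - 0.6 * kz
band_b = -0.10 + 0.2 * kz
idx, gapless = find_gapless_crossing(kz, np.column_stack([band_a, band_b]))
print(f"closest approach at kz = {kz[idx]:.4f}, gapless: {gapless}")
```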
| band # | irrep | \(C_{2}\) | \(\sigma_{h}\) | \(I\) |
|---|---|---|---|---|
| 1, 2 | \(B_{u}\) | \(-1\) | \(+1\) | \(-1\) |
| 3, 4 | \(A_{g}\) | \(+1\) | \(+1\) | \(+1\) |
| 5, 6 | \(B_{g}\) | \(-1\) | \(-1\) | \(+1\) |
| 7, 8 | \(B_{u}\) | \(-1\) | \(+1\) | \(-1\) |
| 9, 10 | \(A_{u}\) | \(+1\) | \(-1\) | \(-1\) |
| 11, 12 | \(A_{g}\) | \(+1\) | \(+1\) | \(+1\) |
| 13, 14 | \(B_{g}\) | \(-1\) | \(-1\) | \(+1\) |
| 15, 16 | \(A_{u}\) | \(+1\) | \(-1\) | \(-1\) |
| 17, 18 | \(B_{u}\) | \(-1\) | \(+1\) | \(-1\) |
| 19, 20 | \(A_{g}\) | \(+1\) | \(+1\) | \(+1\) |
| 21, 22 | \(A_{u}\) | \(+1\) | \(-1\) | \(-1\) |
| 23, 24 | \(B_{u}\) | \(-1\) | \(+1\) | \(-1\) |
| 25, 26 | \(B_{g}\) | \(-1\) | \(-1\) | \(+1\) |

Table 1: Irreducible representation (irrep), point group symmetries, and their eigenvalues of the bands at the L point for \(P6/m\)-Si\({}_{6}\) at 0 GPa. \(C_{2}\) is the twofold rotation operator about the \(z\)-axis; \(I\) and \(\sigma_{h}\) are the inversion and horizontal mirror plane operators, respectively. The highest occupied band is the 24th band.
Figure 3: SOC-included DFT bands for (a) \(P6/m\)-Si\({}_{6}\) at 0 GPa, between \(\Gamma\) and A. (b)-(e) Enlarged bands near the five possible band crossing points (\(i\)-\(v\)) in (a). \(\Lambda_{i}\) denote irreducible representations of the double group of \(C_{6}\) along the \(\Gamma\)-A direction. Points \(iii\) and \(v\) are gapless Dirac nodes (where the \(\Lambda_{7,8}\) and \(\Lambda_{9,10}\) bands meet each other).
Figure 2: First-principles (DFT) band structures of \(P6/m\)-NaSi\({}_{6}\) at (a) 0 GPa and (b) 15 GPa, including spin-orbit coupling. The band color encodes the relative weight of the Si \(s\) and Si \(p\) characters, as shown in the legend. The band structure at 15 GPa is computed close to the critical pressure of the band inversion, so the evolution of the band inversion with pressure can be seen, especially along the L-H direction.
For \(P6/m\)-NaSi\({}_{6}\), it is tricky to compute the \(\mathbb{Z}_{2}\) invariant by counting the parity of \(N\) bands at the TRIM points, where \(N\) is the number of valence electrons, since \(N\) is now an odd number due to the Na atom. (In Ref. [13], each degenerate pair was counted only once for centrosymmetric metals.) In order to circumvent this, we take into account all occupied bands below the Fermi level at the TRIM points in our calculation of the \(\mathbb{Z}_{2}\) invariant. Note that there are different numbers of occupied bands at different TRIM points. For \(P6/m\)-NaSi\({}_{6}\) at 0 GPa, the product of the parity values of all occupied bands at each L point is positive, while the corresponding products at the other five TRIM points are negative. (See Appendix B.) This gives rise to \(\mathbb{Z}_{2}=1\), and the band inversion also occurs at the L points, as shown in Fig. 2(a). Therefore, similarly to \(P6/m\)-Si\({}_{6}\), \(P6/m\)-NaSi\({}_{6}\) is also a topological Dirac semimetal. The gapless Dirac nodes [crossing points \(iii\) and \(v\) in Fig. 4(d)] are found below the Fermi level.
The nontrivial topology of three-dimensional Dirac bands in \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) indicates that there are nontrivial surface states. We calculate surface states using the surface Green's function of the semi-infinite system described earlier, considering two surface types: surfaces parallel and perpendicular to the \(\Gamma\)-A direction, \(i.e.\), (100) and (001) surfaces, respectively. Fig. 5 shows the calculated local density of states, which is the imaginary part of the surface Green's function, for the (100) and (001) surfaces. At the chemical potential, corresponding to one of the Dirac node energies, for the (100) surface, two arc-shaped surface states (the left and right sides of an hourglass) connect the two Dirac node projection points (with bulk characteristics) indicated as solid dots along the \(k_{z}\) axis in Fig. 5(b). This is expected because each Dirac node consists of degenerate Weyl nodes with opposite chirality in topological Dirac semimetals [44; 45]. For the (001) surface, the surface states appear as a point [Fig. 5(d)]. In this case, the two Dirac nodes along the \(k_{z}\) axis are projected onto the same point, \(i.e.\), the origin, in the \(k_{x}\)-\(k_{y}\) plane. Thus, point-like surface states are expected rather than arc-shaped surface states. The features of the surface states for the (100) and (001) surfaces agree with those of prototype topological Dirac semimetals Na\({}_{3}\)Bi[44] and Cd\({}_{3}\)As\({}_{2}\)[15].
With pressure up to 15 GPa, the \(P6/m\) crystal symmetry is still maintained for \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\). Since the effect of pressure is similar for both crystals, as shown in Figs. 1 and 2, we mainly discuss \(P6/m\)-Si\({}_{6}\). At 15 GPa, the amount of change in the band structure depends on the orbital character. Specifically, the bands of \(s\)-orbital character are sensitive to pressure, while the bands of \(p\)-orbital character are somewhat rigid [Fig. 1(d)]. The band structure near the Fermi level changes significantly only in the \(k_{z}=\pi/c\) plane [Fig. 1(c) vs (d)]. At the L points, the \(s\)-orbital band is shifted by 0.51 eV, whereas the highest-energy occupied \(p\)-orbital band is shifted by 0.16 eV.
While the topological invariant is insensitive to small local perturbations such as disorder or impurities, pressure-induced deformation can be used to control the electronic structure [46; 47; 48; 49].
Figure 5: Local density of states of the surface states along the two-dimensional high-symmetry \(\mathbf{k}\) directions (a,c), in the \(k_{y}\)-\(k_{z}\) plane (b), and in the \(k_{x}\)-\(k_{y}\) plane (d) for the semi-infinite (100) and (001) surfaces of \(P6/m\)-Si\({}_{6}\), respectively. Horizontal dashed lines in (a) and (c) represent the Dirac point energies. Filled dots in (b) represent the projection points of the Dirac nodes onto the \(k_{y}\)-\(k_{z}\) plane. In (a)-(d), bright color indicates higher density of states at the surface.
Figure 4: (a) DFT bands of \(P6/m\)-NaSi\({}_{6}\) at 0 GPa, between \(\Gamma\) and A. (b)-(d) Enlarged bands near the five possible band crossing points (\(i\)-\(v\)) in (a). Points \(iii\) and \(v\) are gapless Dirac nodes (where the \(\Lambda_{7,8}\) and \(\Lambda_{9,10}\) bands meet each other).
For \(P6/m\)-Si\({}_{6}\) under high pressure, the parity ordering of the 23rd/24th and the 25th/26th bands at the L point is reversed [see the bands in the energy window between \(-1.4\) and \(-1.0\) eV in Fig. 1(d)], so the bands with \(-\) parity become higher in energy than the bands with \(+\) parity, which removes the band inversion. This leads to a topological phase transition from nontrivial \(\mathbb{Z}_{2}=1\) to trivial \(\mathbb{Z}_{2}=0\). The critical external pressure for the topological phase transition in \(P6/m\)-Si\({}_{6}\) is \(11.5\) GPa [see Fig. 6(a)]. The gapless Dirac nodes which exist below the critical pressure open up a gap above the critical pressure, as shown in Fig. 1(d), due to the topological phase transition. In the case of \(P6/m\)-NaSi\({}_{6}\), a similar topological phase transition occurs at \(14.9\) GPa, which is somewhat higher than the critical pressure of \(P6/m\)-Si\({}_{6}\).
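The way such a critical pressure can be read off is sketched below: the energy difference between the odd-parity (\(B_{u}\)) and even-parity (\(B_{g}\)) band pairs at L is tabulated versus pressure, and the sign change is located by linear interpolation. The band energies in the example are hypothetical placeholders chosen only to illustrate the procedure; they are not the DFT values of this work.

```python
import numpy as np

def critical_pressure(pressures, e_minus, e_plus):
    """Pressure at which the ordering of the two band pairs at L reverses,
    i.e. where (e_minus - e_plus) changes sign.  Inputs are arrays of the
    external pressure (GPa) and the band-pair energies (eV)."""
    diff = np.asarray(e_minus) - np.asarray(e_plus)
    crossing = np.where(np.diff(np.sign(diff)) != 0)[0]
    if len(crossing) == 0:
        return None
    i = crossing[0]
    p1, p2, d1, d2 = pressures[i], pressures[i + 1], diff[i], diff[i + 1]
    return p1 - d1 * (p2 - p1) / (d2 - d1)   # linear interpolation of the zero

# Hypothetical energies of the B_u and B_g pairs at L versus pressure.
p = np.array([0.0, 5.0, 10.0, 15.0])
e_Bu = np.array([-1.36, -1.18, -1.01, -0.86])   # shifts strongly (s-like)
e_Bg = np.array([-1.04, -1.01, -0.98, -0.93])
print(f"estimated critical pressure ~ {critical_pressure(p, e_Bu, e_Bg):.1f} GPa")
```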
Theoretically, \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) were proposed to be superconductors at ambient pressure[10]. Fig. 6(b) shows the superconducting critical temperature \(T_{c}\) as a function of hydrostatic pressure. While \(T_{c}\) decreases monotonically as pressure (\(P\)) increases, both \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) are superconducting within the pressure range we consider, \(0\leq P\leq 15\) GPa. Therefore, we propose that \(P6/m\)-Si\({}_{6}\) (\(P6/m\)-NaSi\({}_{6}\)) is a superconducting topological Dirac semimetal below \(11.5\) (\(14.9\)) GPa. We emphasize that \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) consist of light elements with atomic number \(14\) or less, based on silicon, in contrast to reported superconducting topological Dirac semimetals such as Cd\({}_{3}\)As\({}_{2}\)[20; 21; 22], Au\({}_{2}\)Pb[23], KZnBi[25], PdTe\({}_{2}\)[26; 27], and BaSn\({}_{3}\)[28] which consist of heavy elements.
Due to the nesting vectors along \(k_{z}\), a modulation appears in the out-of-plane Si-Si bond lengths along the \(z\) direction. Indeed, we find a small oscillation of the out-of-plane bond length in the slab geometry (see Appendix D). The modulation is strongest at the surface and decreases toward the center of the slab.
We calculated the Fermi surfaces of the two crystal structures, \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\), as presented in Fig. 7. \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) can provide Dirac cones near or at the Fermi energy and show an anisotropic conducting channel due to their anisotropic bonding nature. Evidence of our proposed pressure-induced topological phase transition in \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) may be explored by transport experiments. The three-dimensional Dirac nodes and resultant arc-shaped surface states are expected to induce a unique field-direction dependence in Shubnikov-de Haas oscillations [50]. The chiral anomaly associated with Weyl nodes suggests the following interesting properties: (i) A large negative magnetoresistance is found when an external magnetic field is parallel to a current direction [51; 52]; (ii) Thermoelectric properties depend on the relative direction between an external magnetic field and a temperature gradient [53]; (iii) The giant planar Hall effect is expected when the current direction is not parallel to the in-plane magnetic field direction [54; 55]. By measuring variations of the above properties upon external pressure, the proposed topological phase transition can be experimentally probed, similarly to Ref. [49].
## IV Conclusions
In summary, we have proposed that \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) crystals are superconducting topological Dirac semimetals at ambient pressure with \(\mathbb{Z}_{2}=1\). The two gapless bulk Dirac nodes appear at \(-0.4832\) eV and \(-0.9546\) eV for the former and at \(-1.1740\) eV and \(-1.1770\) eV for the latter below the Fermi level. With hole doping, the Fermi level may be lowered to one of the two Dirac node energies for experimental signatures of the topological phase. The gapless Dirac nodes are protected by the crystal rotational symmetry in the presence of time-reversal symmetry.
Figure 7: Fermi surfaces of (a) \(P6/m\)-Si\({}_{6}\) at \(0\) GPa, (b) \(P6/m\)-Si\({}_{6}\) at \(15\) GPa, (c) \(P6/m\)-NaSi\({}_{6}\) at \(0\) GPa, and (d) \(P6/m\)-NaSi\({}_{6}\) at \(15\) GPa. The Fermi surface is the surface of constant energy in the first BZ which separates occupied from unoccupied electron states at zero temperature.
Figure 6: (a) \(\mathbb{Z}_{2}\) topological invariants for \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) are shown as a function of external hydrostatic pressure. Critical external pressures of the topological phase transitions for \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\) are \(11.5\) GPa and \(14.9\) GPa, respectively. (b) Superconducting critical temperatures \(T_{c}\) of \(P6/m\)-Si\({}_{6}\) and \(P6/m\)-NaSi\({}_{6}\), obtained from Ref. [10] are shown as a function of external pressure. In (a) and (b), the filled (empty) symbols indicate the \(\mathbb{Z}_{2}=1\) (\(\mathbb{Z}_{2}=0\)) phase.
The topological Dirac semimetal phase of the crystals becomes topologically trivial beyond a critical external pressure. This pressure-induced topological phase transition retains superconductivity and the original crystal symmetry group. The coexistence of a topological Dirac semimetal state and a superconducting state in the present crystals, along with a pressure-induced topological phase transition, will provide an interesting platform to study the interplay between topology in the electronic structure and superconductivity.
## V Acknowledgments
We thank Sohrab Ismail-Beigi for helpful discussions. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by the National Science Foundation grant number ACI-1548562, by using computer time on the Comet supercomputer as enabled by XSEDE allocation MCA08X007. I.H.L. was supported by the National Center for Materials Research Data (NCMRD) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2021M3A7C2089748).
|
2309.12864 | The Importance of Worst-Case Memory Contention Analysis for
Heterogeneous SoCs | Memory interference may heavily inflate task execution times in Heterogeneous
Systems-on-Chips (HeSoCs). Knowing worst-case interference is consequently
fundamental for supporting the correct execution of time-sensitive
applications. In most of the literature, worst-case interference is assumed to
be generated by, and therefore is estimated through read-intensive synthetic
workloads with no caching. Yet these workloads do not always generate
worst-case interference. This is the consequence of the general results
reported in this work. By testing on multiple architectures, we determined that
the highest interference generation traffic pattern is actually hardware
dependent, and that making assumptions could lead to a severe underestimation
of the worst-case (in our case, of more than 9x). | Lorenzo Carletti, Gianluca Brilli, Alessandro Capotondi, Paolo Valente, Andrea Marongiu | 2023-09-22T13:38:25Z | http://arxiv.org/abs/2309.12864v1 | # The Importance of Worst-Case Memory
###### Abstract
Memory interference may heavily inflate task execution times in Heterogeneous Systems-on-Chips (HeSoCs). Knowing worst-case interference is consequently fundamental for supporting the correct execution of time-sensitive applications. In most of the literature, worst-case interference is assumed to be generated by, and therefore is estimated through read-intensive synthetic workloads with no caching.
Yet these workloads do not _always_ generate worst-case interference. This is the consequence of the general results reported in this work. By testing on multiple architectures, we determined that the highest interference generation traffic pattern is actually hardware dependent, and that making assumptions could lead to a severe underestimation of the worst-case (in our case, of more than 9\(\times\)).
## I Introduction
Heterogeneous Systems-on-Chip (HeSoCs) combine the benefits of low-power Systems-on-Chip (SoCs) with the ability to use accelerators to execute specialized workloads efficiently. Commercial-off-the-shelf (COTS) HeSoCs often make use of GP-GPUs or FPGAs as accelerators, combined with multi-core general-purpose _host_ CPUs. This provides both specialization and flexibility.
These systems typically rely on a shared-memory organization, where the aforementioned compute units are interconnected through a shared bus to the main system DRAM, as shown in Fig. 1. As the number of compute engines grows in the chip, the main memory is subject to increasing contention.
This potentially causes the tasks executing on the various units to experience decreased bandwidth and, as a consequence, an increased execution time due to mutual interference [1]. This is particularly problematic for time-sensitive applications. The problem has been extensively studied before [2, 3, 4, 5, 6], and several different approaches to mitigate the effects of memory interference have been developed [8, 9, 10, 11, 12].
In order to prove the validity of an interference mitigation solution, it is important to have a proper understanding of the timing effects (i.e., slowdown) a workload under test experiences in the worst case, and to show that a particular approach can handle such a scenario. We define as worst-case the program which is slowed down the most by other tasks causing DRAM interference.
Some previous works [12, 4, 6] make the assumption that read-only synthetic benchmarks, which are programs executing memory operations at the highest speed possible, have to be the ones which either:
1. Cause the highest amount of DRAM interference for other concurrently running tasks.
2. Are the most slowed down by the effects of DRAM interference.
Using such concepts without a proper study of the hardware is particularly problematic, since they could lead to a wrong characterization of the worst-case, which, in turn, could mean that certain conditions might cause slowdowns greater than expected for important time-sensitive applications.
### _Contributions_
Our research analyzed the effects of different synthetic memory-intensive benchmarks running alongside other tasks (Polybench) on two separate HeSoCs: the FPGA-based Xilinx ZU9EG and the GPU-based NVIDIA TX2. Our results prove that:
1. The type of program causing the highest amount of interference is actually hardware-dependent.
2. The type of program most slowed down by the effects of DRAM interference is not guaranteed to be read-only synthetic benchmarks.
## II Synthetic benchmarks
Fig. 1: Shared memory architectural template.
Synthetic benchmarks are programs which can be fine-tuned to emulate different kinds of workloads based on certain parameters. For our tests on DRAM interference generation, we decided on using the following configurations:
* _READ_MISS_: Loads only, at the maximum possible speed. What was believed to be the worst-case in Section I. It produces a _100 % read_ traffic.
* _MEMSET_: A series of consecutive stores, executed at the maximum possible speed. It produces a _100 % write_ traffic.
* _MEMCPY_: A series of consecutive loads and stores that copy data from one buffer to another, executed at the maximum possible speed. It produces a _50 % read - 50 % write_ traffic.
These synthetic benchmarks were then set to run on three CPU cores on the two platforms as _Background Interference Generators_. On the remaining core, another task (either a Polybench or one of the synthetic benchmarks) was set to run, and the amount of slowdown it experienced was measured.
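The traffic patterns and the core pinning can be illustrated with the rough Python/NumPy sketch below. The actual generators would normally be small C loops; here NumPy operations on a buffer assumed to be much larger than the last-level cache stand in for them, and the buffer size and core id are placeholder assumptions, not the values used in the evaluation. `os.sched_setaffinity` is Linux-only.

```python
import os
import numpy as np

BUF_BYTES = 256 * 1024 * 1024            # assumed to be far larger than the LLC
N = BUF_BYTES // 8

src = np.zeros(N, dtype=np.float64)
dst = np.empty_like(src)

def read_miss():
    return src.sum()                     # ~100 % read traffic

def memset():
    dst.fill(0.0)                        # ~100 % write traffic

def memcpy():
    np.copyto(dst, src)                  # ~50 % read / 50 % write traffic

def pin_to_core(core_id):
    # Restrict this process to a single CPU core, mirroring how each
    # background interference generator is pinned to a dedicated core.
    os.sched_setaffinity(0, {core_id})

if __name__ == "__main__":
    pin_to_core(0)
    for _ in range(100):
        memcpy()                         # run one generator in a loop
```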
## III Experimental Evaluation
We ran our experiments on both the NVIDIA TX2 (Fig. 2) and the Xilinx ZU9EG (Fig. 3) for varying levels of interference intensity (**THR%**). The _Above_ region (red) identifies Polybench which always experience a worse slowdown than READ_MISS. The _Crossing_ region (yellow) is for the Polybench which are subject to a worse slowdown than READ_MISS only at certain points. Finally, the _Below_ region (green) is for Polybench which are never more slowed down than READ_MISS, which is what previous literature actually accounted for. The results definitively prove that read-only synthetic benchmarks (_READ_MISS_) are not:
1. The ones which cause the highest amount of interference, as can be seen by _MEMSET_'s highest slowdown being around 12x on the ZU9EG, compared to _READ_MISS_'s 3x. On the TX2, _MEMCPY_ causes the highest amount of interference, instead.
Fig. 3: **Xilinx ZU9EG. Execution time increase of the three synthetic benchmarks (curves) and the Polybench benchmarks (colored areas) running on the core under test with increasing interference from the other cores (THR%). The workload executed by the interference cores is indicated above the plot.**
Fig. 2: **NVIDIA TX2. Execution time increase of the three synthetic benchmarks (curves) and the Polybench benchmarks (colored areas) running on the core under test with increasing interference from the other cores (THR%). The workload executed by the interference cores is indicated above the plot.**
It is important to note that there may be even worse interference patterns that we have not yet found.
2. The programs which are slowed down the most. Depending on the memory access pattern of the interference, different synthetic benchmarks become the worst case. Not only that, but there are certain Polybench which reach even higher degrees of slowdown for all the different interference-causing traffic patterns. This, however, is due to cache events, and not exclusively to DRAM interference.
For the ZCU102, the combined effect of these factors makes it so that the real worst-case is actually subject to a 12x slowdown factor, instead of the 1.3x which could be observed when using _READ_MISS_ as both the benchmark under test and the program causing the interference.
## IV Conclusion
Our research makes it clear that proper worst-case characterization is important when developing memory contention mitigation techniques. While being focused on DRAM interference is important, cache events must also be accounted for, as they can cause regular tasks to be subject to more interference than the most memory-intensive ones (red/yellow regions in Figs. 2 and 3). On the Xilinx ZU9EG (Fig. 3), the situation is particularly egregious, as the worst-case for our tests is actually more than 9\(\times\) worse than _READ_MISS_. All of these evaluations have been done with the accelerators turned off, in order to create a valid comparison of the effects of bad worst-case characterization on different HeSoCs. However, we have observed that programs can experience higher degrees of slowdown when the platform-specific hardware is used. For example, on the Xilinx ZU9EG, when the FPGA cores are used to generate traffic, the total slowdown a program can experience can be greater than 60\(\times\) (Fig. 4) if the CPU cores are also executing _MEMSET_. Any proposal which fails to account for these kinds of scenarios cannot realistically claim to cover the worst-case slowdown which a program can experience on these kinds of platforms.
|
2303.17947 | Mass varying dark matter and its cosmological signature | Nontrivial dark sector physics continues to be an interesting avenue in our
quest to the nature of dark matter. In this paper, we study the cosmological
signatures of mass-varying dark matter where its mass changes from zero to a
nonzero value in the early Universe. We compute the changes in various
observables, such as, the linear matter power spectrum and the cosmic microwave
background anisotropy power spectrum. We explain the origin of the effects and
point out a qualitative similarity between this model and a warm dark matter
cosmology with no sudden mass transition. Finally, we do a simple analytical
study to estimate the constraint on the parameters of this model from the
Lyman-$\alpha$ forest data. | Anirban Das, Subinoy Das, Shiv K. Sethi | 2023-03-31T10:22:58Z | http://arxiv.org/abs/2303.17947v2 | # Cosmological Signatures of Mass Varying Dark Matter
###### Abstract
Nontrivial dark sector physics continues to be an interesting avenue in our quest to understand the nature of dark matter. In this paper, we study the cosmological signatures of mass-varying dark matter where its mass changes from zero to a nonzero value in the early Universe. We compute the changes in various observables, such as the matter and the cosmic microwave background anisotropy power spectrum. We explain the origin of the effects and point out a qualitative similarity between this model and a warm dark matter cosmology with no sudden mass transition. We also do a simple frequentist analysis of the linear matter power spectrum to estimate the constraint on the parameters of this model from the latest cosmological observation data.
+
Footnote †: preprint: SLAC-PUB-17709
## I Introduction
Though the presence of dark matter (DM) has been confirmed through its gravitational effect, the particle nature of DM remains a complete mystery. Combined with a cosmological constant (\(\Lambda\)), the simple hypothesis of a cold, collisionless dark matter (CDM) that may or may not interact with ordinary Standard Model (SM) particles is consistent with all cosmological observations to date, on scales ranging from individual galaxies [1], to galaxy clusters [2; 3], to cosmological scales as probed by large scale structure [4; 5] and cosmic microwave background (CMB) measurements [6; 7; 8].
During the course of the last several decades, myriad laboratory experiments have been performed to look for any nongravitational interaction of DM. However, none of these have yielded any conclusive evidence for its presence. Together they have put stringent limits on the conventional DM theories [9], and have compelled us to theorize novel DM models with nontrivial particle physics phenomena in the dark sector. Such efforts have also led us to venture beyond the _vanilla_ DM models and design experiments that are better optimized to look for observable signatures of such models [10; 11; 12; 13]. On the observation frontier, several galactic-scale astrophysical anomalies, such as the _missing satellites_[14; 15] (also see [16]), _core vs. cusp_[17; 18] (later termed the _diversity problem_[19]), and _too big to fail_[20] problems have raised questions about the simple CDM models, and drawn attention to particle physics models beyond this paradigm.
In this paper we explore one such avenue of dark matter physics. We ask the question: can the mass of the dark matter particle be dynamical with cosmic time? In particular, we explore the scenario where the DM species was made of massless and hence relativistic particles in the early Universe, but after a gradual phase transition at a certain redshift, its constituents acquire mass, eventually forming the CDM population in the Universe. The particle physics aspect of mass-varying DM (MVDM) has been explored before in the literature. For example, there are existing models where the mass of the dark matter particle depends on a scalar field which rolls over a potential [21], or where a fermionic dark matter is coupled to a scalar field with a simple Yukawa-like interaction [22; 23]. In the above studies, the particle phenomenology was discussed, but any detailed study of its cosmological implications was missing.
In this work, we compute the effects of MVDM on cosmological observables, such as the linear matter power spectrum and the CMB anisotropy power spectra. We find that the massless phase of MVDM before the transition creates a _suppression_ in the linear matter power spectrum that is also reflected in the CMB power spectra. We explain the origin of the effects and point out a qualitative similarity between this model and a warm dark matter cosmology with no sudden mass transition. As our model deviates from the standard \(\Lambda\)CDM scenario at small length scales, we compare our results with the matter power spectrum inferred from the Sloan Digital Sky Survey (SDSS) Lyman-\(\alpha\) data from their fourteenth data release. This data already constrains a large part of the new model parameter space. It is instructive to note that we do not adopt a specific particle physics model but rather focus on an empirical model of the time variation of the dark matter mass. Depending on the specific model, there might be additional signatures of nonstandard dark sector phenomena.
The plan of the paper is as follows. We describe the background evolution of MVDM and compute the new effects in the background level observable in Sec. II. In Sec. III, we solve the Boltzmann equations of the perturbation quantities and compute the changes in the matter and CMB power spectra, followed by statistical comparison of our results with the SDSS Lyman-\(\alpha\) data. In Sec. V, we give a brief discussion about the particle physics models of this scenario. We conclude in Sec. VI.
## II Background evolution
We will assume MVDM to be a fermionic thermal species with a temperature \(T\) to keep the model as generic as possible. In this case, the evolution of its background quantities, like the energy density, is controlled by its time-varying mass \(m(z)\). For the time variation of the mass, we consider a phenomenological model of formation of cold DM of mass \(M\) from a massless radiation-like species at a redshift \(z_{t}\). Specifically, we take the following form of \(m(z)\) to make the transition between the two epochs smooth,
\[m(z)=\frac{M}{2}\left[1-\tanh\left(\frac{z-z_{t}}{\Delta z}\right)\right]\,. \tag{1}\]
Here, \(M\) is the final mass of MVDM, and \(\Delta z\) is the duration of the transition in redshift space. The exact nature and the duration of the transition depend on the underlying particle physics model [22; 23]. In this work, we will only consider fast transitions, i.e., \(\Delta z\ll z_{t}\).
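The mass profile of Eq. (1) is trivial to evaluate numerically; a minimal sketch is shown below, with default parameter values chosen only for illustration.

```python
import numpy as np

def mvdm_mass(z, M=1000.0, z_t=1e5, delta_z=1.0):
    """Time-varying MVDM mass m(z) of Eq. (1), in eV.

    M       : final (asymptotic) mass in eV
    z_t     : transition redshift
    delta_z : width of the transition in redshift
    """
    return 0.5 * M * (1.0 - np.tanh((z - z_t) / delta_z))

# m(z) -> M well after the transition and -> 0 well before it:
print(mvdm_mass(0.0), mvdm_mass(2e5))   # ~1000 eV and ~0 eV
```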
The phase space distribution \(f(q)\) of MVDM is given by a Fermi-Dirac distribution with a temperature \(T\). For our work, we choose \(q\) to be the comoving momentum of the particle. We also define \(\epsilon=\sqrt{q^{2}+m(a)^{2}a^{2}}\) to be the energy of the particle; \(a=1/(1+z)\) is the scale factor of the Universe. (For this choice of momentum and energy, see e.g. [24].) As \(m(z)\) is zero before the redshift of transition, \(z_{t}\), MVDM behaves as radiation. After \(z_{t}\), it could behave as radiation or matter depending on its final mass and temperature at that time. It is relativistic if \(m(z)/T\ll 3\), eventually becoming matter-like or nonrelativistic when \(m(z)/T\gtrsim 3\).
\[\rho_{\rm MVDM}=a^{-4}\int dq\ d\Omega\ q^{2}\ f(q/T)\ \epsilon\,, \tag{2}\]
The energy density at the current epoch is matched to the best-fit energy density of the CDM particle by Planck. As the particle is non-relativistic at the current epoch, \(\rho_{\rm MVDM}\propto T^{3}M\). The final energy density of MVDM depends only on this combination of mass and temperature and, as discussed later, the evolution of perturbations depends only on \(m(a)/T\). This allows us to express our results in terms of the ratio \(M/T\). We compare our results with WDM models with the same \(M/T\). In Fig. 1, we show the evolution of the energy density for two different values of \(z_{t}\). In the rest of the paper, we will fix the MVDM temperature to \(T=T_{\gamma}/10\).
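The momentum integral of Eq. (2) has to be done numerically, and a minimal sketch is given below. It drops constant prefactors (internal degeneracy and phase-space normalization), which cancel once the present-day density is matched to the observed CDM density as described in the text; the temperature value and the 1 keV test mass are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

def rho_mvdm(a, m_of_a, T):
    """Background MVDM energy density of Eq. (2), up to constant prefactors.

    a      : scale factor (a = 1 today)
    m_of_a : function giving the MVDM mass (eV) at scale factor a
    T      : comoving MVDM temperature (eV), here T = T_gamma/10
    """
    A = m_of_a(a) * a / T                        # m(a) a / T, dimensionless
    integrand = lambda x: x**2 * np.sqrt(x**2 + A**2) / (np.exp(x) + 1.0)
    integral, _ = quad(integrand, 0.0, 50.0)     # integrand negligible beyond x ~ 50
    return 4.0 * np.pi * T**4 * integral / a**4

# Nonrelativistic check: with a constant 1 keV mass and T = T_gamma/10,
# rho scales as a**-3, so rho(a=0.5)/rho(a=1) should be close to 8.
T0 = 2.35e-4 / 10.0                              # ~ photon temperature today / 10, in eV
m_const = lambda a: 1000.0
print(rho_mvdm(0.5, m_const, T0) / rho_mvdm(1.0, m_const, T0))
```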
In this work, we will only consider the scenario \(M/T_{t}\gg 1\), which implies that the DM becomes instantly nonrelativistic when its mass turns nonzero. We do not discuss the other case, \(M/T_{t}\ll 1\), because it is qualitatively more similar to a warm dark matter model. In passing, we want to compare the present scenario with the ballistic dark matter model considered in Ref. [25], which also had a relativistic-to-nonrelativistic phase transition in the dark sector. However, in that case, the particles were tightly coupled and behaved like a fluid in the relativistic phase. Hence it has characteristic features that are distinct from the present model.
### Extra relativistic degrees of freedom
In the early Universe before \(z_{t}\), the MVDM acts as radiation and would add to the total relativistic energy density in addition to the photons and neutrinos. This can be quantified as the extra relativistic degrees of freedom \(\Delta N_{\rm eff}\) defined as
\[\Delta N_{\rm eff}=\frac{\rho_{\rm MVDM}}{\rho_{\nu}^{\rm th}}\,, \tag{3}\]
where \(\rho_{\nu}^{\rm th}\) is the thermal energy density of a single neutrino species. Because the Universe was radiation-dominated before \(z\simeq 3400\), any extra relativistic energy would have changed the rate of expansion of the Universe. This would have affected the production of light elements during the epoch of big bang nucleosynthesis (BBN) and could also affect the angular power spectra of the fluctuations in the cosmic microwave background (CMB) coming from the epoch of recombination. Very precise observational data from these two eras help us constrain \(\Delta N_{\rm eff}\). A fit to the light element abundance data yields \(N_{\rm eff}=2.85\pm 0.3\)[26].
The energy density \(\rho_{\rm MVDM}\) evolves as \(\sim(1+z)^{3}\) after \(z_{t}\), and as \(\sim(1+z)^{4}\) before \(z_{t}\). For a fast transition, we can ignore the duration of the transition. Then the energy density has a jump by a factor \(\approx M\,n_{\rm MVDM}/T_{t}^{4}\), or \(M/T_{t}\), where \(T_{t}\) is the MVDM temperature at the transition.
Figure 1: The background density as a function of redshift for mass \(M=1\) keV, and two different transition redshifts \(z_{t}=3\times 10^{5}\) and \(10^{5}\). The density evolution of WDM of the same mass is also shown for comparison. The vertical light gray-shaded band shows the time when the WDM particles become nonrelativistic.
Hence, \(\rho_{\rm MVDM}(z)\) prior to \(z_{t}\) can be written as
\[\rho_{\rm MVDM}(z)\approx\rho_{\rm MVDM}(z=0)\,\frac{T_{t}}{M}\,\frac{(1+z)^{4} }{1+z_{t}}\,. \tag{4}\]
Here we have normalized the energy density by fixing its present-day value \(\rho_{\rm MVDM}(z=0)\) to the observed CDM density \(\Omega_{\rm c}h^{2}=0.12\)[27]. For this estimate we neglect the current \(\Lambda\)-dominated epoch. Using Eq. (4) in Eq. (3) gives
\[\Delta N_{\rm eff}\approx\frac{\rho_{\rm MVDM}(z=0)}{\rho_{\nu}^{\rm th}(z=0 )}\frac{T_{t}}{M}\,\frac{1}{1+z_{t}}\,, \tag{5}\]
during BBN. At the present time, the neutrino energy density is minuscule relative to that of DM. In fact, \(\rho_{\rm MVDM}(z=0)/\rho_{\nu}(z=0)\simeq 10^{5}\), and, as will be shown shortly, we will mostly consider \(z_{t}\gtrsim 10^{5}\). Therefore, MVDM that is nonrelativistic at \(z_{t}\) (i.e., \(T_{t}/M\ll 1\)) will have \(\Delta N_{\rm eff}\lesssim 0.01\) and will not contribute significantly to the extra relativistic degrees of freedom in the early Universe during BBN. Moreover, as we are interested in transition epochs much earlier than recombination and the MVDM density is fixed to the CDM density today, \(\Delta N_{\rm eff}\) by definition vanishes during recombination. Hence, this model is not constrained by any of the existing bounds on \(\Delta N_{\rm eff}\) in most of the relevant parameter space.
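The size of this effect follows directly from Eq. (5). The snippet below evaluates it for the density ratio \(\simeq 10^{5}\) quoted above and a few illustrative values of \(M/T_{t}\); the function name and chosen values are only for illustration.

```python
def delta_neff(M_over_Tt, z_t, rho_ratio_today=1e5):
    """Estimate of Eq. (5): extra relativistic degrees of freedom carried by
    MVDM before the transition.

    M_over_Tt       : ratio of the final mass to the MVDM temperature at z_t
    z_t             : transition redshift
    rho_ratio_today : rho_MVDM(z=0) / rho_nu(z=0), ~1e5 as quoted in the text
    """
    return rho_ratio_today / (M_over_Tt * (1.0 + z_t))

for M_over_T in (10.0, 100.0, 1000.0):
    print(f"M/T_t = {M_over_T:6.0f}, z_t = 1e5  ->  dN_eff ~ {delta_neff(M_over_T, 1e5):.3f}")
# Sufficiently nonrelativistic MVDM (M/T_t >~ 100 at z_t ~ 1e5) indeed gives
# dN_eff <~ 0.01, as stated in the text.
```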
## III Matter and CMB power spectra
In addition to changing the expansion rate of the Universe through extra radiation, MVDM will affect the evolution of the fluctuations in the Universe because of its time-varying mass \(m(z)\). The extended period of MVDM as radiation before \(z_{t}\) affects the evolution of the linear perturbations. Below, we discuss these effects using the Boltzmann equations.
The perturbation evolution equations can be obtained from the Boltzmann hierarchy [24],
\[\dot{\Psi}_{0} = -\frac{qk}{\epsilon}\Psi_{1}-\dot{\phi}\frac{d\ln f_{0}}{d\ln q}\,,\] \[\dot{\Psi}_{1} = \frac{qk}{3\epsilon}(\Psi_{0}-2\Psi_{2})-\frac{\epsilon k}{3q} \psi\frac{d\ln f_{0}}{d\ln q}\,, \tag{6}\] \[\dot{\Psi}_{\ell} = \frac{qk}{(2\ell+1)\epsilon}\left[\ell\Psi_{\ell-1}-(\ell+1)\Psi _{\ell+1}\right]\,,\quad\ell\geq 2\,.\]
Here, \(\Psi_{\ell}\) is the \(\ell\)-th multipole of the perturbation to the phase space distribution function, \(\phi\) and \(\psi\) are the metric perturbations, \(f_{0}\) is the unperturbed Fermi-Dirac distribution, and \(\epsilon=\sqrt{q^{2}+a^{2}m(a)^{2}}\) as mentioned earlier. Macroscopic variables such as the density contrast, bulk velocity, and anisotropic stress can be constructed by integrating Eq. (6) over comoving momenta. The nonvanishing mass prohibits us from performing the phase space integral (over comoving momentum \(q\)) analytically. Another variable in our study is the unperturbed temperature, \(T\), in the Fermi-Dirac distribution function. From Eq. (6) it can be shown that the relevant variables are \(q/T\) and \(\epsilon/T\); at late times, when the particles are nonrelativistic, the evolution of the system of equations is determined by \(m(z)/T\). We modify the Boltzmann solver code CLASS to add a new species with time-varying mass \(m(z)\), and compute the linear matter power spectrum and the CMB anisotropy power spectra [28].
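For orientation, the right-hand side of Eq. (6) for one momentum bin can be written down in a few lines. The sketch below is a toy transcription, not the CLASS implementation: it truncates the hierarchy by simply setting \(\Psi_{\ell_{\max}+1}=0\) (a real solver uses a more careful free-streaming truncation), and the metric sources \(\dot{\phi}\), \(\psi\) are supplied externally.

```python
import numpy as np

def psi_dot(Psi, q, eps, k, phi_dot, psi_metric, dlnf0_dlnq):
    """RHS of the hierarchy in Eq. (6) for a single comoving momentum q and
    wavenumber k; Psi is the array [Psi_0, ..., Psi_lmax] with lmax >= 2."""
    assert len(Psi) >= 3
    lmax = len(Psi) - 1
    dPsi = np.zeros_like(Psi)
    dPsi[0] = -(q * k / eps) * Psi[1] - phi_dot * dlnf0_dlnq
    dPsi[1] = (q * k / (3.0 * eps)) * (Psi[0] - 2.0 * Psi[2]) \
              - (eps * k / (3.0 * q)) * psi_metric * dlnf0_dlnq
    for l in range(2, lmax + 1):
        Psi_next = Psi[l + 1] if l < lmax else 0.0   # crude truncation
        dPsi[l] = (q * k / ((2 * l + 1) * eps)) * (l * Psi[l - 1] - (l + 1) * Psi_next)
    return dPsi
```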
In the left panel of Fig. 2, we show the linear matter power spectra for a number of values of \(z_{t}\) with \(\Delta z=1\) and \(M=1000\) eV, and the right panel shows the ratio of those spectra to that in \(\Lambda\)CDM to clearly show the differences. The main feature of the new power spectrum is the suppression of power at small scales. The suppression sets in at the _cut-off scale_ \(k_{t}\), the scale that entered the horizon at redshift \(z_{t}\). For a mode \(k>k_{t}\), MVDM was still relativistic when it entered the horizon and was free streaming. This inhibits the growth of structures at those scales, resulting in a power suppression. Note that this is qualitatively similar to the analogous feature observed in the warm dark matter power spectrum [29].
However, the amount of suppression depends on the transition redshift \(z_{t}\) parameter in our model. The Boltzmann hierarchy (Eq. (6)) is an expansion in \(q/\epsilon\). As the particle enters the non-relativistic phase, \(q\simeq ma\), and \(q\ll ma\) at later epochs. This allows us to truncate the hierarchy in Eq. (6) and obtain the corresponding fluid equations (e.g., see Refs. [24; 29; 30]). The free streaming scale at the time of this transition determines the cut-off scale \(k_{t}\); perturbations with \(k>k_{t}\) are wiped out.1
Footnote 1: The free streaming length is set by the thermal velocity of the particles and is therefore inversely proportional to \(m(z)/T\).
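A rough numerical estimate of the cut-off scale follows from its definition above as the mode entering the horizon at \(z_{t}\), i.e. \(k_{t}\simeq a(z_{t})H(z_{t})\). The sketch below evaluates this with Planck-like density parameters (assumed values, and \(\Lambda\) neglected at these redshifts); it is an order-of-magnitude illustration, not the result of the full Boltzmann calculation.

```python
import numpy as np

def k_horizon_entry(z_t, h=0.674, omega_m=0.315, omega_r=9.1e-5):
    """Comoving wavenumber (in h/Mpc) of the mode entering the horizon at z_t."""
    a = 1.0 / (1.0 + z_t)
    H0 = 100.0 * h / 299792.458            # H0/c in 1/Mpc
    H = H0 * np.sqrt(omega_m / a**3 + omega_r / a**4)
    return a * H / h                       # convert 1/Mpc -> h/Mpc

for z_t in (1e5, 3e5):
    print(f"z_t = {z_t:.0e}  ->  k_t ~ {k_horizon_entry(z_t):.1f} h/Mpc")
# A later transition (smaller z_t) gives a smaller k_t, i.e. suppression
# reaching larger length scales, consistent with the trend described above.
```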
Figure 2: Ratio of the linear matter power spectrum to \(\Lambda\)CDM for different values of \(z_{t}\). See text for the explanation of different features. For comparison, we also show the ratio in WDM cosmology with same mass. The temperature is assumed to be \(T=T_{\gamma}/10\).
Unlike the WDM model, for which this transition occurs at the onset of the nonrelativistic era and hence the perturbations are driven solely by the mass of the particle, in our case \(z_{t}\) sets the start of this phase. This is an important distinction between the two models.
A later transition means the MVDM was relativistic for a longer time, which in turn means it could wash out structures at larger length scales, i.e., smaller \(k\) modes. This is evident from Fig. 2, as the \(z_{t}=10^{5}\) case has a smaller cut-off scale \(k_{t}\) relative to the WDM model with the same mass. Therefore, the mass of DM is not the only parameter that controls the cut-off scale, unlike in the WDM cosmology.
Another feature of the ratio of power spectra in Fig. 2 is the oscillations at \(k\gtrsim 0.1\,h\,{\rm Mpc}^{-1}\). This is a result of a phase shift in the matter power spectrum relative to \(\Lambda\)CDM [31, 32]. Any free streaming relativistic species travels at the speed of light, which is greater than the sound speed in the photon-baryon bath before recombination. As a result, it drags the metric perturbations via gravity in a radiation-dominated Universe. This in turn creates a phase shift in the acoustic oscillations in the thermal bath, manifesting itself in the observable density fluctuations in the Universe today. Such phase shifts have been observed in both the CMB anisotropy power spectrum and the baryonic acoustic oscillations (BAO) in the matter power spectrum [32, 33, 34]. Its presence (or lack of it) has been used to look for other new physics scenarios, including neutrino self-interaction [35, 36, 37]. In the present scenario, the MVDM generates this phase shift during its relativistic phase prior to \(z_{t}\) due to its small but nonzero \(\Delta N_{\rm eff}\). Because the \(\Lambda\)CDM matter power spectrum has the BAO oscillations in the range \(0.1\lesssim k\lesssim 1\,h\,{\rm Mpc}^{-1}\), the phase shift manifests itself in those BAO oscillations [32].
The temperature and polarization anisotropy power spectra of the CMB are also modified by these effects. In Figs. 3 and 4, we show the relative changes to the TT and EE power spectra. For comparison, we also show the spectrum for the WDM model with the same DM mass. The most prominent feature is the power suppression at smaller angular scales, which is a direct consequence of the power suppression in the matter power spectrum below the free streaming scale. This is qualitatively similar to the WDM model. Like the matter power spectrum, the amount of power suppression is greater for a later transition redshift. The wiggles in Figs. 3 and 4 are due to the phase shift mentioned earlier.
## IV Constraints from Lyman-\(\alpha\) forest data
The Lyman-\(\alpha\) forest corresponds to the absorption features on the redward side of the rest-frame Lyman-\(\alpha\) radiation from distant quasars caused by intervening neutral hydrogen gas clouds. The density and amplitude of these features tell us about fluctuations in the neutral hydrogen density of the mostly ionized, diffuse intergalactic medium (IGM) in the post-reionization era, in the redshift range \(2.5\leq z\leq 6\).
Figure 4: The relative change in the CMB EE angular power spectrum for the same parameters as in Fig. 3.
Figure 3: The relative change in the CMB TT angular power spectrum for \(M=1000~{}{\rm eV},\Delta z=1\) and two different \(z_{t}\). For comparison, the WDM spectrum with the same mass is also shown.
This in turn tells us about the DM structure formation at those redshifts. Hydrodynamical simulations have shown that the observed fluctuations correspond to mildly non-linear density contrasts (\(\delta<10\)) of the underlying density field (e.g. [38; 39; 40; 41; 42] and references therein). The Lyman-\(\alpha\) data allows one to probe the fluctuations of the density field at scales as small as the Jeans' scale of the IGM (\(k\simeq 5\)-\(7\,\mathrm{Mpc}^{-1}\)) in the redshift range \(2.5<z<6\). The CMB and galaxy data probe much larger scales, \(k\lesssim 0.1\,\mathrm{Mpc}^{-1}\). Therefore, the Lyman-\(\alpha\) data is particularly suited for our study as our results deviate significantly from the \(\Lambda\)CDM (and WDM) model at small scales (Figs. 3, 4, and 5).
In this section, we provide estimates of the limits on \(M\) and \(z_{t}\) from the SDSS Lyman-\(\alpha\) data. We use Lyman-\(\alpha\) absorption data from the fourteenth data release (DR14) of the fourth-generation Sloan Digital Sky Survey (SDSS-IV) [43], which was translated to a constraint on the linear matter power spectrum in Ref. [44]. This procedure requires a fiducial cosmological model. The method followed in Ref. [44] involves calculating a scale- and redshift-dependent bias factor to relate the 1D and 3D flux power spectra, based on a fiducial cosmological model given by the 2013 best-fit model of the Planck collaboration. In our study, we used a fiducial model given by the latest Planck 2018 data without massive neutrinos. However, we expect the difference arising from the choice of such a model to be much smaller than the resulting error bars on the extracted 3D matter power spectrum \(P(k)\). In Fig. 5, we show a few matter power spectra for different values of \(M/T\) and \(z_{t}\).
We carry out a likelihood analysis to compare the extracted 3D matter power spectra with our model predictions. In Fig. 6 we show the \(2\sigma\) (\(95\%\) confidence level) exclusion region in the plane of \(M/T\) and \(z_{t}\) using the eBOSS DR14 Ly-\(\alpha\) data. In the limit of small \(M/T\), the MVDM is still relativistic when \(m(z)\) becomes nonzero. This scenario is identical to a WDM model (labeled by a gray shade in Fig. 6), which we do not study in this paper. The shape of the exclusion region follows from the discussion in the previous section. For \(10\lesssim M/T\lesssim 3\times 10^{3}\), the lightness of MVDM and its large free streaming length create more power suppression at small scales than allowed by the data. However, for larger \(M/T\gtrsim 3\times 10^{3}\), the free streaming length decreases, and this part of the parameter space is allowed, except for \(z_{t}\lesssim 5\times 10^{4}\), where the transition is so delayed that the radiation phase of MVDM before \(z_{t}\) creates large changes in the power spectrum.
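The core of such a comparison is a simple goodness-of-fit statistic. The sketch below shows a minimal chi-square of a model \(P(k)\) against binned measurements, assuming uncorrelated Gaussian errors; the function name, the toy arrays, and the toy suppression shape are placeholders, not the eBOSS data or the actual MVDM spectra, which would come from the modified CLASS runs.

```python
import numpy as np

def chi2(k_data, P_data, sigma_P, k_model, P_model):
    """Chi-square of a model power spectrum against binned P(k) points,
    interpolating the model onto the data wavenumbers in log-log space."""
    logP = np.interp(np.log(k_data), np.log(k_model), np.log(P_model))
    return np.sum(((np.exp(logP) - P_data) / sigma_P) ** 2)

# Toy stand-ins only: a smooth reference spectrum with 10% errors compared
# against a spectrum with a crude small-scale suppression.
k = np.logspace(-1, 0.5, 20)
P_ref = 1e3 * k ** -1.8
P_sup = P_ref * np.exp(-(k / 3.0) ** 2)
print(f"toy chi2 = {chi2(k, P_ref, 0.1 * P_ref, k, P_sup):.1f}")
```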
## V Qualitative discussion on particle physics scenario
As stated in the introduction, the goal of this paper is to study the effect of a mass-varying dark matter particle on the matter and CMB power spectra. To this end, we have taken a model-independent approach, where we modeled the time variation \(m(z)\) with an empirical function which captures the smooth rise of the dark matter mass from zero to a finite value. In this section we qualitatively discuss how a time-dependent dark matter mass can arise from particle interactions and point to previous work.
If the dark matter particle mass depends on a scalar field which rolls over a potential (as originally proposed in [21]), the particle mass is generated by the expectation value of a scalar field which does not have a stable vacuum state. As the universe expands, the density of particles decreases, leading to an increase in the vacuum expectation value of the scalar (and hence the mass of the particle). A similar scenario (though in a different particle physics context) was introduced for mass varying neutrino models, where a fermionic particle can have a dynamical mass due to its interaction with a scalar field which may also play the role of dark energy [45; 46]. Adopting the same mechanism, a mass varying dark matter model was recently introduced [22]. In both of these cases, the dark matter mass evolves smoothly from almost zero to a higher value as a function of redshift, making the transition in the equation of state from radiation to pressureless matter. However, there are other models where the transition from radiation to matter is abrupt [47; 48], and the corresponding signature in cosmology is quite different. Generally, a phase transition in a very light scalar field sector is responsible for such an abrupt change in the equation of state.
Figure 5: The matter power spectrum for three different sets of values of \(M\) and \(z_{t}\) with \(\Delta z=1,T=T_{\gamma}/10\). The eBOSS DR14 Ly-\(\alpha\) data is shown in gray [43; 44].
After the transition, the scalar starts oscillating coherently around a minimum of a quadratic potential and starts behaving like dark matter [49; 50; 51].
## VI Conclusion
We have studied the cosmological signatures of a mass-varying dark matter (MVDM) model where the mass \(m(z)\) of DM makes a transition from zero to a finite value \(M\) at a redshift \(z_{t}\). We consider scenarios in which the transition is rapid, i.e. \(\Delta z/z\ll 1\) and the MVDM is non-relativistic when \(m(z)\) becomes nonzero. We have computed the linear matter power spectrum and the CMB angular power spectra for these models.
The effects on the matter and CMB power spectra are qualitatively similar to those of a WDM model. In both cases, the free streaming of MVDM with large thermal velocity impedes structure formation at small scales, resulting in matter power suppression. However, relative to a WDM of the same mass, MVDM yields greater suppression and impacts larger scales (or a smaller cut-off scale \(k_{t}\)). This is because the particle is massless before the transition redshift \(z_{t}\). Therefore, unlike the WDM model, both the mass \(M\) and the transition redshift \(z_{t}\) determine the relevant scales at which the power is suppressed and the amount of suppression.
Similarly, the CMB temperature and polarization anisotropy power spectra receive changes in this model that are broadly similar to the WDM case but distinct from it. This is clearly seen in Figs. 3 and 4. In addition to the suppression of power, there is a phase shift in the oscillations of the CMB spectra. This could constitute a tell-tale signature of any new radiation and/or secret interaction in the early Universe [31; 33]. Near-future CMB experiments, which plan to measure the polarization anisotropies more precisely, can attempt to detect such a phase shift in the power spectra.
As the expected departure of our model from the standard cases is more prominent at small scales, we compare our results against the SDSS Lyman-\(\alpha\) data. In the current work, we do not attempt to pinpoint the unique features of our model. The degeneracy between \(M\) and \(z_{t}\) might make this task difficult. For example, a small-scale power suppression might be explainable either by WDM or by MVDM with a larger mass but a later \(z_{t}\). Further work is needed to find any qualitative differences between the two models and quantify them. In a future work, we plan to statistically analyse this model in more detail, and quantify its differences from the WDM model.
###### Acknowledgements.
The work of A.D. was supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515 and Grant Korea NRF-2019R1C1C1010050. A.D. gratefully acknowledges the hospitality at APCTP during the completion of this work. SD acknowledges SERB DST Government of India grant CRG/2019/006147 for supporting the project.
|
2301.13380 | Automated Time-frequency Domain Audio Crossfades using Graph Cuts | The problem of transitioning smoothly from one audio clip to another arises
in many music consumption scenarios, especially as music consumption has moved
from professionally curated and live-streamed radios to personal playback
devices and services. We present the first steps toward a new method of
automatically transitioning from one audio clip to another by discretizing the
frequency spectrum into bins and then finding transition times for each bin. We
phrase the problem as one of graph flow optimization; specifically
min-cut/max-flow. | Kyle Robinson, Dan Brown | 2023-01-31T03:05:48Z | http://arxiv.org/abs/2301.13380v1 | # Automated time-frequency domain audio crossgrades using graph cuts
###### Abstract
The problem of transitioning smoothly from one audio clip to another arises in many music consumption scenarios, especially as music consumption has moved from professionally curated and live-streamed radios to personal playback devices and services. Classically, transitioning from one song to another has been reliant on either pre-mixed transitions on recorded digital or physical media, hardware or software crossfading on the playback device, or professional transitions by a host or disk jockey (DJ). While options for software crossfading are ubiquitous on music streaming platforms and media players alike, these transitions pale in quality when compared to those manually applied by an audio engineer or DJ who can harmonically and rhythmically align tracks--and importantly--manually apply equalizer (EQ) filters during transitions. The application of EQ filters specifically allows for different transitions in different regions of the audio spectrum. For example, the bass register of one track can be made to replace the bass register of another track before transitioning the higher frequencies. Typically, the task of deciding how and where to apply transitions in the frequency domain has been completed manually using a limited number of EQ filters.
There is much research on creating, sorting, and extending playlists so as to have tracks naturally flow into each other, as well as on determining optimal times to transition between similar tracks [1, 3, 4, 6, 8]. Both of these research areas play a key role in synthesising a human DJ. To our knowledge, however, all of these approaches still rely on classical methods of transitioning tracks using amplitude in the time domain (crossfading).
Through the application of an existing visual texture extension algorithm borrowed from computer vision, we present the first steps toward a new method of automatically transitioning from one audio clip to another by discretizing the frequency spectrum into bins and then finding transition times for each bin [5].
We begin by phrasing the problem of transitioning from one song to another as a graph optimization problem: the graph represents the two songs in the transition range, and a cut happens when we transition from one song to the other at a particular time point. To obtain these representations we first apply a short-time Fourier transform (STFT) to each song, and then convert the resulting complex time-frequency mapped amplitude values into real decibel values. In order to align the tracks we apply rudimentary tempo matching and beat alignment using the libROSA Python library, and overlap the tracks by a number of beats [7]. We call the resulting STFT-transformed data of the first and second song's overlapped segments matrix \(A\) and matrix \(B\), respectively, as seen in Figure 2.
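A minimal sketch of this preprocessing stage with libROSA is given below; the file names, hop length, and the exact choice of overlapped segments are placeholders rather than details of our implementation.

```python
import numpy as np
import librosa

HOP = 512
N_BEATS = 64   # overlap length in beats, as in Figure 1

y_a, sr = librosa.load("song_a.wav", sr=None, mono=True)
y_b, _ = librosa.load("song_b.wav", sr=sr, mono=True)

# Rudimentary tempo matching: stretch song B to song A's tempo, then re-track its beats.
tempo_a, beats_a = librosa.beat.beat_track(y=y_a, sr=sr, hop_length=HOP)
tempo_b, _ = librosa.beat.beat_track(y=y_b, sr=sr, hop_length=HOP)
y_b = librosa.effects.time_stretch(y_b, rate=float(tempo_a) / float(tempo_b))
_, beats_b = librosa.beat.beat_track(y=y_b, sr=sr, hop_length=HOP)

# Overlap the last N_BEATS beats of song A with the first N_BEATS beats of song B.
start_a = librosa.frames_to_samples(beats_a[-N_BEATS], hop_length=HOP)
end_b = librosa.frames_to_samples(beats_b[N_BEATS - 1], hop_length=HOP)
seg_a, seg_b = y_a[start_a:], y_b[:end_b]
n = min(len(seg_a), len(seg_b))

# Complex STFTs of the overlapped segments; dB magnitudes feed the loss function below.
A = librosa.stft(seg_a[:n], hop_length=HOP)    # shape (n_freq, n_time), complex
B = librosa.stft(seg_b[:n], hop_length=HOP)
A_db = librosa.amplitude_to_db(np.abs(A))
B_db = librosa.amplitude_to_db(np.abs(B))
```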
Figure 1: This spectrogram shows overlapped segments of two music tracks after being combined and reconstructed along a per-frequency seam (bright yellow). The tracks were beat and tempo matched, then overlapped by 64 beats.
Next, we define a simple loss function:
\[w_{i,j}^{k,l}(A,B)=||A_{i,j}-B_{i,j}||+||A_{k,l}-B_{k,l}|| \tag{1}\]
We apply the loss function to each pair of adjacent time-frequency bins and use the resulting values to assign weights to edges in an undirected graph with dimensions equal to those of \(A\) and \(B\). Finally, the resulting graph's left-most nodes are anchored to a source node, and its right-most nodes are anchored to a sink node. Figure 3 shows a representation of the completed flow graph. In order to obtain a min-cut, we apply the Boykov-Kolmogorov algorithm [2]. The indices of this min-cut are the seam at which each frequency bin transitions.
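The construction of the flow graph and the extraction of the per-frequency seam from the min-cut partition can be sketched as follows, here using networkx's Boykov-Kolmogorov implementation; the variable names are ours and the neighbourhood structure is a simplified version of the graph described above.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.flow import boykov_kolmogorov

def find_seam(A_db, B_db):
    """Per-frequency transition indices from a source/sink min-cut (sketch)."""
    n_freq, n_time = A_db.shape
    diff = np.abs(A_db - B_db)              # ||A_{i,j} - B_{i,j}|| per bin
    G = nx.DiGraph()

    def link(u, v, w):                      # undirected edge as two directed arcs
        G.add_edge(u, v, capacity=w)
        G.add_edge(v, u, capacity=w)

    for f in range(n_freq):
        for t in range(n_time - 1):         # neighbours along time, Eq. (1)
            link((f, t), (f, t + 1), diff[f, t] + diff[f, t + 1])
        if f + 1 < n_freq:
            for t in range(n_time):         # neighbours along frequency
                link((f, t), (f + 1, t), diff[f, t] + diff[f + 1, t])
        G.add_edge("src", (f, 0), capacity=float("inf"))            # anchor left column
        G.add_edge((f, n_time - 1), "snk", capacity=float("inf"))   # anchor right column

    _, (source_side, _) = nx.minimum_cut(G, "src", "snk", flow_func=boykov_kolmogorov)
    # transition time of each frequency bin = last column still taken from song A
    return np.array([max(t for t in range(n_time) if (f, t) in source_side)
                     for f in range(n_freq)])
```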
In order to apply the found transition to the audio tracks, we concatenate the complex time-frequency song representations found earlier along the seam, and apply an inverse STFT to obtain the final audio transition seen in Figure 1.
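A corresponding sketch of the seam application and resynthesis step (again with assumed variable names) is:

```python
import numpy as np
import librosa

def apply_seam(A, B, seam, hop_length=512):
    """Take song A up to (and including) each bin's seam index and song B after it,
    then resynthesize the transition with an inverse STFT."""
    out = B.copy()
    for f in range(A.shape[0]):
        out[f, : seam[f] + 1] = A[f, : seam[f] + 1]
    return librosa.istft(out, hop_length=hop_length)
```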
The work here presents an initial foray into automatically transitioning between songs in the time-frequency domain. The loss function described in Eq. (1) does not fully characterize the inherent qualities found in music, but there is good reason to believe that such a cost function can be found through further development. On harmonically similar tracks with similar tempi, the current implementation produces acoustically pleasing results.
|
2309.07058 | QCD phase diagram and equation of state in background electric fields | The phase diagram and the equation of state of QCD is investigated in the
presence of weak background electric fields by means of continuum extrapolated
lattice simulations. The complex action problem at nonzero electric field is
circumvented by a novel Taylor expansion, enabling the determination of the
linear response of the thermal QCD medium to constant electric fields -- in
contrast to simulations at imaginary electric fields, which, as we demonstrate,
involve an infrared singularity. Besides the electric susceptibility of QCD
matter, we determine the dependence of the Polyakov loop on the field strength
to leading order. Our results indicate a plasma-type behavior with a negative
susceptibility at all temperatures, as well as an increase in the transition
temperature as the electric field grows. | Gergely Endrodi, Gergely Marko | 2023-09-13T16:18:30Z | http://arxiv.org/abs/2309.07058v1 | # QCD phase diagram and equation of state in background electric fields
###### Abstract
The phase diagram and the equation of state of QCD is investigated in the presence of weak background electric fields by means of continuum extrapolated lattice simulations. The complex action problem at nonzero electric field is circumvented by a novel Taylor expansion, enabling the determination of the linear response of the thermal QCD medium to constant electric fields - in contrast to simulations at imaginary electric fields, which, as we demonstrate, involve an infrared singularity. Besides the electric susceptibility of QCD matter, we determine the dependence of the Polyakov loop on the field strength to leading order. Our results indicate a plasma-type behavior with a negative susceptibility at all temperatures, as well as an increase in the transition temperature as the electric field grows.
## I Introduction
The phase structure of Quantum Chromodynamics (QCD) in the presence of background electromagnetic fields is an essential attribute of the fundamental theory of quarks and gluons and, accordingly, a subject of active theoretical research. The electromagnetic response of the QCD medium is relevant for a range of physical situations, e.g. the phenomenology of heavy-ion collisions, the description of neutron star interiors or the evolution of our universe in its early stages, see the reviews [1; 2]. If in these settings the electromagnetic fields are sufficiently long-lived compared to the strong scale, it is appropriate to consider QCD matter in a background magnetic or electric field in equilibrium.
Before equilibration, electric fields \(E\) induce a dynamical response via the electrical conductivity of the medium [3]. The subsequently emerging equilibrium necessarily involves - in contrast to the case of magnetic fields \(B\) - an inhomogeneous charge distribution \(n(x)\) in the thermal medium while having constant temperature \(T\) everywhere [4]. The distribution is uniquely fixed by the requirement that pressure gradients and electric forces cancel each other and thus no currents flow [5]. The equilibrium system is therefore described by a _local canonical_ statistical ensemble, where \(n(x)\) is held fixed. It differs from the grand canonical ensemble parameterized by chemical potentials, employed usually at \(E=0\). This aspect renders comparisons between equilibrium systems at \(E>0\) and \(E=0\), e.g. by means of lattice simulations, problematic.
Moreover, the proper definition of the equilibrium state at \(E>0\) requires infrared regularization (e.g. a finite spatial volume \(V\)) that prevents charges to be accelerated to infinity. As we have demonstrated recently within perturbative QED [6], the \(E\to 0\) and \(V\to\infty\) limits of this setup do not commute at nonzero temperature. This renders approaches based on Schwinger's exact \(E>0\) infinite-volume propagator [7] and infrared-regularized weak-field expansions in the manner of Weldon [8] inherently different. For a certain physical setting, the boundary conditions determine which is the appropriate limit to consider. The generalization of these ideas to the case of QCD enables one to explore the impact of background electric fields on strongly interacting matter as well as the associated phase diagram: our objectives in the present letter.
The impact of magnetic fields on the QCD crossover [9; 10] and the corresponding phase diagram is well understood and has been studied extensively on the lattice [11; 12; 13; 14; 15], as well as within models and effective theory approaches (for a recent review, see Ref. [16]). In contrast, electric fields render the QCD action complex, hindering standard lattice simulations. Alternative approaches include Taylor-expansions [17; 18; 19; 20], calculations at imaginary electric fields [21; 22; 23; 24; 25; 26] and simulations with electric fields that couple to the isospin charge of quarks [27]. Still, there are no existing results for the QCD equation of state nor the phase diagram. The latter has only been studied within effective theories like the linear \(\sigma\) model [28], variants of the Nambu-Jona-Lasinio (NJL) model [29; 30; 31; 32] and the Euler-Heisenberg effective action [33]. These calculations are all based on the Schwinger propagator.
In this letter, we determine the QCD equation of state and the phase diagram on the lattice for the first time for weak background electric fields. The complex action problem is circumvented via a Taylor-expansion: this corresponds to the Weldon-type regularization of the electrically polarized thermal medium and is the proper description of a finite system, where equilibration takes place in the presence of a weak electric field. The expansion is based on the method we developed in Refs. [34; 6], and resembles the analogous approach for background magnetic fields [35; 36; 37]. Besides the leading coefficient - the electric susceptibility of QCD matter - we also determine the leading series of the Polyakov loop. Using this observable, we construct the phase diagram and demonstrate that the transition temperature increases as \(E\) grows - contrary to existing model predictions, e.g. [31]. Finally, we demonstrate that lattice simulations at nonzero imaginary electric fields cannot be used to directly calculate the electric susceptibility due to the singular change of ensembles between \(E=0\) and \(iE\neq 0\). Some of our preliminary results have already been presented in Ref. [34].
## II Lattice setup
QCD matter in thermal equilibrium is a medium that can be polarized by weak background electromagnetic fields. The associated static linear response is characterized by the electric and magnetic susceptibilities (we employ the same notation as in Ref. [6]). These are defined via the matter free energy density \(f\),
\[\xi_{b}=-\left.\frac{\mathrm{d}^{2}f}{\mathrm{d}(eE)^{2}}\right|_{E=0}\,, \qquad\chi_{b}=-\left.\frac{\mathrm{d}^{2}f}{\mathrm{d}(eB)^{2}}\right|_{B=0}\,. \tag{1}\]
Here, the subscript \(b\) indicates that both susceptibilities contain ultraviolet divergent terms that must be subtracted via additive renormalization, see below. The elementary charge \(e\) is included so that we can work with the renormalization group invariants \(eE\) and \(eB\).
The matter free energy density can be rewritten using the partition function \(\mathcal{Z}\) of the system. Using the rooted staggered formalism of lattice QCD, it is given by the Euclidean path integral over the gluon links \(U\),
\[\mathcal{Z}=\int\mathcal{D}U\,e^{-\beta S_{g}}\prod_{f}\det[\not{D}(q_{f})+m_{ f}]^{1/4}\,, \tag{2}\]
where \(\beta=6/g^{2}\) is the inverse gauge coupling and \(m_{f}\) denotes the quark masses with \(f=u,d,s\) running over the quark flavors. The simulations are done in a periodic spatial volume \(V=L^{3}\) with linear size \(L\). Note that \(\mathcal{Z}\) corresponds to the grand canonical ensemble; its relation to the canonical one at \(E>0\) is discussed in App. A. In Eq. (2), \(S_{g}\) is the gluon action (in our discretization, the tree-level improved Symanzik action) and \(\not{D}_{f}\) is the staggered Dirac operator (including a twofold stout smearing of the links) that contains the quark charges \(q_{u}/2=-q_{d}=-q_{s}=e/3\). The quark masses are set to their physical values as a function of the lattice spacing \(a\)[38]. Further details of the action and of our simulation algorithm are given in Refs. [12; 39].
The electromagnetic vector potential \(A_{\nu}\) enters the Dirac operator in the form of temporal parallel transporters \(u_{\nu,f}=\exp(iaq_{f}A_{\nu})\) multiplying the gluon links \(U_{\nu}\). We choose a gauge where \(A_{0}(x_{1})\) represents the electric field and \(A_{2}(x_{1})\) the magnetic field (both pointing in the \(x_{3}\) direction). While magnetic fields are identical in Minkowski and Euclidean space-times, the vector potential relevant for the electric field undergoes a Wick rotation so that \(A_{4}=iA_{0}\), similarly to the case of a chemical potential \(\mu\). Finally we mention that in our setup, quarks do not couple to dynamical photons but only to the external gauge field. The independent thermodynamic variable is the field \(E\) that enters the Dirac operator, analogously to the situation for magnetic fields [40].
## III Observables
As we demonstrated in Ref. [6], the susceptibilities of Eq. (1) are related to derivatives of the electromagnetic vacuum polarization tensor with respect to spatial momenta. For our gauge choice, these relations read in terms the Euclidean polarization tensor \(\Pi_{\mu\nu}\),
\[\xi_{b}=-\frac{1}{2}\left.\frac{\partial^{2}\Pi_{44}(k)}{\partial k_{1}^{2}} \right|_{k=0},\quad\chi_{b}=\frac{1}{2}\left.\frac{\partial^{2}\Pi_{22}(k)}{ \partial k_{1}^{2}}\right|_{k=0}, \tag{3}\]
with a spatial momentum \(k=(k_{1},0,0,0)\). In other words, the zero momentum limit is considered at vanishing time-like frequency, reflecting the static nature of the susceptibilities. The negative sign for \(\xi_{b}\) in (3) appears due to the Wick rotation of the electric field. We highlight that the equilibrium systems at different values of \(E\) exhibit different charge profiles \(n(x_{1})\), and this implicit \(E\)-dependence is taken into account properly in Eq. (3) for the calculation of \(\xi_{b}\)[6]. In fact, without this contribution, \(\xi_{b}\) would diverge in the \(k_{1}\to 0\) limit.
The vacuum polarization tensor is defined as the correlator
\[\Pi_{\mu\nu}(k)=\int\mathrm{d}^{4}x\,e^{ikx}\left\langle j_{\mu}(x)j_{\nu}(0) \right\rangle\,, \tag{4}\]
of the electromagnetic current \(j_{\mu}=\sum_{f}\frac{q_{f}}{e}\bar{\psi}_{f}\gamma_{\mu}\psi_{f}\), for which we use the conserved (one-link) staggered vector current. It is convenient to evaluate (3) in coordinate space, where the bare susceptibilities become [6; 34]
\[\xi_{b}=-\big{\langle}G_{44}^{(2)}\big{\rangle},\qquad\chi_{b}=\big{\langle} G_{22}^{(2)}\big{\rangle}, \tag{5}\]
containing the second moment of a partially zero-momentum projected two-point function
\[G_{\mu\nu}^{(2)} =\int_{0}^{L/2}\!\mathrm{d}x_{1}\,x_{1}^{2}\,G_{\mu\nu}(x_{1}), \tag{6}\] \[G_{\mu\nu}(x_{1}) =\int\mathrm{d}x_{2}\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}\,\,j_{\mu} (x)j_{\nu}(0)\,. \tag{7}\]
The Grassmann integral over quark fields is understood to be implicitly carried out on the right hand side of the last equation.
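In practice, Eqs. (5)-(6) reduce to a second-moment integral of the measured correlators; a minimal numerical sketch, assuming the zero-momentum projected correlators are available as arrays sampled at separations \(x_{1}\), reads:

```python
import numpy as np

def bare_susceptibilities(G44, G22, x1):
    """Second moments of Eq. (6) giving the bare susceptibilities of Eq. (5).

    G44, G22 : zero-momentum projected correlators measured at the separations x1,
    x1       : array of separations covering 0 <= x1 <= L/2.
    """
    xi_b = -np.trapz(x1 ** 2 * G44, x1)   # electric, Eq. (5)
    chi_b = np.trapz(x1 ** 2 * G22, x1)   # magnetic, Eq. (5)
    return xi_b, chi_b
```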
Both susceptibilities undergo additive renormalization. This originates from the multiplicative divergence in the electric charge \(e\)[41; 36; 42]. Being temperature-independent, the divergence cancels in
\[\xi=\xi_{b}(T)-\xi_{b}(T=0)\,,\quad\chi=\chi_{b}(T)-\chi_{b}(T=0)\,, \tag{8}\]
which sets \(\xi=\chi=0\) at zero temperature. In fact, at \(T=0\) Lorentz invariance ensures that \(\big{\langle}G_{22}^{(2)}\big{\rangle}=\big{\langle}G_{44}^{(2)}\big{\rangle}\), implying that the bare magnetic and electric susceptibilities coincide up to a minus sign. To renormalize the electric susceptibility, we can therefore employ the existing results for \(\chi_{b}(T=0)\) from Ref. [36].
Next we consider the bare Polyakov loop operator,
\[P_{b}=\frac{1}{V}\int\mathrm{d}^{3}\mathbf{x}\,\operatorname{Re}\operatorname{ Tr}\,\prod_{x_{1}}U_{4}(x)\,. \tag{9}\]
Its expectation value is related to the free energy of a static, electrically neutral color charge and is often taken as a measure of deconfinement. Just as for \(\xi_{b}\), the contribution of the equilibrium charge profile needs to be taken into account for the \(E\)-dependence of the Polyakov loop as well. As we show in App. A, the proper second-order expansion of \(\left\langle P_{b}\right\rangle\) is given by the correlator
\[\varphi_{E}^{n}\equiv\left.\frac{\mathrm{d}^{2}\!\left\langle P_{b}\right\rangle ^{n}}{\mathrm{d}(eE)^{2}}\right|_{E=0}=\frac{V}{T}\left[-\big{\langle}P_{b}\,G _{44}^{(2)}\big{\rangle}+\big{\langle}P_{b}\big{\rangle}\!\big{\langle}G_{44}^ {(2)}\big{\rangle}\right]\,, \tag{10}\]
where the superscript \(n\) on the left denotes that the derivative is evaluated along the equilibrium condition specified by the local charge profiles. Analogously, the magnetic derivative of \(\left\langle P_{b}\right\rangle\) can be obtained by replacing \(-G_{44}^{(2)}\) by \(G_{22}^{(2)}\) in Eq. (10), although in that case nontrivial charge distributions do not appear.
The bare Polyakov loop is subject to multiplicative renormalization [43],
\[P(a,T,E)=P_{b}(a,T,E)\cdot\left(\frac{P_{\star}}{P_{b}(a,T_{\star},E=0)}\right) ^{T_{\star}/T}\,, \tag{11}\]
where the renormalization factor is independent of the background field and has been determined for our lattice spacings in Ref. [44]. This renormalization fixes \(P=P_{\star}\) at \(T=T_{\star}\) and \(E=B=0\). In our renormalization scheme we choose \(T_{\star}=162\) MeV and \(P_{\star}=1\).
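The renormalization prescription of Eq. (11) amounts to the following one-line rescaling (a sketch with our own argument names; temperatures in GeV):

```python
def renormalize_polyakov(P_bare, P_bare_star, T, T_star=0.162, P_star=1.0):
    """Multiplicative renormalization of the Polyakov loop, Eq. (11).

    P_bare      : bare loop at temperature T (GeV) and field E,
    P_bare_star : bare loop at T_star and E = 0 on the same lattice spacing.
    """
    return P_bare * (P_star / P_bare_star) ** (T_star / T)
```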
## IV Results: Susceptibility
We have measured the zero-momentum projected correlator (7) for a broad range of temperatures on \(N_{t}=6,8,10\) and \(12\) lattice ensembles. Finite volume effects were checked using \(16^{3}\times 6\) and \(24^{3}\times 6\) lattices. We report on the details of the measurements and the analysis in App. B. The correlator is convolved with the quadratic kernel according to Eq. (6) to find the bare electric susceptibility \(\xi_{b}\), and its renormalization (8) is carried out by subtracting the zero-temperature contribution.
The negative of the so obtained \(\xi\) is plotted in the upper panel of Fig. 1. A continuum extrapolation is performed via a multi-spline fit of all data points, taking into account \(\mathcal{O}(a^{2})\) lattice artefacts. The systematic error of the fit is estimated by varying the spline node points and including \(\mathcal{O}(a^{4})\) discretization errors in the fit at low temperatures. For all temperatures we observe \(\xi<0\), translating to an electric permittivity below unity - a characteristic feature of plasmas [45]. At high \(T\), our results may be compared to the high-temperature limit calculated for non-interacting quarks of mass \(m\)[6],
\[\xi^{\mathrm{free}}\xrightarrow{T\to\infty}-\sum_{f}(q_{f}/e)^{2}\frac{N_{c} }{12\pi^{2}}\cdot\left[\log\frac{T^{2}\pi^{2}}{m^{2}}-2\gamma_{E}-1\right]\,, \tag{12}\]
where \(N_{c}=3\) is the number of colors [46]. In full QCD, the quark mass is replaced by a QED renormalization scale \(\mu_{\mathrm{QED}}\) that can be determined at \(T=0\) and is found to be \(\mu_{\mathrm{QED}}=115(6)\) MeV [36], close to the mass of the lightest charged hadron i.e. the pion. Moreover, QCD corrections are included by taking into account \(\mathcal{O}(\alpha_{s})\) effects in the prefactor, the QED \(\beta\)-function [47; 36]. The associated thermal scale is varied between \(\pi T\) and \(4\pi T\) for error estimation. The so obtained curve lies very close to our results at high temperature, as visible in the lower panel of Fig. 1, where we also show the corresponding results for \(\chi\) from Ref. [36].
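For reference, the free-field limit of Eq. (12), with the quark mass replaced by \(\mu_{\mathrm{QED}}\) and the \(\mathcal{O}(\alpha_{s})\) correction to the prefactor omitted, can be evaluated as in the following sketch:

```python
import numpy as np

def xi_free_highT(T, mu_qed=0.115, Nc=3):
    """High-temperature limit of Eq. (12) with m -> mu_QED (both in GeV);
    the O(alpha_s) correction to the prefactor is not included here."""
    charges_sq = (2 / 3) ** 2 + (1 / 3) ** 2 + (1 / 3) ** 2   # (q_f / e)^2 for u, d, s
    return -charges_sq * Nc / (12 * np.pi ** 2) * (
        np.log(T ** 2 * np.pi ** 2 / mu_qed ** 2) - 2 * np.euler_gamma - 1)
```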
## V Results: Phase Diagram
Next we turn to the Polyakov loop. Its leading expansion is given by Eq. (10), containing the correlator of the bare observable with \(-G_{44}^{(2)}\). This quantity is plotted in the upper panel of Fig. 2 for our \(N_{t}=6\) lattices, revealing negative values for the complete range of temperatures, i.e. a reduction of the Polyakov loop by the electric field.
Figure 1: Upper panel: the negative of the renormalized electric susceptibility as a function of the temperature for four lattice spacings (colored symbols) and a continuum extrapolation (orange band). Lower panel: continuum extrapolated magnetic (green) and electric (orange) susceptibilities (solid) compared to leading-order perturbation theory (dashed).
Finite volume effects are found to be small, although the results at low temperature have large statistical uncertainties. Using the results for the Polyakov loop at \(E=0\)[44] and the multiplicative renormalization factor from Eq. (11), we construct the \(E\)-dependence of \(\langle P\rangle\), see the lower panel of Fig. 2. The Polyakov loop is known to exhibit a smooth temperature-dependence, so that a precise determination of its inflection point is cumbersome already at \(E=0\). As an alternative, we associate the transition temperature \(T_{c}\) with the point where \(\langle P\rangle=1\) holds. Defined in this manner, the lower panel of Fig. 2 clearly shows that \(T_{c}\) is increased by \(E\).
We repeat this analysis for all four lattice spacings. Our results for the transition temperature are shown in Fig. 3, confirming the significant enhancement of \(T_{c}\) as the electric field grows. We perform the continuum extrapolation by a quadratic fit of \(T_{c}(E)\) taking into account \(\mathcal{O}(a^{2})\) lattice artefacts. To estimate the systematic error, we vary the fit range and also allow a quartic term in the fit. The fits are found to be stable for the region \(E\lesssim 0.3\ \text{GeV}^{2}\)[48]. The curvature of the transition line is found to be
\[\kappa_{E}\equiv\left.\frac{\partial^{2}T_{c}(E)}{\partial(eE)^{2}}\right|_{E= 0}=0.37(9)\ \text{GeV}^{-3}\,. \tag{13}\]
Furthermore, we find the transition to get stronger as \(E\) grows, revealed by an enhancement of the slope of the Polyakov loop as a function of \(T\), see Fig. 2. However, due to the large uncertainties at low temperatures, we cannot make a quantitative statement about this aspect.
## VI Results: Imaginary Electric Fields
Finally, we consider lattice simulations at constant imaginary electric fields \(iE\). In a finite periodic volume at nonzero temperature, the allowed field values are quantized as \(ieE=6\pi T/L\cdot N_{e}\) with the 'flux' quantum \(N_{e}\in\mathds{Z}\)[49]. This setup does not correspond to the analytic continuation of the local canonical ensemble as described in the introduction. Nevertheless, it involves a global constraint: the total electric charge in the periodic volume vanishes. As a consequence, this setup is independent of the global imaginary chemical potential \(i\mu\). Indeed, including any \(i\mu\neq 0\) can be canceled in the gauge field by a mere coordinate translation \(x_{1}\to x_{1}-\mu/E\). This is in stark contrast to the situation at \(E=0\), where a dependence on \(i\mu\) is naturally present.
To discuss this issue, we neglect gluonic interactions in the following. In this simplified setting, we can calculate the free energy density directly via exact diagonalization of the Dirac operator [50]. On the right side of Fig. 4 we show the results for \(\Delta f=f-f(E=\mu=0)\) obtained on a \(200^{3}\times 20\) lattice with quark mass \(m/T=0.08\). As expected, \(f\) is found to be independent of the imaginary chemical potential in the whole range \(0\leq i\mu\leq\pi T\) at any \(iE\neq 0\). The comparison to a larger volume \(300^{3}\times 20\) shows that the smallest allowed electric field value approaches zero in the thermodynamic limit, but \(\lim_{iE\to 0}f(iE)\neq f(iE=0)\). Instead, the data points rather accumulate towards the average of \(f\) over all possible imaginary chemical potential values - i.e. a canonical setup where the total charge is constrained to zero. Altogether, we conclude that the dependence of \(f\) on \(iE\) is singular at \(E=0\) in the thermodynamic limit, rendering simulations with homogeneous imaginary electric fields unsuited for the evaluation of \(\xi\).
Figure 2: Upper panel: the leading expansion coefficient of the bare Polyakov loop as a function of the temperature as obtained on our \(24^{3}\times 6\) (red) and \(16^{3}\times 6\) (green) ensembles. Lower panel: renormalized Polyakov loop at nonzero electric fields, constructed from the leading Taylor series. The crossing point with the dashed yellow line is identified with \(T_{c}\).
Figure 3: Transition temperature as a function of the electric field for different lattice spacings (colored symbols) and a continuum extrapolation (yellow band). Higher-order effects in \(eE\) become non-negligible for \(eE\gtrsim 0.3\ \text{GeV}^{2}\), indicated by the dashed section of the fits.
In addition, the left side of Fig. 4 shows \(\Delta f\) for oscillatory imaginary electric fields with the profile \(E(x_{1})=E\sqrt{2}\cos(2\pi nx_{1}/L)\). In this case, the role of the infrared regulator is played by the wave number \(n\) and not by the volume. Moreover, here \(E\) is a continuous variable but \(n\in\mathds{Z}\) is discrete. This setup does not fix the overall charge and, therefore, maintains the dependence of \(f\) on \(i\mu\). Indeed, the results reveal a continuous behavior as a function of \(iE\) and \(i\mu\). However, as visible in the plot, the results again approach a singular behavior as the infrared regulator is removed: the curves collapse to a set of \(i\mu\)-independent node points approaching the \(iE=0\) axis. In particular, the curvature of \(f\) with respect to \(iE\) diverges for \(n\to 0\). Thus, the homogeneous limit of the setup with oscillatory imaginary fields reproduces what we have already seen for the homogeneous case.
## VII Discussion
In this letter we studied the thermodynamics of QCD at nonzero background electric fields \(E\) via lattice simulations with physical quark masses. To avoid the complex action problem at \(E>0\), we employed a leading-order Taylor-expansion. This approach is more complicated than the analogous expansion in a chemical potential, because the impact of \(E\) on the equilibrium charge distribution needs to be taken into account [6]. Our results, measured on four different lattice spacings and extrapolated to the continuum limit, demonstrate two main effects. First, that QCD matter is described by a negative electric susceptibility at all temperatures. Second, that the QCD transition, as defined in terms of the Polyakov loop, is shifted to higher temperatures as the electric field grows, leading to the phase diagram in Fig. 3. Furthermore, we showed that lattice simulations employing imaginary electric fields cannot be used to directly assess these aspects due to a singular behavior around \(E=0\).
We mention that the susceptibility and the phase diagram are both encoded by the thermal contributions to the real part of the free energy density. These are therefore not impacted by Schwinger pair creation, which is related to the imaginary part of \(f\) and is known to be independent of the temperature [51; 52]. In other words, the equilibrium charge profile and the polarization of the medium are related to the distribution of thermal charges and not of those created from the vacuum via the Schwinger effect.
Finally we point out that calculations within the PNJL model [31], employing the Schwinger propagator, predict the opposite picture for the phase diagram as compared to our findings. Whether the same tendency holds for the Weldon-type regularization within this model, is an open question calling for further study. Besides this aspect, the PNJL model is known to miss important gluonic effects in the presence of electromagnetic fields and fails to correctly describe the phase diagram at \(B>0\)[16]. It would be interesting to see whether improvements that were found to correct these shortcomings of the model in the magnetic setting [53] also work in the \(E>0\) case.
_Acknowledgments_. This research was funded by the DFG (SFB TRR 211 - project number 315477589). The authors are grateful to Andrei Alexandru, Bastian Brandt, David Dudal, Duifje van Egmond and Urko Reinosa for enlightening discussions.
## Appendix A Expansion of the Polyakov loop
Here we construct the Taylor expansion of the Polyakov loop expectation value in the background electric field. We generalize the analogous calculation for the free energy density [6] to the expectation value \(\langle P_{b}\rangle\).
In the presence of the electric field, the equilibrium charge density profile \(n(x_{1})\) varies in the \(x_{1}\) direction (the coordinate system is chosen so that \(-L/2\leq x_{1}\leq L/2\)). We consider the implications of such an equilibrium using a homogeneous background field generated by the vector potential \(A_{0}(x_{1})=Ex_{1}\), regularized by the finite system size (assuming open boundary conditions). Moreover, the field is assumed to be weak so that the system can be thought of as a collection of subsystems at different \(x_{1}\) with approximately constant density. These are characterized by a canonical free energy density \(f\) parameterized by the local density, instead of the usual grand canonical free energy density \(\Omega\), parameterized by the chemical potential. The latter is given in terms of the lattice partition function (2) as \(\Omega=-T/V\log\mathcal{Z}\). The two free energy definitions are related by a local Legendre
Figure 4: Free energy density as a function of homogeneous (right side) and oscillatory (left side) imaginary electric fields. The results for different imaginary chemical potentials correspond to the set of curves in the left (\(i\mu\) grows from \(0\) to \(\pi T\) from the bottom to the top), while they lie on top of each other on the right.
transformation [6],
\[f=\frac{1}{L}\int\mathrm{d}x_{1}\left[\Omega-\mu\frac{\partial\Omega}{\partial\mu} \right]_{\mu=\bar{\mu}(x_{1})} \tag{10}\]
The local chemical potential is fixed by the requirement that diffusion and electric forces cancel, i.e. \(\bar{\mu}(x_{1})=-eEx_{1}\). This choice corresponds to a globally neutral system, where the volume average of the chemical potential vanishes.
Including the bare Polyakov loop in the action with a coefficient \(\alpha\) and taking the derivative of (10) with respect to \(\alpha\) at \(\alpha=0\) results in
\[\left\langle P_{b}\right\rangle^{n}=\frac{1}{L}\int\mathrm{d}x_{1}\left[ \left\langle P_{b}\right\rangle-\mu\,\frac{\partial\langle P_{b}\rangle}{ \partial\mu}\right]_{\mu=\bar{\mu}(x_{1})}\,, \tag{11}\]
giving the expectation value of the Polyakov loop in the local canonical ensemble. Taking the second total derivative of (11) with respect to \(eE\), and evaluating it at \(E=0\) (implying \(\bar{\mu}=0\)), we obtain for (10)
\[\varphi_{E}^{n}=\frac{1}{L}\int\mathrm{d}x_{1}\left[\varphi_{E}-\varphi_{\mu} \cdot x_{1}^{2}\right]\,, \tag{12}\]
with
\[\varphi_{E}=\left.\frac{\partial^{2}\langle P_{b}\rangle}{\partial(eE)^{2}} \right|_{E=0},\qquad\varphi_{\mu}=\left.\frac{\partial^{2}\langle P_{b}\rangle }{\partial\mu^{2}}\right|_{\mu=0}\,. \tag{13}\]
The Polyakov loop operator \(P_{b}\) does not depend explicitly on the electric field nor on the chemical potential. The derivatives of \(\langle P_{b}\rangle\) therefore merely involve the derivative of the weight in the path integral (2).
Let us first discuss \(\varphi_{\mu}\). The chemical potential multiplies the volume integral of \(j_{4}\) in the Euclidean action (before integrating out fermions), therefore
\[\varphi_{\mu}=\int\mathrm{d}^{4}y\,\mathrm{d}^{4}z\left[\langle P_{b}j_{4}(y )j_{4}(z)\rangle-\langle P_{b}\rangle\left\langle j_{4}(y)j_{4}(z)\right\rangle \right]\,, \tag{14}\]
where we used that \(\left\langle j_{4}(y)\right\rangle=0\) due to parity symmetry. Substituting the integration variable \(z\) by \(u=z-y\), exploiting the translational invariance of the correlators and using the definition (7) of the projected correlator, we arrive at
\[\varphi_{\mu}=\frac{V}{T}\int\mathrm{d}u_{1}\left[\langle P_{b}G_{44}(u_{1}) \rangle-\langle P_{b}\rangle\left\langle G_{44}(u_{1})\right\rangle\right]\,. \tag{15}\]
Next, we turn to \(\varphi_{E}\). This time, the Euclidean action contains the four-volume integral of \(ieA_{4}(y_{1})\cdot j_{4}(y_{1})\) with \(A_{4}(y_{1})=-iE\,y_{1}\). The second derivative therefore becomes
\[\varphi_{E}=\int\mathrm{d}^{4}y\,\mathrm{d}^{4}z\,y_{1}z_{1}\left[\langle P_ {b}j_{4}(y)j_{4}(z)\rangle-\langle P_{b}\rangle\left\langle j_{4}(y)j_{4}(z) \right\rangle\right]\,, \tag{16}\]
We proceed by rewriting \(y_{1}z_{1}=-(z_{1}-y_{1})^{2}/2+(y_{1}^{2}+z_{1}^{2})/2\) and use that the second term can be replaced by \(y_{1}^{2}\) as it multiplies a factor that is symmetric under the exchange of \(y_{1}\) and \(z_{1}\) under the integrals. With the same variable substitution as above, the use of translational invariance of the correlators this time gives
\[\varphi_{E}= -\frac{V}{T}\int\mathrm{d}u_{1}\frac{u_{1}^{2}}{2}\left[\langle P _{b}G_{44}(u_{1})\rangle-\langle P_{b}\rangle\left\langle G_{44}(u_{1}) \right\rangle\right]\] \[+\frac{1}{L}\int\mathrm{d}y_{1}\,y_{1}^{2}\cdot\varphi_{\mu}\,. \tag{17}\]
The second term in (17) is clearly divergent in the thermodynamic limit. Coming back to (12), we see that this infrared singular term exactly cancels in \(\varphi_{E}^{n}\), rendering the curvature of the Polyakov loop expectation value finite when evaluated along the equilibrium condition involving the inhomogeneous charge profile. Finally, employing the \(u_{1}\leftrightarrow-u_{1}\) symmetry of the \(E=0\) system, we end up with Eq. (10) of the main text, involving the second moment \(G_{44}^{(2)}\) defined in Eq. (6).
There is one more aspect regarding the dependence of the Polyakov loop on \(E\) that deserves mentioning. In lattice simulations with constant imaginary electric fields \(iE\) at nonzero temperature, the Polyakov loop was observed to develop a local phase proportional to the local vector potential \(\arg P_{b}(x_{1})\propto ieEx_{1}/T\)[26] (see also the analogous study [54]). This results from the preference of local Polyakov loops towards different center sectors for different \(x_{1}\). Together with the quantization condition for the imaginary electric flux, this corresponds to a topological behavior of the Polyakov loop angle winding around the lattice. Thus, the volume-averaged \(P_{b}\) vanishes in these \(iE\neq 0\) simulations, as opposed to its nonzero value at \(E=0\), showing the singular change of relevant ensembles as the electric field is switched on. Again, we conclude that simulations with homogeneous imaginary electric fields cannot be used for a direct comparison to the \(E=0\) system.
## Appendix B Correlators
Here we discuss the determination of the correlator \(G_{44}(x_{1})\) and the bare electric susceptibility in more detail. The density-density correlator is calculated using \(\mathcal{O}(1000)\) random sources located on three-dimensional \(x_{1}\)-slices of our lattices. We take into account both connected and disconnected contributions in the two-point function. More details regarding the implementation can be found in [35]. We note that the same two-point function is required, at zero temperature, for the calculation of the hadronic contribution to the muon anomalous magnetic moment, see e.g. Ref. [55].
In Fig. 5 we show the zero-momentum projected density-density correlator \(\langle G_{44}\rangle\) as a function of the coordinate at \(T\approx 176\) MeV. For comparison, the current-current correlator \(\langle G_{22}\rangle\), relevant for the magnetic response, is also included. A substantial difference is visible, reflecting the absence of Lorentz symmetry at this high temperature. It is interesting to note the systematic
oscillation of \(\langle G_{44}(x_{1})\rangle\) between even and odd distances - related to the use of staggered fermions - which is however absent for \(\langle G_{22}(x_{1})\rangle\).
To assess finite volume effects, we consider the convolution (6) and truncate it at a distance \(x_{1}^{\rm max}\). The so obtained truncated susceptibility approaches the full susceptibility at \(x_{1}^{\rm max}=L/2\) and is plotted in Fig. 6 for two different volumes. On the \(24^{3}\times 6\) lattice, the plot shows that contributions coming from the middle of the lattice volume are exponentially small, as expected. Moreover, at \(x_{1}^{\rm max}=L/2\), the results obtained on the two volumes agree with each other within errors. The dominant systematic error for the determination of our final result is found to come from the continuum extrapolation, which is discussed in the main text.
|
2302.14430 | Tracking Fast by Learning Slow: An Event-based Speed Adaptive Hand
Tracker Leveraging Knowledge in RGB Domain | 3D hand tracking methods based on monocular RGB videos are easily affected by
motion blur, while event camera, a sensor with high temporal resolution and
dynamic range, is naturally suitable for this task with sparse output and low
power consumption. However, obtaining 3D annotations of fast-moving hands is
difficult for constructing event-based hand-tracking datasets. In this paper,
we provided an event-based speed adaptive hand tracker (ESAHT) to solve the
hand tracking problem based on event camera. We enabled a CNN model trained on
a hand tracking dataset with slow motion, which enabled the model to leverage
the knowledge of RGB-based hand tracking solutions, to work on fast hand
tracking tasks. To realize our solution, we constructed the first 3D hand
tracking dataset captured by an event camera in a real-world environment,
figured out two data augment methods to narrow the domain gap between slow and
fast motion data, developed a speed adaptive event stream segmentation method
to handle hand movements in different moving speeds, and introduced a new
event-to-frame representation method adaptive to event streams with different
lengths. Experiments showed that our solution outperformed RGB-based as well as
previous event-based solutions in fast hand tracking tasks, and our codes and
dataset will be publicly available. | Chuanlin Lan, Ziyuan Yin, Arindam Basu, Rosa H. M. Chan | 2023-02-28T09:18:48Z | http://arxiv.org/abs/2302.14430v1 | Tracking Fast by Learning Slow: An Event-based Speed Adaptive Hand Tracker Leveraging Knowledge in RGB Domain
###### Abstract
3D hand tracking methods based on monocular RGB videos are easily affected by motion blur, while the event camera, a sensor with high temporal resolution and dynamic range, is naturally suitable for this task with its sparse output and low power consumption. However, obtaining 3D annotations of fast-moving hands is difficult, which hinders the construction of event-based hand-tracking datasets. In this paper, we provide an event-based speed adaptive hand tracker (ESAHT) to solve the hand tracking problem based on an event camera. We train a CNN model on a hand tracking dataset with slow motion, which enables the model to leverage the knowledge of RGB-based hand tracking solutions and to work on fast hand tracking tasks. To realize our solution, we constructed the first 3D hand tracking dataset captured by an event camera in a real-world environment, developed two data augmentation methods to narrow the domain gap between slow and fast motion data, devised a speed adaptive event stream segmentation method to handle hand movements at different speeds, and introduced a new event-to-frame representation method adaptive to event streams with different lengths. Experiments showed that our solution outperformed RGB-based as well as previous event-based solutions in fast hand tracking tasks, and our code and dataset will be publicly available.
## 1 Introduction
Hand tracking, or hand pose estimation, is a critical topic in the realization of touchless gesture-based human-computer interaction (HCI). With the continuous development of better deep-learning models, most of the current hand tracking algorithms are based on frames recorded by RGB or depth cameras [1][2][3]. However, due to the limitations of this hardware, which has low temporal resolution (usually 30-60 fps), captured images of hands in rapid motion are blurred, which limits the performance of hand tracking algorithms.
One possible solution considers RGBD cameras with a high frame rate. However, RGB cameras with high temporal resolution usually require a high-illumination environment, limiting their application scenarios. The large-scale data flow of such RGB cameras also challenges data transmission, storage, and computation systems. Depth cameras based on binocular vision have a similar problem, while time-of-flight (ToF) depth sensors with a high frame rate are not available. Another possible solution applies additional data generated by other high-frequency sensors. [4] proposed a multi-modality method, fusing 100 Hz gyroscope data and a 30 Hz RGBD stream, but the accuracy enhancement brought by the gyroscope data was limited. This method is also relatively impractical because it requires extra sensor(s) fixed on the hands as well as calibration steps before hand pose estimation can begin.
In this work, we utilize an event camera to track hands at different motion speeds. Compared to normal RGB cameras recording frames in a fixed frequency, event cameras could respond to local brightness changes in an asynchronous way, enabling both high temporal resolution and high dynamic range, which makes it optimal hardware to track hands in diverse real-world environments. EventHands [5] is a pioneer learning-based method to track hand motion in event
streams: it constructed a synthetic dataset, developed a representation method to convert event streams to images, and trained a CNN consisting of ResNet18 and the MANO model to solve the problem. However, the domain gap between real and synthetic datasets caused a performance drop when validating the model on real data. Consequently, a dataset collected in a real environment is of vital importance for event-based hand tracking.
However, obtaining the 3D annotations is a challenging task. Annotation methods like data gloves and VICON systems would change the appearance of the hands, affect the captured event stream, and thus are not suitable for this problem. Another option is to apply an RGB binocular stereo vision system to obtain the 3D labels of streams recorded by an event camera, which allows us to utilize the knowledge of developed RGB-based methods, but gives rise to the problem raised at the beginning: how to track hands in fast motion via an RGB camera with limited temporal resolution?
As shown in Fig. 1, we sidestep this problem by applying a deep-learning model trained on a hand tracking dataset with slow motion, which enables the event hand tracker to leverage the knowledge of RGB-based hand tracking solutions and to work on fast hand tracking tasks. To narrow the domain gap between slow and fast motion data, 1) in the training procedure we augment the slow motion data by utilizing event streams of different lengths to generate event frames, as well as by randomly suppressing the noise in the frames; 2) in the prediction procedure, we devise a speed adaptive segmentation method to ensure a similar motion range in each event frame; 3) we introduce a new representation method, Locally Normalized Event Count and Surface (LNECS), which preserves the time information and reduces the impact of the different noise distributions of event streams of varying lengths. It is worth noting that the above methodology can be applied not only to fast hand tracking tasks but also to other fast-moving object detection or tracking tasks, especially when the annotations of the fast-moving object are impossible or expensive to obtain. We also construct the first event-based hand tracking dataset collected in a real environment to train and test our solution. Experiments showed that our solution outperformed RGB-based solutions in fast hand tracking tasks. In summary, our contributions are:
* We created an Event-based Speed Adaptive Hand Tracker, a solution that uses a deep learning model trained on a hand tracking dataset with slow motion to perform fast hand tracking tasks. This approach allows us to utilize the knowledge from RGB-based hand tracking solutions for event-based fast hand tracking. The same methodology can also be applied to other fast-moving object detection or tracking tasks.
* We developed three methods to reduce the domain gap between the training data for slow motion and the prediction data for fast motion: data augmentation techniques, the speed adaptive segmentation method, and the LNECS representation method.
Figure 1: Our fast hand tracking solution is based on the idea _tracking fast by learning from slow_ to sidestep the difficulty in obtaining 3D annotations of hands in fast motion. We applied 2 RGB cameras and 1 event camera to record slow hand motion as the trainset, trained an event student network with supervising information from an RGB-based teacher method, and enabled the event student network to solve the speed adaptive hand tracking task in the prediction procedure. To narrow the gap between the trainset of slow motion and the testset of fast motion, we developed two data augmentation methods, a speed adaptive event stream segmentation method, and an event-to-frame representation method.
* We constructed and released the first event-based hand tracking dataset collected in the real world.
## 2 Related Work
### 3D Hand Tracking Methods
Most of the existing 3D hand tracking methods were based on RGBD frames, which could be divided into two categories: discriminative model and generative model. Discriminative models estimated hand pose directly from the input frame. To enhance tracking performance, feature fusion operations were widely conducted. Region ensemble network (REN) [6] divided the feature map output by a CNN into four parts and fused the features to compute the final result. TriHorn-Net [7] first computed the 2D heat map and then fused the 2D feature map, attention map, and depth features to regress the depth value of each joint. Generative models first learned the distributions of the dataset and applied the learned distribution to train a discriminative model. Generalized feedback loop [8] applied a synthesizer CNN to learn the data distribution and to generate a depth image from the initially predicted hand pose and then utilized an updater CNN to compare the input image and generated image to refine the initial hand pose. All the above-mentioned methods are based on RGBD frames without motion blur, thus lacking the ability to deal with fast-moving hand tracking.
Due to the differences between the data structures of the event stream and the RGBD frame, the above-mentioned models cannot take the event stream as input directly. Some methods could transfer the event stream to gray frames [9][10], and SLAM methods could reconstruct depth frames or point clouds from the event stream [11]. However, the performances of these models could be affected by the domain gap. Extra consumption of bandwidth, computation, and storage without a custom solution for event data would eliminate the advantages of event cameras in real-time processing.
### Event-based Hand Tracking
There are several ways to utilize the event stream recorded by the event camera, such as 1) reconstruction of gray images from the event stream, which gives up the temporal information and will cause a huge burden in bandwidth, storage, and computation; 2) conversion of event stream to point cloud, which is not easily understood by the neural network; and 3) event-to-frame representation, which reserves the sparse representation and was adopted in this paper.
Event-to-frame representations include the event occurrence image (EOI), the event count image (ECI) [12][13], surfaces of active events (SAE) [14], time-surfaces [15], the hierarchy of time surfaces [15], and locally-normalized event surfaces (LNES) [5]. The representation adopted in this paper, LNECWS, is related to ECI and LNES, enabling the model to distinguish hand motion from background noise.
Existing event-based hand tracking methods could be divided into two groups: 1) unsupervised methods like non-rigid 3D tracking, and 2) supervised methods based on synthetic datasets like EventHands [5]. The former method [16]
Figure 2: (a): We applied 2 RGB cameras and 1 event camera to collect the dataset under 4 different illumination conditions: without extra light, and with extra light from the front/bottom/left side of the hand; (b): The dataset was collected under 6 different indoor backgrounds. We recorded hands with and without an electronic bracelet performing American Sign Language and some random poses to ensure the variety of the dataset.
deformed a non-rigid model to the desired shape that fits the hand observed in the event stream; it thus needs accurate initialization and a complex computation process, and lacks robustness. The latter method employed a synthetic dataset, which is easy to construct compared to a real-world dataset, to train the model, but suffered a performance drop in real-world applications due to the distribution gap between real and synthetic datasets.
## 3 Our Solution
To skip annotations for the fast hand tracking dataset, the main idea of our solution is to employ a neural network trained on a slow motion dataset, which allows us to leverage the knowledge of developed RGB-based hand tracking methods, to work on the fast hand tracking task. The primary challenge of this solution is bridging the domain gap between the training data for slow motion and the prediction data for fast motion. For event segments with the same time length, the motion range in the fast and slow hand tracking datasets is different, so a speed adaptive event segmentation method is needed. In other words, for event segments with similar motion ranges, the time length of event segments in slow and fast hand tracking data will be different, which leads to different noise levels, as the number of noise events is proportional to the time length. To overcome the domain gap in motion range, we augmented the slow motion data by utilizing event streams of different lengths to generate event frames and devised the speed adaptive segmentation method; to process segments with different time lengths and noise levels, we randomly suppressed the noise in segments of the slow motion trainset and created a new representation method, Locally Normalized Event Count and Surface (LNECS).
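One simple way to realize such a speed adaptive segmentation is to cut the stream after a fixed number of events rather than after a fixed time window, so that fast motion yields shorter segments with a comparable motion range; the sketch below only illustrates this idea and is not necessarily the exact rule used in our tracker (cf. the segment standards compared in the results table).

```python
import numpy as np

def split_by_event_count(timestamps, n_events=10_000):
    """Cut a (time-sorted) event stream into consecutive segments of n_events events;
    returns (start, end) index pairs, the last segment possibly being shorter."""
    edges = np.arange(0, len(timestamps) + n_events, n_events)
    edges[-1] = len(timestamps)
    return [(int(a), int(b)) for a, b in zip(edges[:-1], edges[1:]) if b > a]
```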
In this section, we present the details of our real-world dataset (Sec. 3.1), as well as the new representation method for handling event segments of varying lengths and noise levels (Sec. 3.2). We also demonstrate the methods we used to augment the slow motion data and incorporate knowledge from the RGB domain into our training process (Sec. 3.3) and describe how the trained model tracks hands at various motion speeds (Sec. 3.4).
### Dataset
As shown in Fig. 2, we have considered the following scenarios which are common in real-life hand tracking: 1) Background: We collected the dataset with 6 different indoor backgrounds, which vary from simple to complex. 2) Illumination: Due to the character of the event camera that responds to changes in brightness, different illumination conditions could lead to different outputs of the event camera. We considered 4 illumination conditions while collecting the dataset, which include normal illumination conditions and illumination conditions with extra light from the front/bottom/left side of the hand. Additionally, different illumination conditions would lead to different shadows on the background, which is also challenging for hand tracking algorithms. 3) Self-occlusion and global rotation: Since self-occlusion and global rotation are two major challenges in hand tracking, for all the event streams, we captured poses in American Sign Language (ASL) as well as some random poses to ensure the dataset contains challenging poses with self-occlusion. We also rotated the hand to capture different aspects of each pose. 4) Hand decorations: The decorations on the hand could also affect the performance of hand tracking algorithms. In our dataset, we collected hands with or without an electronic bracelet.
We collected the dataset using an event camera (CeleX5) and two RGB cameras. To ensure the consistency of the data, we calibrated the cameras and matched the timestamps of the event stream and the two RGB videos. The dataset with slow hand motion contains 40 event streams (5 backgrounds \(\times\) 4 illumination conditions \(\times\) 2 with/without a bracelet), and the dataset with fast hand motion contains 4 event streams (1 background \(\times\) 2 illumination conditions \(\times\) 2 with/without a bracelet). In total, we recorded about 3.5 hours of event data, and the dataset will be publicly available.
### Representation
The output of the event camera is a set of events, which can be denoted as \(\mathcal{E}=\{e_{i}\}_{i=0}^{N},e_{i}=(x_{i},y_{i},p_{i},t_{i})\), where \(e_{i}\) denotes an event, \(x_{i}\) and \(y_{i}\) denote its pixel coordinates, \(p_{i}\) its polarity, and \(t_{i}\) its timestamp. This data format is not suitable for convolutional neural networks, so we need to convert it to a frame format. Existing event-to-frame representation methods such as the Event Occur Image (EOI) and the Event Count Image (ECI) detect the occurrence or count the number of events at each pixel within a time interval, which loses the time information. Improved methods such as Surfaces of Active Events (SAE) and the Locally-Normalized Event Surface (LNES) assign to each pixel the timestamp of the latest event at that pixel, but they are easily affected by noise, especially the noise caused by the strobe flash of alternate-current-powered illumination. To solve this problem, we formulate the Locally-Normalized Event Count and Surface (LNECS) based on ECI and LNES, defined by the following formulas:
\[LNES(x,y,p)=\frac{\max\{t_{i}\mid(x_{i},y_{i},p_{i})=(x,y,p)\}-\min\{t_{i}\}}{\max\{t_{i}\}-\min\{t_{i}\}}\]
\[EC(x,y,p)=\left|\{e_{i}\mid(x_{i},y_{i},p_{i})=(x,y,p)\}\right|\]
\[LNEC(x,y,p)=\frac{EC(x,y,p)}{\max\,EC(x,y,p)}\]
Then \(LNECS\) can be represented as:
\[LNECS=LNES\oplus LNEC\]
where \(\oplus\) denotes the concatenation operation along the polarity channel.
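To make the construction concrete, the sketch below assembles LNES, LNEC, and the concatenated LNECS frame from a raw event list. It is a minimal NumPy illustration; the event array layout (one row per event with columns x, y, polarity, timestamp) and the frame size are assumptions of this example, not the exact pipeline of the paper.

```python
import numpy as np

def lnecs(events, height, width):
    """Build the LNECS frame (LNES and LNEC concatenated along the polarity
    channel) from one event segment.

    events: array of shape (N, 4) with columns (x, y, polarity, timestamp);
            polarity is assumed to be 0 or 1 (layout is an assumption of this sketch).
    """
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 2].astype(int)
    t = events[:, 3].astype(float)

    lnes = np.zeros((2, height, width))  # latest normalized timestamp per pixel and polarity
    ec = np.zeros((2, height, width))    # event count per pixel and polarity

    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    for xi, yi, pi, ti in zip(x, y, p, t_norm):
        lnes[pi, yi, xi] = max(lnes[pi, yi, xi], ti)  # keep the latest event only
        ec[pi, yi, xi] += 1

    lnec = ec / max(ec.max(), 1.0)  # normalize the count channel to [0, 1]
    return np.concatenate([lnes, lnec], axis=0)  # 4-channel LNECS frame

# Toy usage: 1,000 random events on a 150 x 240 frame.
rng = np.random.default_rng(0)
events = np.stack([rng.integers(0, 240, 1000),   # x
                   rng.integers(0, 150, 1000),   # y
                   rng.integers(0, 2, 1000),     # polarity
                   np.sort(rng.random(1000))],   # timestamps
                  axis=1)
frame = lnecs(events, height=150, width=240)
print(frame.shape)  # (4, 150, 240)
```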
Figure 3 shows a demo of different representations of an event stream of a hand moving from left to right. As shown in the upper and middle figures, LNES preserves time information but suffers from noise close to the hand, while LNEC overcomes the influence of noise but loses time information. Our proposed LNECS combines these representations and integrates both advantages.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Segment Standard} & \multirow{2}{*}{Data Augment} & \multirow{2}{*}{Representation} & \multicolumn{2}{c}{Fast Motion} & \multicolumn{2}{c}{Slow Motion} \\ & & & AUCp 2D & AUCp 3D & AUCp 2D & AUCp 3D \\ \hline
10k event number & No & LNECS & 0.780 & 0.661 & 0.908 & 0.842 \\ \hline
10k event number & Yes & LNES & 0.831 & 0.716 & 0.890 & 0.821 \\
10k event number & Yes & LNEC & 0.837 & 0.717 & 0.897 & 0.828 \\
10k event number & Yes & LNECWS & 0.836 & 0.722 & 0.891 & 0.822 \\ \hline
1k pixel number & Yes & LNECS & 0.789 & 0.664 & 0.904 & 0.839 \\
2k pixel number & Yes & LNECS & 0.801 & 0.674 & 0.912 & 0.843 \\
5k pixel number & Yes & LNECS & 0.759 & 0.628 & 0.910 & 0.841 \\
50ms time length & Yes & LNECS & 0.255 & 0.131 & 0.905 & 0.834 \\
20ms time length & Yes & LNECS & 0.791 & 0.658 & 0.753 & 0.617 \\
5k event number & Yes & LNECS & 0.814 & 0.698 & 0.887 & 0.821 \\
20k event number & Yes & LNECS & 0.829 & 0.718 & 0.899 & 0.832 \\
50k event number & Yes & LNECS & 0.767 & 0.636 & **0.913** & **0.845** \\ \hline event number 10k & Yes & LNECS & **0.840** & **0.727** & 0.893 & 0.825 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation study on slow and fast motion test data. We report the AUCp-2D and the AUCp-3D (higher values are better, bold font denotes the best numbers).
Figure 3: A demo of different representations with two channels. Upper: LNES; middle: LNEC; bottom: LNECWS.
### Training Procedure
As shown in Fig. 1, there are two branches in the training procedure. In the RGB branch, we apply a binocular stereo vision system as the teacher method. An RGB-based hand tracking algorithm first estimates hand poses in the frames from the two RGB cameras; the 3D pose is then computed with a binocular stereo vision algorithm and used as the supervision for the event branch. In the event branch, we augment the training data to prepare the event student network for speed adaptive hand tracking. The parameters of the event student network are updated using supervision from the RGB branch after the supervising 3D poses have been transformed into event camera space and have undergone the corresponding data augmentation transformation. The loss is computed as follows.
\[\mathcal{L}=\|f_{s}(T_{X}(x_{e}))-T_{Y}(T_{cam}(f_{t}(x_{r})))\|_{2}\]
where \(f_{t}\) denotes the RGB teacher method, \(f_{s}\) the event student network, \(x_{e}\) the event stream segment, \(x_{r}\) the RGB videos, \(T_{X}\) and \(T_{Y}\) the data augmentation operations, and \(T_{cam}\) the transformation from RGB to event camera space.
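A minimal PyTorch-style sketch of this distillation loss is given below. The placeholder student backbone, the 21-keypoint 3D pose shape, and the identity stand-ins for \(T_{X}\), \(T_{Y}\) and \(T_{cam}\) are illustrative assumptions; the sketch only shows how the supervision from the RGB teacher is compared with the event student output.

```python
import torch
import torch.nn as nn

# Placeholder student: maps a 4-channel LNECS frame to 21 x 3 hand keypoints (assumption).
student = nn.Sequential(nn.Flatten(), nn.Linear(4 * 150 * 240, 21 * 3))

def distill_loss(event_frame, teacher_pose_rgb, T_x, T_y, T_cam):
    """L2 distillation loss between the event student and the RGB teacher pose.

    event_frame:      (B, 4, 150, 240) LNECS input.
    teacher_pose_rgb: (B, 21, 3) 3D pose triangulated from the two RGB views.
    T_x, T_y:         data augmentation transforms for the input and the pose.
    T_cam:            RGB-to-event-camera coordinate transform.
    """
    pred = student(T_x(event_frame)).view(-1, 21, 3)
    target = T_y(T_cam(teacher_pose_rgb))
    # Mean per-keypoint Euclidean distance as a stand-in for the L2 norm in the formula.
    return torch.norm(pred - target, dim=-1).mean()

# Toy usage with identity transforms (illustrative only).
identity = lambda a: a
frames = torch.randn(2, 4, 150, 240)
teacher_pose = torch.randn(2, 21, 3)
loss = distill_loss(frames, teacher_pose, identity, identity, identity)
loss.backward()
```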
The data augmentation process includes 3 operations: 1) view augmentation such as rotation and cropping, 2) using event segments of different lengths, and 3) randomly suppressing noise. The first operation enhances the robustness of the model, while the other two align the domains of slow and fast hand tracking data. Since each pixel of an event camera outputs an event when it detects a brightness change, a long event segment of slow motion looks similar to a short event segment of fast motion once the timestamps are disregarded. Based on this property, we can simulate fast motion data using longer event segments of slow motion. However, in practice we found that the number of noise events is usually proportional to the time length, so the noise distribution in the augmented data differs from that in real fast motion data. To align the distributions, we apply random noise suppression augmentation: if the number of events at a pixel and its neighbors is less than a random threshold, the events at this pixel are regarded as noise and removed. This noise suppression can be implemented by the following function and requires very little computational resources.
\[F^{\prime}(x,y,p)=\mathbb{1}\big((EC*E_{\sigma})(x,y,p)>\epsilon_{r}\big)\times F(x,y,p)\]
where \(F(x,y,p)\) and \(F^{\prime}(x,y,p)\) denote the original frame and the frame after noise suppression, respectively, \(*\) denotes convolution, \(E_{\sigma}\) is an average filter kernel of size \(\sigma\), and \(\epsilon_{r}\) is a random threshold that varies across training samples.
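A possible NumPy implementation of this random noise suppression is sketched below; the kernel size and the range of the random threshold are placeholder values chosen for illustration, not the values used in our experiments.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def random_noise_suppression(frame, event_count, sigma=3, thr_range=(0.5, 2.0), rng=None):
    """Zero out pixels whose local event density falls below a random threshold.

    frame:       (C, H, W) event representation (e.g. the LNECS channels).
    event_count: (H, W) per-pixel event count EC, summed over polarities.
    sigma:       size of the averaging kernel E_sigma (placeholder value).
    thr_range:   range the random threshold eps_r is drawn from (placeholder values).
    """
    rng = rng or np.random.default_rng()
    eps_r = rng.uniform(*thr_range)                                        # one threshold per sample
    local_density = uniform_filter(event_count.astype(float), size=sigma)  # EC * E_sigma
    keep = (local_density > eps_r).astype(frame.dtype)                     # the indicator 1(...)
    return frame * keep[None, :, :]

# Toy usage.
counts = np.random.poisson(0.2, size=(150, 240))
lnecs_frame = np.random.rand(4, 150, 240)
cleaned = random_noise_suppression(lnecs_frame, counts)
```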
### Prediction Procedure
Because the pixels of an event camera respond to local brightness changes independently, fast hand motion generates more events than slow motion during the same time interval. We exploit this property in our speed adaptive segmentation method: instead of dividing the event stream into segments of the same time interval, it determines the length of each event segment by the number of events. In this way, the motion range within a representation frame remains consistent despite speed changes.
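The sketch below illustrates one way such event-count-based segmentation could be implemented on a timestamp-sorted event array; the segment size of 10,000 events follows the ablation in Table 1, while the array layout is an assumption of the example.

```python
import numpy as np

def segment_by_event_count(events, events_per_segment=10_000):
    """Split a timestamp-sorted event array of shape (N, 4) into fixed-count segments.

    Each segment contains the same number of events, so its time span automatically
    shrinks for fast motion and stretches for slow motion.
    """
    n_full = len(events) // events_per_segment
    return [events[i * events_per_segment:(i + 1) * events_per_segment]
            for i in range(n_full)]

# Toy usage: 35,000 synthetic events -> 3 segments of 10,000 events (remainder dropped).
events = np.zeros((35_000, 4))
segments = segment_by_event_count(events)
print(len(segments), segments[0].shape)  # 3 (10000, 4)
```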
## 4 Experiment
### Implementation Details
We employed MediaPipe [17] as the hand tracking model in the RGB teacher method and ResNet-18 as the event student network, and tested our solution on event streams of slow and fast motion. The resolution of our event camera CeleX5 is
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Solution & Segment Standard & Representation & \multicolumn{2}{c}{Our Fast Testset} & \multicolumn{2}{c}{Our Slow Testset} & \multicolumn{1}{c}{EventHands Testset} \\ & & & AUCp\_2d & AUCp\_3d & AUCp\_2d & AUCp\_3d & AUCp\_2d \\ \hline EventHands & event number & LNES & 0.652 & 0.577 & 0.741 & 0.652 & \\ EventHands & time length & LNES & 0.033 & 0.029 & 0.038 & 0.032 & 0.654 \\ Ours & event number & LNECS & **0.840** & **0.727** & 0.893 & 0.825 & 0.665 \\ Ours & event number & LNES & 0.831 & 0.716 & 0.890 & 0.821 & 0.738 \\ Ours & event number & LNEC & 0.837 & 0.717 & **0.897** & **0.828** & **0.773** \\ Ours & event number & LNECWS & 0.836 & 0.722 & 0.891 & 0.822 & 0.767 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between our solution and EventHands.
\(1280\times 800\), and we generate the frames for the event student network at a size of \(240\times 150\). We use an Adam optimizer with an initial learning rate of \(10^{-4}\) and a batch size of 128. We train the model for 50 epochs, reducing the learning rate by a factor of 10 when the validation loss has not decreased in the last 5 epochs.
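For illustration, this training setup could be configured in PyTorch roughly as follows; the use of torchvision's ResNet-18, the 4-channel input adaptation, and the position of the scheduler step in the loop are assumptions of this sketch, not a verbatim reproduction of our code.

```python
import torch
from torchvision.models import resnet18

# ResNet-18 student adapted to the 4-channel LNECS input and a 21 x 3 keypoint output.
model = resnet18(num_classes=21 * 3)
model.conv1 = torch.nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Divide the learning rate by 10 when the validation loss has not improved for 5 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min",
                                                       factor=0.1, patience=5)

for epoch in range(50):
    # ... one training pass over batches of size 128 would go here ...
    val_loss = 1.0  # placeholder for the validation loss of this epoch
    scheduler.step(val_loss)
```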
We first introduce our evaluation dataset and metrics. Then we present ablation studies to evaluate our design choices and compare our solution against RGB-based solutions.
### Test Data and Metrics
The test data includes a slow motion dataset with 8 slow-motion event streams as well as a fast motion dataset with 4 fast-motion event streams. The annotations of slow hand motion were obtained via a 30Hz RGB binocular stereo vision system, and the annotations of fast hand motion via a 240Hz system. Notably, the 240Hz annotations are meant to validate the fast hand tracking performance of models trained on the slow hand tracking dataset; this does not mean that our solution can only deal with hand motion that appears sharp to a 240Hz RGB camera. We obtained the 3D ground truth from the initial hand pose estimates of MediaPipe [17] on the two RGB videos, and manually excluded inaccurate results.
For metrics, we use the palm-normalized percentage of correct keypoints (\(PCK_{p}\)), which counts a candidate keypoint as correct if it falls within a given palm-length-normalized distance of the ground truth, and the area under the \(PCK_{p}\) curve (\(AUC_{p}\)), which makes the results comparable across modalities. Specifically, we use the distance between the wrist and the middle finger MCP annotations as the palm length.
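To make the metric concrete, the following NumPy sketch computes \(PCK_{p}\) over a grid of palm-normalized thresholds and integrates it into \(AUC_{p}\); the threshold range is an assumption chosen for illustration.

```python
import numpy as np

def pck_p(pred, gt, palm_len, thresholds):
    """Palm-normalized PCK: fraction of keypoints within t * palm_len of the ground truth.

    pred, gt:   (N, K, D) keypoints (D = 2 or 3).
    palm_len:   (N,) palm lengths (wrist to middle-finger MCP distance).
    thresholds: 1D array of normalized distance thresholds.
    """
    dist = np.linalg.norm(pred - gt, axis=-1) / palm_len[:, None]  # (N, K)
    return np.array([(dist <= t).mean() for t in thresholds])

def auc_p(pred, gt, palm_len, thresholds=np.linspace(0.0, 0.5, 51)):
    """Area under the PCK_p curve, normalized to lie in [0, 1]."""
    curve = pck_p(pred, gt, palm_len, thresholds)
    return np.trapz(curve, thresholds) / (thresholds[-1] - thresholds[0])

# Toy usage: 10 frames, 21 keypoints, 3D.
rng = np.random.default_rng(0)
gt = rng.random((10, 21, 3))
pred = gt + 0.01 * rng.standard_normal((10, 21, 3))
palm = np.full(10, 0.1)
print(auc_p(pred, gt, palm))
```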
### Ablation Study
#### 4.3.1 Speed Adaptive Segmentation Method
We compared segmentation methods based on various standards, including the same pixel number standard, the same time length standard, and our proposed same event number standard. Our findings revealed that, under the same event number standard, performance was more sensitive to the chosen event number on fast test data than on slow test data: across different event numbers, the performance on slow motion samples only ranged between 0.892 and 0.913, while on fast motion samples it ranged between 0.674 and 0.840, a much larger spread. Additionally, the solution achieved its best performance on slow samples with segments of 50,000 events and on fast samples with segments of 10,000 events. Furthermore, the gap between slow and fast performance was smallest for samples of 10,000 events, indicating that the domain gap between slow and fast motion data becomes smaller when the data samples contain fewer events.
For fixed temporal length segments, we generated a dataset comprising event segments with a length of 20ms and 50ms, on which the model achieved its best performance for fast and slow test data, respectively. However, results showed that the solution using this segmentation standard always suffered a performance drop, indicating that this standard is unable to narrow the domain gap between slow and fast motion data.
The same pixel number standard requires generating event frames that contain the same number of pixels at which events occurred. The solution applying this standard obtained the best performance on slow test data, but was not competitive on fast motion test data. A possible reason is the influence of the different noise distributions: slow motion frames contain more noise pixels, so fast motion frames must cover a larger motion range to reach the same pixel count. This standard also requires more complex operations during implementation, as it necessitates repeatedly generating event frames and checking pixel numbers, which involves loop and condition operations, while our proposed standard can cut data samples directly from the event stream.
#### 4.3.2 Event-to-Frame Representation
We compared different event-to-frame representation methods, and the results showed that our LNECS representation outperformed the others. In addition to the representations introduced in Sec 3.2, we also tested the Locally-Normalized Event Count Weighted Surface (LNECWS), which can be denoted as:
\[LNECWS(x,y,p)=LNES(x,y,p)\times LNEC(x,y,p)\]
This representation also preserves the time information and avoids the influence of noise, as shown in Fig. 3. The reason for its unsatisfying result is that it only contains 2 channels, the same as LNES and LNEC, whereas LNECS contains 4 channels, indicating that the time and event count information cannot be compressed into a single channel.
#### 4.3.3 Data Augmentation
In Sec. 3.3, we introduced our data augmentation method. Augmentation does not help on slow motion data since there is no domain gap to be bridged. On fast motion data, using data augmentation significantly improves the quality of the predictions.
### Comparison with EventHands
EventHands is the pioneering deep-learning-based hand tracking solution using an event camera; it enabled a ResNet-18 trained on synthetic data to handle fast hand tracking tasks in the real world. We used the released trained model, test data, and evaluation code of EventHands and compared the performance of our solution and EventHands on both testsets. Notably, the backbone in both solutions is ResNet-18, so it does not affect the comparison. EventHands divides the event stream into segments of 100ms, but this segmentation method led to unsatisfying performance on our test data, so we also tested the EventHands model with our speed adaptive segmentation method and selected the best result. The EventHands testset was captured with a different event camera, which was less sensitive than ours and output fewer events for a similar motion range; as a result, our solution obtained its best performance on the EventHands testset with segments of 4,000 events.
As shown in Table 2, our solution outperformed EventHands on both testsets, indicating that the idea of tracking fast motion by learning from slow motion works better than tracking real data by learning from synthetic data. Besides, as discussed in Sec 3.2, LNECS combines time information and event count information, which enables the deep learning model to learn to distinguish the background noise in the dataset. However, in the EventHands testset, noise often appeared at the previous position of the hand, a pattern that does not appear in our dataset and cannot be simulated by our data augmentation methods. This caused the most severe performance drop for the solution using LNECS compared with other representations. In contrast, the solution employing LNEC is less affected by noise and obtained the best performance on the EventHands testset.
### Comparison with RGB-based Method
We also captured 30Hz RGB videos for the fast hand tracking test data, whose annotations were obtained in the same way as the annotations for the event streams. We tested two state-of-the-art RGB-based solutions, MediaPipe [17] and Minimal-hand [18].
As shown in Table 3, our event-based method outperformed the RGB-based state of the art. Notably, the annotations of the fast motion dataset are obtained from 240Hz RGB video processed by MediaPipe, which means that the performance of MediaPipe on slow motion data with no blur problem is 1 by construction. The performance drop of the RGB-based method is 0.265 on the 2D task and 0.356 on the 3D task, while the performance drop of our event-based solution is 0.053 on the 2D task and 0.098 on the 3D task.
## 5 Conclusion and Future Work
We have developed an innovative solution called the Event-based Speed Adaptive Hand Tracker. This system employs a deep learning model that has been trained on a dataset of hand tracking in slow motion, enabling it to accurately track fast-moving hands. This approach allows us to leverage the knowledge gained from RGB-based hand tracking techniques, and the same methodology can also be adapted to other types of fast-moving object detection or tracking.
To achieve this, we employed three methods to bridge the gap between the training data for slow motion and the prediction data for fast motion: data augmentation, speed adaptive segmentation, and the LNECS representation. Additionally, we have created and released the first event-based hand tracking dataset collected in real-world environments. Results showed that our proposed solution enhances the performance on fast hand tracking tasks and outperforms RGB-based methods as well as the previous event-based method.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & AUCp 2D & AUCp 3D \\ \hline Mediapipe & 0.735 & 0.644 \\ Minimal-hand & 0.508 & 0.345 \\ Ours (LNECS) & **0.840** & **0.727** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of performance of our solution and RGB-based solution on fast hand tracking task.
For future work, our current method only uses slow motion data in the training procedure; utilizing unlabeled fast motion data with domain adaptation methods would be a possible way to further improve it. Besides, the performance of our method is limited by the performance of the applied RGB model, and adding synthetic data to the training process may also help to enhance the prediction accuracy.
|
2309.06346 | Construction of envelopes of holomorphy and QFT | Methods of continuation of holomorphic functions of several complex variables
are investigated within the axiomatic framework of Araki, Haag, and Kastler in
local quantum field theory. The motivation comes from the analysis of a mass
gap in an energy-momentum spectrum without vacuum vector. The main conclusion
is some non-restrictedness property in a mass gap situation. Prior to that,
some results on holomorphic functions related to a mass-gap-situation are
obtained and investigated. | Ulrich Armbrüster | 2023-09-12T16:07:07Z | http://arxiv.org/abs/2309.06346v1 | # Construction of envelopes of holomorphy and QFT
###### Abstract
Methods of continuation of holomorphic functions of several complex variables are investigated within the axiomatic framework of Araki, Haag, and Kastler in local quantum field theory. The motivation comes from the analysis of a mass gap in an energy-momentum spectrum without vacuum vector. The main conclusion is a certain non-restrictedness property in a mass gap situation. Prior to that, some results on holomorphic functions related to a mass gap situation are obtained and investigated.
###### Contents
* 1 Introduction
* 2 The axiomatic framework
* 3 Holomorphic functions in QFT
* 4 Techniques of holomorphic continuation
* 4.1 Compilation of Some Known Results
* 4.2 Application to specific cases
* 4.3 Envelope of holomorphy and hyperboloids
* 5 Application of holomorphic continuation to a mass gap situation
* 6 Outlook
* 7 Table of figures
## 1 Introduction
In this article we discuss investigations of the energy-momentum spectrum within the algebraic formulation of quantum field theory. Unlike the Wightman approach, only observables represented by bounded operators play a role in the algebraic formulation. This limitation is not restrictive since, instead of the usual self-adjoint operators, only their spectral projectors, which always lie in the closure of the representation of the observable algebra, are considered. Even in the slightly more general case of a closable operator, one can still work with bounded operators only, by polar decomposition of the closure into a partial isometry and a positive self-adjoint operator. This approach makes it mathematically easier to handle the operators, as one does not need to pay attention to their respective domains.
The fundamental physical principles of the algebraic approach are isotony, translation invariance, Einstein causality, and the spectral condition. The axioms required in this article are described in section 2 and are based on the works of Araki, Haag, and Kastler in [11] and [1]. Starting from these axioms together with the spectrum condition for the energy-momentum operator, one can construct functions with holomorphic Fourier transforms that vanish outside the spectrum. Holomorphic continuation of these Fourier transforms then enlarges the region on which these functions vanish and thus allows one to make statements about the shape of the spectrum from the outside, including properties of a mass gap. The analysis uses theorems by Pflug, the Edge-of-the-Wedge Theorem, the Double Cone Theorem, and the Jost-Lehmann-Dyson formula, among others, which have already been described and used in [1].
## 2 The axiomatic framework
In this section, the framework of algebraic quantum field theory will be defined by means of the axioms mentioned above.
Let \(M\) be the (1+3)-dim. Minkowski space with indefinite scalar product \(ab:=a_{0}b_{0}-\sum_{i=1}^{3}a_{i}b_{i}\).
**Axiom 1** To every bounded open region \(O\subset M\) a \(C^{*}\)-algebra \(\mathcal{A}(O)\) is assigned with the **isotony** property:
\[O_{1}\subset O_{2}\Longrightarrow\mathcal{A}\left(O_{1}\right)\subset \mathcal{A}\left(O_{2}\right)\]
The measurable observables in \(O\) are precisely the self-adjoint elements of \(\mathcal{A}(O)\). The norm closure \(\mathcal{A}:=\overline{\bigcup\{\mathcal{A}(O)\mid O\subset M\text{ open, bounded}\}}\) is called the \(C^{*}\)-inductive limit. \(\mathcal{A}\) is then again a \(C^{*}\)-algebra.
The **causality** principle from special relativity is formulated here as
**Axiom 2**\(O_{1}\subset O_{2}^{\prime}:=\left\{a\in M\mid(a-b)^{2}<0:\forall b\in O_{2}\right\}\)
\[\Longrightarrow\mathcal{A}\left(O_{1}\right)\subset\mathcal{A}\left(O_{2} \right)^{\prime}:=\left\{x\in\mathcal{B}(\mathcal{H})\mid[x,y]:=xy-yx=0: \forall y\in\mathcal{A}\left(O_{2}\right)\right\}\]
Observable quantities from spacelike separated regions should commute.
The symmetry group typically acting is the proper orthochronous Poincare group. However, only translations are required as **symmetry** here.
**Axiom 3** The translation group canonically isomorphic to \(M\) (which is denoted by \(M\) again) acts as a group of automorphisms \(\alpha_{a}\) on \(\mathcal{A}\), such that for each bounded open subset \(O\subset M\):
\[\alpha_{a}\mathcal{A}(O)=\mathcal{A}(O+a)\quad\forall a\in M\]
**Definition 2.1**: _A triple \(\left\{\pi,\mathcal{H},U\right\}\) consisting of a non-degenerate representation \(\pi\) of \(\mathcal{A}\) on the Hilbert space \(\mathcal{H}\) together with a continuous unitary representation \(U\) of \(M\) on \(\mathcal{H}\) is called a **covariant representation** if it respects the automorphisms \(\alpha\), i.e., if it satisfies:_
\[U(a)\pi(x)U(a)^{-1}=\pi(\alpha_{a}x)\quad\forall x\in\mathcal{A},a\in M.\]
According to Stone's theorem, there exist self-adjoint operators \(P_{0},\ldots,P_{3}\) such that \(U\) can be written as \(U(a)=e^{i(P,a)}\); where \((P,a)=(a,P)\) denotes the operator \(a_{0}P_{0}-a_{1}P_{1}-a_{2}P_{2}-a_{3}P_{3}\). The spectral decomposition of \(P\) looks as follows:
\[P=\int_{M}p\;dE(p)\]
with the projection-valued spectral measure \(E(\cdot)\) on \(M\).
This means for \(U\):
\[U(a)=\int_{M}e^{ipa}dE(p).\]
The following notations are therefore used interchangeably:
\[\operatorname{spec}U=\operatorname{spec}P=\operatorname{supp}dE.\]
**Axiom 4** The algebra \(A\) allows for a faithful covariant representation \(\pi,\mathcal{H},U\) in such a way that the spectral measure associated with \(U\) satisfies:
\[\operatorname{supp}\,dE\subset\overline{V^{+}}\;(\text{{spectrum condition}})\]
**Definition 2.2**: _A set \(\{\mathcal{A}(O),\mathcal{A},M,\alpha\}\) is called a **theory of local observables** if Axioms 1-4 are fulfilled._
By the axioms, \(U\) is not uniquely determined. However, in order to introduce the energy-momentum operator through \(U\), this uniqueness is required. For this purpose, one defines:
**Definition 2.3**: _Let \(U\) be defined by \(U(a)=e^{i(P,a)}\) such that Axioms 1-4 are satisfied. Then \(U\) is called **minimal** if for any \(U^{\prime}\) with \(U^{\prime}(a)=e^{i\left(P^{\prime},a\right)}\), which also satisfies Axioms 1-4, the following holds:_
\[(x,P)\leq(x,P^{\prime})\quad\forall x\in\overline{V^{+}}.\]
_Here, the \(\leq\) sign refers to the order of operators, that is \(T_{1}\leq T_{2}\) if and only if \(T_{2}-T_{1}\) is positive, i.e., \((\Psi,(T_{2}-T_{1})\,\Psi)\geq 0\quad\forall\Psi\) in the domain of \(T_{2}-T_{1}\)._
According to [1], for any theory of local observables there exists a uniquely determined minimal \(U\) that satisfies Axioms 1-4.
The joint spectrum of the energy-momentum operator \(P=(P_{0},\ldots,P_{3})\) introduced by \(U(a)=e^{i(P,a)}\) is then a Lorentz-invariant set. The proof of this can be found in [1]. Thus, the spectrum condition (Axiom 4) always requires non-negative energy.
For our case, we even demand the strong spectrum condition:
\[\operatorname{spec}U\subset\{0\}\cup\overline{V_{\mu}^{+}}\]
i.e., no particles with mass \(<\mu\) should occur. A theory in which the strong spectrum condition holds is called a theory with **mass gap**.
In the closed forward light cone (in \(R^{n},n>2\)), there are the following Lorentz-invariant sets:
a) 0
b) \(\left\{p\mid p^{2}=0,p_{0}\geq 0\right\}\)
c) \(\left\{p\mid p^{2}=m^{2}\,\,\,\text{for a fixed}\,\,\,m,p_{0}>0\right\}\)
d) arbitrary unions of the sets mentioned in a), b), c).
If we restrict ourselves to massive particles with charge \(Q\neq 0\) (i.e., outside the vacuum sector), possibilities a) and b) are eliminated for \(\operatorname{spec}\,U\). Therefore, \(\operatorname{spec}\,U\) is a union of hyperboloids.
## 3 Holomorphic functions in QFT
In this section, some specific functions are introduced which will later be expanded to complex-valued domains and used for holomorphic continuation.
However, some notions and terminologies from functional analysis are needed. They can be found, for example, in [10].
If \(\boldsymbol{U}(a)=\int e^{ipa}dE(p)\), then one associates the (orthogonal) spectral projection \(E(S)\) to a Borel subset \(S\subset M\). If \(U(a)\) belongs to a von Neumann algebra \(\mathcal{N}\), then \(E(S)\in\mathcal{N}\) for every \(S\). For \(\Psi\in\mathcal{H}\), we denote by \(\operatorname{supp}\,\Psi\) the smallest closed set \(S\subset\operatorname{spec}\,U\) such that \(E(S)\Psi=\Psi\).
**Lemma 3.1**: _The (continuous) functions \(F_{x,\Psi}^{+}\), \(F_{x,\Psi}^{-}\) defined by_
\[F_{x,\Psi}^{+}(a)=(\Psi,\pi\left(x^{*}\right)U(a)\pi(x)\Psi)\quad\text{ and }\quad F_{x,\Psi}^{-}(a)=(\Psi,\pi\left(\alpha_{a}x\right)\pi\left(x^{*} \right)U(a)\Psi)\,,\]
_are bounded and can therefore be regarded as distributions in \(S^{\prime}(M)\). For their Fourier transforms \(\mathcal{F}F_{x,\Psi}^{+}\equiv\widetilde{F_{x,\Psi}^{+}}\), \(\mathcal{F}F_{x,\Psi}^{-}\equiv\widetilde{F_{x,\Psi}^{-}}\), we have:_
1. \(\operatorname{supp}\widetilde{F_{x,\Psi}^{+}}\subset\operatorname{spec}U\)__
2. \(\operatorname{supp}\widetilde{F_{x,\Psi}^{-}}\subset\operatorname{supp}\Psi+\operatorname{supp}\Psi-\operatorname{spec}U\)__
**Proof** i) \(F_{x,\Psi}^{+}\), \(F_{x,\Psi}^{-}\in S^{\prime}(M)\) since
\[\left|\int_{M}\left(\Psi,\pi\left(x^{*}\right)U(a)\pi(x)\Psi\right)\rho(a)da\right|\leq\|\Psi\|^{2}\|\pi(x)\|^{2}\int_{M}|\rho(a)|\,da<\infty\quad\forall\rho\in\mathcal{S}(M)\]
The proof for \(F_{x,\Psi}^{-}\) is analogous.
ii) Let \(S\subset M\) be a Borel set with \(S\cap\operatorname{spec}U=\emptyset\). Then \(E(S)=0\). For \(\rho\in\mathcal{S}(M)\) and \(\Phi=\pi(x)\Psi\), we have:
\[(\mathcal{F}(\Phi,U(.)\Phi),\rho)=((\Phi,U(.)\Phi),\tilde{\rho})=\int_{M} \tilde{\rho}(a)(\Phi,U(a)\Phi)da=\int_{M}\tilde{\rho}(a)\int_{\operatorname {spec}U}e^{ipa}(\Phi,dE(p)\Phi)da=\]
\[=\int_{\operatorname{spec}U}\int_{M}\tilde{\rho}(a)e^{ipa}da(\Phi,dE(p)\Phi)= (2\pi)^{2}\int_{\operatorname{spec}U}\rho(p)(\Phi,dE(p)\Phi)=0,\text{ if }\operatorname{supp}\,\rho\subset S.\]
This shows that \(\operatorname{supp}\,\widetilde{F_{x,\psi}^{+}}\subset\operatorname{spec}U\).
The argument for \(F_{x,\Psi}^{-}\) is analogous:
\[\left(\mathcal{F}\left(F_{x,\Psi}^{-}\right),\rho\right)=\int_{M}\left(\Psi,\pi\left(\alpha_{a}x\right)\pi\left(x^{*}\right)U(a)\Psi\right)\tilde{\rho}(a)da\] \[=\int_{M}\left(\Psi,U(a)\pi(x)U(-a)\pi\left(x^{*}\right)U(a)\Psi\right)\tilde{\rho}(a)da\] \[=\int_{M}\int_{\operatorname{spec}U}\int_{\operatorname{spec}U}\int_{\operatorname{spec}U}e^{iap}e^{-iaq}e^{iar}\left(\Psi,dE(p)\pi(x)dE(q)\pi\left(x^{*}\right)dE(r)\Psi\right)\tilde{\rho}(a)da\] \[=\int_{\operatorname{spec}U}\int_{\operatorname{spec}U}\int_{\operatorname{spec}U}\left[\int_{M}e^{ia(p-q+r)}\tilde{\rho}(a)da\right]\left(\Psi,dE(p)\pi(x)dE(q)\pi\left(x^{*}\right)dE(r)\Psi\right)\] \[=\int_{\operatorname{spec}U}\int_{\operatorname{spec}U}\int_{\operatorname{spec}U}\rho(p-q+r)\left(\pi\left(x^{*}\right)dE(p)\Psi,dE(q)\pi\left(x^{*}\right)dE(r)\Psi\right).\]
Given \(dE(p)\Psi=0\) if \(p\notin\operatorname{supp}\Psi\), and \(dE(q)=0\) if \(q\notin\operatorname{spec}U\), then the value of the integral above is zero if \(\operatorname{supp}\rho\cap\left(\operatorname{supp}\Psi+\operatorname{supp} \Psi-\operatorname{spec}U\right)=\emptyset\). This was just the claim. \(\blacksquare\)
**Lemma 3.2**: _If \(S\subset M\) is a Borel set with \(E(S)\neq 0\), then the set \(\mathcal{K}(S):=\{\pi(x)\Psi\mid x\in\mathcal{A}\left(D_{-t,t}\right)\) for some \(t\in V^{+}\), \(\operatorname{supp}\,\,\Psi\subset S\}\) is dense in \(\mathcal{H}\)._
**Proof** Any \(x\in\mathcal{A}(O)\) for an arbitrary bounded \(O\subset M\) is contained in some \(\mathcal{A}\left(D_{-t,t}\right)\) for sufficiently large \(t\). The \(\pi(x)\) considered in \(\mathcal{K}(S)\) are thus dense in \(\pi(\mathcal{A})\) with respect to norm convergence, and therefore also with respect to strong convergence. According to von Neumann's density theorem (see e.g. [1]), \(\pi(\mathcal{A})\) is dense in the von Neumann algebra \(\pi(\mathcal{A})^{\prime\prime}\) (strong convergence), with \(\pi(\mathcal{A})^{\prime\prime}\) being the bicommutant of \(\pi(\mathcal{A})\). The \(\Psi\) occurring in \(\mathcal{K}(S)\) are precisely those located in \(E(S)\mathcal{H}\). However, \(\overline{\pi(\mathcal{A})^{\prime\prime}E(S)\mathcal{H}}=F\mathcal{H}\), where \(F\) denotes the central carrier of \(E(S)\) (note that, together with \(U(a)\), \(E(S)\) always lies in \(\pi(\mathcal{A})^{\prime\prime}\)); see again [1]. Since \(\pi\) is a factor representation and \(E(S)\neq 0\), we have \(F=\mathbb{1}\). This proves the claim. \(\blacksquare\)
## 4 Techniques of holomorphic continuation
In this section we first summarize several known results on holomorphic continuation as used in QFT. We will then use these results to calculate concrete envelopes of holomorphy for domains that arise from specific examples of energy-momentum spectra.
### Compilation of Some Known Results
A domain of holomorphy is a connected open set \(G\subset\mathbb{C}^{n}\) for which there exists a function \(f\) that is holomorphic in \(G\), but cannot be holomorphically continued through any boundary point of \(G\); that is, for every power series expansion of \(f\) around a point \(z\in G\) that converges in a poly-cylinder \(\Delta(z,R)\) with polyradius \(R\), it holds that \(\Delta(z,R)\subset G\).
A holomorphy domain \(G^{*}\supset G\), into which every holomorphic function in \(G\) can be holomorphically continued, is called the simple envelope of holomorphy of \(G\). However, not every domain has a simple holomorphic envelope; in general, one obtains a Riemannian domain over \(\mathbb{C}^{n}\) as a holomorphic envelope. In both cases, \(H(G)\) denotes the holomorphic envelope of \(G\).
**Definition 4.1**: _A holomorphic function \(f\) on the domain \(G\) is said to have **polynomial growth** if there exist \(N\in\mathbb{N}\) and \(c>0\) such that for all \(z\in G\),_
\[|f(z)|\leq c\left(\Delta_{G}(z)\right)^{-N},\]
where \(\Delta_{G}\) is defined for \(z\in G\) as
\[\Delta_{G}(z):=\min\left\{\operatorname{dist}(z,\partial G),\left(1+\|z\|^{2 }\right)^{-1/2}\right\}.\]
With these notations, the following holds:
**Theorem 4.2** (Pflug): _Let \(G\subset G^{\prime}\) be domains with \(H\left(G^{\prime}\right)\subset\mathbb{C}^{n}\). If every holomorphic function on \(G\) with polynomial growth can be holomorphically continued to \(G^{\prime}\), then every holomorphic function on \(G\) can be holomorphically continued to \(G^{\prime}\) (and hence also to \(H\left(G^{\prime}\right)\))._
**Proof** According to [10], the statement holds for the class of functions defined by
\[|f(z)|\leq c\left(\widetilde{\Delta_{G}}(z)\right)^{-N}\]
where \(\widetilde{\Delta_{G}}(z)=\left(1+\|z\|^{2}\right)^{-1/2}\min\{1,\operatorname{dist}(z,\partial G)\}\). However, the class of functions described by the \(\Delta_{G}\) of Definition 4.1 coincides with the class described by the \(\widetilde{\Delta_{G}}\) of [10], as it always holds that:
\[\left(\Delta_{G}(z)\right)^{2}\leq\widetilde{\Delta_{G}}(z)\leq\Delta_{G}(z).\]
\(\blacksquare\)
**Definition 4.3**: _A 2-dimensional **analytic surface** is a set \(F\subset\mathbb{C}^{n}\) for which, for every point \(z_{0}\in F\), there exists a domain \(U\subset\mathbb{C}\) and a vector-valued function \(h_{z_{0}}:U\longrightarrow\mathbb{C}^{n}\) such that:_
1. \(z_{0}=h_{z_{0}}\left(\lambda_{0}\right)\) _for some_ \(\lambda_{0}\in U\)__
2. \(h_{z_{0}}\) _is holomorphic in_ \(U\)__
3. \(\{z=h_{z_{0}}(\lambda)\mid\lambda\in U\}\) _represents_ \(F\) _near_ \(z_{0}\)__
4. _The vector-valued function_ \(\frac{d}{d\lambda}h_{z_{0}}\) _does not vanish anywhere in_ \(U\)_._
By the implicit function theorem, such an analytic surface \(F\subset\mathbb{C}^{n}\) can always be regarded as a complex submanifold of \(\mathbb{C}^{n}\).
The concept of an analytic surface is now used to explicitly specify a domain that is larger than the original domain and still lies entirely within its envelope of holomorphy.
**Theorem 4.4** (Weak continuity theorem): _Let \(\left(G_{\alpha}\right)_{\alpha\in\mathbb{N}}\) be a sequence of connected sets that are open in 2-dimensional analytic surfaces \(F_{\alpha}\), and let \(\overline{G_{\alpha}}\subset F_{\alpha}\) always hold. Suppose \(G\subset\mathbb{C}^{n}\) is a domain (with \(\partial G_{\alpha}=\overline{G_{\alpha}}\backslash G_{\alpha}\)) such that:_
1. \(G_{\alpha}\subset G\quad\forall\alpha\)__
2. \(\lim\limits_{\alpha\rightarrow\infty}G_{\alpha}=:S_{0}\)__
3. \(\lim\limits_{\alpha\rightarrow\infty}\partial G_{\alpha}=:T_{0}\subset\subset G\)__
4. \(S_{0}\) _is bounded_
_Then, \(S_{0}\subset H(G)\)._
The convergence of \(\left(G_{\alpha}\right)\) and \(\left(\partial G_{\alpha}\right)\) is to be understood as follows:
One says that the sequence of sets \(\left(A_{k}\right)\), \(A_{k}\subset\mathbb{C}^{n}\), converges to the set \(A\subset\mathbb{C}^{n}\) (\(\lim_{k\to\infty}A_{k}=A\)) if \(A\) consists precisely of the limits of all convergent sequences \(\left(a_{k}\right)\) in \(\mathbb{C}^{n}\) with \(a_{k}\in A_{k}\).
The proof of the weak continuity theorem can be found, for example, in [21] and [17]. What is essential to it is that \(2\)-dimensional analytic surfaces satisfy the maximum principle with respect to the moduli of holomorphic functions. In other words, for any holomorphic function \(f\) on the bounded set \(\overline{G_{\alpha}}\), the following holds:
\[\sup_{z\in\overline{G_{\alpha}}}|f(z)|=\sup_{z\in\partial G_{\alpha}}|f(z)|\]
**Definition 4.5**: _In the following, the scalar product of two vectors in \(\mathbb{R}^{n}\) always refers to the indefinite Minkowski product: \(ab=a_{0}b_{0}-a_{1}b_{1}-\ldots-a_{n-1}b_{n-1}\). Also, for \(a,b\in\mathbb{C}^{n}\), the Minkowski scalar product with this calculation rule should always be understood as \(ab\). The **forward light cone** is the set \(V^{+}:=\left\{x\in\mathbb{R}^{n}\mid x^{2}>0,x_{0}>0\right\}\); \(V^{-}:=-V^{+}\) denotes the **backward light cone**. The **forward tube** is the set \(T^{+}:=\left\{z=x+iy\in\mathbb{C}^{n}\mid y\in V^{+}\right\}=\mathbb{R}^{n}+ iV^{+}\); \(T^{-}:=-T^{+}=\mathbb{R}^{n}+iV^{-}\) is the **backward tube**._
The transition from real to complex functions is often carried out in quantum field theory by the following statement:
**Theorem 4.6**: _Let \(f^{+},f^{-}\in\mathcal{S}^{\prime}\left(\mathbb{R}^{n}\right)\) be tempered distributions, and let \(a,b\in\mathbb{R}^{n}\) with \(\operatorname{supp}f^{+}\subset a+V^{+},\operatorname{supp}f^{-}\subset b+V^{-}\). Then the Fourier transforms \(\mathcal{F}f^{+},\mathcal{F}f^{-}\) of \(f^{+}\) and \(f^{-}\) are boundary values in the distributive sense of functions that are holomorphic in \(T^{+}\) and \(T^{-}\), respectively; that is, there exist holomorphic functions \(G^{+},G^{-}\) in \(T^{+}\) and \(T^{-}\), respectively, such that for all \(\phi\in\mathcal{S}\left(R^{n}\right)\):_
\[\left(\mathcal{F}f^{+},\phi\right)=\lim_{y\to 0,y\in V^{+}}\int G^{+}(x+ iy)\phi(x)dx\]
_independently of the sequence chosen for \(y\to 0\) (analogous for \(\mathcal{F}f^{-}\))._
The proof can be found, for example, in [1] or [10].
Thus, in certain applications, one deals with functions that are holomorphic in \(T^{+}\) or \(T^{-}\). The following theorem deals with a situation in which limits of such functions coincide in certain real regions:
**Theorem 4.7** (Edge-of-the-Wedge-Theorem): _Let \(f^{+},f^{-}\) be functions that are holomorphic in \(\mathrm{T}^{+}\) and \(T^{-}\), respectively, and let there be a region \(B\subset\mathbb{R}^{n}\) for which \(f^{+}\) and \(f^{-}\) have matching boundary values in the distributive sense. Then there is a function \(f\) and a complex neighborhood \(\tilde{B}\) of \(B\), such that \(f\) is holomorphic on \(T^{+}\cup T^{-}\cup\tilde{B}\) and \(f\mid T^{+}=f^{+}\) as well as \(f\mid T^{-}=f^{-}\)._
**Remark 4.8**: _One can provide further information on the size and shape of the neighborhood \(\tilde{B}\); we set:_
\[\tilde{B}:=\bigcup_{x\in B}\left\{z\mid\|z-x\|<\frac{1}{32}\operatorname{ dist}(x,\partial B)\right\}.\]
_In this situation, the following holds for this \(\tilde{B}\):_
1. \(\tilde{B}\cap\mathbb{R}^{n}=B\)_, no further real points are added._
2. \(x\notin B\Rightarrow\) _there is no_ \(y\in\mathbb{R}^{n}\) _with_ \(x+iy\in\tilde{B}\)_._
3. \(\operatorname{dist}(x,\partial\tilde{B})\) _in_ \(\mathbb{C}^{n}\) _is proportional to_ \(\operatorname{dist}(x,\partial B)\) _in_ \(\mathbb{R}^{n}\) _for real_ \(x\)
The proof of the Edge-of-the-Wedge theorem and this remark can be found in [20].
For the following, we need:
**Definition 4.9**: _Let \(x,y\in\mathbb{R}^{n}\) be two points with \(y\in x+V^{+}\). Then,_
\[D_{x,y}:=\left(x+V^{+}\right)\cap\left(y+V^{-}\right)\]
_denotes the **double cone** spanned by \(x\) and \(y\)._
For the shape of the real coincidence region \(B\) from the Edge-of-the-Wedge Theorem, we have the following:
**Theorem 4.10** (Double Cone Theorem): _Let \(x,y\in B\) be two points that can be connected by a timelike curve (i.e., its tangent in every point is timelike) entirely within the interior of \(B\), and let \(y\in x+V^{+}\). Then, we have:_
\[D_{x,y}\subset H\left(T^{+}\cup T^{-}\cup\tilde{B}\right)\cap\mathbb{R}^{n}.\]
In applications, we always deal with light cone convex regions, such as double cones.
The proof of the Double Cone Theorem can be found, among others, in [20]. It is proven there using the weak continuity theorem. Another proof can be found in [1], following an idea from [1]. There, the Cauchy integral formula is used for the holomorphic continuation.
Both methods will be used here to prove Theorem 4.17.
For certain real coincidence regions \(G\), it is even possible to explicitly determine the holomorphic envelope of \(T^{+}\cup T^{-}\cup\tilde{G}\). The following preparations are used for this purpose.
Let \(M\subset\mathbb{R}^{n}\) be an open set. The hyperboloid
\[\left(x-x^{\prime}\right)^{2}=\lambda^{2},x^{\prime}\in\mathbb{R}^{n},\lambda \in\mathbb{R}^{+}\]
is said to be admissible to \(M\) if
\[M\cap\left\{x\mid\left(x-x^{\prime}\right)^{2}\geq\lambda^{2}\right\}=\emptyset\]
i.e., if \(M\) lies between the branches of the hyperboloid. The set of parameters \((x^{\prime},\lambda)\) corresponding to admissible hyperboloids is denoted by \(N(M)\):
\[N(M):=\left\{\left(x^{\prime},\lambda\right)\in\mathbb{R}^{n}\times\mathbb{R} ^{+}\mid\left(x-x^{\prime}\right)^{2}<\lambda^{2}\;\forall x\in M\right\}\]
It is possible that \(N(M)=\emptyset\). For such sets \(M\), we define the set \(N_{\infty}(M)\) of parameters corresponding to admissible hyperplanes:
\[N_{\infty}(M):=\left\{\left(x^{\prime},a\right)\in\mathbb{R}^{n}\times\left(V ^{+}\cup V^{-}\right)^{-}\mid a\left(x-x^{\prime}\right)<0\;\forall x\in M\right\}\]
These terms can be used to formulate:
**Theorem 4.11** (Jost-Lehmann-Dyson formula): _Let \(G\subset\mathbb{R}^{n}\) be a connected open set bounded by two space-like hypersurfaces, that is,_
\[G=\left\{x=\left(x_{0},\tilde{x}\right)\mid f(\tilde{x})<x_{0}<g(\tilde{x})\right\}\]
_with functions \(f,g\) satisfying: \(\left|f\left(\tilde{x}_{1}\right)-f\left(\tilde{x}_{2}\right)\right|\leq\left| \tilde{x}_{1}-\tilde{x}_{2}\right|\) and \(\left|g\left(\tilde{x}_{1}\right)-g\left(\tilde{x}_{2}\right)\right|\leq\left| \tilde{x}_{1}-\tilde{x}_{2}\right|\quad\forall\tilde{x}_{1},\tilde{x}_{2}\in \mathbb{R}^{n-1}\). Furthermore, assume that \(N(G)\neq\emptyset\). Then, for the holomorphic envelope of \(T^{+}\cup T^{-}\cup\tilde{G}\), we have:_
\[H(T^{+}\cup T^{-}\cup\tilde{G})=\mathbb{C}^{n}\setminus\overline{\bigcup_{(x ^{\prime},\lambda)\in N(G)}\left\{z\mid\left(z-x^{\prime}\right)^{2}=\lambda^{ 2}\right\}}.\]
The proof can be found in [11].
**Remark 4.12**: _Suppose that for the domain \(B=\bigcup B_{i}\) (where \(B_{i}\) are the connected components), we always have \(\left(x_{i}-x_{j}\right)^{2}<0\), if \(x_{i}\in B_{i},x_{j}\in B_{j},i\neq j\), i.e., all the connected components are space-like to each other, and each \(B_{i}\) satisfies the conditions of Theorem 4.11. Then, the holomorphic envelope of \(\tilde{B}\cup T^{+}\cup T^{-}\) is obtained using the same procedure as in Theorem 4.11._
A proof of this remark can be found in [10].
In the case where \(N(G)=\emptyset\), we obtain:
**Corollary 4.13**: _Let \(G\subset\mathbb{R}^{n}\) be an open set with \(N(G)=\emptyset\) and suppose that_
\[G=\left(G+V^{+}\right)\cap\left(G+V^{-}\right)\]
_Then:_
\[H(T^{+}\cup T^{-}\cup\tilde{G})=\mathbb{C}^{n}\setminus\overline{\bigcup_{(x^{\prime},a)\in N_{\infty}(G)}\left\{z\mid a\left(z-x^{\prime}\right)=0\right\}}.\]
**Proof** The regions \(G_{\alpha}=G\cap\left\{x\mid|x_{0}|<\alpha\right\}\) satisfy the conditions of Theorem 4.11 for \(\alpha>0\) with \(f=-\alpha\), \(g=\alpha\). With \(K\left(G_{\alpha}\right):=H\left(T^{+}\cup T^{-}\cup\tilde{G}_{\alpha}\right)\) and
\(K(G):=\mathbb{C}^{n}\setminus\overline{\bigcup_{(x^{\prime},a)\in N_{\infty}(G)}\left\{z\mid a\left(z-x^{\prime}\right)=0\right\}}\), we have, according to [10] (Section 33.1):
\[K(G)=\bigcup_{\alpha=1}^{\infty}K\left(G_{\alpha}\right)\]
For each \(\alpha\), \(K\left(G_{\alpha}\right)\) is a domain of holomorphy, and therefore, by the Behnke-Stein theorem (see [10]), \(K(G)\) is also a domain of holomorphy, and it holds that \(H\left(T^{+}\cup T^{-}\cup\tilde{G}\right)=K(G)\).
### Application to specific cases
The results of the first section shall now be applied to specific domains in \(\mathbb{C}^{n}\), \(n\geq 2\). For \(\mu\geq 0\), let \(V_{\mu}^{+}\) be the following set:
\[V_{\mu}^{+}:=\left\{x\in\mathbb{R}^{n}\mid x^{2}>\mu^{2},x_{0}>0\right\}\]
In the case \(\mu=0\), we have \(V_{0}^{+}=V^{+}\).
To prepare for this, it is first necessary to make some observations in the case \(n=2\). Let \(\hat{y}\) denote the timelike vector uniquely determined by a spacelike vector \(y\in\mathbb{R}^{2}\) through
\[\hat{y}^{2}=1,y\hat{y}=0,\hat{y}\in V^{+}\]
given by:
\[\hat{y}=\frac{\operatorname{sgn}y_{1}}{\sqrt{-y^{2}}}\left(y_{1},y_{0}\right).\]
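One checks directly that the vector defined in this way has the stated properties:
\[\hat{y}^{2}=\frac{y_{1}^{2}-y_{0}^{2}}{-y^{2}}=1,\qquad y\hat{y}=\frac{\operatorname{sgn}y_{1}}{\sqrt{-y^{2}}}\left(y_{0}y_{1}-y_{1}y_{0}\right)=0,\qquad\hat{y}_{0}=\frac{|y_{1}|}{\sqrt{-y^{2}}}>0,\]
where the last inequality uses that a spacelike \(y\in\mathbb{R}^{2}\) satisfies \(|y_{1}|>|y_{0}|\geq 0\).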
**Theorem 4.14**: _In \(\mathbb{C}^{2}\), for \(\mu\geq 0\):_
\[H\left(\widetilde{V_{\mu}^{+}}\cup T^{+}\cup T^{-}\right)= T^{+}\cup T^{-}\cup\left\{z=x+iy\in\mathbb{C}^{2}\mid y^{2}<0\text{ and }x\hat{y}>\mu\right\}\cup\] \[\cup\left\{z\mid y^{2}=0,y\neq 0,x_{0}>x_{1}\operatorname{sgn}y_{0} \operatorname{sgn}y_{1}\right\}\cup\left\{z\mid y=0,x\in V_{\mu}^{+}\right\}.\]
_In particular, for \(\mu=0\), the case of \(V^{+}\) as a coincidence domain is included._
**Proof** For \(V_{\mu}^{+}\), the assumptions of Corollary 4.13 are satisfied. Therefore, the holomorphic envelope can be calculated using the procedure there. The condition \(a\left(x-x^{\prime}\right)<0\quad\forall x\in V_{\mu}^{+}\) means that the line defined by \(a\left(x-x^{\prime}\right)=0\) does not intersect the region \(V_{\mu}^{+}\). It can be shown that:
\[N_{\infty}\left(V_{\mu}^{+}\right)=\left\{\left(x^{\prime},a\right)\mid a\in\overline{V^{-}}\backslash\{0\},\ x^{\prime}a\geq-\mu\sqrt{a^{2}}\right\}\]
According to Corollary 4.13, we have:
\[H\left(\widetilde{V_{\mu}^{+}}\cup T^{+}\cup T^{-}\right)=\mathbb{C}^{n} \backslash\overline{\bigcup_{(x^{\prime},a)\in N_{\infty}\left(V_{\mu}^{+} \right)}\{z\mid\ a\left(z-x^{\prime}\right)=0\}}.\]
Here, \(a\left(z-x^{\prime}\right)=0\) means that, for spacelike \(y\), we have:
(1) \(a\left(x-x^{\prime}\right)=0\)
(2) \(ay=0\Rightarrow a=-\hat{y}\)
Using (1), we have: \(\hat{y}\left(x-x^{\prime}\right)=0\) or \(x-x^{\prime}=\alpha y\), that is, \(x=\alpha y+x^{\prime}\).
In \(N_{\infty}\left(V_{\mu}^{+}\right)\), all \(x^{\prime}\) lying below the line \(\{\mu\hat{y}+cy\mid c\in\mathbb{R}\}\) are included. Therefore, in the complement of \(H\left(\widetilde{V_{\mu}^{+}}\cup T^{+}\cup T^{-}\right)\), for fixed \(y\), all \(x\) lying below \(\{\alpha y+\mu\hat{y}+cy\}=\{\mu\hat{y}+c^{\prime}y\}\) are included. This means that for \(z\in H\left(\widetilde{V_{\mu}^{+}}\cup T^{+}\cup T^{-}\right)\), we have \(x\hat{y}>\mu\).
For lightlike \(y\neq 0\), we need to take the closure of the set of points \(z=x+iy\) with \(y^{2}<0\) that do not lie in the holomorphic envelope. This means: \(x\hat{y}\leq\mu\Leftrightarrow x_{0}\left|y_{1}\right|-x_{1}y_{0}\operatorname{sgn}y_{1}\leq\mu\sqrt{-y^{2}}\). Taking the limit \(y_{1}\longrightarrow\pm y_{0}\neq 0\) yields \(x_{0}-x_{1}\operatorname{sgn}y_{0}\operatorname{sgn}y_{1}\leq 0\), which proves the claim.
For \(y=0\), the assertion follows directly from the definition of \(N_{\infty}\). \(\blacksquare\)
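As a concrete illustration in the notation of the theorem: for \(n=2\), \(\mu=1\) and the spacelike imaginary part \(y=(0,1)\) one finds
\[y^{2}=-1,\qquad\hat{y}=\frac{\operatorname{sgn}(1)}{\sqrt{1}}\,(1,0)=(1,0),\qquad x\hat{y}=x_{0},\]
so a point \(z=x+i(0,1)\) belongs to \(H\big(\widetilde{V_{1}^{+}}\cup T^{+}\cup T^{-}\big)\) precisely when \(x_{0}>1\); for instance \(z=(2,0)+i(0,1)\) lies in the envelope, while \(z=(0,0)+i(0,1)\) does not.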
**Corollary 4.15**: _Let \(G\subset\mathbb{R}^{2}\) be a domain such that \(G+V^{+}=G\). Then for any line \(g\) given by \(g(t)=a+ty\) where \(t\in\mathbb{R}\) and \(a,y\in\mathbb{R}^{2}\) with \(y\neq 0\) that intersects \(G\), we have:_
\[g(t+i\tau)\in H(\tilde{G}\cup T^{+}\cup T^{-})\quad\forall\tau\neq 0\]
**Proof** If \(g\) intersects \(G\), then there exists \(b\in G\) such that \(g\) intersects \(b+V^{+}\subset G\). But the holomorphic envelope of \((b+\widetilde{V^{+}})\cup T^{+}\cup T^{-}\) is known.
Suppose \(g(t+i\tau)\) is a point with \(\tau\neq 0\). Then \(\tau y=\operatorname{Im}(g(t+i\tau))\neq 0\). The assertion is clear if \(y^{2}\geq 0\).
If \(y^{2}<0\), then we can translate the real coordinate system by \(-b\) without affecting the imaginary part. Thus, \(b\) is moved to the origin and \(g\) becomes \(g^{\prime}:=g-b\). Let \(x:=\operatorname{Re}(g^{\prime}(t+i\tau))=\operatorname{Re}(g(t+i\tau))-b\). Then there exists \(\tilde{x}\in g^{\prime}\cap V^{+}\) such that \(x=\tilde{x}+\lambda y\) for some \(\lambda\in\mathbb{R}\). Hence, \(x\hat{y}=\tilde{x}\hat{y}>0\) since \(\tilde{x}\in V^{+}\). By Theorem 4.14, we have \(x+iy\in H(\widetilde{V^{+}}\cup T^{+}\cup T^{-})\). Therefore, \(g(t+i\tau)\in H((b+\widetilde{V^{+}})\cup T^{+}\cup T^{-})\subset H(\tilde{G }\cup T^{+}\cup T^{-})\). \(\blacksquare\)
**Definition 4.16**: _Given a set \(M\subset\mathbb{R}^{n}\), we define \(M^{\prime}\) as the set of **spacelike points** with respect to every element of \(M\):_
\[M^{\prime}:=\left\{x\in\mathbb{R}^{n}\mid(x-y)^{2}<0\ \forall y\in M\right\}\]
_We denote \(R:=0^{\prime}\) as the set of spacelike points._
**Theorem 4.17**: _Any function that is holomorphic in \(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T^{+}\cup T^{-}\), \(s\in V^{+}\), can be holomorphically extended to \(R\)._
**Proof:** Due to the rotational symmetry of the original domain, it suffices to consider the case \(n=2\). Let us choose a point \(p\in\partial D_{0,s}^{\prime}\cap R\) (without loss of generality, let \(p_{1}>0\)) and select a line \(g\) with slope magnitude \(<1\) passing through \(p\) and intersecting \(V_{\mu}^{+}\). We also consider the right branch of a hyperbola whose apex is at \(q:=g\cap d\) with \(d:=\{x\mid x_{0}=x_{1}\}\) and whose asymptotes are \(g\) and \(d\). It is clear that by choosing the hyperbola parameter suitably, we can make it pass through any point of the triangle \(\Delta_{pqr}\) with \(r:=d\cap\{s+\lambda(1,-1)\mid\lambda\in\mathbb{R}\}\). This hyperbola shall be parameterized by \(K(t),t\in\mathbb{R}\). The curve \(K(t)\) lies in \(\Delta_{pqr}\) only for \(t\) in a compact interval, and the tangent to \(K(t)\) always intersects the region \(V_{\mu}^{+}\).
Now consider the family of curves \(K_{\alpha}\), obtained by shifting \(K\) to the right by \(\alpha\) parallel to the direction of the line \(d\), and write \(h(t,\alpha)\) for the point of \(K_{\alpha}\) with parameter \(t\). For \(\alpha>\max_{t\in\mathbb{R}}\operatorname{dist}\left(K(t),D_{0,s}^{\prime}\right)\), we have \(K_{\alpha}\subset D_{0,s}^{\prime}\), and \(K_{0}=K\). By replacing \(t\) with \(t+i\tau\) in the complex plane, we obtain from 4.15 that \(h(t+i\tau,\alpha)\in H\left(\widetilde{V_{\mu}^{+}}\cup T^{+}\cup T^{-}\right)\) for small \(\tau\neq 0\), since the imaginary part points in the direction of the tangent to the curve, which lies on a line intersecting \(V_{\mu}^{+}\) according to the construction. Moreover, \(h(t,\alpha)\in D_{0,s}^{\prime}\subset H\left(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T^{+}\cup T^{-}\right)\) outside a compact \(t\)-interval.
Suppose now that \(f\) is holomorphic in \(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T^{+}\cup T^{-}\). Then we define
\[\phi(t+i\tau,\alpha)=\frac{1}{2\pi i}\oint_{W}\frac{f(h(\rho,\alpha))}{\rho-(t+ i\tau)}d\rho\]
is a holomorphic function inside of \(W\). Here, \(W\) is a closed curve in the \((t+i\tau)\) plane that contains all values of \(t\) for which \(K(t)\notin D_{0,s}^{\prime}\) in its interior. Furthermore, \(W\) is smooth, and it holds that \(h(w,\alpha)\in H\left(\widetilde{V_{\mu}^{+}}\cup T^{+}\cup T^{-}\right)\) if \(w\in W\) and \(Im\,w\neq 0\). Since \(\phi\) coincides with \(f\) for \(\alpha>\max_{t\in\mathbb{R}}\left(\text{dist}\left(K(t),D_{0,s}^{\prime} \right)\right)\), \(\phi\) is a holomorphic extension of \(f\) to \(\bigcup_{\alpha\geq 0}K_{\alpha}\).
Since one can cover \(R\) entirely with hyperbolas of the type described above, it follows that \(R\subset H\left(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T ^{+}\cup T^{-}\right)\).
Alternatively, one can prove the previous theorem 4.17 using the weak continuity theorem:
Consider a family of curves \(C_{\alpha}\), \(0\leq\alpha\leq\alpha_{0}\), with the following properties:
1. \(C_{\alpha}\subset D_{0,s}^{\prime}\) for \(0<\alpha\)
2. \(C_{0}\cap\partial D_{0,s}^{\prime}\cap R=X\), \(C_{0}\backslash X\subset D_{0,s}^{\prime}\)
3. \(C_{a}\) are real-analytic curves given by the equations
\[x_{j}=x_{j,\alpha}(\xi),\quad j=0,1,\quad 0\leq\alpha\leq\alpha_{0},\quad 0 \leq\xi\leq 1\]
4. Every tangent to \(C_{\alpha}\) lies on a line that intersects \(V_{\mu}^{+}\).
The existence of such a family of curves is clear. One can, for example, take the right branch of a hyperbola (with non-lightlike asymptotes) passing through \(X\) and having the tangent \(\{x\mid x=s+\lambda(-1,1),\lambda\in\mathbb{R}\}\) at \(X\). Then \(C_{0}\) is the part of the hyperbola whose tangents intersect \(V_{\mu}^{+}\), and \(C_{\alpha}\) is the curve obtained by shifting the hyperbola by \(\alpha\) into the interior of \(D_{0,s}^{\prime}\). We now analytically continue the \(C_{\alpha}\) for complex values of \(\lambda=\xi+i\eta\). Then \(x_{j,\alpha}\) is holomorphic in \(A_{\delta}=\{|\eta|<\delta,0\leq\xi\leq 1\}\) for some small \(\delta\). In this way, we have constructed \(2\)-dimensional analytic surfaces \(F_{\alpha}\) containing the curves \(C_{\alpha}\), given by \(z_{j}=x_{j,\alpha}(\lambda)\), \(j=0,1\), \(0\leq\alpha\leq\alpha_{0}\), \(\lambda\in A_{\delta}\). Let \(t(\xi)\) be the tangent vector to \(C_{\alpha}\) at the point characterized by \(\xi\). Then, by Taylor expanding \(x_{j,\alpha}(\lambda)\) about the point \(\xi\), we have \(\operatorname{Im}z_{j}=\eta\,t(\xi)_{j}+O(\eta^{2})\), meaning that for small \(\eta\), the imaginary part of \(F_{\alpha}\) points in the direction of the tangent to \(C_{\alpha}\). Since a spacelike tangent still intersects \(V_{\mu}^{+}\), all points of \(F_{\alpha}\) for \(\eta\neq 0\) still belong to \(H(\widetilde{V_{\mu}^{+}}\cup T^{+}\cup T^{-})\) (as argued in the proof above).
However, since \(C_{\alpha}\subset D_{0,s}^{\prime}\) for \(0<\alpha\), and \(C_{0}\backslash X\subset D_{0,s}^{\prime}\), it follows from the weak continuity theorem 4.4 that:
\[C_{0}\subset H\left(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{\prime}} \cup T^{+}\cup T^{-}\right).\]
Figure 1: Positioning of the curve \(K\) in the proof of Theorem 4.17
However, there is also a neighborhood \(U(X)\) in \(H\left(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T^{+}\cup T^{-}\right)\) for \(X\). Using the double cone theorem 4.10, we see that the entire strip \((U(X)+V^{+})\cap\left(D_{0,s}^{\prime}+V^{-}\right)\) also lies in \(H\left(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T^{+}\cup T^{-}\right)\).
If we carry out the procedure indicated here with further points from \(D_{0,s}^{\prime}\cap R\), we obtain the union of some (possibly infinitely many) strips that still lie in \(H\left(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T^{+}\cup T^{-}\right)\). Of course, the procedure can also be carried out for points on the boundary of such strips.
It is conceivable that this procedure terminates at a boundary curve that has \(D_{0,s}^{\prime}\cap R\) or a parallel as an asymptote. For this case, consider a line with a space-like direction vector that intersects \(V_{\mu}^{+}\) and lies in a compact piece of \(R\backslash F\) (\(F\) denotes the set of points in \(R\) where a continuation has already been achieved). Now move this line in the direction of the line \(\{x_{0}=x_{1}\}\) until the compact piece in \(R\backslash F\) consists only of boundary points of \(F\). Through these boundary points (possibly only one), we can continue using the method given above.
Overall, we obtain:
\[R\subset H\left(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{0,s}^{ -}}\cup T^{+}\cup T^{-}\right)\]
The two proof methods presented here correspond to those in the proofs of the double cone theorem 4.10 in [1] and [11] (see also the explanations following this theorem).
**Remark 4.18**: _With the same reasoning as in the proofs of the preceding theorem, one can holomorphically continue from \(\widetilde{V_{\mu}^{+}}\cup\widetilde{D_{a,b}}\cup T^{+}\cup T^{-}\) to \(\widehat{D_{a,b}}\) in \(\mathbb{C}^{2}\). \(\widehat{D_{a,b}}\) is obtained from \(D_{a,b}\) for \(a,b\notin\overline{V^{-}}\) by the following construction: The lines that touch \(V_{\mu}^{+}\) and pass through \(a\) and \(b\) are denoted by \(g_{a}\) and \(g_{b}\), respectively. The segment from \(a\) to the point of contact of \(g_{a}\) with \(V_{\mu}^{+}\) is denoted by \(s_{a}\), and the half-line on \(g_{b}\) that moves away from \(V_{\mu}^{+}\) starting from \(b\) is denoted by \(s_{b}\). Then:_
\[\widehat{D_{a,b}}=\left(\left(s_{a}+V^{+}\right)\cap\left(D_{a,b}+V^{-}\right) \right)\cup\left(\left(s_{b}+V^{-}\right)\cap\left(D_{a,b}+V^{+}\right)\right)\]
_If \(a\notin\overline{V^{-}}\), \(b\in\overline{V^{-}}\), then:_
\[\widehat{D_{a,b}}=\left(s_{a}+V^{+}\right)\cap\left(D_{a,b}+V^{-}\right)\]
_For \(a\in\overline{V^{-}}\), \(b\notin\overline{V^{-}}\), we obtain:_
\[\widehat{D_{a,b}}=\left(s_{b}+V^{-}\right)\cap\left(D_{a,b}+V^{+}\right)\]
_However, if \(D_{a,b}\subset V^{+}\), then \(\widehat{D_{a,b}}=D_{a,b}\)._
Figure 2: Position of the curve \(C_{0}\)
**Definition 4.19**: _The mapping \(\phi:\mathbb{C}^{2}\backslash\left\{z^{2}=0\right\}\longrightarrow\mathbb{C}^{2} \backslash\left\{z^{2}=0\right\}\), defined by \(\phi(z)=\frac{-z}{z^{2}}\), is called the transformation of reciprocal radii._
**Lemma 4.20**: \(\phi\) _has the following properties:_
1. \(\phi\circ\phi(z)=z\quad\forall z\in\mathbb{C}^{2}\backslash\left\{z^{2}=0\right\}\)_. Since_ \(\phi\) _is holomorphic, it is also biholomorphic and thus_ \(\phi^{-1}=\phi\)__
2. \(\phi\left(T^{+}\right)=T^{+},\phi\left(T^{-}\right)=T^{-}\)__
3. \(\phi\left(V^{+}\right)=V^{-}\)__
4. \(\phi(R)=R\)__
5. \(\phi\left(D_{\left(-\frac{1}{m},0\right),0}\right)=(m,0)+V^{+}\)__
**Proof \(1)\)**
\[\phi\circ\phi(z)=\phi\left(-\frac{z}{z^{2}}\right)=\frac{-\frac{-z}{z^{2}}}{ \left(\frac{-z}{z^{2}}\right)^{2}}=\frac{z}{z^{2}}\frac{z^{4}}{z^{2}}=z.\]
\(2)\) Let \(z\in T^{+}\). Then
\[\operatorname{Im}\phi(z) =\operatorname{Im}\left(\frac{-z}{z^{2}}\right)=\operatorname{ Im}\frac{-x-iy}{x^{2}-y^{2}+2ixy}=\operatorname{Im}\frac{\left(-x-iy \right)\left(x^{2}-y^{2}-2ixy\right)}{\left(x^{2}-y^{2}\right)^{2}+4(xy)^{2}}\] \[=\frac{1}{\left(x^{2}-y^{2}\right)^{2}+4(xy)^{2}}\left(2x(xy)-y \left(x^{2}-y^{2}\right)\right)\]
and therefore
\[\left(\operatorname{Im}\phi(z)\right)^{2} =\frac{1}{\left(\left(x^{2}-y^{2}\right)^{2}+4(xy)^{2}\right)^{2}}\left[4x^{2}(xy)^{2}+y^{2}\left(x^{2}-y^{2}\right)^{2}-4(xy)(xy)\left(x^{2}-y^{2}\right)\right]\] \[=\frac{1}{\left(\left(x^{2}-y^{2}\right)^{2}+4(xy)^{2}\right)^{2}}\left[y^{2}\left(x^{2}-y^{2}\right)^{2}+4y^{2}(xy)^{2}\right]>0,\text{ since }y^{2}>0\text{ and }z^{2}\neq 0.\]
\(3)\) Let \(x\in V^{+}\). Then
\[(\phi(x))^{2}=\left(\frac{-x_{0}}{x^{2}}\right)^{2}-\left(\frac{-x_{1}}{x^{2} }\right)^{2}=\frac{1}{x^{2}}>0\text{ and }-\frac{x_{0}}{x^{2}}<0,\]
so \(\phi(x)\in V^{-}\).
\(4)\) Let \(x^{2}<0\). Then we have:
\[(\phi(x))^{2}=\left(-\frac{x}{x^{2}}\right)^{2}=\frac{x^{2}}{x^{2}x^{2}}=\frac {1}{x^{2}}<0.\]
\(5)\) It holds that:
\[\frac{-x}{x^{2}}\in(m,0)+V^{+} \Leftrightarrow\frac{-x_{0}}{x^{2}}-m>\left|\frac{x_{1}}{x^{2}} \right|\Leftrightarrow\frac{-x_{0}}{x^{2}}-m>\frac{\left|x_{1}\right|}{x^{2}} \text{ and }x^{2}>0\] \[\Leftrightarrow 0<x^{2}<-\frac{1}{m}\left(x_{0}+\left|x_{1}\right| \right)\Leftrightarrow\left(x_{0}+\frac{1}{2m}\right)^{2}<\left(\left|x_{1} \right|-\frac{1}{2m}\right)^{2}\] \[\Leftrightarrow\left|x_{0}+\frac{1}{2m}\right|<\left|\left|x_{1} \right|-\frac{1}{2m}\right|\Leftrightarrow-\frac{1}{m}+\left|x_{1}\right|<x_{0 }<-\left|x_{1}\right|\] \[\Leftrightarrow(x_{0},x_{1})\in D_{\left(-\frac{1}{m},0\right),0}\]
\(\blacksquare\)
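The algebraic content of Lemma 4.20 lends itself to a machine check. The following Python/sympy sketch (added for illustration; it is not part of the original text) verifies properties 1, 3 and 4 symbolically in \(1+1\) dimensions and spot-checks the sign statement of 3 numerically, assuming the conventions used throughout: the Minkowski square \(z^{2}=z_{0}^{2}-z_{1}^{2}\) and \(\phi(z)=-z/z^{2}\).

```python
# Symbolic/numeric spot-check (illustration, not part of the original proof)
# of Lemma 4.20 in 1+1 dimensions, with z^2 = z0^2 - z1^2 and phi(z) = -z/z^2.
import sympy as sp

z0, z1 = sp.symbols('z0 z1')

def msq(a0, a1):
    """Minkowski square a^2 = a0^2 - a1^2."""
    return a0 ** 2 - a1 ** 2

def phi(a0, a1):
    s = msq(a0, a1)
    return (-a0 / s, -a1 / s)

# property 1: phi is an involution wherever z^2 != 0
w0, w1 = phi(*phi(z0, z1))
assert sp.simplify(w0 - z0) == 0 and sp.simplify(w1 - z1) == 0

# properties 3/4: the Minkowski square transforms as (phi(z))^2 = 1/z^2
p0, p1 = phi(z0, z1)
assert sp.simplify(msq(p0, p1) - 1 / msq(z0, z1)) == 0

# sign statement in property 3: a point of V^+ is mapped into V^-
x = (2.0, 0.5)                      # x0 > |x1|, hence x lies in V^+
fx = phi(*x)
assert msq(*fx) > 0 and fx[0] < 0   # timelike with negative time component
print("Lemma 4.20 spot-checks passed:", fx)
```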
**Remark 4.21**: _(see [11]) If the real coincidence region of an edge-of-the-wedge problem in \(\mathbb{C}^{2}\) consists of two double cones \(D_{(-a,0),0}\) and \(D_{c,d}\), then one can reach the situation \(\big{[}\big{(}\frac{1}{a},0\big{)}+V^{+}\big{]}\cup\phi\left(D_{c,d}\right)\cup T^{+}\cup T^{-}\) by transforming reciprocal radii. If \(\overline{D_{c,d}}\cap\big{\{}x^{2}=0\big{\}}=\emptyset\), then \(\phi\left(D_{c,d}\right)\) is again a double cone. With Remark 4.18 (case \(\mu=0\)), one obtains holomorphic continuation into the set \(\widehat{\phi\left(D_{c,d}\right)}\). Undoing the transformation, the lines from \(\big{(}\frac{1}{a},0\big{)}\) to the endpoints of the double cone \(\phi\left(D_{c,d}\right)\) turn into hyperbolas (if they do not have a light-like slope). Thus, in general, one obtains holomorphic extendability into a set \(\widehat{D_{c,d}}\) which is bounded by two hyperbolic segments and two light-like line segments. However, if \(D_{c,d}\subset\big{(}D_{(-a,0),0}\big{)}^{\prime}\), then by Remark 4.12, it is certainly true that \(\widehat{D_{c,d}}=D_{c,d}\). No extension is possible if \(\overline{D_{c,d}}\subset\big{(}-\frac{1}{a},0\big{)}+V^{-}\) or \(\overline{D_{c,d}}\subset V^{+}\), since then \(\overline{\phi\left(D_{c,d}\right)}\subset\big{(}\frac{1}{a},0\big{)}+V^{-}\). A detailed discussion of all possible cases will not be carried out here._
Now, an analogue to Corollary 4.15 will be described for domains \(G\subset\mathbb{R}^{2}\) with the property \(G+W=G\); here, \(W\) denotes the wedge domain \(W:=\big{\{}x\in\mathbb{R}^{2}\mid x^{2}<0,x_{1}>0\big{\}}\).
**Theorem 4.22**: _Let \(G\subset\mathbb{R}^{2}\) be a domain satisfying \(G+W=G\), where \(W\) is the wedge domain. Let \(\widetilde{z}\in\mathbb{R}^{2}\) be a point lying on a line with lightlike slope that intersects \(\overline{G}\), and let \(m\in\mathbb{R}\) satisfy \(m\neq 0\) if \(\widetilde{z}\in\overline{G}\), or \(m<0\) if \(\widetilde{z}\notin\overline{G}\). Let \(z=(z_{0},z_{1})\in\mathbb{C}^{2}\backslash\mathbb{R}^{2}\) be a point satisfying:_
\[(z-\widetilde{z})^{2}=m\]
_Then, \(z\in H\left(\widetilde{G}\cup T^{+}\cup T^{-}\right)\). Suppose that \(a+tb\) with \(a,b\in\mathbb{R}^{2}\) and \(t\in\mathbb{R}\), and \(b^{2}=0\) are points on a lightlike line that intersects \(\overline{G}\), and let \(\tau\neq 0\). Then, \(a+(t+i\tau)b\in H\left(\widetilde{W}\cup T^{+}\cup T^{-}\right)\)._
**Proof** We apply the reciprocal radius transformation twice to transform \((\mu,0)+V^{+}\) to \((0,\mu)+W\), and examine where the points \(g(t+i\tau),\tau\neq 0\), of a line \(g(t)\) that intersects \((\mu,0)+V^{+}\) are transformed. According to Corollary 4.15, such points lie in the holomorphic envelope.
1st transformation: \(\phi\), defined by \(w=\phi(x)=-\frac{x}{x^{2}};x=-\frac{w}{w^{2}}\). It holds that \(x\in(\mu,0)+V^{+}\Leftrightarrow w\in D_{\big{(}-\frac{1}{\mu},0\big{)},0}\) (see Lemma 4.20).
2nd transformation: \(\psi\), defined by \(z=\psi(w)=-\frac{w-\widetilde{w}}{(w-\widetilde{w})^{2}};w=-\frac{z}{z^{2}}+\widetilde{w}\) with \(\tilde{w}:=\left(-\frac{1}{2\mu},-\frac{1}{2\mu}\right)\). Then, \(w\in D_{\big{(}-\frac{1}{\mu},0\big{)},0}\Leftrightarrow z\in(0,\mu)+W\). The composition \(\psi\circ\phi\) yields:
\[z=-\frac{w-\widetilde{w}}{(w-\widetilde{w})^{2}}=-\frac{-\frac{x}{x^{2}}- \widetilde{w}}{\big{(}-\frac{x}{x^{2}}-\widetilde{w}\big{)}^{2}}=\frac{ \widetilde{w}x^{2}+x}{1+2x\widetilde{w}}.\]
The inverse \((\psi\circ\phi)^{-1}\) is given by:
\[x=\frac{z-\widetilde{w}z^{2}}{1-2z\widetilde{w}}.\]
Calculating yields for \(|m|\neq 1\) :
\[x_{0}=mx_{1}+c\Leftrightarrow\left(z_{0}-\frac{c-\mu}{1-m}\right)^{2}-\left( z_{1}-\frac{c-m\mu}{1-m}\right)^{2}=\mu^{2}\frac{1+m}{1-m}.\]
Letting \(\tilde{z_{0}}:=\frac{c-\mu}{1-m},\tilde{z_{1}}:=\frac{c-m\mu}{1-m},\lambda:= \mu^{2}\frac{1+m}{1-m}\), we obtain:
\[\tilde{z}_{0}=\tilde{z_{1}}-\mu,\ |m|>1\Leftrightarrow\lambda<0,\ |m|<1 \Leftrightarrow\lambda>0.\]
This means:
1) Lines with timelike slope, which intersect all \((\mu,0)+V^{+}\), transform into hyperbolas with \(\lambda<0\), i.e. such hyperbolas must be taken into account when forming the holomorphic envelope.
2) Lines with spacelike slope intersect \((\mu,0)+V^{+}\) only if \(c>\mu\), i.e. \(\tilde{z_{0}}>0\), thus \(\tilde{z}\in(0,\mu)+\overline{W}\), i.e. only \(\tilde{z}\in\overline{G}\) need to be considered here.
3) Lines with lightlike slope are transformed back into lines with lightlike slope under \(\psi\circ\phi\). Those lines that intersect \((\mu,0)+V^{+}\) are transformed into those that intersect \((0,\mu)+W\).
According to Corollary 4.15, all non-real points of lines that intersect \((\mu,0)+V^{+}\) belong to \(H\left(((\mu,0)+V^{+})^{-}\cup T^{+}\cup T^{-}\right)\). However, it holds that \(x\in\mathbb{R}^{2}\Longleftrightarrow z=\psi\circ\phi(x)\in\mathbb{R}^{2}\). This means that all points \(z\notin\mathbb{R}^{2}\) that satisfy one of the described hyperbola or line equations lie in \(H\left(((0,\mu)+W)^{-}\cup T^{+}\cup T^{-}\right)\).
If the transformations are carried out so that \((-\mu,0)+V^{-}\) turns into \((0,\mu)+W\), then points \(\tilde{z}\) with \(\tilde{z}_{0}=-\tilde{z_{1}}+\mu\) as hyperbola centers are obtained analogously. These also provide hyperbolas whose non-real points lie in the holomorphic envelope. This proves the statement.
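As an independent sanity check of the computation above (added for illustration, not part of the original proof), one can verify numerically that \(\psi\circ\phi\) maps a real line \(x_{0}=mx_{1}+c\) onto the hyperbola with the centre \(\tilde{z}\) and the constant \(\lambda\) given in the proof. The values of \(\mu\), \(m\), \(c\) below are arbitrary test choices.

```python
# Numerical spot-check that psi∘phi maps the line x0 = m*x1 + c (|m| != 1) onto
# the hyperbola (z0 - zt0)^2 - (z1 - zt1)^2 = lam, with the centre and lam as in
# the text. All products below are Minkowski products a*b = a0*b0 - a1*b1.
import numpy as np

mu, m, c = 1.0, 0.3, 2.5
wt = np.array([-1.0 / (2 * mu), -1.0 / (2 * mu)])       # w-tilde from the proof

def mdot(a, b):
    return a[0] * b[0] - a[1] * b[1]

def psi_phi(x):
    """z = (wt*x^2 + x) / (1 + 2 x.wt), as computed in the proof."""
    x = np.asarray(x, dtype=float)
    return (wt * mdot(x, x) + x) / (1.0 + 2.0 * mdot(x, wt))

zt0 = (c - mu) / (1 - m)
zt1 = (c - m * mu) / (1 - m)
lam = mu ** 2 * (1 + m) / (1 - m)

for x1 in (-3.0, -1.0, 0.0, 1.0, 4.0, 6.0):              # away from the singular point
    z = psi_phi((m * x1 + c, x1))
    assert abs((z[0] - zt0) ** 2 - (z[1] - zt1) ** 2 - lam) < 1e-10
print("line -> hyperbola relation confirmed, lambda =", lam)
```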
Another consequence of Corollary 4.15 is the following statement:
**Corollary 4.23**: _Let \(G\subset\mathbb{R}^{2}\) contain a double cone \(D_{a,b}\). Let \(z\in\mathbb{C}^{2}\backslash\mathbb{R}^{2}\) be a point satisfying:_
\[(z-\tilde{x})^{2}=(b-\tilde{x})^{2},\]
_where \(\tilde{x}\in D_{a,b}\) with \((\tilde{x}-a)^{2}>(\tilde{x}-b)^{2}\) (i.e., if \(D_{a,b}\) is divided into two halves by the line connecting the remaining two vertices, then \(\tilde{x}\) lies in the upper half). Then:_
\[z\in H\left(\tilde{G}\cup T^{+}\cup T^{-}\right),\]
**Proof** To simplify the calculation, we restrict ourselves here to the case from Lemma 4.20, point 5: \(a=\left(-\frac{1}{m},0\right),b=0\). Using the transformation of reciprocal radii \(\phi(z)=-\frac{z}{z^{2}}\), \(D_{a,b}\) is biholomorphically mapped to \((m,0)+V^{+}\). According to Corollary 4.15, the non-real points of lines intersecting \((m,0)+V^{+}\) belong to the envelope of holomorphy of \(((m,0)+V^{+})\cup T^{+}\cup T^{-}\). It can be verified that with \(x=\phi(z)\),
\[(z-\tilde{x})^{2}=\widetilde{x}^{2}\Longleftrightarrow x_{0}=\frac{ \widetilde{x_{1}}}{\widetilde{x_{0}}}x_{1}-\frac{1}{2\widetilde{x_{0}}}.\]
Since \(\tilde{x}\in D_{a,b}\), this line has slope \(\left|\frac{\widetilde{x_{1}}}{\widetilde{x_{0}}}\right|<1\), and it intersects the domain \((m,0)+V^{+}\) precisely when \(-\frac{1}{2\widetilde{x_{0}}}>m\), that is, when \(\widetilde{x_{0}}>-\frac{1}{2m}\). Since this is the case when \(\tilde{x}\) lies in the upper half of \(D_{a,b}\), the statement is proven.
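The key identity used in this proof, namely that for \(x=\phi(z)\) the relation \((z-\tilde{x})^{2}=\tilde{x}^{2}\) is equivalent to the stated line equation, can be confirmed with a short symbolic computation (added here as an illustration, not part of the original argument); the symbols \(a_{0},a_{1}\) stand for \(\tilde{x}_{0},\tilde{x}_{1}\).

```python
# Symbolic verification that, with z = phi(x), the hyperbola condition
# (z - a)^2 = a^2 reduces to the linear relation 1 + 2*(a0*x0 - a1*x1) = 0,
# i.e. x0 = (a1/a0)*x1 - 1/(2*a0), as used in the proof of Corollary 4.23.
import sympy as sp

x0, x1, a0, a1 = sp.symbols('x0 x1 a0 a1')
xsq = x0 ** 2 - x1 ** 2
z0, z1 = -x0 / xsq, -x1 / xsq                 # z = phi(x)

expr = (z0 - a0) ** 2 - (z1 - a1) ** 2 - (a0 ** 2 - a1 ** 2)
print(sp.simplify(expr * xsq))                # -> 2*a0*x0 - 2*a1*x1 + 1
```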
**Remark 4.24**: _For symmetry reasons, Corollary 4.23 also holds with the roles of the vertices \(a,b\) of the double cone exchanged. In this case, \(\tilde{x}\) must be in the lower half of \(D_{a,b}\) accordingly._
With the previous considerations, the preparations are completed to prove the following theorem.
**Theorem 4.25**: _For all \(s\in V^{+}\), it holds that:_
\[H\left(\widetilde{V_{m}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T^{+}\cup T^{-}\right)=\mathbb{C}^{n}\setminus\left\{z\mid z^{2}=\rho\text{ for some }\rho\in\mathbb{R}\text{ with }0\leq\rho\leq m^{2}\right\}\]
**Proof** Again, it is sufficient to consider the case \(n=2\).
Due to Theorem 4.17, any holomorphic function in \(\widetilde{V_{m}^{+}}\cup\widetilde{D_{0,s}^{\prime}}\cup T^{+}\cup T^{-}\) is also holomorphic in \(\widetilde{R}\).
The statement is proven according to the theorem of Pflug if one can show that any polynomially bounded holomorphic function in \(\widetilde{V_{m}^{+}}\cup\widetilde{R}\cup T^{+}\cup T^{-}\) can be extended holomorphically to \(H:=\mathbb{C}^{n}\setminus\left\{z\mid z^{2}=\rho\text{ for some }\rho\in\mathbb{R}\text{ with }0\leq\rho\leq m^{2}\right\}\) (and that there exists a holomorphic function that cannot be extended holomorphically any further).
So let \(f\) be holomorphic in \(\widetilde{V_{m}^{+}}\cup\widetilde{R}\cup T^{+}\cup T^{-}\), with polynomial bound of order \(N\):
\[\left|f(z)\right|\leq C(\Delta(z))^{-N}\]
with \(\Delta(z)=\min\left\{\operatorname{dist}(z);\frac{1}{\sqrt{1+|z|^{2}}}\right\}\).
In \(\widetilde{V_{m}^{+}}\cup\widetilde{R}\cup T^{+}\cup T^{-}\), we have \(z^{2}\neq 0\) everywhere, because for \(z=x+iy\), we have:
\[z^{2}=0\Longleftrightarrow xy=0\text{ and }x^{2}=y^{2}\]
But this only happens if \(x^{2}=y^{2}=0\). Such points, however, do not occur in \(\widetilde{V_{m}^{+}}\cup\tilde{R}\cup T^{+}\cup T^{-}\) (see Remark 4.8, 2).
With the transformation of reciprocal radii, \(\widetilde{V_{m}^{+}}\cup\tilde{R}\cup T^{+}\cup T^{-}\) is therefore biholomorphically mapped onto \(U(W)\cup U(R)\cup T^{+}\cup T^{-}\), where \(W:=\left\{x\in\mathbb{R}^{2}\mid x\in V^{-},0<x^{2}<\frac{1}{m^{2}}\right\}\) and \(U(W),U(R)\) are certain complex neighborhoods of \(W\) and \(R\), respectively. Let \(\hat{f}:=f\circ\phi\). Then, \(\hat{f}\) is holomorphic in \(U(W)\cup U(R)\cup T^{+}\cup T^{-}\).
Now, let \(w_{0}\) be a point in \(\partial V^{-}\backslash 0\). Choose a neighborhood \(U\left(w_{0}\right)\) of \(w_{0}\) with the property:
\[w\in U\left(w_{0}\right)\cap\left(U(W)\cup U(R)\cup T^{+}\cup T^{-}\right) \Rightarrow\text{ for }z:=\phi(w)\text{ we have }\Delta(z)=\frac{1}{\sqrt{1+|z|^{2}}}.\]
For such \(z\), \(\frac{1}{\sqrt{1+|z|^{2}}}\) should be smaller than the boundary distance. This is possible since, if \(U\left(w_{0}\right)\) is chosen small enough, \(z\) is far away from the real boundary of \(V_{m}^{+}\) or \(R\), and because of property 3 in Remark 4.8, it is also far away from the boundary (in the complex sense) of \(\widetilde{V_{m}^{+}}\cup\tilde{R}\).
Claim: \(w\longmapsto\hat{f}(w)\left(w^{2}\right)^{N+1}\) is continuously extendable to \(w_{0}\) from \(U\left(w_{0}\right)\cap\left(W\cup R\cup T^{+}\cup T^{-}\right)\) with the value \(0\).
For \(w\in U\left(w_{0}\right)\cap\left(W\cup R\cup T^{+}\cup T^{-}\right)\) and \(z=\phi(w)\), we have:
\[\left|\hat{f}(w)\left(w^{2}\right)^{N+1}\right| =\left|f(z)\left(z^{2}\right)^{-N-1}\right|\leq C(\Delta(z))^{-N}\left|\left(z^{2}\right)^{-N-1}\right|=C\left(\sqrt{1+\|z\|^{2}}\right)^{N}\left|\left(z^{2}\right)^{-N-1}\right|\] \[=C\left(\sqrt{1+\frac{\|w\|^{2}}{\left|w^{2}\right|^{2}}}\right)^{N}\left|\left(w^{2}\right)^{N+1}\right|=C\left(\sqrt{\left|w^{2}\right|^{2}+\|w\|^{2}}\right)^{N}\left|w^{2}\right|.\]
The given equation shows that for any sequence in \(U(w_{0})\cap\left(W\cup R\cup T^{+}\cup T^{-}\right)\) that approaches \(w_{0}\), the expression tends towards zero, since \(|w|\) is bounded near \(w_{0}\). This proves the intermediate claim.
Using the Edge-of-the-Wedge Theorem 4.7, it follows that \(\hat{f}(w)\left(w^{2}\right)^{N+1}\) is holomorphic in
\(\left(W\cup R\cup\partial V^{-}\backslash 0\right)^{\sim}\cup T^{+}\cup T^{-}\). Moreover, by the Jost-Lehmann-Dyson formula 4.11, \(\hat{f}(w)(w^{2})^{N+1}\) is also holomorphic in the holomorphic domain \(G:=\mathbb{C}^{2}\setminus\left\{w\mid w^{2}\geq\frac{1}{m^{2}}\right\}\).
Transforming back, \(f(z)\left(z^{2}\right)^{-N-1}\) is holomorphic in \(H\), and hence \(f\) is also holomorphic in \(H\). \(G\) contains \(U(W)\cup U(R)\cup T^{+}\cup T^{-}\) (since \(H\supset\widetilde{V_{m}^{+}}\cup\widetilde{R}\cup T^{+}\cup T^{-}\)) and is star-shaped with respect to \(0\) and therefore simply connected. Thus, the continuation of \(\hat{f}(w)\left(w^{2}\right)^{N+1}\) to \(G\) is unique, and hence the continuation of \(f\) to \(H\) is also unique.
Since \(G\) is a domain of holomorphy, one can find a function \(\hat{g}\) that is not holomorphically extendable beyond the boundary of \(G\). The transformed function \(g=\hat{g}\circ\phi\) cannot be extended beyond the boundary of \(H\) (for points \(z\) with \(z^{2}\neq 0\), this follows directly, and if \(g\) were holomorphic at a point \(z\) with \(z^{2}=0\), it would also be holomorphic in a neighborhood \(U(z)\). But since \(U(z)\cap\left\{z\mid 0\leq z^{2}\leq m^{2}\right\}\neq\emptyset\) for all such \(z\), this case cannot occur due to the choice of \(\hat{g}\)).
Thus, \(H\) is a domain of holomorphy and therefore the desired holomorphic envelope.
### Envelope of holomorphy and hyperboloids
For \(0\leq m_{1}<m_{2}\), consider the region
\[G_{1}:=\left\{x\in\mathbb{R}^{4}\mid\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+m_{1}^{2}}<x_{0}<\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+m_{2}^{2}}\right\}\]
The regions \(G_{1}\) and \(G_{2}\) satisfy the conditions of the Jost-Lehmann-Dyson formula 4.11. Thus, using this formula, one can determine the holomorphic envelopes of \(\tilde{G}_{1}\cup T^{+}\cup T^{-}\) and \(\tilde{G}_{2}\cup T^{+}\cup T^{-}\).
Due to the symmetry of \(G_{1}\) and \(G_{2}\) with respect to space rotations, it suffices to calculate the holomorphic envelope in (1+1) dimensions. For spacelike \(y\), we denote by \(\hat{y}\) as before the vector uniquely determined by \(\hat{y}^{2}=1\), \(\hat{y}\in V^{+}\), and \(\hat{y}y=0\):
\[\hat{y}=\frac{\operatorname{sgn}y_{1}}{\sqrt{-y^{2}}}\left(y_{1},y_{0}\right)\]
**Theorem 4.26**: \(H\left(\tilde{G}_{1}\cup T^{+}\cup T^{-}\right)=\\ T^{+}\cup T^{-}\cup\left\{z\mid y=0,z\in G_{1}\right\}\cup\left\{z\mid y^{2}=0,y\neq 0,x_{0}>x_{1}\,\operatorname{sgn}y_{0}\,\operatorname{sgn}y_{1}\right\}\cup\\ \cup\left\{z=x+iy\mid-\left(\frac{m_{2}-m_{1}}{2}\right)^{2}<y^{2}<0\text{ and }F^{-}\left(x_{1},y\right)<x_{0}<F^{+}\left(x_{1},y\right)\right\}\) _with_
\[F^{-}\left(x_{1},y\right) :=-\hat{y}_{0}\sqrt{\left(\frac{m_{2}-m_{1}}{2}\right)^{2}+y^{2} }+\sqrt{\left(\frac{m_{2}+m_{1}}{2}\right)^{2}+\left(x_{1}+\hat{y}_{1}\sqrt{ \left(\frac{m_{2}-m_{1}}{2}\right)^{2}+y^{2}}\right)^{2}}\] \[F^{+}\left(x_{1},y\right) :=\hat{y}_{0}\sqrt{\left(\frac{m_{2}-m_{1}}{2}\right)^{2}+y^{2}}+ \sqrt{\left(\frac{m_{2}+m_{1}}{2}\right)^{2}+\left(x_{1}-\hat{y}_{1}\sqrt{ \left(\frac{m_{2}-m_{1}}{2}\right)^{2}+y^{2}}\right)^{2}}.\]
In the case where \(y^{2}<0\), this theorem states that for a fixed \(y\), \(x\) lies between the upper branches of the two hyperbolas \(\left(x\pm\hat{y}\sqrt{\left(\frac{m_{2}-m_{1}}{2}\right)^{2}+y^{2}}\right)^{ 2}=\left(\frac{m_{2}+m_{1}}{2}\right)^{2}\).
**Proof** The Jost-Lehmann-Dyson formula 4.11 is used. It is
\[N\left(G_{1}\right)=\left\{\left(x^{\prime},\lambda\right)\mid x^{\prime}\in\overline{V^{+}},\ \lambda\geq\max\left\{m_{2}-\sqrt{x^{\prime 2}},\sqrt{x^{\prime 2}}-m_{1}\right\}\right\}\]
With this we now calculate the boundary of \(H\left(\tilde{G_{1}}\cup T^{+}\cup T^{-}\right)\).
\(\left(z-x^{\prime}\right)^{2}=\lambda^{2}\) means:
\(\left(1\right)\)\(\left(x-x^{\prime}\right)^{2}-y^{2}=\lambda^{2}\)
\(\left(2\right)\)\(\left(x-x^{\prime}\right)y=0\).
From (2) it follows that \(x-x^{\prime}=\mu\hat{y},\mu\in\mathbb{R}\). Using (1) we get \(\mu^{2}-y^{2}=\lambda^{2}\), hence \(\mu=\pm\sqrt{\lambda^{2}+y^{2}}\). If \(x^{\prime}=a\hat{y}+by\left(\Rightarrow x^{\prime 2}=a^{2}+b^{2}y^{2}\right)\), then we obtain with \(\alpha:=\sqrt{x^{\prime 2}}\):
\[x=\left(\pm\sqrt{\lambda^{2}+y^{2}}+\sqrt{\alpha^{2}-b^{2}y^{2}}\right)\hat{y} +by.\]
If \(y\) is given, then the \(+\) sign describes the points above the holomorphic envelope (i.e. with larger \(x_{0}\) values), while the \(-\) sign corresponds to points below the holomorphic envelope. Eliminating the parameter \(b\) from the last equation yields (in the case of \(+\)):
\[x_{0}=\hat{y}_{0}\sqrt{\lambda^{2}+y^{2}}+\sqrt{\alpha^{2}+\left(x_{1}-\hat{y} _{1}\sqrt{\lambda^{2}+y^{2}}\right)^{2}}.\]
where:
\[\alpha\geq 0,\quad\lambda\geq\max\left\{m_{2}-\alpha,\alpha-m_{1}\right\}.\]
\(x_{0}\) is strictly monotonically increasing in \(\alpha\); moreover, as differentiation shows, \(\frac{\partial x_{0}}{\partial\lambda}>0\), because the derivative of the first term of \(x_{0}\) is positive and has a magnitude greater than that of the second term. This means that by decreasing \(\alpha\) and \(\lambda\) while holding \(x_{1}\) and \(y\) constant, we obtain smaller \(x_{0}\)-values. Thus, the smallest \(x_{0}\)-value is obtained when \(\lambda=m_{2}-\alpha\) and \(0\leq\alpha\leq\frac{m_{1}+m_{2}}{2}\). Substituting this value, we get:
\[x_{0}=\hat{y}_{0}\sqrt{\lambda^{2}+y^{2}}+\sqrt{\left(m_{2}-\lambda\right)^{2} +\left(x_{1}-\hat{y}_{1}\sqrt{\lambda^{2}+y^{2}}\right)^{2}}.\]
Which value of \(\lambda\), \(\frac{m_{2}-m_{1}}{2}\leq\lambda\leq m_{2}\), now yields the smallest value of \(x_{0}\)?
\[\frac{\partial x_{0}}{\partial\lambda}=\frac{\lambda}{\sqrt{\lambda^{2}+y^{2} }}\hat{y}_{0}+\frac{\lambda-m_{2}+\left(x_{1}-\hat{y}_{1}\sqrt{\lambda^{2}+y^{ 2}}\right)\hat{y}_{1}\frac{-\lambda}{\sqrt{\lambda^{2}+y^{2}}}}{\sqrt{\left(m _{2}-\lambda\right)^{2}+\left(x_{1}-\hat{y}_{1}\sqrt{\lambda^{2}+y^{2}}\right) ^{2}}}.\]
For which values of \(x_{1},y,\lambda\) is this expression positive?
Upon calculation, we find that \(\frac{\partial x_{0}}{\partial\lambda}>0\Longleftrightarrow\)
\(x_{1}^{2}-2\frac{m_{2}\hat{y}_{1}}{\lambda}\sqrt{\lambda^{2}+y^{2}}\,x_{1}+ \left(m_{2}-\lambda\right)^{2}\hat{y}_{0}^{2}+\left(\lambda^{2}+y^{2}\right) \left(\frac{2\hat{y}_{1}^{2}m_{2}}{\lambda}-\left(\hat{y}_{0}-\frac{m_{2}}{ \lambda}\right)^{2}\right)>0\).
This is a quadratic expression in \(x_{1}\). It describes an upward-opening parabola with fixed remaining parameters, and has, as can be calculated, no roots.
Therefore, \(\frac{\partial x_{0}}{\partial\lambda}>0\) always holds. So the smallest value of \(\lambda=\frac{m_{2}-m_{1}}{2}\) yields the boundary.
\[x_{0}=\hat{y}_{0}\sqrt{\left(\frac{m_{2}-m_{1}}{2}\right)^{2}+y^{2}}+\sqrt{ \left(\frac{m_{2}+m_{1}}{2}\right)^{2}+\left(x_{1}-\hat{y}_{1}\sqrt{\left( \frac{m_{2}-m_{1}}{2}\right)^{2}+y^{2}}\right)^{2}}.\]
The same procedure is applied for the lower boundary in the range \(y^{2}<0\). The claim is also obtained there.
For the case \(y^{2}=0,y\neq 0\), one has to take the closure of those points \(z\) with \(y^{2}<0\) that do not lie in \(H\left(\tilde{G}_{1}\cup T^{+}\cup T^{-}\right)\).
For \(z\in H\left(\tilde{G}_{1}\cup T^{+}\cup T^{-}\right)\), \(x\) lies precisely between the upper branches of the hyperbolas describing the boundary, given by
\[\left(x\pm\hat{y}\sqrt{\left(\frac{m_{2}-m_{1}}{2}\right)^{2}+y^{2}}\right)^{ 2}=\left(\frac{m_{2}+m_{1}}{2}\right)^{2}.\]
For \(y_{1}\longrightarrow y_{0}\neq 0\), this yields \(x_{0}\leq x_{1}\) sgn \(y_{0}\) sgn \(y_{1}\).
This proves the claim.
For \(y=0\), one obtains the desired result directly from the definition of \(N(G)\). \(\blacksquare\)
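The minimisation over the admissible parameters \((\alpha,\lambda)\) that produces the boundary \(F^{+}\) can also be checked by brute force. The following numerical sketch (an illustration added to this text, with arbitrarily chosen values of \(m_{1},m_{2},y,x_{1}\)) scans the region \(\alpha\geq 0\), \(\lambda\geq\max\{m_{2}-\alpha,\alpha-m_{1}\}\) and compares the smallest excluded value of \(x_{0}\) with the formula for \(F^{+}\) from Theorem 4.26.

```python
# Brute-force check of the minimisation in the proof of Theorem 4.26:
# over alpha >= 0, lambda >= max(m2 - alpha, alpha - m1), the smallest excluded
# value of x0 should coincide with F^+(x1, y). Test values are arbitrary.
import numpy as np

m1, m2 = 1.0, 3.0
y = np.array([0.3, 0.8])                  # spacelike: y^2 = 0.09 - 0.64 < 0
ysq = y[0] ** 2 - y[1] ** 2
yhat = np.sign(y[1]) / np.sqrt(-ysq) * np.array([y[1], y[0]])
x1 = 0.7

def x0_excluded(alpha, lam):
    r = np.sqrt(lam ** 2 + ysq)           # real, since lam >= (m2 - m1)/2 here
    return yhat[0] * r + np.sqrt(alpha ** 2 + (x1 - yhat[1] * r) ** 2)

r0 = np.sqrt(((m2 - m1) / 2) ** 2 + ysq)
F_plus = yhat[0] * r0 + np.sqrt(((m2 + m1) / 2) ** 2 + (x1 - yhat[1] * r0) ** 2)

best = np.inf
for alpha in np.linspace(0.0, 6.0, 601):
    lam = np.linspace(max(m2 - alpha, alpha - m1), 8.0, 801)
    best = min(best, x0_excluded(alpha, lam).min())

print(F_plus, best)                       # the two values agree to grid accuracy
assert abs(best - F_plus) < 1e-9
```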
## 5 Application of holomorphic continuation to a mass gap situation
With the preparations from the previous sections, we can now prove the following statement:
**Theorem 5.1**: _Let \(\{\mathcal{A}(O),\mathcal{A},M,\alpha\}\) be a theory of local observables with the covariant factor representation \(\pi\). For the minimal \(U\) corresponding to \(\pi\), spec \(U\) cannot be restricted in the following way:_
\[\operatorname{spec}\,U\subset\left\{p\mid m^{2}\leq p^{2}\leq m_{1}^{2},p_{0}>0 \right\},\quad m<m_{1}<\infty\]
_where \(m\) is chosen maximally, i.e., \(\left\{p\mid p^{2}=m^{2},p_{0}>0\right\}\subset\operatorname{spec}U\)._
**Proof** We assume that the spectrum of \(U\) is localized as follows:
\[\operatorname{spec}\,U\subset\left\{p\mid m^{2}\leq p^{2}\leq m_{1}^{2},p_{0} >0\right\}\subset V^{+}\]
and the spectrum starts exactly at \(m\), and we will use these assumptions to arrive at a contradiction. For \(t\in V^{+}\), we choose \(x\in\mathcal{A}\left(D_{-t,t}\right)\) and define the functions:
\[F_{x,\Psi}^{+}(a) =(\Psi,\pi(x^{*})U(a)\pi(x)\Psi)\ \text{ and }\] \[F_{x,\Psi}^{-}(a) =(\Psi,\pi(\alpha_{a}x)\pi(x^{*})U(a)\Psi)=\left(\Psi,U(a)\pi(x)U(-a)\pi(x^{*})U(a)\Psi\right).\]
Due to Axiom 2 (locality), we have:
\[F_{x,\Psi}(a):=F_{x,\psi}^{+}(a)-F_{x,\psi}^{-}(a)=0,\text{ if }a\in\left(D_{-2t,2t}\right)^{\prime}\]
since \(\pi\left(\alpha_{a}x\right)\) and \(\pi\left(x^{*}\right)\) commute in this case.
Therefore, since \(\operatorname{supp}\,F_{x,\psi}\subset\left(-2t+V^{+}\right)\cup\left(2t+V^{-}\right)\), we can split \(F_{x,\psi}\) into \(F_{x,\psi}=G^{+}-G^{-}\), such that
\[\operatorname{supp}G^{+}\subset-2t+V^{+}\quad\operatorname{supp}G^{-}\subset 2 t+V^{-}.\]
Outside of \(D_{-2t,2t}\), this splitting is unique.
Due to Theorem 4.6, the following holds for the Fourier transforms:
\(\widetilde{G^{+}}\) is the boundary value in the distributive sense of a holomorphic function in \(T^{+}\), which is also denoted by \(\widetilde{G^{+}}\). Similarly, \(\widetilde{G^{-}}\) is the boundary value of a holomorphic function in \(T^{-}\). Furthermore, for real points, the following holds:
\[\widetilde{G^{+}}(p)=\widetilde{G^{-}}(p)\quad\forall p\in\Gamma:=M\backslash \operatorname{supp}\widetilde{F_{x,\psi}}.\]
The Edge-of-the-Wedge Theorem 4.7 is tailored to this situation: There exists a holomorphic function \(G^{*}\) on \(T^{+}\cup T^{-}\cup\tilde{\Gamma}\) such that:
\[G^{*}\big{|}_{T^{+}}=\widetilde{G^{+}},\quad G^{*}\big{|}_{T^{-}}=\widetilde{G^{-}}\quad\text{ and }\quad G^{*}(p)=\widetilde{G^{+}}(p)=\widetilde{G^{-}}(p)\quad\text{for }p\in\Gamma.\]
Due to Lemma 3.1, further statements can be made about the shape of the region \(\Gamma\): \(\operatorname{supp}\widetilde{F_{x,\Psi}}\subset\operatorname{supp}\widetilde{F_{x,\Psi}^{+}}\cup\operatorname{supp}\widetilde{F_{x,\Psi}^{-}}\subset\operatorname{spec}U\cup\left(2S-\operatorname{spec}U\right)\), where \(S\) denotes the support of \(\Psi\).
If \(S\) is a small Borel set containing \(ms\) (where \(s\) is a vector in \(V^{+}\) with \(s^{2}=1\)), then we obtain:
\(\Gamma\supset\left\{p\mid p^{2}>m_{1}^{2},p_{0}>0\right\}\cup\left(D_{0,(2m+\epsilon)s}\right)^{\prime}\quad\text{for a small }\epsilon\) determined by \(S\).
Using Theorem 4.25, we see that \(G^{*}\) is also holomorphic in \(N:=M\backslash\left\{p\mid 0\leq p^{2}\leq m_{1}^{2}\right\}\). Since \(\widetilde{G^{+}}\) and \(\widetilde{G^{-}}\) are boundary values of \(G^{*}\) at real points, we have
\[\widetilde{G^{+}}(p)=\widetilde{G^{-}}(p)\quad\forall p\in N\]
Hence, we also have \(\widetilde{F_{x,\Psi}}\mid N=0\). Since \(\operatorname{supp}\widetilde{F_{x,\Psi}^{+}}\cap N=\emptyset\), it follows that \(\widetilde{F_{x,\Psi}^{-}}\mid N=0\).
Claim: \(q\in 2ms-\left\{p\mid p^{2}=m^{2},p_{0}>0\right\}\Rightarrow\exists\Psi_{0}\) with \(\operatorname{supp}\Psi_{0}\ni ms\) and \(x_{0}\in\mathcal{A}\left(D_{-t,t}\right)\) for a large \(t\), such that \(q\in\operatorname{supp}\widetilde{F_{x_{0},\Psi_{0}}^{-}}\).
Proof of this claim: For any neighborhood \(V\) of \(ms\), \(E(V)\neq 0\); therefore, by Lemma 3.2, \(\overline{K(V)}=\mathcal{H}\). Hence,
\[K^{\prime}(V):=\left\{\Phi\mid\Phi=\int_{V}\pi\left(x^{*}\right)dE(p)\Psi,\, \pi\left(x^{*}\right)\Psi\in K(V)\right\}\]
is dense in \(\mathcal{H}\). It follows that for any relatively compact \(W\) with \(W\cap\operatorname{spec}\,U\neq\emptyset\), there exists \(\Phi\in K^{\prime}(V)\) such that
\[\int_{W}(\Phi,dE(p)\Phi)\neq 0;\]
for otherwise one would have \(W\cap\operatorname{spec}U=\emptyset\). Now, if \(q\in 2ms-\{p\mid p^{2}=m^{2},p_{0}>0\}\), then for every relatively compact neighborhood \(U(q)\), there exists an \(S\ni ms\) and a relatively compact \(W\) with \(W\cap\operatorname{spec}U\neq\emptyset\) such that \(2S-W=U(q)\). Furthermore, there exist \(\Phi\in K^{\prime}(S)\) and \(\rho\in\mathcal{S}(M)\) with \(\rho\mid U(q)\equiv 1\), such that
\[\int_{W}\rho(p)(\Phi,dE(p)\Phi)\neq 0.\]
Write \(\Phi\) as \(\Phi=\int_{S}\pi\left(x_{0}^{*}\right)dE(s)\Psi_{0}\), then it is shown that \(q\in\mbox{supp}\widetilde{F_{x_{0},\Psi_{0}}^{-}}\), since \(U(q)\) can be chosen arbitrarily small. This proves the intermediate claim.
Since \(\left(2ms-\{p\mid p^{2}=m^{2},p_{0}>0\}\right)\cap N\neq\emptyset\), a contradiction to the location of the support of \(\widetilde{F_{x_{0},\Psi_{0}}^{-}}\) is obtained. This proves the theorem.
## 6 Outlook
With the formulation of the millennium problems in [10], the topic of the mass gap became more visible again. Although in [1] it is shown that a theory without mass gap is consistent with the axiomatic approach, the techniques elaborated in this article might still prove useful in that context. Relevant literature that combines methods of holomorphic continuation with the millennium problem and Yang-Mills theory seems to be rare, though.
Another interesting application for the techniques described in this article is the following: Special representations of the observable algebra are the factor representations, also known as superselection sectors. They represent the set of states with the same charge quantum numbers.
For example, consider the state that corresponds to the presence of a particle with a specific charge. The state corresponding to two of these particles plus a corresponding antiparticle again has the same charge. However, the possible energy-momentum values for the 3-particle system should belong to the same superselection sector again. Denoting the spectrum of the translations in the superselection sector \(A\) by \(S_{A}\), one would thus conjecture that (see [1]):
\[3\ S_{A}\subset S_{A}\]
A mathematical proof of this conjecture would be desirable. It is likely that it can be proven with the techniques of this article.
Figure 4: Portion of the coincidence domain contained in \(\Gamma\)
## Glossary
Table of commonly used symbols
\[\begin{array}{ll} M & \text{Minkowski space}\\ V^{+} & \text{Forward light cone}\\ V^{-} & \text{Backward light cone}\\ T^{+} & =\mathbb{R}^{n}+iV^{+}\text{, forward tube}\\ T^{-} & =\mathbb{R}^{n}+iV^{-}\text{, backward tube}\\ V^{+}_{\mu} & \left\{x\in V^{+}\mid x^{2}>\mu^{2}\right\}\\ R & \text{Set of spacelike points}\\ S^{\prime} & (S\subset\mathbb{R}^{n})\text{, set of spacelike points corresponding to each }x\in S\\ N(.) & \text{Set of parameters for admissible hyperbolas or hyperboloids}\\ N_{\infty}(.) & \text{Set of parameters for admissible lines or planes}\\ \mathcal{N}^{\prime} & (\mathcal{N}\subset\mathcal{B}(\mathcal{H}))\text{, commutant of }\mathcal{N}\\ D_{a,b} & \text{Double cone spanned by }a,b\in\mathbb{R}^{n}\\ H(G) & \text{Envelope of holomorphy of the domain }G\subset\mathbb{C}^{n}\\ \tilde{B} & (B\subset\mathbb{R}^{n}\text{ region}),\ \text{neighborhood of }B\text{ resulting from the Edge-of-the-Wedge-Theorem}\\ \tilde{f} & (f\text{ function}),\ \text{Fourier transform of }f \end{array}\]
## List of Figures
* 1 Positioning of the curve \(K\) in the proof of Theorem 4.17
* 2 Position of the curve \(C_{0}\)
* 3 Transformation of reciprocal radii applied to \(V^{+}_{m}\cup(D_{0,s})^{\prime}\)
* 4 Portion of the coincidence domain contained in \(\Gamma\)
|
2310.20584 | First measurement of kaonic helium-4 M-series transitions | In this paper we present the results of a new kaonic helium-4 measurement
with a 1.37 g/l gaseous target by the SIDDHARTA-2 experiment at the DA{\Phi}NE
collider. We measured, for the first time, the energies and yields of three
transitions belonging to the M-series. Moreover, we improved by a factor of about
three, the statistical precision of the 2p level energy shift and width induced
by the strong interaction, obtaining the most precise measurement for gaseous
kaonic helium, and measured the yield of the L{\alpha} transition at the
employed density, providing a new experimental input to investigate the density
dependence of kaonic atoms transitions yield. | F Sgaramella, D Sirghi, L Abbene, F Artibani, M Bazzi, D Bosnar, M Bragadireanu, A Buttacavoli, M Cargnelli, M Carminati, A Clozza, F Clozza, G Deda, R Del Grande, L De Paolis, K Dulski, L Fabbietti, C Fiorini, I Friscic, C Guaraldo, M Iliescu, M Iwasaki, A Khreptak, S Manti, J Marton, M Miliucci, P Moskal, F Napolitano, S Niedzwiecki, H Ohnishi, K Piscicchia, F Principato, A Scordo, M Silarski, F Sirghi, M Skurzok, A Spallone, K Toho, M Tuchler, O Vazquez Doce, C Yoshida, J Zmeskal, C Curceanu | 2023-10-31T16:18:29Z | http://arxiv.org/abs/2310.20584v1 | # First measurement of kaonic helium-4 M-series transitions
###### Abstract
In this paper we present the results of a new kaonic helium-4 measurement with a 1.37 g/l gaseous target by the SIDDHARTA-2 experiment at the DA\(\Phi\)NE collider. We measured, for the first time, the energies and yields of three transitions belonging to the M-series. Moreover, we improved by a factor of about three the statistical precision of the 2p level energy shift and width induced by the strong interaction, obtaining the most precise measurement for gaseous kaonic helium, and measured the yield of the L\({}_{\alpha}\) transition at the employed density, providing a new experimental input to investigate the density dependence of kaonic atom transition yields.
_Keywords_: Kaonic Helium, X-ray spectroscopy, atomic cascade, kaon-nucleon interaction
Submitted to: _J. Phys. G: Nucl. Part. Phys._
## Introduction
The study of kaonic atoms through X-ray spectroscopy provides valuable experimental data to probe the strong interaction between negatively charged kaons (K\({}^{-}\)) and nuclei at low energy. This technique allows one to extract information about the strong interaction between the kaon and the nucleus at threshold energy, by analyzing the shift and the broadening of the energy levels with respect to the values predicted by quantum electrodynamics (QED). Such data are crucial to constrain the quantum chromodynamics models in the non-perturbative regime with strangeness [1, 2, 3, 4].
A kaonic atom is formed when a K\({}^{-}\), with sufficiently low momentum, is stopped in a target and captured by an atom via the electromagnetic interaction. The capture occurs in a highly excited state, determined by the reduced mass of the system. After the capture, the kaonic atom experiences a series of de-excitation processes, including Coulomb de-excitation and external Auger emission [5], which bring the kaon to the ground state. These processes are accompanied by the emission of radiation which, for the transitions to the lower-lying level, is in the X-ray domain. Not all the kaons reach the ground state, since other processes may heavily influence the cascade. Among these processes, for exotic hydrogen and, to some extent, for helium, the Stark mixing is quite important [6]. The Stark effect is responsible for a drastic reduction of the X-ray yields to lower levels when the target density increases. For this reason, in addition to the study of the strong interaction, kaonic atom X-ray spectroscopy is a unique tool to investigate the de-excitation mechanisms that occur in kaonic atoms, by measuring the X-ray yields of various transitions. Since the competing processes become more prevalent with density, they play a significant role in determining the X-ray yields, and the experimental density dependence of the yields can serve as a test bench for various cascade models [7, 8].
In this context, a special role is played by kaonic helium. In the 1970s and 1980s, three different experiments [9, 10, 11] observed a large energy shift on the kaonic helium-4 2p level induced by the strong interaction, results which were in contradiction with the theoretical expectations [12, 13]. The so-called kaonic "helium puzzle" was solved by the E570 experiment at KEK [14], confirmed also by the SIDDHARTA experiment [15, 16]. More recent and precise measurements have been performed at J-PARC by E62 [17], using a liquid helium target, and by the SIDDHARTA-2 collaboration, employing a gaseous one [18, 19].
The study of kaonic helium is a topic of great interest [20], concerning both the X-ray energies and yields. Koike and Akaishi [21] developed a cascade model for kaonic helium, which reproduced the experimental data measured at liquid helium-4 density, but failed to match the data from the SIDDHARTA experiment [22], which included measurements at various gas densities. As a result, the dependence of the yields in kaonic helium across the whole density scale, from liquid to gas, is still an open issue, requiring experimental input.
In this work, we report the recent measurement of gaseous kaonic helium-4 performed by the SIDDHARTA-2 collaboration at the DA\(\Phi\)NE collider of INFN-LNF. For the first time, we successfully identified and measured three M-series transitions in kaonic helium-4 and we determined their X-ray yields. Additionally, we measured the X-ray yield for some L-series transitions at the density of 1.37 \(\pm\) 0.07 g/l, providing new valuable information on kaonic helium cascade process. Finally, we performed a new measurement of the 2p level energy shift and width, with threefold improved statistical precision with respect to the previous measurements performed with gaseous helium, setting a new record for gaseous targets.
## 1 The SIDDHARTA-2 experiment
The SIDDHARTA-2 apparatus (see Figure 1) is installed above the interaction region (IR) of the DA\(\Phi\)NE collider [23] at the National Institute of Nuclear Physics in Frascati (INFN-LNF). Low momentum (p = 127 MeV/c) and monochromatic (\(\Delta\)p/p = 0.1%) kaons are delivered via the \(\phi\)-decay into a K\({}^{+}\)K\({}^{-}\) pair.
The main goal of the experiment is to perform the first measurement of the strong interaction induced shift and width of the fundamental level in kaonic deuterium. This measurement, combined with the kaonic hydrogen one already performed by SIDDHARTA [24], will allow extracting, for the first time, the experimental isospin dependent antikaon-nucleon scattering lengths [4]. To face the challenging kaonic deuterium measurement, the SIDDHARTA-2 collaboration developed a completely new apparatus with respect to the one used for the kaonic hydrogen measurement. The core of the setup consists of a cryogenic cylindrical target made of 150 \(\mu\)m kapton walls and a high purity aluminium frame to ensure an efficient cooling. The target can be filled with different types of gases. The cooling system permits to cool the gas down to 20 K, while the pressure can be tuned up to 1.4 bar to optimize the kaons' stopping efficiency and perform studies at different densities. The target is surrounded by 384 Silicon Drift Detectors (SDDs), covering an active area of 245.8 cm\({}^{2}\). The SDDs have been developed by Fondazione Bruno Kessler (FBK) in collaboration with INFN-LNF, Politecnico of Milano and the Stefan Meyer Institute (SMI), specifically for performing kaonic atoms measurements. The excellent energy and time resolutions (FWHM), 157.8\(\pm\)0.3 eV at 6.4 keV [25] and 500 ns [26], respectively, are fundamental for the background reduction and, consequently, the success of the measurement. Another factor of merit of the setup is the capability to determine the X-ray energy with a systematic error of a few eV [27], making SIDDHARTA-2 the ideal experiment to perform high precision kaonic atoms X-ray spectroscopy.
Two types of background are considered, electromagnetic and hadronic. The electromagnetic one, asynchronous with the kaons' production, is generated by particles lost by DA\(\Phi\)NE's circulating beams due to the beam-gas interaction and the Touschek effect.
The kaon trigger (KT), consisting of two plastic scintillators placed above and below the interaction region, is used to detect the back-to-back emitted K\({}^{+}\)K\({}^{-}\) pairs. The coincidence between the two scintillators provides the trigger signal that allows to reject the hits on the SDDs not synchronous with kaons.
The hadronic background is related to the K\({}^{+}\) decay and the K\({}^{-}\) nuclear absorption resulting in the emission of particles (MIPs), mostly pions and muons, releasing a signal in the SDDs synchronous with the KT signal. To overcome this drawback, three different veto systems [28], placed behind the SDDs and around the vacuum chamber, are employed to detect and reject the MIP-induced signals. The setup is also equipped with a luminosity monitor [29], placed on the longitudinal plane in front of the IR, to monitor the background and measure the collider luminosity in real-time.
In 2022 the SIDDHARTA-2 setup was installed on DA\(\Phi\)NE and then optimized by performing measurements of kaonic helium transitions to the 2p level, which have much higher yields than the transitions to the 1s level in kaonic deuterium. The target cell was filled with helium-4 at the density of 1.37 \(\pm\) 0.07 g/l (1.1% liquid helium density). No veto systems were installed at that time; for this reason only the electromagnetic background reduction procedure is applied in this work.
## 2 Data selection
Figure 2 shows the inclusive energy spectrum acquired by the SDDs during the helium-4 run for a total integrated luminosity of 45 pb\({}^{-1}\). The spectrum displays fluorescence peaks which correspond to the X-ray emission of materials placed around the SDDs. Titanium and copper lines are from setup components inside the vacuum chamber, while the bismuth comes from the alumina ceramic boards behind the SDDs.
The high continuous background contribution prevents to directly observe the kaonic helium signal. In this context, the KT plays a crucial role. Only the events falling in a 5 \(\mu\)s time window in coincidence with a trigger signal are selected, rejecting a substantial fraction of the background. The time window width was tuned to enable the front-end electronics to
Figure 1: Schematic drawing of the SIDDHARTA-2 experimental apparatus with various elements of the setup indicated.
process and acquire the signals. However, there are cases where MIPs, generated by beam-beam and beam-gas interactions, can produce a trigger signal when they simultaneously pass through the KT scintillators. To distinguish between these MIP-induced triggers and those originating from K\({}^{+}\)K\({}^{-}\) pairs, a Time of Flight (TOF) analysis is employed. This technique relies on measuring the temporal difference between the trigger signal and the DA\(\Phi\)NE radio-frequency (RF), which serves as a collision reference. Figure 3 shows the mean time distribution measured by the two KT scintillators and the TOF cut used to reject the MIP-induced triggers.
In order to enhance the background rejection, the time difference between the KT signal and the time of X-ray detection was evaluated. This time distribution is shown in Figure 3; the main peak within the red dashed lines corresponds to hits on the SDDs in coincidence with the trigger, while the flat distribution is given by uncorrelated events. The combined use of the KT and SDD time information allowed the background to be reduced by a factor of \(\sim\)10\({}^{5}\), resulting in the final energy spectrum shown in Figure 4.
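The selection logic described in this section can be summarised schematically as follows; the sketch below is only an illustration of the two cuts (TOF acceptance for the trigger and the 5 \(\mu\)s trigger-hit coincidence window), with invented array names, toy data and placeholder cut values rather than the actual SIDDHARTA-2 data format.

```python
# Schematic illustration (toy data) of the two background-rejection steps:
# (i) a TOF cut selecting kaon-like triggers, and
# (ii) a coincidence window between accepted triggers and SDD hits.
import numpy as np

rng = np.random.default_rng(0)

hit_time = rng.uniform(0.0, 1.0e6, 200_000)        # SDD hit times (microseconds)
hit_energy = rng.uniform(2.0, 15.0, 200_000)       # keV
trigger_time = np.sort(rng.uniform(0.0, 1.0e6, 5_000))
trigger_tof = rng.normal(0.0, 1.0, 5_000)          # ns, relative to the kaon peak

TOF_CUT_NS = 2.5                                   # placeholder acceptance
WINDOW_US = 5.0                                    # coincidence window

kaon_triggers = trigger_time[np.abs(trigger_tof) < TOF_CUT_NS]

# distance of every hit to the closest accepted trigger
idx = np.clip(np.searchsorted(kaon_triggers, hit_time), 1, len(kaon_triggers) - 1)
nearest = np.minimum(np.abs(hit_time - kaon_triggers[idx - 1]),
                     np.abs(hit_time - kaon_triggers[idx]))
selected = nearest < WINDOW_US

spectrum, edges = np.histogram(hit_energy[selected], bins=260, range=(2.0, 15.0))
print(f"kept {selected.sum()} of {selected.size} hits")
```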
## 3 Data analysis, results and discussion
After the selection procedure, the kaonic helium-4 M-series and L-series lines are clearly visible (Figure 4) in the energy regions 3.0 keV - 4.5 keV and 6.4 keV - 12 keV, respectively. Additionally, the X-rays lines corresponding to kaonic carbon, oxygen, nitrogen and aluminium, generated by kaons stopped in the apparatus support frame and in the kapton window of the target cell, are also detected. The measurements and a detailed investigation of
Figure 2: Inclusive kaonic helium-4 energy spectrum.
these lines are reported in [30].
The kaonic helium-4 peaks were fitted to extract their energies and the number of events associated with each line. The detector energy response function is given by the convolution of a Gaussian (Eq. 1) with a tail-function (Eq. 2) [31], used to account for the electron-hole recombination and the incomplete charge collection effect. The width (\(\sigma\)) of the Gaussian represents the energy resolution of the SDDs, and is described as a function of three parameters: the Fano Factor (FF) [32], the electron-hole pair creation energy (\(\epsilon\)), the electronic and
Figure 4: X-ray energy spectrum and fit of the data after the background suppression procedure (see text). The kaonic helium-4 L-series and M-series transitions are indicated.
Figure 3: Left: Two-dimensional scatter plot of KT time distributions. The coincidence events related to kaons (high intensity) are clearly distinguishable from MIPs (low intensity). Right: Time difference between the KT signals and X-ray hits on the SDDs. The dashed lines represent the acceptance window.
thermal noises (noise).
\[G(x) =\frac{A_{G}}{\sqrt{2\pi}\sigma}\cdot e^{\frac{-(x-x_{0})^{2}}{2\sigma^{2}}}\qquad\sigma=\sqrt{FF\cdot\varepsilon\cdot E+\frac{noise^{2}}{2.35^{2}}} \tag{1}\] \[T(x) =\frac{A_{T}}{2\beta\sigma}\cdot e^{\frac{x-x_{0}}{\beta\sigma}+\frac{1}{2\beta^{2}}}\cdot\operatorname{erfc}\left(\frac{x-x_{0}}{\sqrt{2}\sigma}+\frac{1}{\sqrt{2}\beta}\right) \tag{2}\]
The \(A_{G}\) and \(A_{T}\) are the amplitudes of the Gauss and Tail functions, respectively, while the \(\beta\) parameter is the slope of the tail; the _erfc_ term stands for the complementary error function.
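For concreteness, Eqs. (1)-(2) can be coded directly. The following Python sketch is only illustrative: the numerical values of the Fano factor, pair-creation energy and noise term are generic silicon-detector placeholders, not the fitted SIDDHARTA-2 parameters.

```python
# Direct coding of Eqs. (1)-(2); all parameter values are assumed placeholders.
import numpy as np
from scipy.special import erfc

FF = 0.118           # Fano factor (assumed)
EPS = 3.71e-3        # keV per electron-hole pair in silicon (assumed)
NOISE = 0.10         # keV, electronic + thermal noise term (assumed)

def sigma(E):
    """Energy-dependent Gaussian width, E in keV."""
    return np.sqrt(FF * EPS * E + (NOISE / 2.35) ** 2)

def gauss(x, x0, A_G, sig):
    return A_G / (np.sqrt(2.0 * np.pi) * sig) * np.exp(-(x - x0) ** 2 / (2.0 * sig ** 2))

def tail(x, x0, A_T, sig, beta):
    return (A_T / (2.0 * beta * sig)
            * np.exp((x - x0) / (beta * sig) + 1.0 / (2.0 * beta ** 2))
            * erfc((x - x0) / (np.sqrt(2.0) * sig) + 1.0 / (np.sqrt(2.0) * beta)))

def response(x, x0, A_G, A_T, beta):
    sig = sigma(x0)
    return gauss(x, x0, A_G, sig) + tail(x, x0, A_T, sig, beta)

x = np.linspace(6.0, 7.0, 1000)                               # keV
line_shape = response(x, 6.4614, A_G=1.0, A_T=0.1, beta=1.5)  # around L_alpha
```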
Since the 2p level could be affected by the strong interaction between the kaon and the nucleus, a Lorentzian function (Eq. 3), accounting for the intrinsic line width (\(\Gamma\)) induced by the strong interaction, convoluted with the Gaussian and the tail-function, was used to fit the L-series peaks.
\[L(x)=\frac{1}{\pi}\frac{\frac{1}{2}\Gamma}{\left(x-x_{0}\right)^{2}+\left( \frac{1}{2}\Gamma\right)^{2}} \tag{3}\]
For a full description of the spectrum shape, an exponential function plus a first degree polynomial were employed to reproduce the continuous background below the peaks. The global fit function, shown in Figure 4, properly reproduces the data distribution in the energy range from 3 to 12 keV, with a \(\chi^{2}/ndf=1.11\).
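A minimal sketch of such a composite line-shape model is given below: the Lorentzian of Eq. (3) folded with the Gaussian response is evaluated through the Voigt profile, and an exponential plus first-degree polynomial background is added. For brevity the low-energy tail term is omitted, the L\({}_{\beta}\) energy is only approximate, and all widths and amplitudes are arbitrary; this is an illustration, not the analysis code.

```python
# Illustrative composite model: Voigt core (Lorentzian (x) Gaussian) plus a
# smooth background; the tail term of Eq. (2) is omitted here for brevity.
import numpy as np
from scipy.special import voigt_profile

def l_line(x, x0, amp, gamma_fwhm_ev, sigma_kev):
    # third argument of voigt_profile is the Lorentzian HWHM, here in keV
    return amp * voigt_profile(x - x0, sigma_kev, gamma_fwhm_ev / 2.0 / 1000.0)

def background(x, const, slope, amp_exp, k):
    return const + slope * x + amp_exp * np.exp(-k * x)

x = np.linspace(3.0, 12.0, 2000)                                      # keV
model = (l_line(x, 6.4614, 1.0, gamma_fwhm_ev=2.0, sigma_kev=0.068)   # L_alpha
         + l_line(x, 8.72, 0.2, gamma_fwhm_ev=2.0, sigma_kev=0.075)   # ~L_beta
         + background(x, 0.02, -1.0e-3, 0.5, 0.3))
```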
### Kaonic helium-4 M-series transitions
Using the gaseous target allowed to observe and measure, for the first time, M-lines in kaonic helium, in particular the M\({}_{\beta}\), M\({}_{\gamma}\), and M\({}_{\delta}\) transitions. Their energies are reported in Table 1. The associated systematic errors were calculated taking into account the linearity and stability of the SDDs, as well as the calibration accuracy [27]. The M\({}_{\alpha}\) transition was not observed because, having an energy lower than 3 keV, it is absorbed by the target cell kapton walls.
### A new kaonic helium-4 L\({}_{\alpha}\) transition measurement
Assuming the effect of the strong interaction on the 3d level to be negligible, the energy shift (\(\epsilon_{2p}\)) has been extracted for the 2p level from the difference between the measured
\begin{table}
\begin{tabular}{l l c} \hline Transition & X-ray name & Energy \\ \hline
3d\(\rightarrow\)2p & \(L_{\alpha}\) & \(6461.4\pm 0.8\,(\mathrm{stat})\pm 2.0\,(\mathrm{sys})\) eV \\
5f\(\rightarrow\)3d & \(M_{\beta}\) & \(3300.8\pm 13.2\,(\mathrm{stat})\pm 2.0\,(\mathrm{sys})\) eV \\
6f\(\rightarrow\)3d & \(M_{\gamma}\) & \(3860.4\pm 13.6\,(\mathrm{stat})\pm 2.2\,(\mathrm{sys})\) eV \\
7f\(\rightarrow\)3d & \(M_{\delta}\) & \(4214.1\pm 19.6\,(\mathrm{stat})\pm 2.2\,(\mathrm{sys})\) eV \\ \hline \end{tabular}
\end{table}
Table 1: The measured energies of the kaonic helium-4 L\({}_{\alpha}\), M\({}_{\beta}\), M\({}_{\gamma}\), and M\({}_{\delta}\) transitions.
L\({}_{\alpha}\) transition energy (E\({}_{\rm 3d\to 2p}^{\rm exp}\)), reported in Table 1, and the electromagnetic value (E\({}_{\rm 3d\to 2p}^{\rm e.m}\)) calculated by considering vacuum polarization and the recoil effect [33]. The width (\(\Gamma_{2p}\)) is directly derived from the \(\Gamma\) parameter of the Lorentzian function used to fit the L\({}_{\alpha}\) peak. Therefore, the measured strong interaction induced shift and width of the 2p level in kaonic helium-4 are:
\[\epsilon_{2p}=\rm E_{\rm 3d\to 2p}^{\rm exp}-\rm E_{\rm 3d\to 2p}^{\rm e.m}=-1.9 \pm 0.8\,(stat)\pm 2.0\,(sys)\,\,eV \tag{4}\] \[\Gamma_{2p}=0.01\pm 1.60\,(stat)\pm 0.36\,(sys)\,\,eV \tag{5}\]
The systematic uncertainty on the shift is related to the accuracy of the SDDs calibration, whereas the one on the width is given by the inaccuracy on the SDDs energy resolution.
The reported results show that there is no sizeable effect of the strong interaction on the 2p level, confirming the past measurements on gaseous kaonic helium-4, and improving by a factor of three the statistical precision on the 2p level shift and width, making this the most precise measurement in a gas target.
### The L and M-series transitions X-ray yields
Monte Carlo simulations are used to evaluate the fraction of kaons stopping in the gas targets, which is necessary to extract the absolute X-ray yields. The absolute yield (Y) for an X-ray transition per stopped kaon is given by the ratio between the experimental detection efficiency (\(\epsilon^{EXP}\)) and the Monte Carlo efficiency (\(\epsilon^{MC}\)). The Monte Carlo simulation code is based on the GEANT4 toolkit, where all the materials and geometries used in the experiment were included. The \(\epsilon^{EXP}\) is obtained by normalizing the number of measured X-rays (\(N_{X-ray}^{exp}\)) to the number of kaon triggers (\(N_{KT}^{exp}\)) and the active area of the detectors. Similarly, the \(\epsilon^{MC}\) is given by the number of simulated X-rays (\(N_{X-ray}^{MC}\)), normalized to the number of simulated kaon triggers (\(N_{KT}^{MC}\)). The kaonic atom X-rays were generated at the position where the \(K^{-}\) stopped, and were isotropically emitted with a 100% yield for each transition and each kaonic atom. Hence, the absolute X-ray yield is given by:
\[Y=\frac{\epsilon^{EXP}}{\epsilon^{MC}}=\frac{N_{X-ray}^{exp}/N_{KT}^{exp}}{N_ {X-ray}^{MC}/N_{KT}^{MC}} \tag{6}\]
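Eq. (6) translates into a one-line function; the call below uses purely hypothetical numbers for illustration.

```python
# Transcription of Eq. (6): absolute yield per stopped kaon as the ratio of the
# experimental to the Monte Carlo detection efficiency.
def absolute_yield(n_x_exp, n_kt_exp, n_x_mc, n_kt_mc):
    """All Monte Carlo X-rays are generated with a 100% emission yield."""
    return (n_x_exp / n_kt_exp) / (n_x_mc / n_kt_mc)

# illustrative call only (all four numbers are made up)
print(absolute_yield(n_x_exp=9.0e3, n_kt_exp=2.0e6, n_x_mc=8.0e4, n_kt_mc=2.0e6))
```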
We extracted the number of events for each kaonic helium-4 transition from the fit of the energy spectrum (Figure 4), to evaluate the X-ray yields of the L and M-series transitions. The number of events for each transition is listed in Table 2 with the statistical uncertainty given by the fit. The absolute yields for the kaonic helium-4 L\({}_{\alpha}\) and M\({}_{\beta}\) transitions were obtained by applying Eq. (6) and are reported in Table 3 with their statistical and systematic uncertainties. The relative yields of the L\({}_{\beta}\), L\({}_{\gamma}\), M\({}_{\gamma}\), and M\({}_{\delta}\) transitions, taking into account the energy-dependent detection efficiency, were also evaluated and are reported in Table 3.
The main source of systematic uncertainty for the yields measurements is related to the accuracy with which the gas density is known. The gas density is a key input for the Monte Carlo simulation, since it affects the number of kaons stopped in the gas target, and consequently the number of X-ray events. The helium gas density was determined by measuring the gas pressure and temperature. The uncertainties
of the temperature and pressure sensors are \(\pm 2\%\) and \(\pm 3.5\%\), respectively, leading to a density error of \(\pm 5\%\). Monte Carlo simulations were used to estimate the systematic error on the absolute yields due to the uncertainty on the gas density. Instead, for the relative yields the systematic error is negligible with respect to the statistical one.
It is worth underlining that the results shown in Table 3 represent the first experimental measurement of the M-series transition yields, providing new experimental data to optimize the cascade models for kaonic helium and, more generally, kaonic atoms. Furthermore, the measurement of the L\({}_{\alpha}\) X-ray yield at the density of 1.37 \(\pm\) 0.07 g/l provides a new experimental data point that, combined with the measurements performed by SIDDHARTA [22] and SIDDHARTINO [19], will allow kaonic atom cascade models to be checked and improved across the density scale (see Figure 5).
\begin{table}
\begin{tabular}{l l l} \hline \hline Transition & X-ray name & number of events \\ \hline
3d\(\rightarrow\)2p & \(L_{\alpha}\) & \(9158\pm 133\) \\
4d\(\rightarrow\)2p & \(L_{\beta}\) & \(1852\pm 62\) \\
5d\(\rightarrow\)2p & \(L_{\gamma}\) & \(139\pm 9\) \\
5f\(\rightarrow\)3d & \(M_{\beta}\) & \(289\pm 36\) \\
6f\(\rightarrow\)3d & \(M_{\gamma}\) & \(306\pm 33\) \\
7f\(\rightarrow\)3d & \(M_{\delta}\) & \(365\pm 55\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Number of events for the kaonic helium-4 L-series and M-series transitions, obtained by the fit of the spectrum in Figure 4.
\begin{table}
\begin{tabular}{l l} \hline \hline Density & 1.37 \(\pm\) 0.07 g/l \\ \hline \(L_{\alpha}\)_yield_ & \(0.119\pm 0.002\,(\mathrm{stat})^{+0.006\,(\mathrm{sys})}_{-0.009\,(\mathrm{sys})}\) \\ \(M_{\beta}\)_yield_ & \(0.026\pm 0.003\,(\mathrm{stat})^{+0.010\,(\mathrm{sys})}_{-0.001\,(\mathrm{sys})}\) \\ \hline \(L_{\beta}\)/\(L_{\alpha}\) & \(0.172\pm 0.008\,(\mathrm{stat})\) \\ \(L_{\gamma}\)/\(L_{\alpha}\) & \(0.012\pm 0.001\,(\mathrm{stat})\) \\ \(M_{\beta}\)/\(L_{\alpha}\) & \(0.218\pm 0.029\,(\mathrm{stat})\) \\ \(M_{\gamma}\)/\(M_{\beta}\) & \(0.48\pm 0.11\,(\mathrm{stat})\) \\ \(M_{\delta}\)/\(M_{\beta}\) & \(0.43\pm 0.12\,(\mathrm{stat})\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The absolute yields of the kaonic helium-4 L\({}_{\alpha}\) and M\({}_{\beta}\) transitions and the relative yields of L\({}_{\beta}\), L\({}_{\gamma}\), M\({}_{\gamma}\), and M\({}_{\delta}\) transitions
## 4 Conclusions
In this work, we presented a comprehensive investigation of kaonic helium-4 through X-ray spectroscopy. The excellent SDD energy response and the remarkable background suppression of SIDDHARTA-2 allowed us to detect and measure, for the first time, the energies of three M-series transitions. Our measurement of the X-ray yields for several M-series and L-series transitions in kaonic helium-4 provides new fundamental input to cascade models, potentially contributing to a deeper understanding of the de-excitation mechanisms within kaonic atoms. The measurement of the L\({}_{\alpha}\) yield at the density of 1.37 \(\pm\) 0.07 g/l, combined with the previous results obtained by SIDDHARTA [22] and SIDDHARTINO [19], offers a new opportunity to understand the density dependence of the kaonic helium yield. Moreover, the measurement of the 2p level energy shift and width improves the statistical accuracy by a factor of three, compared to the previous results with gaseous helium-4, definitely rejecting the hypothesis of a large energy shift and width.
The SIDDHARTA-2 outcome refines our understanding of the kaonic helium-4 system and contributes to the ongoing efforts to comprehend the strong interaction in the non-perturbative regime in systems with strangeness.
## Acknowledgments
We thank C. Capoccia from LNF-INFN and H. Schneider, L. Stohwasser, and D. Pristauz-Telsnigg from Stefan Meyer-Institut for their fundamental contribution in designing and building the SIDDHARTA-2 setup. We thank as well the DA\(\Phi\)NE staff for the excellent working conditions and permanent support. Part of this work was supported by the Austrian Science Fund (FWF): [P24756-N20 and P33037-N] and FWF Doctoral program No. W1252
Figure 5: K\({}^{-}\)\({}^{4}\)He L\({}_{\alpha}\) X-ray yield as function of the target density from this work, SIDDHARTINO [19], and SIDDHARTA [22].
N27; the EXOTICA project of the Ministero degli Affari Esteri e della Cooperazione Internazionale, PO22MO03; the Croatian Science Foundation under the project IP-2018-01-8570; the EU STRONG-2020 project (Grant Agreement No. 824093); the EU Horizon 2020 project under the MSCA (Grant Agreement 754496); the Japan Society for the Promotion of Science JSPS KAKENHI Grant No. JP18H05402; the Polish Ministry of Science and Higher Education grant No. 7150/E-338/M/2018 and the Polish National Agency for Academic Exchange (grant no. PPN/BIT/2021/1/00037); the EU Horizon 2020 research and innovation programme under project OPSVIO (Grant Agreement No. 101038099). The authors acknowledge support from the SciMat and qLife Priority Research Areas budget under the program Excellence Initiative--Research University at the Jagiellonian University. Catalina Curceanu acknowledges the University of Adelaide, where part of this work was done (under the George Southgate fellowship, 2023).
|
2301.00115 | Longtime Dynamics of Irrotational Spherical Water Drops: Initial Notes | In this note, we propose several unsolved problems concerning the
irrotational oscillation of a water droplet under zero gravity. We will derive
the governing equation of this physical model, and convert it to a quasilinear
dispersive partial differential equation defined on the sphere, which formally
resembles the capillary water waves equation but describes oscillation defined
on curved manifold instead. Three types of unsolved mathematical problems
related to this model will be discussed in observation of hydrodynamical
experiments under zero gravity: (1) Strichartz type inequalities for the
linearized problem (2) existence of periodic solutions (3) normal form reduction
and generic lifespan estimate. It is pointed out that all of these problems are
closely related to certain Diophantine equations, especially the third one. | Chengyang Shao | 2022-12-31T04:11:40Z | http://arxiv.org/abs/2301.00115v2 | # Longtime dynamics of irrotational spherical water drops: initial notes
###### Abstract.
In this note, we propose several unsolved problems concerning the irrotational oscillation of a water droplet under zero gravity. We will derive the governing equation of this physical model, and convert it to a quasilinear dispersive partial differential equation defined on the sphere, which formally resembles the capillary water waves equation but describes oscillation defined on a curved manifold instead. Three types of unsolved mathematical problems related to this model will be discussed in view of hydrodynamical experiments under zero gravity1: (1) Strichartz type inequalities for the linearized problem (2) existence of periodic solutions (3) normal form reduction and generic lifespan estimate. It is pointed out that all of these problems are closely related to certain Diophantine equations, especially the third one.
Footnote 1: There are a number of visual materials on such experiments conducted by astronauts. See for example [https://www.youtube.com/watch?v=H_aPW2bxFl88t](https://www.youtube.com/watch?v=H_aPW2bxFl88t) or [https://www.youtube.com/watch?v=e6Faq1A=ISI&t](https://www.youtube.com/watch?v=e6Faq1A=ISI&t).
## 1. Capillary Spherical Water Waves Equation: Derivation
### Water Waves Equation for a Bounded Water Drop
Compared to gravity water waves problems, the governing equation for a spherical droplet of water under _zero gravity_ takes a very different form. At first glance it looks similar to the water waves systems mentioned above, but some crucial differences arise after careful analysis. To the author's knowledge, besides those dealing with the generic free-boundary Euler equation ([10], [11]), the only reference on this problem is Beyer-Gunther [1], in which the local well-posedness of the equation is proved using a Nash-Moser type implicit function theorem. We will briefly describe known results for gravity water waves problems in the next subsection.
To start with, let us pose the following assumptions on the fluid motion that we try to describe:
* (A1) The perfect, irrotational fluid of constant density \(\rho_{0}\) occupies a smooth, compact region in \(\mathbb{R}^{3}\).
* (A2) There is no gravity or any other external force in presence.
* (A3) The air-fluid interface is governed by the Young-Laplace law, and the effect of air flow is neglected.
We assume that the boundary of the fluid region has the topological type of a smooth compact orientable surface \(M\), and is described by a time-dependent embedding \(\iota(t,\cdot):M\to\mathbb{R}^{3}\). We will denote a point on \(M\) by \(x\), the image of \(M\) under \(\iota(t,\cdot)\) by \(M_{t}\), and the region enclosed by \(M_{t}\) by \(\Omega_{t}\). The outer normal will be denoted by \(N(\iota)\). We also write \(\bar{\nabla}\) for the flat connection on \(\mathbb{R}^{3}\).
Adopting assumption (A3), we have the Young-Laplace equation:
\[\sigma_{0}H(\iota)=p_{i}-p_{e},\]
where \(H(\iota)\) is the (scalar) mean curvature of the embedding, \(\sigma_{0}\) is the surface tension coefficient (which is assumed to be a constant), and \(p_{i},p_{e}\) are respectively the inner and exterior air pressure at the boundary; they are scalar functions on the boundary and we assume that \(p_{e}\) is a constant. Under assumptions (A1) and (A2), we obtain Bernoulli's equation, sometimes referred as the pressure balance condition, on the evolving
surface:
\[\frac{\partial\Phi}{\partial t}\bigg{|}_{M_{t}}+\frac{1}{2}\left|\bar{\nabla}\Phi|_{M_{t}}\right|^{2}-p_{e}=-\frac{\sigma_{0}}{\rho_{0}}H(\iota), \tag{1.1}\]
where \(\Phi\) is the velocity potential of the velocity field of the fluid. Note that \(\Phi\) is determined up to a function in \(t\), so we shall leave the constant \(p_{e}\) around for convenience reasons that will be explained shortly. According to assumption (A1), the function \(\Phi\) is a harmonic function within the region \(\Omega_{t}\), so it is uniquely determined by its boundary value, and the velocity field within \(\Omega_{t}\) is \(\bar{\nabla}\Phi\). The kinematic equation on the free boundary \(M_{t}\) is naturally obtained as
\[\frac{\partial\iota}{\partial t}\cdot N(\iota)=\bar{\nabla}\Phi|_{M_{t}}\cdot N (\iota). \tag{1.2}\]
Finally, we would like to discuss the conservation laws for (1.1)-(1.2). The preservation of volume \(\text{Vol}(\Omega_{t})=\text{Vol}(\Omega_{0})\) is a consequence of incompressibility. The system describes an Eulerian flow without any external force, so the center of mass moves at a uniform speed along a fixed direction, i.e.
\[\frac{1}{\text{Vol}(\Omega_{0})}\int_{\Omega_{t}}Pd\text{Vol}(P)=V_{0}t+C_{0}, \tag{1.3}\]
with Vol being the Lebesgue measure, \(P\) marking points in \(\mathbb{R}^{3}\), \(V_{0}\) and \(C_{0}\) being the velocity and starting position of center of mass respectively. Furthermore, the total momentum is conserved, and since the flow is a potential incompressible one, the conservation of total momentum is expressed as
\[\int_{M_{t}}\rho_{0}\Phi N(\iota)d\text{Area}(M_{t})\equiv\rho_{0}\text{Vol}( \Omega_{0})V_{0}. \tag{1.4}\]
Most importantly, it is not surprising that (1.1)-(1.2) is a Hamiltonian system (the Zakharov formulation for water waves; see Zakharov [10]), with Hamiltonian
\[\sigma_{0}\text{Area}(\iota)+\frac{1}{2}\int_{\Omega_{t}}\rho_{0}|\bar{\nabla }\Phi|^{2}d\text{Vol}=\sigma_{0}\text{Area}(M_{t})+\frac{1}{2}\int_{M_{t}} \rho_{0}\Phi|_{M_{t}}\left(\bar{\nabla}\Phi|_{M_{t}}\cdot N(\iota)\right)d \text{Area}, \tag{1.5}\]
i.e. potential energy proportional to surface area plus the kinetic energy of the fluid; the second expression for the kinetic term follows from Green's identity, since \(\Phi\) is harmonic in \(\Omega_{t}\).
### Converting to a Differential System
It is not hard to verify that the system (1.1)-(1.2) is invariant if \(\iota\) is composed with a diffeomorphism of \(M\); we may thus regard it as a _geometric flow_. If we are only interested in perturbation near a given configuration, we may reduce system (1.1)-(1.2) to a non-degenerate dispersive differential system concerning two scalar functions defined on \(M\), just as Beyer and Gunther did in [1]. In fact, during a short time of evolution, the interface can be represented as the graph of a function defined on the initial surface: if \(\iota_{0}:M\to\mathbb{R}^{3}\) is a fixed embedding close to the initial embedding \(\iota(0,x)\), we may assume that \(\iota(t,x)=\iota_{0}(x)+\zeta(t,x)N_{0}(x)\), where \(\zeta\) is a scalar "height" function defined on \(M_{0}\) and \(N_{0}\) is the outer normal vector field of \(M_{0}\).
With this observation, we shall transform the system (1.1)&(1.2) into a non-local system of two real scalar functions \((\zeta,\phi)\) defined on \(M\), where \(\zeta\) is the "height" function described as above, and \(\phi(t,x)=\Phi(t,\iota(t,x))\) is the boundary value of the velocity potential, pulled back to the underlying manifold \(M\).
The operator
\[B_{\zeta}:\phi\to\bar{\nabla}\Phi|_{M_{t}}\]
maps the pulled-back Dirichlet boundary value \(\phi\) to the boundary value of the gradient of \(\Phi\). We shall write
\[(D[\zeta]\phi)N(\iota)\]
for its normal part, where \(D[\zeta]\) is the Dirichlet-Neumann operator corresponding to the region enclosed by the image of \(\iota_{0}+\zeta N_{0}\). Thus
\[\frac{\partial\zeta}{\partial t}N_{0}\cdot N(\iota)=D[\zeta]\phi.\]
We also need to calculate the restriction of \(\partial_{t}\Phi\) on \(M_{t}\) in terms of \(\phi\) and \(\iota\). By the chain rule,
\[\frac{\partial\Phi}{\partial t}\bigg{|}_{M_{t}} =\frac{\partial\phi}{\partial t}-\bar{\nabla}\Phi|_{M_{t}}\cdot \frac{\partial\iota}{\partial t}\] \[=\frac{\partial\phi}{\partial t}-\left(\bar{\nabla}\Phi|_{M_{t}} \cdot N_{0}\right)\frac{\partial\zeta}{\partial t}\] \[=\frac{\partial\phi}{\partial t}-\frac{1}{N_{0}\cdot N(\iota)} \left(\bar{\nabla}\Phi|_{M_{t}}\cdot N_{0}\right)\cdot D[\zeta]\phi.\]
We thus arrive at the following nonlinear system:
(EQ(M)) \[\begin{cases}\frac{\partial\zeta}{\partial t}=\frac{1}{N_{0} \cdot N(\iota)}D[\zeta]\phi,\\ \frac{\partial\phi}{\partial t}=\frac{1}{N_{0}\cdot N(\iota)} \left(B_{\zeta}\phi\cdot N_{0}\right)\cdot D[\zeta]\phi-\frac{1}{2}|B_{\zeta} \phi|^{2}-\frac{\sigma_{0}}{\rho_{0}}H(\iota)+p_{e},\end{cases}\]
where \(\iota=\iota_{0}+\zeta N_{0}\).
Remark 1. We may obtain an explicit expression of \(B_{\zeta}\phi=\bar{\nabla}\Phi|_{M_{t}}\) in terms of \(\phi\) (together with the connection \(\nabla_{0}\) on the fixed embedding \(\iota_{0}(M)\)), just as standard references did for Euclidean or periodic water waves, but that is not necessary for our discussion at the moment. It is important to keep in mind that the preservation of volume and conservation of total momentum (1.3)-(1.4) convert to integral equalities for \((\zeta,\phi)\). These additional restrictions are not obvious from the differential equations (EQ(M)), though they can be deduced from (EQ(M)) since they are just rephrasings of the original physical laws (1.1)-(1.2).
Figure 1. The shape of the surface

For \(M=S^{2}\), the case that we shall discuss in detail, we refer to the system as the _capillary spherical water waves equation_. To simplify our discussion, we shall be working under the center of mass frame, and require the eigenmode \(\Pi^{(0)}\phi\) to vanish for all \(t\). This could be easily accomplished by absorbing the eigenmode into \(\phi\), since the equation is invariant under a shift of \(\phi\). In a word, from now on, we will be focusing on the
non-dimensional capillary spherical water waves equation
(EQ) \[\left\{\begin{aligned} \frac{\partial\zeta}{\partial t}& =\frac{1}{N_{0}\cdot N(\iota)}D[\zeta]\phi,\\ \frac{\partial\phi}{\partial t}&=\frac{1}{N_{0}\cdot N (\iota)}\left(B_{\zeta}\phi\cdot N_{0}\right)\cdot D[\zeta]\phi-\frac{1}{2}|B _{\zeta}\phi|^{2}-H(\iota),\end{aligned}\right.\]
where \(\iota=(1+\zeta)\iota_{0}\), and \(\Pi^{(0)}\phi\equiv 0\). We assume that the total volume of the fluid is \(4\pi/3\), so that the preservation of volume is expressed as
\[\frac{1}{3}\int_{S^{2}}(1+\zeta)^{3}d\mu_{0}\equiv\frac{4\pi}{3}, \tag{1.6}\]
where \(\mu_{0}\) is the standard measure on \(S^{2}\). The inertial movement of center of mass (1.3) and conservation of total momentum (1.4) under our center of mass frame are expressed respectively as
\[\int_{S^{2}}(1+\zeta)^{4}N_{0}d\mu_{0}=0,\quad\int_{S^{2}}\phi N(\iota)d\mu( \iota)=0, \tag{1.7}\]
where \(\mu(\iota)\) is the induced surface measure. Further, the Hamiltonian of the system is
\[\mathbf{H}[\zeta,\phi]=\text{Area}(\iota)+\frac{1}{2}\int_{S^{2}}\phi\cdot D[ \zeta]\phi\cdot d\mu(\iota), \tag{1.8}\]
and for a solution \((\zeta,\phi)\) there holds \(\mathbf{H}[\zeta,\phi]\equiv 4\pi\).
Up to this point, we are still working within the realm of well-established frameworks. We already know that the general free-boundary Euler equation is locally well-posed due to the work of [10] or [11], and, due to the curl equation, the curl-free condition persists during the evolution. On the other hand, the Cauchy problem of system (1.1) and (1.2) is known to be locally well-posed, due to Beyer and Gunther in [11]. They used an iteration argument very similar to a Nash-Moser type argument, in the sense that it involves multiple scales of Banach spaces and "tame" maps. Finally, it is not hard to transplant the potential-theoretic argument of Wu [26] to prove the local well-posedness.
To sum up, we already know that the system (EQ(M)) for a compact orientable surface \(M\) (hence (EQ) specifically) is locally well-posed. But this is all we can assert for the motion of a water droplet under zero gravity. In the following part of this note, we will propose several questions and conjectures concerning the long-time behaviour of water droplets under zero gravity.
### Previous Works on Water Waves
There have already been several different approaches to describing the motion of a perfect fluid with free boundary. One is to consider the motion of a perfect fluid occupying an arbitrary domain in \(\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\), either with or without surface tension. The motion is described by a free boundary value problem for the Euler equation. This generic approach was employed by Coutand-Shkoller [10] and Shatah-Zeng [11]. Both groups proved the local well-posedness of the problem. This approach has the advantage of being very general, applicable to all geometric shapes of the fluid.
On the other hand, when coming to potential flows of a perfect fluid, the curl free property results in _dispersive_ nature of the problem. The motion of a curl free perfect fluid under gravity and a free boundary value condition is usually referred to as _the gravity water waves problem_. The first breakthroughs in understanding local well-posedness were works of Wu [26][27], who proved local well-posedness of the gravity water waves equation without any smallness assumption. Lannes [12] extended this to more generic bottom shapes. Taking surface tension into account, the problem becomes _gravity-capillary water waves_. Schweizer [13] proved local well-posedness with small Cauchy data of the gravity-capillary water waves problem, and Ming-Zhang [14] proved local well-posedness without smallness assumption.
Alazard-Metivier [1] and Alazard-Burq-Zuily [1] used para-differential calculus to obtain the optimal regularity for local well-posedness of the water waves equation, either with or without surface tension.
For discussion of long time behavior, it is important to take into account the dispersive nature of the problem. For linear dispersive properties, there is the work of Christianson-Hur-Staffilani [1]. For gravity water waves living in \(\mathbb{R}^{2}\), works on lifespan estimates include Wu [21] and Hunter-Ifrim-Tataru [13] (almost global result), Ionescu-Pusateri [20] and Alazard-Delort [1] and Ifrim-Tataru [14] (global result). For gravity water waves living in \(\mathbb{R}^{3}\), there are works of Germain-Masmoudi-Shatah [11] and Wu [21] (no surface tension), Germain-Masmoudi-Shatah [11] (no gravity), Deng-Ionescu-Pausader-Pusateri [15] (gravity-capillary water waves) and Wang [21] (gravity-capillary water waves with finite depth). These results all employed different forms of decay estimates derived from dispersive properties.
As for long time behavior of periodic water waves, Berti-Delort [1] considered gravity-capillary water waves defined on \(\mathbb{T}^{1}\), Berti-Feola-Pusateri [1] considered gravity water waves defined on \(\mathbb{T}^{1}\), Ionescu-Pusateri [22] considered gravity-capillary water waves defined on \(\mathbb{T}^{2}\), and obtained an estimate on the lifespan beyond standard energy method. All of the three groups used para-differential calculus and suitable normal form reduction; the results of Berti-Delort and Ionescu-Pusateri were proved for physical data of full Lebesgue measure.
To sum up, all results on the gravity water waves problem listed above are concerned with an equation for two scalar functions defined on a fixed flat manifold, being one of the following: \(\mathbb{R}^{1}\), \(\mathbb{R}^{2}\), \(\mathbb{T}^{1}\), \(\mathbb{T}^{2}\), sometimes called the "bottom" of the fluid. These two functions represent the geometry of the liquid-gas interface and the boundary value of the velocity potential, respectively. The manifold itself is considered as the bottom of the container in which all dynamics are performed. We observe that the differential equation (EQ(M)) is, mathematically, fundamentally different from the water waves equations that have been well-studied.
## 2. Initial Notes on Unsolved Problems
In this section, we propose unsolved problems related to the spherical capillary water waves system (EQ(M)) with \(M=S^{2}\). Not surprisingly, these problems all have deep backgrounds in number theory.
### Linearization Around the Static Solution
We are mostly interested in the stability of the static solution of (EQ(M)). A static solution should be a fluid region whose shape stays still, with motion being a mere shift within the space. In this case, we have \(p_{e}=0\) since the reference is relatively static with respect to the air. Moreover, the velocity field \(\bar{\nabla}\Phi\) and "potential of acceleration" \(\partial\Phi/\partial t\) must both be spatially uniform, so that the left-hand-side of the pressure balance condition (1.1) is a function of \(t\) alone. It follows that \(\iota(M)\) is always a compact embedded surface of constant mean curvature, hence in fact always an Euclidean sphere by the Alexandrov sphere theorem (see [23]), and we may just take \(M=S^{2}\). Moreover, since \(\iota(S^{2})\) should enclose a constant volume by incompressibility, the radius of that sphere does not change. After suitable scaling, we may assume that the radius is always \(1\), and \(\rho_{0}=1\), \(\sigma_{0}=1\), to make (1.1)-(1.2) non-dimensional. Finally, by choosing the center of mass frame, we may simply assume that the spatial shift is always zero, so that the velocity potential \(\Phi\equiv\Phi_{0}\), a real constant. It is harmless to fix it to be zero.
Thus, under our convention, a static solution of (1.1)-(1.2) takes the form
\[\left(\begin{array}{c}\iota(t,x)\\ \Phi(t,x)\end{array}\right)=\left(\begin{array}{c}\iota_{0}(x)\\ 0\end{array}\right), \tag{2.1}\]
where \(\iota_{0}\) is the standard embedding of \(S^{2}\) as \(\partial B(0,1)\subset\mathbb{R}^{3}\). Equivalently, this means that a static solution of (EQ(M)) under our convention must be \((\zeta,\phi)=(0,0)\). Note here that the Gauss map of \(\iota_{0}\) coincides with \(\iota_{0}\) itself.
We can now start our perturbation analysis around a static solution at the linear level. Let \(\mathcal{E}^{(n)}\) be the space of spherical harmonics of order \(n\), normalized according to the standard surface measure on \(S^{2}\). In particular, \(\mathcal{E}^{(1)}\) is spanned by three components of \(N_{0}\). Let \(\Pi^{(n)}\) be the orthogonal projection on \(L^{2}(S^{2})\) onto \(\mathcal{E}^{(n)}\), \(\Pi_{\leq n}\) be the orthogonal projection on \(L^{2}(S^{2})\) onto \(\bigoplus_{k\leq n}\mathcal{E}^{(k)}\), \(\Pi_{\geq n}\) be the orthogonal projection on \(L^{2}(S^{2})\) onto \(\bigoplus_{k\geq n}\mathcal{E}^{(k)}\). For \(\iota=(1+\zeta)\iota_{0}\), the linearization of \(-H(\iota)\) around the sphere \(\zeta\equiv 0\) is \(\Delta\zeta+2\zeta\), where \(\Delta\) is the Laplacian on the sphere \(S^{2}\); cf. the standard formula for the second variation of area in [1]. Then \(H^{\prime}(\iota_{0})\) acts on \(\mathcal{E}^{(n)}\) as the multiplier \(-(n-1)(n+2)\). Note that even if we consider the dimensional form of (EQ), there will only be an additional scaling factor \(\sigma_{0}/(\rho_{0}R^{2})\), where \(R\) is the radius of the sphere. On the other hand, the following solution formula for the Dirichlet problem on \(B(0,1)\) is well-known: if \(f\in L^{2}(S^{2})\), then the harmonic function in \(B(0,1)\) with Dirichlet boundary value \(f\) is determined by
\[u(r,\omega)=\sum_{n\geq 0}r^{n}(\Pi^{(n)}f)(\omega),\]
where \((r,\omega)\) is the spherical coordinate in \(\mathbb{R}^{3}\). Thus the Dirichlet-Neumann operator \(D[\iota_{0}]\) acts on \(\mathcal{E}^{(n)}\) as the multiplier \(n\). Note again that even if we consider the dimensional form (EQ), there will only be an additional scaling factor \(R^{-1}\).
Thus, setting
\[u=\Pi^{(0)}\zeta+\Pi^{(1)}\zeta+\sum_{n\geq 2}\sqrt{(n-1)(n+2)}\cdot\Pi^{(n)} \zeta+i\sum_{n\geq 1}\sqrt{n}\cdot\Pi^{(n)}\phi,\]
we find that the linearization of (EQ) around the static solution (2.1) is a linear dispersive equation
\[\frac{\partial u}{\partial t}+i\Lambda u=0, \tag{2.2}\]
where the \(3/2\)-order elliptic operator \(\Lambda\) is given by a multiplier
\[\Lambda=\sum_{n\geq 2}\sqrt{n(n-1)(n+2)}\cdot\Pi^{(n)}=:\sum_{n\geq 0} \Lambda(n)\Pi^{(n)}.\]
Note that \((\zeta,\phi)\) is completely determined by \(u\). At the linear level, there must hold \(\Pi^{(0)}u\equiv 0\) because the first variation of volume must be zero; and \(\Pi^{(1)}u\equiv 0\) because of the conservation laws (1.7).
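For concreteness, the first few nonzero linear frequencies are

\[\Lambda(2)=2\sqrt{2},\quad\Lambda(3)=\sqrt{30},\quad\Lambda(4)=6\sqrt{2},\quad\Lambda(5)=2\sqrt{35},\quad\Lambda(6)=4\sqrt{15}\simeq 15.49,\]

and the value \(\Lambda(6)=4\sqrt{15}\) will reappear as the lowest frequency of oscillation in the bifurcation analysis of rotationally symmetric periodic solutions below.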
Let us also re-write the original nonlinear system (EQ) into a form that better illustrates its perturbative nature. For simplicity, we use \(O(u^{\otimes k})\) to abbreviate a quantity that can be controlled by \(k\)-linear expressions in \(u\), and disregard its continuity properties for the moment. For example, \(\|u\|_{H^{1}}^{2}+\|u\|_{H^{2}}^{4}\) is an expression of order \(O(u^{\otimes 2})\) when \(u\to 0\).
Since the operator \(\Lambda\) acts degenerately on \(\mathcal{E}^{(0)}\oplus\mathcal{E}^{(1)}\), we should be more careful about the eigenmodes \(\Pi^{(0)}u\) and \(\Pi^{(1)}u\). The volume preservation equation (1.6) implies \(\partial_{t}\Pi^{(0)}\zeta=O(u^{\otimes 2})\). Projecting (EQ) to \(\mathcal{E}^{(1)}\), which is spanned by the components of \(N_{0}\), we obtain \(\partial_{t}\Pi^{(1)}\zeta=\Pi^{(1)}\phi+O(u^{\otimes 2})=O(u^{\otimes 2})\) since the conservation law (1.7) implies \(\Pi^{(1)}\phi=O(u^{\otimes 2})\); and \(\partial_{t}\Pi^{(1)}\phi=O(u^{\otimes 2})\) since \(H^{\prime}(\iota_{0})=-\Delta-2\) annihilates \(\mathcal{E}^{(1)}\). We can thus formally re-write the nonlinear system (EQ) as the following:
\[\frac{\partial u}{\partial t}+i\Lambda u=\mathfrak{N}(u), \tag{2.3}\]
with \(\mathfrak{N}(u)=O(u^{\otimes 2})\) vanishing quadratically as \(u\to 0\). Note that we are disregarding all regularity problems at the moment.
### Question at Linear Level
At the linear level, our first unanswered question is
**Question 1**.: _Does the solution of the linear capillary spherical water waves equation (2.2) satisfy a Strichartz type estimate of the form_
\[\|e^{it\Lambda}f\|_{L^{p}_{T}L^{q}_{x}}\lesssim_{T}\|f\|_{H^{s}},\]
_where \(L^{p}_{T}L^{q}_{x}=L^{p}([0,T];L^{q}(S^{2}))\), and the admissible indices \((p,q)\) and \(s\) should be determined?_
An answer to Question 1 should be important in understanding the _dispersive_ nature of linear capillary spherical water waves. For the Schrodinger equation on a compact manifold, a widely cited result was obtained by Burq-Gerard-Tzvetkov [1]:
**Theorem 2.1**.: _On a general compact Riemannian manifold \((M^{d},g)\), there holds_
\[\|e^{it\Delta_{g}}f\|_{L^{p}_{t}L^{q}_{x}([0,T]\times M)}\lesssim_{T}\|f\|_{H ^{1/p}(M)},\]
_where_
\[\frac{2}{p}+\frac{d}{q}=\frac{d}{2},\quad p>2.\]
The authors used a time-localization argument for the parametrix of \(\partial_{t}-i\Delta_{g}\) to prove this result. For the sphere \(S^{d}\), this inequality is not optimal. The authors further used a Bourgain space argument to obtain the optimal Strichartz inequality:
**Theorem 2.2**.: _Let \((S^{d},g)\) be the standard \(d\)-dimensional sphere. For a function \(f\in C^{\infty}(S^{d})\), there holds the Strichartz inequality_
\[\|e^{it\Delta_{g}}f\|_{L^{p}_{t}L^{q}_{x}([0,T]\times M)}\lesssim_{T}\|f\|_{H ^{s}(M)},\quad s>s_{0}(d)\]
_where_
\[s_{0}(2)=\frac{1}{8},\quad s_{0}(d)=\frac{d}{4}-\frac{1}{2},\,d\geq 3.\]
_Furthermore these inequalities are optimal in the sense that the Sobolev index \(s\) cannot be less than or equal to \(s_{0}(d)\)._
The proof is the consequence of two propositions. The first one is the "decoupling inequality on compact manifolds", in particular the following result proved by Sogge [21]:
**Proposition 2.1**.: _Let \(\Pi_{k}\) be the spectral projection to eigenspaces with eigenvalues in \([k^{2},(k+1)^{2}]\) on \(S^{d}\). Then there holds_
\[\|\Pi_{k}\|_{L^{2}\to L^{q}}\leq C_{q}k^{s(q)},\]
_where_
\[s(q)=\left\{\begin{array}{cc}\frac{d-1}{2}\left(\frac{1}{2}-\frac{1}{q}\right),&2\leq q\leq\frac{2(d+1)}{d-1}\\ \frac{d-1}{2}-\frac{d}{q},&\frac{2(d+1)}{d-1}\leq q\leq\infty.\end{array}\right.\]
_These estimates are sharp in the following sense: if \(h_{k}\) is a zonal spherical harmonic function of degree \(k\) on \(S^{d}\), then as \(k\to\infty\),_
\[\|h_{k}\|_{L^{q}}\simeq C_{q}k^{s(q)}\|h_{k}\|_{L^{2}}.\]
The second one is a Bourgain space embedding result:
**Proposition 2.2**.: _For a function \(f\in C^{\infty}_{0}(\mathbb{R}\times S^{d})\), define the Bourgain space norm_
\[\|f(t,x)\|_{X^{s,b}}:=\left\|\langle\partial_{t}+i\Delta_{g}\rangle^{b}f(t,x) \right\|_{L^{2}_{t}H^{s}_{x}}.\]
_Then for \(b>1/2\) and \(s>s_{0}(d)\), there holds_
\[\|f\|_{L^{4}(\mathbb{R}\times S^{d})}\leq C_{s,b}\|f\|_{X^{s,b}}.\]
The key ingredient for proving this proposition is the following number-theoretic result:
\[\#\{(p,q)\in\mathbb{N}^{2}:p^{2}+q^{2}=A\}=O(A^{\varepsilon}).\]
As for the optimality of the Strichartz inequality, the authors of [1] implemented standard results on Gauss sums.
The parametrix and Bourgain space argument can be repeated without essential change for the linear capillary spherical water waves equation (2.2), but this time the Bourgain space argument would be more complicated: the Bourgain space norm is now
\[\|f(t,x)\|_{X^{s,b}}:=\left\|\langle\partial_{t}+i\Lambda\rangle^{b}f(t,x)\right\|_{L^{2}_{t}H^{s}_{x}},\]
and the embedding result becomes
\[\|f\|_{L^{4}(\mathbb{R}\times S^{2})}\leq C_{s,b}\|f\|_{X^{s,b}}\]
for \(f\in C_{0}^{\infty}(\mathbb{R}\times S^{2})\) and all \(s>3\rho/8+1/8\), \(b>1/2\), where \(\rho\) is the infimum of all exponents \(\rho^{\prime}\) such that when \(A\to\infty\), the number
\[\#\left\{(n_{1},n_{2})\in\mathbb{N}^{2}:\,\frac{1}{2}\leq\frac{n_{2}}{n_{1}} \leq 2,\,|\Lambda(n_{1})+\Lambda(n_{2})-A|\leq\frac{1}{2}\right\}\leq C_{ \rho^{\prime}}A^{\rho^{\prime}}.\]
Some basic analytic number theory implies \(\rho=1/3\), and thus the range of \(s\) is \(s>1/4\). Surprisingly this index is not better than that predicted by the parametrix method. It remains unknown whether this index could be further optimized.
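The exponent \(\rho\) can also be probed numerically. The following MAGMA sketch is ours and not part of the original text (the sample values of \(A\) and the truncation bound \(400\) are arbitrary choices, the latter being ample for \(A\leq 10^{4}\)); it counts the pairs with \(2\leq n_{1}\leq n_{2}\leq 2n_{1}\) in the set above, which by the symmetry \(n_{1}\leftrightarrow n_{2}\) is roughly half of the full count.

> // Editor's sketch, not from the paper: count pairs (n1,n2), n1 <= n2 <= 2*n1,
> // with |Lambda(n1)+Lambda(n2)-A| <= 1/2, where Lambda(n) = Sqrt(n*(n-1)*(n+2)).
> Lam := func< n | Sqrt(RealField()!(n*(n-1)*(n+2))) >;
> for A in [100, 1000, 10000] do
>     cnt := 0;
>     for n1 in [2..400] do
>         for n2 in [n1..2*n1] do
>             if Abs(Lam(n1) + Lam(n2) - A) le 0.5 then cnt := cnt + 1; end if;
>         end for;
>     end for;
>     print A, cnt;
> end for;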
To close this subsection, we note that the capillary spherical water wave lives on a compact region, so dispersion does not carry energy away from a bounded region to infinity. This is a crucial difference between waves on compact regions and waves in Euclidean spaces. In particular, we do not expect decay estimates for \(e^{it\Lambda}f\). For the nonlinear problem (EQ), techniques like the vector field method (Klainerman-Sobolev type inequalities) do not apply.
### Rotationally Symmetric Solutions: Bifurcation Analysis
Illuminated by observations in hydrodynamical experiments under zero gravity, and suggested by the existence of standing gravity capillary water waves due to Alazard-Baldi [1], we propose the following conjecture:
**Conjecture 2.1**.: _There is a Cantor family of small amplitude periodic solutions to the spherical capillary water waves system (EQ)._
Let us conduct the bifurcation analysis that suggests why this conjecture should be true. By time rescaling, we aim to find solutions \((\zeta,\phi,\omega_{0})\) of the following system that are \(2\pi\)-periodic in \(t\):
\[\begin{cases}\omega_{0}\frac{\partial\zeta}{\partial t}=\frac{1}{N_{0}\cdot N (\iota)}D[\zeta]\phi,\\ \omega_{0}\frac{\partial\phi}{\partial t}=\frac{1}{N_{0}\cdot N(\iota)}\left( B_{\zeta}\phi\cdot N_{0}\right)\cdot D[\zeta]\phi-\frac{1}{2}|B_{\zeta}\phi|^{2}-H( \iota),\end{cases} \tag{2.4}\]
together with the conservation laws (1.6)-(1.8). Here we refer \(\omega_{0}>0\) as the _fundamental frequency_. The linearization of this system at the equilibrium \((\zeta,\phi)=(0,0)\) is
\[L_{\omega_{0}}\begin{pmatrix}\zeta\\ \phi\end{pmatrix}:=\begin{pmatrix}\omega_{0}\partial_{t}&-D[0]\\ -\Delta-2&\omega_{0}\partial_{t}\end{pmatrix}\begin{pmatrix}\zeta\\ \phi\end{pmatrix}=0,\quad\Pi^{(0)}\zeta=\Pi^{(1)}\zeta=\Pi^{(1)}\phi=0. \tag{2.5}\]
We restrict to _rotationally symmetric_ solutions of the system: that is, water droplets which are always rotationally symmetric with a fixed axis. In addition, we require \(\zeta\) to be even in \(t\) and \(\phi\) to be odd in \(t\). The
solution thus should take the form
\[\zeta(t,x)=\sum_{j,n\geq 0}\zeta_{jn}\cos(jt)Y_{n}(x),\quad\phi(t,x)=\sum_{j\geq 1,n\geq 0}\phi_{jn}\sin(jt)Y_{n}(x),\]
where \(Y_{n}\) is the \(n\)'th zonal spherical harmonic, i.e. the (unique) normalized spherical harmonic of degree \(n\) that is axially symmetric. In spherical coordinates this means that \(Y_{n}(\theta,\varphi)=P_{n}(\cos\theta)\), where \(P_{n}\) is the \(n\)'th Legendre polynomial. Since \(\phi_{0n}\) are irrelevant we fix them to be \(0\). Then
\[L_{\omega_{0}}\begin{pmatrix}\zeta\\ \phi\end{pmatrix}=\sum_{j,n\geq 0}\begin{pmatrix}(-\omega_{0}j\zeta_{jn}-n \phi_{jn})\sin(jt)Y_{n}(x)\\ ((n-1)(n+2)\zeta_{jn}+\omega_{0}j\phi_{jn})\cos(jt)Y_{n}(x)\end{pmatrix}.\]
In order that \((\zeta,\phi)^{\rm T}\in{\rm Ker}L_{\omega_{0}}\), at the level \(n=0\), we must have \(\zeta_{j0}=\phi_{j0}=0\) for all \(j\geq 0\). At the level \(n=1\), we have \(\zeta_{01}=\phi_{01}=0\), and for \(j\geq 1\) there holds \(\omega_{0}j\zeta_{j1}-\phi_{j1}=0\) and \(\omega_{0}j\phi_{j1}=0\), so \(\zeta_{j1}=\phi_{j1}=0\) for all \(j\geq 0\). Hence \(\zeta_{jn},\phi_{jn}\) can be nonzero only for \(j\geq 1\) and \(n\geq 2\).
Consequently, \(L_{\omega_{0}}\) has a one-dimensional kernel if and only if the Diophantine equation
\[\omega_{0}^{2}j^{2}=n(n-1)(n+2),\quad j\geq 1,\,n\geq 2 \tag{2.6}\]
has exactly one solution \((j_{0},n_{0})\). If \(\omega_{0}\) has this property, then at the linear level, the lowest frequency of oscillation is
\[\omega_{0}j_{0}=\sqrt{n_{0}(n_{0}-1)(n_{0}+2)}.\]
We look into this Diophantine equation. Obviously \(\omega_{0}^{2}\) has to be a rational number. The equation is closely related to a family of elliptic curves over \(\mathbb{Q}\):
\[E_{c}:y^{2}=x(x-c)(x+2c)=x^{3}+cx^{2}-2c^{2}x,\quad c\in\mathbb{N}.\]
If we set \(a/b=\omega_{0}^{2}\) (irreducible fraction), then integral solutions of (2.6) are in 1-1 correspondence with integral points with natural number coordinates on the elliptic curve \(E_{ab}\), under the following map:
\[(j_{0},n_{0})\to(abn_{0},a^{2}bj_{0})\in E_{ab}.\]
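Indeed, the verification is a one-line computation: if \(\omega_{0}^{2}j^{2}=n(n-1)(n+2)\) with \(\omega_{0}^{2}=a/b\), then

\[(abn)(abn-ab)(abn+2ab)=(ab)^{3}\,n(n-1)(n+2)=(ab)^{3}\cdot\frac{a}{b}\,j^{2}=(a^{2}bj)^{2},\]

so \((x,y)=(abn,a^{2}bj)\) indeed lies on \(E_{ab}\).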
Thus we just need to find natural numbers \(a,b\) such that there is a unique (up to negation) integral point \((x,y)\in E_{ab}\), where \(x>0\) is divisible by \(ab\), and \(y\) is divisible by \(a^{2}b\). We seek \(n_{0}\) as small as possible with such property, which gives the lowest frequency of oscillation as small as possible. For \(ab=1,\cdots,50\), we find that if \(ab=15\), then the integral points on the elliptic curve \(E_{15}\) (up to negation in the Mordell-Weil group) are
\[(-30,0),\,(-5,50),\,(0,0),\,(15,0),\,(24,108),\,(90,900).\]
The only point \((x,y)\) with \(ab|x\) and \(a^{2}b|y\) is (90,900), which gives \(n_{0}=6\), and the lowest frequency \(\Lambda(n_{0})=\sqrt{n_{0}(n_{0}-1)(n_{0}+2)}\) of oscillation is \(4\sqrt{15}\simeq 15.49\cdots\).
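The same kind of filtering can be carried out for every \(c=ab\) at once. The following MAGMA sketch is ours, not the code of Appendix A (the variable names are ad hoc): for each \(c\leq 50\) it lists the integral points \((x,y)\) with \(x,y>0\), \(c\mid x\) and \(a^{2}b\mid y\) for some factorization \(c=ab\), printing \(c\), \(n_{0}=x/c\), the factorization \((a,b)\), and \(j_{0}=y/(a^{2}b)\).

> // Editor's sketch, not the author's code: list candidate points for each c = a*b.
> Qx<x> := PolynomialRing(Rationals());
> for c in [1..50] do
>     S := IntegralPoints(EllipticCurve(x^3 + c*x^2 - 2*c^2*x));
>     for P in S do
>         X := Integers()!P[1]; Y := Integers()!Abs(P[2]);
>         if X gt 0 and Y gt 0 and X mod c eq 0 then
>             for a in Divisors(c) do
>                 b := c div a;
>                 if Y mod (a^2*b) eq 0 then
>                     print c, X div c, a, b, Y div (a^2*b);
>                 end if;
>             end for;
>         end if;
>     end for;
> end for;

Keeping only the factorizations \(c=ab\) for which exactly one point qualifies reproduces the list below; for instance, for \(c=17\) with \((a,b)=(17,1)\) the point \((833,24276)\) of Appendix A gives \(n_{0}=833/17=49\) and \(j_{0}=24276/17^{2}=84\), hence \(\Lambda(n_{0})=\omega_{0}j_{0}=84\sqrt{17}\), and for \(c=26\) with \((a,b)=(26,1)\) the point \((1300,47320)\) gives \(n_{0}=50\), \(j_{0}=70\) and \(\Lambda(n_{0})=70\sqrt{26}\).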
There are other choices of \(a,b\). We list down the value of \(ab\) below 50, the corresponding \(n_{0}\) and the lowest frequency \(\Lambda(n_{0})\):
\[\begin{array}{ccc}ab&n_{0}&\Lambda(n_{0})\\ 15&6&4\sqrt{15}\\ 17&49&84\sqrt{17}\\ 22&9&6\sqrt{22}\\ 26&50&70\sqrt{26}\\ 42&7&3\sqrt{42}\\ 46&576&2040\sqrt{46}\\ 50&25&90\sqrt{2}\end{array}\]
See Appendix A for the MAGMA code used to find these values. This list suggests that \(n_{0}=6\) might be the smallest order that meets the requirement, but this remains unproved. We summarize these into the following number-theoretic question:
**Question 2**.: _For the family of elliptic curves_
\[E_{ab}:\,y^{2}=x(x-ab)(x+2ab),\quad a,b\in\mathbb{N},\]
_how many choices of \(a,b\in\mathbb{N}\) are there such that, there is exactly one integral point \((x,y)\in E_{ab}\) with \(x,y>0\) and \(ab|x\), \(a^{2}b|y\)? For such \(a,b\) and integral point \((x,y)\), is the minimal value of \(x/(ab)\) exactly 6, or is it smaller?_
Of course, a complete answer to Question 2 should imply a very clear understanding of periodic solutions of the spherical capillary water waves equation constructed using bifurcation analysis. But at this moment we are satisfied with existence, so we may pick any \(\omega_{0}=\sqrt{a/b}\) and \((j_{0},n_{0})\) that meet the requirement, for example the simplest case \(n_{0}=6\), and any of the following choices of \(\omega_{0}\) and \(j_{0}\):
\[\omega_{0}=\sqrt{15},\,j_{0}=4;\quad\omega_{0}=\sqrt{\frac{1}{15}},\,j_{0}=60 ;\quad\omega_{0}=\sqrt{\frac{3}{5}},\,j_{0}=20;\quad\omega_{0}=\sqrt{\frac{5} {3}},\,j_{0}=12.\]
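Each of these choices produces the same lowest oscillation frequency, as a direct check shows:

\[\sqrt{15}\cdot 4=\sqrt{\tfrac{1}{15}}\cdot 60=\sqrt{\tfrac{3}{5}}\cdot 20=\sqrt{\tfrac{5}{3}}\cdot 12=4\sqrt{15}=\Lambda(6).\]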
We thus refine our conjecture as follows:
**Conjecture 2.2**.: _Let \((n_{0},j_{0})\) be a pair of natural numbers with \(n_{0}\geq 2\), \(j_{0}\geq 1\), and set \(\omega_{0}=\Lambda(n_{0})/j_{0}\). Suppose that the only natural number solution of the Diophantine equation_

\[\omega_{0}^{2}j^{2}=\Lambda(n)^{2}=n(n-1)(n+2),\quad j\geq 1,\,n\geq 2\]

_is \((j,n)=(j_{0},n_{0})\). Then there is a Cantor set with positive measure of parameters \(\omega\), clustered near \(\omega_{0}\), such that the spherical capillary water waves equation (2.4) admits small amplitude periodic solutions with frequency \(\omega\)._
The counterpart of Conjecture 2.2 for gravity capillary standing water waves was proved by Alazard-Baldi [1] using a Nash-Moser type theorem. The key technique in their proof was to find a conjugation of the linearized operator of the gravity capillary water waves system on \(\mathbb{T}^{1}\) to an operator of the form
\[\omega\partial_{t}+iT+i\lambda_{1}|D_{x}|^{1/2}+i\lambda_{-1}|D_{x}|^{-1/2}+ \text{Operator of order}\leq-\frac{3}{2},\]
where \(T\) is an elliptic Fourier multiplier of order \(3/2\), and \(\lambda_{1},\lambda_{-1}\) are real constants. The frequency \(\omega\) lives in a Cantor type set that clusters around a given frequency so that the kernel of the linearized operator is 1-dimensional. With this conjugation, they were able to find periodic solutions of linearized problems required by Nash-Moser iteration.
It is expected that this technique could be transplanted to the equation (2.4), since our analysis for (2.6) suggests that the 1-dimensional kernel requirement for bifurcation analysis is met. It seems that the greatest technical issue is to find a suitable conjugation that takes the linearized operator of (2.4) to an operator of the form
\[\omega\partial_{t}+i(T_{3/2}+T_{1/2}+T_{-1/2})+\text{Operator of order}\leq- \frac{3}{2},\]
where each \(T_{k}\) is a real Fourier multiplier acting on spherical harmonics. The difficulty is that, since we are working with pseudo-differential operators on \(S^{2}\), the formulas of symbolic calculus are not as neat as those on flat spaces. It seems necessary to implement some global harmonic analysis for compact homogeneous spaces, e.g. extension of results collected in Ruzhansky-Turunen [14]. Unfortunately, those results do not include pseudo-differential operators with "rough coefficients" and para-differential operators, so it seems necessary to re-write the whole theory.
### Number-Theoretic Obstruction with Normal Form Reduction
As pointed out in Section 1, the system (2.3) is locally well-posed due to a result in [1]. General well-posedness results [13], [14] for free boundary value problem of Euler equation also apply. As for lifespan estimate for initial data \(\varepsilon\)-close to the static solution (2.1), it should not be hard to conclude that the lifespan should be bounded below by \(1/\varepsilon\). The result relates to the fact that the sphere is a _stable_ critical point of the area functional, cf. [1]. This is nothing new: a suitable energy inequality should imply it. However, although the clue is clear, the implementation is far from standard since we are working on a compact manifold. A rigorous proof still calls for hard technicalities.
Now we will be looking into the nonlinear equation (2.3) for its longer time behavior. Although appearing similar to the well-studied water waves equation in e.g. [12], [13], [14], [15], there is a crucial difference between the dispersive relation in (2.3) and the well-studied water waves equations: the dispersive relation exhibits a strong _rigidity property_, i.e. the arbitrary physical constants enter into the dispersive relation \(\Lambda\) only as _scaling factors_. For the gravity-capillary water waves, the linear dispersive relation reads
\[\sqrt{g|\nabla|+\sigma|\nabla|^{3}},\]
where \(g\) is the gravitational constant and \(\sigma\) is the surface tension coefficient. For the Klein-Gordon equation on a Riemannian manifold, the linear dispersive relation reads
\[\sqrt{-\Delta+m^{2}},\]
where \(m\) is the mass. In [12], Ionescu and Pusateri referred to such dispersive relations as having _non-degenerate dependence on physical parameters_, while in our context it is appropriate to refer to the dependence as _degenerate_. We will see that this crucial difference brings about severe obstructions for the long-time well-posedness of the system.
Following the idea of Delort and Szeftel [1], we look for a normal form reduction of (2.3) and explain why the rigidity property could cause obstructions. Not surprisingly, the obstruction is due to resonances, and strongly relates to the solvability of a Diophantine equation. Delort and Szeftel cast a normal form reduction to the small-initial-data problem of the quasilinear Klein-Gordon equation on the sphere and obtained an estimate on the lifespan longer than the one provided by the standard energy method. After their work, normal form reduction has been used by mathematicians to understand water waves on flat tori, for example [1], [1] and [12]. The idea was inspired by the normal form reduction method introduced by Shatah [15]: for a quadratic perturbation of a linear dispersive equation
\[\partial_{t}u+iLu=N(u)=O(u^{\otimes 2}),\]
using a new variable \(u+B(u,\bar{u})\) with a suitably chosen quadratic addendum \(B(u,\bar{u})\) can possibly eliminate the quadratic part of \(N\), thus extending the lifespan estimate beyond the standard \(1/\varepsilon\).
So we shall write the quadratic part of \(\mathfrak{N}(u)\) as
\[\sum_{n_{3}\geq 0}\Pi^{(n_{3})}\left[\sum_{n_{1}\geq 0}\sum_{n_{2}\geq 0} \mathcal{M}_{1}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}u\right)+\mathcal{M}_{2} \left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}\bar{u}\right)+\mathcal{M}_{3}\left(\Pi^{(n _{1})}\bar{u},\Pi^{(n_{2})}\bar{u}\right)\right],\]
where \(\mathcal{M}_{1},\mathcal{M}_{2},\mathcal{M}_{3}\) are complex bi-linear operators, following the argument of Section 4 in [1]. They are independent of \(t\) since the right-hand-side of the equation does not depend on \(t\) explicitly. Let's look for a diffeomorphism
\[u\to v:=u+\mathbf{B}[u,u]\]
in the function space \(C^{\infty}(S^{2})\), where \(\mathbf{B}\) is a bilinear operator, so that the equation (2.3) with quadratic nonlinearity reduces to an equation with cubic nonlinearity. The \(\mathbf{B}[u,u]\) is supposed to take the form \(\mathbf{B}[u,u]=\mathbf{B}_{1}[u,u]+\mathbf{B}_{2}[u,u]+\mathbf{B}_{3}[u,u]\), with
\[\mathbf{B}_{1}[u,u] =\sum_{n_{3}\geq 0}\sum_{n_{1},n_{2}\geq 0}b_{1}(n_{1},n_{2},n_{3} )\Pi^{(n_{3})}\mathcal{M}_{1}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}u\right),\] \[\mathbf{B}_{2}[u,u] =\sum_{n_{3}\geq 0}\sum_{n_{1},n_{2}\geq 0}b_{2}(n_{1},n_{2},n_{3} )\Pi^{(n_{3})}\mathcal{M}_{2}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}\bar{u}\right),\] \[\mathbf{B}_{3}[u,u] =\sum_{n_{3}\geq 0}\sum_{n_{1},n_{2}\geq 0}b_{3}(n_{1},n_{2},n_{3} )\Pi^{(n_{3})}\mathcal{M}_{3}\left(\Pi^{(n_{1})}\bar{u},\Pi^{(n_{2})}\bar{u} \right),\]
where the \(b_{j}(n_{1},n_{2},n_{3})\)'s are complex numbers to be determined. Implementing (2.3), we find
\[(\partial_{t} +i\Lambda)(u+\mathbf{B}[u,u])\] \[=\mathfrak{N}(u)+\sum_{n_{3}\geq 0}\sum_{n_{1},n_{2}\geq 0}b_{1}(n_{ 1},n_{2},n_{3})\Pi^{(n_{3})}\mathcal{M}_{1}\left(\Pi^{(n_{1})}\partial_{t}u, \Pi^{(n_{2})}u\right)\] \[\quad+\sum_{n_{3}\geq 0}\sum_{n_{1},n_{2}\geq 0}b_{1}(n_{1},n_{2},n _{3})\Pi^{(n_{3})}\mathcal{M}_{1}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}\partial_{t }u\right)+\text{(similar terms)}\] \[\quad+\sum_{n_{3}\geq 0}\sum_{n_{1},n_{2}\geq 0}i\Lambda(n_{3})b_{1}(n _{1},n_{2},n_{3})\Pi^{(n_{3})}\mathcal{M}_{1}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2} )}u\right)\] \[=\mathfrak{N}(u)+\sum_{n_{3}\geq 0}\sum_{\min(n_{1},n_{2})\leq 1}i \left[\Lambda(n_{3})-\Lambda(n_{1})-\Lambda(n_{2})\right]b_{1}(n_{1},n_{2},n_{ 3})\Pi^{(n_{3})}\mathcal{M}_{1}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}u\right)\] \[\quad+\sum_{n_{3}\geq 0}\sum_{n_{1},n_{2}\geq 2}i\left[\Lambda(n_{3}) -\Lambda(n_{1})-\Lambda(n_{2})\right]b_{1}(n_{1},n_{2},n_{3})\Pi^{(n_{3})} \mathcal{M}_{1}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}u\right)\] \[\quad+\text{(similar terms)}+O(u^{\otimes 3}).\]
We aim to eliminate most of the second order portions of \(\mathfrak{N}(u)\). The coefficients \(b_{j}(n_{1},n_{2},n_{3})\) are fixed as follows:
\[b_{1}(n_{1},n_{2},n_{3}) =i\left[\Lambda(n_{3})-\Lambda(n_{1})-\Lambda(n_{2})\right]^{-1}, \quad n_{1},n_{2},n_{3}\geq 2\] \[b_{2}(n_{1},n_{2},n_{3}) =i\left[\Lambda(n_{3})-\Lambda(n_{1})+\Lambda(n_{2})\right]^{-1}, \quad n_{1},n_{2},n_{3}\geq 2\] \[b_{3}(n_{1},n_{2},n_{3}) =i\left[\Lambda(n_{3})+\Lambda(n_{1})+\Lambda(n_{2})\right]^{-1}, \quad n_{1},n_{2},n_{3}\geq 2\] \[b_{1,2,3}(n_{1},n_{2},n_{3}) =0,\quad\text{if }\Lambda(n_{3})\pm\Lambda(n_{1})\pm\Lambda(n_{2})=0 \text{ or }\min(n_{1},n_{2},n_{3})\leq 1, \tag{2.7}\]
then a large portion of the second order part of \(\Pi_{\geq 2}\mathfrak{N}\left(u\right)\) will be eliminated. In fact, for \(n_{1},n_{2},n_{3}\geq 2\), if \(\Lambda(n_{3})\pm\Lambda(n_{1})\pm\Lambda(n_{2})\neq 0\), then the term
\[\Pi^{(n_{3})}\left[\sum_{n_{1},n_{2}\geq 2}\mathcal{M}_{1}\left(\Pi^{(n_{1})}u, \Pi^{(n_{2})}u\right)+\mathcal{M}_{2}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}\bar{u }\right)+\mathcal{M}_{3}\left(\Pi^{(n_{1})}\bar{u},\Pi^{(n_{2})}\bar{u} \right)\right]\]
is cancelled out. On the other hand, by the volume preservation equality (1.6) and conservation law (1.7), there holds \(\Pi^{(0)}u=O(u^{\otimes 2})\), \(\Pi^{(1)}u=O(u^{\otimes 2})\), so the low-low interaction
\[\Pi_{\geq 2}\left[\sum_{\min(n_{1},n_{2})\leq 1}\mathcal{M}_{1}\left(\Pi^{(n_{1} )}u,\Pi^{(n_{2})}u\right)+\mathcal{M}_{2}\left(\Pi^{(n_{1})}u,\Pi^{(n_{2})}\bar{u }\right)+\mathcal{M}_{3}\left(\Pi^{(n_{1})}\bar{u},\Pi^{(n_{2})}\bar{u} \right)\right]\]
is automatically \(O(u^{\otimes 3})\).
Thus the existence and continuity of the normal form \(\mathbf{B}[u,u]\) depends on the property of the \(3\)-way resonance equation
\[\Lambda(n_{3})-\Lambda(n_{1})-\Lambda(n_{2})=0,\quad n_{1},n_{2},n_{3}\geq 2,\quad n_{3}\leq n_{1}+n_{2}, \tag{2.8}\]
which is equivalent to the Diophantine equation
\[[F(n_{1})+F(n_{2})-F(n_{3})]^{2}-4F(n_{1})F(n_{2})=0,\quad n_{1},n_{2},n_{3} \geq 2, \tag{2.9}\]
where \(F(X)=X(X-1)(X+2)\).
If the tuple \((n_{1},n_{2},n_{3})\) is non-resonant, i.e. it is such that \(b_{1,2,3}(n_{1},n_{2},n_{3})\neq 0\), then some elementary number theoretic argument will give a lower bound on the divisor \(|\Lambda(n_{3})\pm\Lambda(n_{1})\pm\Lambda(n_{2})|\) in terms of a negative power (which can be fixed as \(-9/2\)) of \(n_{1},n_{2},n_{3}\), and hence an upper bound on \(|b_{1,2,3}(n_{1},n_{2},n_{3})|\). This is usually referred to as a _small divisor estimate_.
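As a purely numerical illustration (not needed for the argument), already the lowest admissible triple \((n_{1},n_{2},n_{3})=(2,2,3)\) produces a rather small divisor:

\[\Lambda(3)-\Lambda(2)-\Lambda(2)=\sqrt{30}-4\sqrt{2}\approx 5.477-5.657=-0.180,\]

so \(|b_{1}(2,2,3)|\approx 5.6\) even though none of the frequencies involved is large.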
To study the distribution of resonant frequencies, we propose the following unsolved question:
**Question 3**.: _Does the Diophantine equation (2.9) have finitely many solutions?_
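While Question 3 seems out of reach in general, small solutions can at least be located by a direct search. The following MAGMA sketch is ours and not taken from the paper (the bound \(100\) is an arbitrary choice): for \(2\leq n_{1}\leq n_{2}\leq 100\) it tests whether \(F(n_{1})F(n_{2})\) is a perfect square and whether \(F(n_{1})+F(n_{2})+2\sqrt{F(n_{1})F(n_{2})}\) equals \(F(n_{3})\) for some \(n_{3}\leq n_{1}+n_{2}\), which is exactly the resonance condition (2.8).

> // Editor's sketch, not the author's code: brute-force search for solutions
> // of the resonance equation (2.8)/(2.9) with 2 <= n1 <= n2 <= 100.
> F := func< n | n*(n-1)*(n+2) >;
> for n1 in [2..100] do
>     for n2 in [n1..100] do
>         r := Isqrt(F(n1)*F(n2));
>         if r^2 eq F(n1)*F(n2) then
>             s := F(n1) + F(n2) + 2*r;   // candidate value of F(n3)
>             for n3 in [n2+1..n1+n2] do  // (2.8) forces n2 < n3 <= n1+n2
>                 if F(n3) eq s then print n1, n2, n3; end if;
>             end for;
>         end if;
>     end for;
> end for;

The inner loop only needs to run up to \(n_{1}+n_{2}\), since the projection \(\Pi^{(n_{3})}\) in (2.8) vanishes beyond that range.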
However, the Diophantine equation (2.9) does admit non-trivial solutions \((5,5,8)\) and \((10,10,16)\). In other words, the second order terms e.g.
\[\Pi^{(8)}\mathcal{M}_{1}(\Pi^{(5)}u,\Pi^{(5)}u),\quad\Pi^{(5)}\mathcal{M}_{2}(\Pi^{(8)}u,\Pi^{(5)}\bar{u}),\]
in the quadratic part of \(\mathfrak{N}(u)\) cannot be eliminated by normal form reduction. On the other hand, it seems to be very hard to determine whether (2.9) still admits any other solution. We have the following proposition (the author would like to thank Professor Bjorn Poonen for the proof):
**Proposition 2.3**.: _The Diophantine equation (2.9) has no solution with \(n_{1}\leq 10^{4}\) other than \((5,5,8)\) and \((10,10,16)\)._
The proof of this proposition is computer-aided. The key point is to use the so-called Runge's method to show that if \((n_{1},n_{2},n_{3})\) is a solution, then there must hold \(n_{2}=O(n_{1}^{2})\). For a given \(n_{1}\), this reduces the proof to numerical verification for _finitely many possibilities_. The algorithm can of course be further optimized, but due to some algebraic geometric considerations, it is reasonable to conjecture that the solution of (2.9) should be very rare. In fact, there are two ways of viewing the problem. We observe that if \((n_{1},n_{2},n_{3})\) is a solution, then \(F(n_{1})F(n_{2})\) must be a square, and the square free part of \(F(n_{1}),F(n_{2}),F(n_{3})\) must be the same. Further reduction turns the problem into finding integral points on a family of elliptic curves
\[Y^{2}=cF(X),\quad c\text{ is square-free},\]
which is of course difficult, but since Siegel's theorem asserts that there are only finitely many integer points on an elliptic curve over \(\mathbb{Q}\), it is reasonable to conjecture that there are not "too many" solutions to (2.9). We may also view the problem as finding integral (rational) points on a given algebraic surface. The complex projective surface corresponding to (2.9) is given by
\[\mathfrak{V}:[X(X-W)(X+2W)+Y(Y-W)(Y+2W)-Z(Z-W)(Z+2W)]^{2}\] \[=4X(X-W)(X+2W)Y(Y-W)(Y+2W),\]
where \([X,Y,Z,W]\) is the homogeneous coordinate on \(\mathbb{CP}^{3}\). With the aid of computer, we obtain
**Proposition 2.4**.: _The complex projective surface \(\mathfrak{V}\subset\mathbb{CP}^{3}\) has Kodaira dimension 2 (i.e. it is of general type under Kodaira-Enriquez classification), and its first Betti number is 0._
See Appendix A for the code.
The first part of the proposition suggests that the rational points of \(\mathfrak{V}\) should be localized on finitely many algebraic curves lying on \(\mathfrak{V}\); nevertheless, this seemingly simple suggestion is indeed a special case of the _Bombieri-Lang conjecture_, a hard problem in number theory (its planar case is known as the celebrated Faltings's theorem). The second part suggests that the rational points of \(\mathfrak{V}\) should be rare since its Albanese variety is a single point. But these are just heuristics that solutions to (2.9) should be rare. In general, determining the solvability of a given Diophantine equation is very difficult 2, as number theorists and arithmetic geometers generally believe.
Footnote 2: For example, the seemingly simple Diophantine equation \(x^{3}+y^{3}+z^{3}=42\) is in fact a puzzle of more than 60 years, and its first solution was found recently by Booker-Sutherland [14]. It is of extremely large magnitude: \(42=(-80\ 538\ 738\ 812\ 075\ 974)^{3}+80\ 435\ 758\ 145\ 817\ 515^{3}+12\ 602\ 123\ 297\ 335\ 631^{3}\). Another example is the equation of the same type \(x^{3}+y^{3}+z^{3}=3\). Beyond the easily found solutions (1, 1, 1), (4, 4, -5), (4, -5, 4), (-5, 4, 4), the next solution reads (569 936 821 221 962 380 720, -569 936 821 113 563 493 509, -472 715 493 453 327 032).
The reason that such issues do not occur for water waves in the flat setting or nonlinear Klein-Gordon equations is twofold. First of all, the resonance equation is easily understood even in the degenerate case in the flat setting. For example, the capillary water waves without gravity on \(\mathbb{T}^{2}\) has dispersive relation \(|\nabla|^{1/2}\), and the 3-way resonance equation is
\[\sqrt[4]{k_{1}^{2}+k_{2}^{2}}+\sqrt[4]{l_{1}^{2}+l_{2}^{2}}=\sqrt[4]{m_{1}^{2} +m_{2}^{2}},\]
with the additional requirement \(m=k+l\). We already know that the resonance equation has no non-trivial solution at all, cf. [1]. But even without using \(m=k+l\), we would be able to conclude that there are at most finitely many non-trivial solutions from the celebrated Faltings's theorem on rational points on high-genus algebraic projective curves (although this is like using a sledge hammer to crack a nut). Secondly, for the non-degenerate case, for example the gravity-capillary waves, the dispersive relation reads \(\sqrt{g|\nabla|+\sigma|\nabla|^{3}}\), so if the ratio \(\sigma/g\) is a transcendental number then the 3-way resonance equation has no solution. Furthermore, using some elementary calculus and a measure-theoretic argument, it can be shown, not without technicalities, that the resonances admit certain small-divisor estimates for almost all parameters. This is exactly the argument employed by Delort-Szeftel [15], Berti-Delort [1] and Ionescu-Pusateri [20], so that their results were stated for _almost all_ parameters. These parameters are, roughly speaking, badly approximated by algebraic numbers.
However, the resonance equation (2.8) is inhomogeneous and allows no arbitrary physical parameter at all. Furthermore, since products of spherical harmonics are no longer spherical harmonics in general, the Fourier series techniques employed by [1][1][20] that work for the torus are never valid for \(S^{2}\); for example, we cannot simply assume \(n_{3}=n_{1}+n_{2}\) in (2.8), as already illustrated by the solutions (5,5,8) and (10,10,16). These are the crucial differences between the capillary spherical water waves and all known results for water waves in the flat setting.
### Heuristics for Lifespan Estimate
To summarize, an almost global lifespan estimate for (2.3) depends on the difficult number-theoretic Question 3. Before it is fully resolved, we can only expect partial results regarding the normal form transformation.
If there are only finitely many solutions to the Diophantine equation (2.9), then under the normal form reduction \(u\to v=u+\mathbf{B}[u,u]\) with coefficients given by (2.7), the equation (2.3) is transformed into the following system:
\[\frac{\partial}{\partial t}\Pi_{c}v =O(v^{\otimes 2}),\] \[\frac{\partial}{\partial t}(1-\Pi_{c})v =O(v^{\otimes 3}),\]
where \(\Pi_{c}\) is the orthogonal projection to \(\bigoplus_{n_{3}}\mathcal{E}^{(n_{3})}\subset L^{2}(S^{2})\), with \(n_{3}\) being either \(0\) or \(1\), or exhausting the third component of all nontrivial solutions of (2.9).
There is no reasonable assertion to be made if the finiteness in Question 3 fails. However, if it does hold true, then we can expect that the lifespan estimate for \(\varepsilon\)-Cauchy data of (EQ) goes beyond \(\varepsilon^{-1}\), as we expect for gravity water waves in the periodic setting, e.g. in [10]:
**Conjecture 2.3**.: _If the Diophantine equation (2.9) has only finitely many solutions, then there is some \(\alpha>0\) such that for \(\varepsilon\)-Cauchy data of (EQ), the lifespan goes beyond \(\varepsilon^{-(1+\alpha)}\) as \(\varepsilon\to 0\)._
Let's explain the heuristic as follows. The argument we aim to implement is the standard "continuous induction method", i.e. for some suitably large \(s\) and \(K\) and suitable \(\alpha>0\), assuming \(T=\varepsilon^{-(1+\alpha)}\) and \(\sup_{t\in[0,T]}\|v\|_{H^{s}(g_{0})}\leq K\varepsilon\), we try to prove a better bound \(\sup_{t\in[0,T]}\|v\|_{H^{s}(g_{0})}\leq K\varepsilon/2\). Here \(g_{0}\) is the standard metric on \(S^{2}\). It is intuitive to expect such a result for the cubic equation \(\partial_{t}(1-\Pi_{c})v=O(v^{\otimes 3})\). As for the quadratic equation \(\partial_{t}\Pi_{c}v=O(v^{\otimes 2})\), it is crucial to implement the conservation of energy \(\mathbf{H}[\zeta,\phi]\equiv 4\pi\) for a solution. We summarize it as
**Proposition 2.5**.: _Fix \(T>0\). Let \(u\) be a smooth solution of (2.3) and \(v=u+\mathbf{B}[u,u]\) be as above. Suppose for some suitably large \(s\) and \(K\), there holds_
\[\sup_{t\in[0,T]}\|v\|_{H^{s}(g_{0})}\leq K\varepsilon\]
_with \(\varepsilon\) sufficiently small. Then there is in fact a better bound for the low frequency part \(\Pi_{c}v\):_
\[\sup_{t\in[0,T]}\|\Pi_{c}v\|_{H^{s}(g_{0})}\leq K\varepsilon/4.\]
Proof.: We consider the "approximate" energy functional
\[\mathbf{H}_{0}[\zeta,\phi]=4\pi+\int_{S^{2}}2\zeta\,d\mu_{0}+\frac{1}{2}\int_{S^{2}}(|\nabla_{0}\zeta|^{2}+2|\zeta|^{2})d\mu_{0}+\frac{1}{2}\int_{S^{2}}\left||\nabla_{0}|^{1/2}\phi\right|^{2}d\mu_{0},\]
where \(\mu_{0}\) is the standard area measure on \(S^{2}\) and \(\nabla_{0}\) is the standard connection on \(S^{2}\). This is nothing but the quadratic approximation to \(\mathbf{H}[\zeta,\phi]\) in (1.8) at \((0,0)\), so there holds \(\mathbf{H}_{0}[\zeta,\phi]=\mathbf{H}[\zeta,\phi]+O(u^{\otimes 3})\). Using volume preservation (1.6), we obtain \(\int_{S^{2}}\zeta d\mu_{0}=-\|\zeta\|_{L^{2}(g_{0})}^{2}+O(\zeta^{\otimes 3})\), so summarizing we have
\[\frac{1}{2}\int_{S^{2}}(|\nabla_{0}\zeta|^{2}-2|\zeta|^{2})d\mu_{0}+\frac{1}{2}\int_{S^{2}}\left||\nabla_{0}|^{1/2}\phi\right|^{2}d\mu_{0}=O(u^{\otimes 3}). \tag{2.10}\]
Note that we used the conservation law \(\mathbf{H}[\zeta,\phi]\equiv 4\pi\). By spectral calculus on \(S^{2}\), we have
\[\|\zeta\|_{H^{1}(g_{0})}^{2}\simeq\|\Pi^{(0)}\zeta\|_{L^{2}(g_{0})}^{2}+\|\Pi^ {(1)}\zeta\|_{L^{2}(g_{0})}^{2}+\int_{S^{2}}(|\nabla_{0}\zeta|^{2}-2|\zeta|^{2} )d\mu_{0},\]
and by volume preservation and conservation of momentum (1.7) we find
\[\|\zeta\|_{H^{1}(g_{0})}^{2}\simeq\int_{S^{2}}(|\nabla_{0}\zeta|^{2}-2|\zeta|^ {2})d\mu_{0}+O(\zeta^{\otimes 4}). \tag{2.11}\]
Now, for some \(N_{0}>0\) relating to the loss of regularity caused by \(\mathbf{B}\), we may choose \(s>>2N_{0}\). Then if \(\|v\|_{H^{s}(g_{0})}\leq K\varepsilon\), it follows that \(\|u\|_{H^{s-N_{0}}(g_{0})}\leq K^{\prime}\varepsilon\), so by (2.10) and (2.11) we have
\[\|u\|_{L^{2}(g_{0})}^{2}\simeq\|\zeta\|_{H^{1}(g_{0})}^{2}+\|\phi\|_{H^{1/2}(g_ {0})}^{2}\leq K^{\prime}\varepsilon^{3}.\]
Thus
\[\|v\|_{L^{2}(g_{0})} \leq C\|u\|_{L^{2}(g_{0})}+C\|u\|_{H^{N_{0}}(g_{0})}^{2}\] \[\leq C\varepsilon^{3/2}+K^{\prime}\varepsilon^{2}.\]
Since the spectrum of \(\Pi_{c}v\) is bounded, by Bernstein type inequality we have
\[\|\Pi_{c}v\|_{H^{s}(g_{0})}\leq C\|v\|_{L^{2}(g_{0})}\leq K^{\prime}\varepsilon^{ 3/2}(1+\varepsilon^{1/2}).\]
If \(\varepsilon\) is sufficiently small then this implies \(\|\Pi_{c}v\|_{H^{s}(g_{0})}\leq K\varepsilon/4\).
We point out that the above proof is independent of the magnitude of the lifespan \(T\), so it is always applicable as long as the cubic equation \(\partial_{t}(1-\Pi_{c})v=O(v^{\otimes 3})\) is well-understood. There are two crucial points in the proof of Proposition 2.5: the conservation of energy, and that the projection \(\Pi_{c}\) is of finite rank, so that a Bernstein type inequality holds. The last fact holds only if there are finitely many 3-way resonances, i.e. there are only finitely many solutions to the Diophantine equation (2.8).
Finally, we propose an even more ambitious conjecture concerning global dynamical properties of spherical water droplets, which is again illuminated by observation in hydrodynamical experiments under zero gravity, and also suggested by the results of Berti-Montalto [1]:
**Conjecture 2.4**.: _If the Diophantine equation (2.9) has only finitely many solutions, then a KAM type result holds for (EQ): there is a family of infinitely many quasi-periodic solutions of (EQ), depending on a parameter which takes value in a Cantor-type set._
## Appendix A MAGMA Code
MAGMA is a large, well-supported software package designed for computations in algebra, number theory, algebraic geometry, and algebraic combinatorics. In this appendix, we give the MAGMA code used to conduct computations on Diophantine equations related to the spherical capillary water waves system.
### Integral Points on Elliptic Curve
We can find all integral points on a given elliptic curve over \(\mathbb{Q}\) using MAGMA. For a monic cubic polynomial \(f(x)\), the function EllipticCurve(f) creates the elliptic curve
\[E:y^{2}=f(x),\]
and the function IntegralPoints(E) returns a sequence containing all the integral points on \(E\) under the homogeneous coordinate of \(\mathbb{QP}^{2}\), modulo negation. We use this to find out all integral points on the elliptic curve
\[E_{c}:\;y^{2}=x^{3}+cx^{2}-2c^{2}x=x(x-c)(x+2c).\]
for natural number \(c\leq 50\). The MAGMA code is listed below, which excludes all the \(c\)'s such that there are only trivial integral points \(\{(-c,0),(0,0),(c,0)\}\) on \(E_{c}\).
> Qx<x> := PolynomialRing(Rationals());
> for c in [1..50] do
>     E := EllipticCurve(x^3+c*x^2-2*c^2*x);
>     S, reps := IntegralPoints(E);
>     if # S gt 3 then
>         print c, E;
>         print S;
>     end if;
> end for;
2 Elliptic Curve defined by y^2 = x^3 + 2*x^2 - 8*x over Rational Field [ (-4 : 0 : 1), (-2 : 4 : 1), (-1 : -3 : 1), (0 : 0 : 1), (2 : 0 : 1), (4 : 8 : 1), (8 : -24 : 1), (50 : 360 : 1) ]
8 Elliptic Curve defined by y^2 = x^3 + 8*x^2 - 128*x over Rational Field
[ (-16 : 0 : 1), (-8 : 32 : 1), (-4 : -24 : 1), (0 : 0 : 1), (8 : 0 : 1), (9 : -15 : 1), (16 : 64 : 1), (32 : -192 : 1), (200 : 2880 : 1) ]
13 Elliptic Curve defined by y^2 = x^3 + 13*x^2 - 338*x over Rational Field
[ (-26 : 0 : 1), (0 : 0 : 1), (13 : 0 : 1), (121 : 1386 : 1) ]
15 Elliptic Curve defined by y^2 = x^3 + 15*x^2 - 450*x over Rational Field
[ (-30 : 0 : 1), (-5 : -50 : 1), (0 : 0 : 1), (15 : 0 : 1), (24 : 108 : 1), (90 : -900 : 1) ]
17 Elliptic Curve defined by y^2 = x^3 + 17*x^2 - 578*x over Rational Field
[ (-34 : 0 : 1), (-32 : -56 : 1), (0 : 0 : 1), (17 : 0 : 1), (833 : 24276 : 1) ]
18 Elliptic Curve defined by y^2 = x^3 + 18*x^2 - 648*x over Rational Field
[ (-36 : 0 : 1), (-32 : -80 : 1), (-18 : 108 : 1), (-9 : -81 : 1), (0 : 0 : 1), (18 : 0 : 1), (36 : 216 : 1), (72 : -648 : 1), (450 : 9720 : 1) ]
22 Elliptic Curve defined by y^2 = x^3 + 22*x^2 - 968*x over Rational Field
[ (-44 : 0 : 1), (-32 : -144 : 1), (0 : 0 : 1), (22 : 0 : 1), (198 : 2904 : 1) ]
23 Elliptic Curve defined by y^2 = x^3 + 23*x^2 - 1058*x over Rational Field
[ (-46 : 0 : 1), (0 : 0 : 1), (23 : 0 : 1), (50 : -360 : 1) ]
26 Elliptic Curve defined by y^2 = x^3 + 26*x^2 - 1352*x over Rational Field
[ (-52 : 0 : 1), (-49 : -105 : 1), (0 : 0 : 1), (26 : 0 : 1), (1300 : 47320 : 1) ]
30 Elliptic Curve defined by y^2 = x^3 + 30*x^2 - 1800*x over Rational Field
[ (-60 : 0 : 1), (-50 : 200 : 1), (-45 : -225 : 1), (-24 : -216 : 1), (-20 : 200 : 1), (-6 : 108 : 1), (0 : 0 : 1), (30 : 0 : 1), (36 : 144 : 1), (40 : -200 : 1), (75 : -675 : 1), (90 : 900 : 1), (300 : 5400 : 1), (324 : -6048 : 1), (480 : -10800 : 1), (7290 : 623700 : 1), (10830 : -1128600 : 1), (226875 : 108070875 : 1) ]
32 Elliptic Curve defined by y^2 = x^3 + 32*x^2 - 2048*x over Rational Field
[ (-64 : 0 : 1), (-32 : 256 : 1), (-16 : -192 : 1), (0 : 0 : 1), (32 : 0 : 1), (36 : -120 : 1), (64 : 512 : 1), (128 : -1536 : 1), (800 : 23040 : 1) ]
33 Elliptic Curve defined by y^2 = x^3 + 33*x^2 - 2178*x over Rational Field
[ (-66 : 0 : 1), (0 : 0 : 1), (33 : 0 : 1), (81 : -756 : 1) ]
35 Elliptic Curve defined by y^2 = x^3 + 35*x^2 - 2450*x over Rational Field
[ (-70 : 0 : 1), (-49 : 294 : 1), (-45 : -300 : 1), (-40 : 300 : 1), (-14 : -196 : 1), (0 : 0 : 1), (35 : 0 : 1), (50 : 300 : 1), (175 : -2450 : 1), (224 : 3528 : 1), (280 : -4900 : 1), (4410 : -294000 : 1), (14450 : -1739100 : 1) ]
39 Elliptic Curve defined by y^2 = x^3 + 39*x^2 - 3042*x over Rational Field
[ (-78 : 0 : 1), (0 : 0 : 1), (39 : 0 : 1), (147 : -1890 : 1) ]
42 Elliptic Curve defined by y^2 = x^3 + 42*x^2 - 3528*x over Rational Field
[ (-84 : 0 : 1), (-56 : -392 : 1), (-12 : 216 : 1), (0 : 0 : 1), (42 : 0 : 1), (63 : -441 : 1), (294 : 5292 : 1) ]
43 Elliptic Curve defined by y^2 = x^3 + 43*x^2 - 3698*x over Rational Field
[ (-86 : 0 : 1), (-32 : -360 : 1), (0 : 0 : 1), (43 : 0 : 1) ]
46 Elliptic Curve defined by y^2 = x^3 + 46*x^2 - 4232*x over Rational Field
[ (-92 : 0 : 1), (0 : 0 : 1), (46 : 0 : 1), (26496 : 4316640 : 1) ]
50 Elliptic Curve defined by y^2 = x^3 + 50*x^2 - 5000*x over Rational Field
[ (-100 : 0 : 1), (-50 : 500 : 1), (-25 : -375 : 1), (-4 : 144 : 1), (0 : 0 : 1), (50 : 0 : 1), (100 : 1000 : 1), (200 : -3000 : 1), (1250 : 45000 : 1) ]
Running Magma V2.27-7.
Seed: 821911319; Total time: 2.430 seconds; Total memory usage: 85.16MB.
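As a quick sanity check, independent of the MAGMA session above, a few of the reported nontrivial points can be verified directly against the defining equation \(y^{2}=x^{3}+cx^{2}-2c^{2}x\). The short Python snippet below is purely illustrative (the helper name `on_curve` is ours, not part of any library); it checks one point from each of the outputs for \(c=2\), \(c=8\) and \(c=30\).

```python
# Verify a few of the integral points listed in the MAGMA output above.
def on_curve(c, x, y):
    """Return True iff (x, y) lies on E_c : y^2 = x^3 + c*x^2 - 2*c^2*x."""
    return y * y == x ** 3 + c * x * x - 2 * c * c * x

assert on_curve(2, 50, 360)
assert on_curve(8, 200, 2880)
assert on_curve(30, 7290, 623700)
assert not on_curve(2, 3, 5)   # a random non-point, as a control
print("listed points verified")
```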
### Classification of the Projective Surface
Some basic geometric parameters of the complex projective surface
\[\mathfrak{V}:[X(X-W)(X+2W)+Y(Y-W)(Y+2W)-Z(Z-W)(Z+2W)]^{2}\]
\[=4X(X-W)(X+2W)Y(Y-W)(Y+2W)\]
in \(\mathbb{CP}^{3}\) can be computed using MAGMA. The author would like to thank Professor Bjorn Poonen for introducing MAGMA and providing the code listed below.
> Q:=Rationals();
> P<x,y,z,w>:=ProjectiveSpace(Q,3);
> fx:=x*(x-w)*(x+2*w);
> fy:=y*(y-w)*(y+2*w);
> fz:=z*(z-w)*(z+2*w);
> V:=Surface(P,(fz-fx-fy)^2-4*fx*fy);
> KodairaEnriquesType(V);
2 0 General type
Running Magma V2.27-7.
Seed: 1338492807; Total time: 0.670 seconds; Total memory usage: 32.09MB.
Note that the Kodaira dimension is independent of the choice of base field, so it is legitimate to choose the base field to be \(\mathbb{Q}\) in the above code. The variable \(w\) is used to homogenize the equation. The function KodairaEnriquesType(V) returns three values for the given projective surface \(V\): the first is the Kodaira dimension, the second is only relevant when the Kodaira dimension is \(-\infty\), 1 or 0, and the third is the Kodaira-Enriques classification of the surface \(V\).
|
2309.04551 | Recursive Error Reduction for Regular Branching Programs | In a recent work, Chen, Hoza, Lyu, Tal and Wu (FOCS 2023) showed an improved
error reduction framework for the derandomization of regular read-once
branching programs (ROBPs). Their result is based on a clever modification to
the inverse Laplacian perspective of space-bounded derandomization, which was
originally introduced by Ahmadinejad, Kelner, Murtagh, Peebles, Sidford and
Vadhan (FOCS 2020).
In this work, we give an alternative error reduction framework for regular
ROBPs. Our new framework is based on a binary recursive formula from the work
of Chattopadhyay and Liao (CCC 2020), that they used to construct weighted
pseudorandom generators (WPRGs) for general ROBPs.
Based on our new error reduction framework, we give alternative proofs to the
following results for regular ROBPs of length $n$ and width $w$, both of which
were proved in the work of Chen et al. using their error reduction:
$\bullet$ There is a WPRG with error $\varepsilon$ that has seed length
$\tilde{O}(\log(n)(\sqrt{\log(1/\varepsilon)}+\log(w))+\log(1/\varepsilon)).$
$\bullet$ There is a (non-black-box) deterministic algorithm which estimates
the expectation of any such program within error $\pm\varepsilon$ with space
complexity $\tilde{O}(\log(nw)\cdot\log\log(1/\varepsilon)).$ (This was first
proved in the work of Ahmadinejad et al., but the proof by Chen et al. is
simpler.)
Because of the binary recursive nature of our new framework, both of our
proofs are based on a straightforward induction that is arguably simpler than
the Laplacian-based proof in the work of Chen et al. | Eshan Chattopadhyay, Jyun-Jie Liao | 2023-09-08T18:46:34Z | http://arxiv.org/abs/2309.04551v2 | # Recursive Error Reduction for Regular Branching Programs
###### Abstract
In a recent work, Chen, Hoza, Lyu, Tal and Wu (FOCS 2023) showed an improved error reduction framework for the derandomization of regular read-once branching programs (ROBPs). Their result is based on a clever modification to the inverse Laplacian perspective of space-bounded derandomization, which was originally introduced by Ahmadinejad, Kelner, Murtagh, Peebles, Sidford and Vadhan (FOCS 2020).
In this work, we give an alternative error reduction framework for regular ROBPs. Our new framework is based on a binary recursive formula from the work of Chattopadhyay and Liao (CCC 2020), that they used to construct weighted pseudorandom generators (WPRGs) for general ROBPs.
Based on our new error reduction framework, we give alternative proofs to the following results for regular ROBPs of length \(n\) and width \(w\), both of which were proved in the work of Chen et al. using their error reduction:
* There is a WPRG with error \(\varepsilon\) that has seed length \[\tilde{O}(\log(n)(\sqrt{\log(1/\varepsilon)}+\log(w))+\log(1/\varepsilon)).\]
* There is a (non-black-box) deterministic algorithm which estimates the expectation of any such program within error \(\pm\varepsilon\) with space complexity \[\tilde{O}(\log(nw)\cdot\log\log(1/\varepsilon)).\]
This was first proved in the work of Ahmadinejad et al., but the proof by Chen et al. is simpler.
Because of the binary recursive nature of our new framework, both of our proofs are based on a straightforward induction that is arguably simpler than the Laplacian-based proof in the work of Chen et al.
In fact, because of its simplicity, our proof of the second result directly gives a slightly stronger claim: our algorithm computes a \(\varepsilon\)-singular value approximation (a notion of approximation introduced in a recent work by Ahmadinejad, Peebles, Pyne, Sidford and Vadhan (FOCS 2023)) of the random walk matrix of the given ROBP in space \(\tilde{O}(\log(nw)\cdot\log\log(1/\varepsilon))\). It is not clear how to get this stronger result from the previous proofs.
## 1 Introduction
A central problem in complexity theory is to understand to what extent randomness is useful in space-bounded computation. It is widely conjectured that every randomized algorithm can be made deterministic with only a constant-factor blowup in space, i.e. \(\mathbf{BPL}=\mathbf{L}\). A central approach to derandomize \(\mathbf{BPL}\) is to construct explicit pseudorandom generators (PRGs) for standard-order read-once branching programs (ROBPs), which we formally define below.
**Definition 1.1** (ROBPs).: _A (standard-order) ROBP \(B\) of length \(n\) and width \(w\) is specified by a start state \(v_{0}\in[w]\), a set of accept states \(V_{\mathrm{acc}}\) and \(n\) transition functions \(B_{i}:[w]\times\{0,1\}\to[w]\) for \(i\) from \(1\) to \(n\). The ROBP \(B\) computes a function \(B:\{0,1\}^{n}\to\{0,1\}\) as follows. Given an input \(x\in\{0,1\}^{n}\), define \(v_{i}=B_{i}(v_{i-1},x_{i})\), where \(x_{i}\) denotes the \(i\)-th bit of \(x\). Then output \(B(x)=1\) if \(v_{n}\in V_{\mathrm{acc}}\), or \(B(x)=0\) otherwise._
**Remark 1.2**.: _Equivalently, one can view a ROBP \(B\) as a directed graph as follows. Consider \(n+1\) layers of nodes \(L_{0},L_{1},\ldots,L_{n}\), each having size \(w\), and label the nodes in each \(L_{i}\) with \([w]\). For every \(i\in[n],v\in[w],b\in\{0,1\}\), construct an edge with label \(b\) from \(v\) in \(L_{i-1}\) to \(B_{i}(v,b)\) in \(L_{i}\). Then the computation of \(B(x)\) corresponds to a walk following label \(x\) from \(L_{0}\) to \(L_{n}\). In this paper we usually consider the equivalent graph view, and we refer to \(L_{i}\) as layer \(i\)._
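To make Definition 1.1 concrete, here is a minimal Python sketch of a width-\(3\), length-\(2\) ROBP given by explicit transition tables, evaluated exactly as in the definition. The toy program, its tables and all names below are our own illustrative choices and are not taken from any reference.

```python
# Toy ROBP with width w = 3 and length n = 2 (states are 0, 1, 2).
# transitions[i][b][u] = B_{i+1}(u, b): the state reached from u on bit b in layer i+1.
transitions = [
    [[0, 1, 2], [1, 2, 0]],   # B_1: bit 0 keeps the state, bit 1 shifts it cyclically
    [[2, 0, 1], [1, 1, 2]],   # B_2
]
start_state = 0
accept_states = {0, 2}

def evaluate(x):
    """Follow the transitions on input x and report acceptance, as in Definition 1.1."""
    v = start_state
    for layer, bit in zip(transitions, x):
        v = layer[bit][v]
    return int(v in accept_states)

print([evaluate(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```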
**Definition 1.3** (PRGs).: _Let \(\mathcal{F}\) be a class of functions \(f:\{0,1\}^{n}\to\{0,1\}\). An \(\varepsilon\)-PRG for \(\mathcal{F}\) is a function \(G:\{0,1\}^{d}\to\{0,1\}^{n}\) such that for every \(f\in\mathcal{F}\),_
\[\left|\operatorname*{\mathbb{E}}_{x\sim\{0,1\}^{n}}[f(x)]-\operatorname*{ \mathbb{E}}_{s\sim\{0,1\}^{d}}[f(G(s))]\right|\leq\varepsilon.\]
_We say \(G\) \(\varepsilon\)-fools the class \(\mathcal{F}\) if \(G\) is an \(\varepsilon\)-PRG for \(\mathcal{F}\). We call \(d\) the seed length of \(G\). We say \(G\) is explicit if it can be computed in space \(O(d)\).1_
Footnote 1: Throughout this paper, when we say a function \(f\) is explicit, it means the function \(f\) can be computed in space \(O(n)\) where \(n\) is the input length.
It can be shown (via the probabilistic method) that there exists an \(\varepsilon\)-PRG for width-\(w\) length-\(n\) ROBPs with seed length \(O(\log(nw/\varepsilon))\), which is optimal. Furthermore, an _explicit_ PRG with such seed length would imply \(\mathbf{BPL}=\mathbf{L}\). In a seminal work, Nisan [20] constructed an explicit PRG with seed length \(O(\log(n)\cdot\log(nw/\varepsilon))\), which is only a \(O(\log(n))\) factor away from optimal. Nisan [20] then used this PRG to prove that any problem in \(\mathbf{BPL}\) can be deterministically computed in \(O(\log^{2}(n))\) space and \(\operatorname{poly}(n)\) time. Another remarkable work by Saks and Zhou [11] also applied Nisan's generator in a non-trivial way to show that any problem in \(\mathbf{BPL}\) can be deterministically computed in \(O(\log^{3/2}(n))\) space.
### Weighted PRGs
Despite decades of effort, the seed length of Nisan's PRG remains the state-of-the-art for width \(w\geq 4\). In fact, even for the \(w=3\) special case, Nisan's seed length remained unbeatable until a recent work by Meka, Reingold and Tal [13] which improved the seed length to \(\tilde{O}(\log(n)\log(1/\varepsilon))\). This has motivated researchers to study relaxed notions of PRGs and their applications in the derandomization of \(\mathbf{BPL}\). A well-studied notion is that of a hitting set generator (HSG), which is the "one-sided" variant of a PRG.
**Definition 1.4** (HSGs).: _Let \(\mathcal{F}\) be a class of functions \(f:\{0,1\}^{n}\to\{0,1\}\). An \(\varepsilon\)-HSG for \(\mathcal{F}\) is a function \(G:\{0,1\}^{d}\to\{0,1\}^{n}\) such that for every \(f\in\mathcal{F}\) s.t. \(\operatorname*{\mathbb{E}}_{x\sim\{0,1\}^{n}}[f(x)]>\varepsilon\), it holds that \(\operatorname*{\mathbb{E}}_{s\sim\{0,1\}^{d}}[f(G(s))]>0\)._
The study of explicit HSGs for ROBPs has a long history, starting from the seminal work by Ajtai, Komlós and Szemerédi [1]. While being weaker than PRGs, explicit constructions of HSGs can still be used to derandomize randomized log-space algorithms with one-sided error (\(\mathbf{RL}\)). In fact, a recent work by Cheng and Hoza [14] shows that an explicit HSG with optimal seed length \(O(\log(nw/\varepsilon))\) already implies \(\mathbf{BPL}=\mathbf{L}\).
In 2018, Braverman, Cohen and Garg [1] introduced another relaxed notion of PRG called _weighted PRG_ (WPRG). In this relaxed notion, each output string of \(G\) is further assigned a real weight that can possibly be negative.
**Definition 1.5**.: _Let \(\mathcal{F}\) be a class of functions \(f:\{0,1\}^{n}\to\{0,1\}\). An \(\varepsilon\)-WPRG is a pair of functions \((\rho,G):\{0,1\}^{d}\to\mathbb{R}\times\{0,1\}^{n}\) such that for every \(f\in\mathcal{F}\),_
\[\left|\operatorname*{\mathbb{E}}_{x\sim\{0,1\}^{n}}[f(x)]-\operatorname*{ \mathbb{E}}_{s\sim\{0,1\}^{d}}[\rho(s)\cdot f(G(s))]\right|\leq\varepsilon.\]
Surprisingly, by simply allowing negative weights, [1] showed how to construct an explicit \(\varepsilon\)-WPRG with seed length
\[\tilde{O}(\log(n)\log(nw)+\log(1/\varepsilon)),\]
which has almost optimal dependence on \(\varepsilon\). A sequence of follow-up works [1, 1, 1, 13] further improved the seed length with simpler WPRG constructions. In particular, Hoza [1] completely removed the hidden \(\log\log\) factors and improved the seed length to \(O(\log(n)\log(nw)+\log(1/\varepsilon))\).
It was observed in [1] that \(\varepsilon\)-WPRGs imply \(\varepsilon\)-HSGs. In addition, WPRGs seem closer to PRGs than HSGs in the sense that one can use a WPRG to estimate the expectation of a ROBP \(f\) by simply enumerating all the seeds. In fact, following a suggestion in [1], [10] proved that a WPRG with a good enough bound on the output of \(\rho\) can be used in the derandomization framework by Saks and Zhou [11]. Hoza [12] then used his WPRG result to prove that \(\mathbf{BPL}\) can be derandomized in deterministic space \(O(\log^{3/2}(n)/\sqrt{\log\log(n)})\). This was the first improvement over Saks and Zhou's decades-old result.
### Regular branching programs
For the original notion of PRGs, while there has been no improvement over Nisan's seed length for general (standard-order) ROBPs, a lot of progress has been made in some restricted families. One important example is the setting of _regular ROBPs_, which is the main focus of this work.
**Definition 1.6** (Regular ROBPs).: _We say a (standard-order) ROBP \(B\) is regular if for every transition function \(B_{i}:[w]\times\{0,1\}\to[w]\) in \(B\), every state \(v\in[w]\) has exactly \(2\) pre-images._
An important reason to study this family is that general ROBPs can be reduced to regular ROBPs [13, 14]. In fact, a surprisingly simple proof in a recent work by Lee, Pyne and Vadhan [15] shows that any function that can be computed by a ROBP of length \(n\) and width \(w\) can also be computed by a regular ROBP of width \(O(nw)\).
In 2010, Braverman, Rao, Raz and Yehudayoff proved that the INW generator [16] with proper choices of parameters is in fact a PRG for regular ROBPs with seed length \(O(\log(n)\cdot(\log\log(n)+\log(w/\varepsilon)))\). This is better than Nisan's PRG's seed length when \(\log(w/\varepsilon)=o(\log(n))\). More generally, they introduced the "weight" measure for ROBPs and proved that an INW generator with fixed parameters has error proportional to the weight. They then showed that regular ROBPs have smaller weight than general ROBPs when \(w\ll n\), which implies their better seed length bound. (See Section 3 for the formal definitions.) Their better PRG construction for "small-weight" ROBPs also turns out to be an important ingredient of the PRG for width-3 ROBPs in [13].
Recently, Ahmadinejad, Kelner, Murtagh, Peebles, Sidford and Vadhan [1] proved a remarkable result that it takes only \(\tilde{O}(\log(nw))\) space to estimate the expectation of a regular ROBP \(B\) in a non-black-box way. In fact, they designed an algorithm that can estimate the expectation of \(B\) to a very high precision without much overhead:
**Theorem 1.7**.: _For every \(\varepsilon>0\) there is a deterministic algorithm which takes a regular ROBP \(B\) of length \(n\) and width \(w\) as input, and computes a value within \(\mathbb{E}_{x}\left[B(x)\right]\pm\varepsilon\) in space complexity \(\tilde{O}(\log(nw)\log\log(1/\varepsilon))\)._
### Error reduction for regular branching programs
Given the better PRG by [1] in the regular setting, it is natural to ask whether one can get a better WPRG than [12] in the regular setting too. This is in fact plausible because the WPRG constructions introduced in Section 1.1 can all be viewed as _error reduction procedures_: given any \(\varepsilon_{0}\)-PRG for ROBPs for some "mild error" \(\varepsilon_{0}\) (which we call the "base PRG"), one can construct an \(\varepsilon\)-WPRG for ROBPs with better dependence on \(\varepsilon\). For general standard-order ROBPs, the \(O(\log(n)\log(nw)+\log(1/\varepsilon))\) seed length described in Section 1.1 was obtained by taking Nisan's PRG as the base PRG. Therefore, it is natural to think that one can obtain a better \(\varepsilon\)-WPRG for regular ROBPs by taking the PRG in [1] as the base PRG instead.
However, it turns out that the intuition is not trivially true, because every known error reduction procedure for general ROBPs requires the "base error" \(\varepsilon_{0}\) to be less than \(1/n\). When \(\varepsilon_{0}<1/n\), the \(\tilde{O}(\log(n)\log(w/\varepsilon_{0}))\) seed length bound in [1] is no better than Nisan's \(O(\log(n)\log(nw/\varepsilon_{0}))\) seed length, so we cannot hope to get any improvement in the seed length of the corresponding WPRG.
This problem was recently solved by Chen, Hoza, Lyu, Tal and Wu [17]. They showed how to exploit the regular property and obtain a reduction from \(\varepsilon\)-WPRG for regular ROBPs to PRG for regular ROBPs with error \(\varepsilon_{0}=O(1/\log^{2}(n))\). As a result, they proved the following theorem.
**Theorem 1.8** ([17]).: _There is an explicit \(\varepsilon\)-WPRG for regular ROBPs with seed length_
\[\tilde{O}\left(\log(n)\left(\log(w)+\sqrt{\log(1/\varepsilon)}\right)+\log(1/ \varepsilon)\right).\]
Following [1, 1, 2, 3], the WPRG construction in [1] is based on the "inverse Laplacian" perspective of small-space derandomization and Richardson iteration. The key step in their construction is to modify the approximated inverse Laplacian based on a structure called "shortcut graph". With the shortcut graph structure, they showed how to apply the potential argument in [1] to get a better bound for \(\varepsilon\) that is still non-trivial even when \(\varepsilon_{0}=O(1/\log^{2}(n))\). Based on the same idea, [1] also showed how to get a simplified proof of the non-black-box derandomization result in [1] (Theorem 1.7).
In short, the main purpose of using the shortcut graph idea in [1] is to embed a "binary-recursive-like" structure into the inverse Laplacian analysis. Such a structure makes their analysis compatible with the potential argument in [1]. In order to prove the non-black-box derandomization result in Theorem 1.7, [1] showed that one can apply a different potential argument based on the notion of "singular-value approximation" (SV approximation) defined in [1].
### Our contribution
While the shortcut graph modification gives a nice structure to the inverse Laplacian analysis, the inverse Laplacian perspective itself is sometimes tricky to work with. In fact, although the proof of Theorem 1.7 in [1] is simpler than the original proof in [1], they still need to work with a sophisticated matrix seminorm, and the corresponding potential argument requires non-trivial ideas to analyze.
In this work, we give an alternative error reduction framework for regular ROBPs by modifying a WPRG construction by Chattopadhyay and Liao [1]. The advantage of using [1] is that their WPRG construction is _actually binary recursive_, and hence is naturally compatible with the weight argument in [1]. To construct a WPRG for regular branching programs that matches the parameters in Theorem 1.8, we show that the analysis in [1] can be improved in the regular setting based on the weight argument in [1]. Inspired by the proof of Theorem 1.7 in [1], we also give an alternative proof of Theorem 1.7 based on the notion of SV approximation. Because of the binary recursive nature of [1], both proofs are relatively straightforward by induction and are arguably simpler than the proofs in [1].
In fact, our proof of Theorem 1.7 implies a slightly stronger claim (Theorem 4.5) which might be of independent interest: we can compute an \(\varepsilon\)-SV approximation of the random walk matrix of any regular ROBP of width \(w\) and length \(n\) in space \(\tilde{O}(\log(nw)\log\log(1/\varepsilon))\). (See [1] for comparison between SV approximation and other notions of approximation.) It is not clear how to obtain this stronger claim from the previous proofs of Theorem 1.7[1, 1].
Finally, we show in Appendix D that the Laplacian-based construction in [1] is actually equivalent to the binary recursive construction in [1] that we use in this paper. We note that our proofs of Theorem 1.7 and Theorem 1.8 are self-contained and do not rely on this fact.
**Remark 1.9**.: _In addition to Theorem 1.7 and Theorem 1.8, [1] also constructed WPRGs for width-\(3\) ROBPs and unbounded-width permutation ROBPs, both having seed length \(\tilde{O}(\log(n)\sqrt{\log(1/\varepsilon)}+\log(1/\varepsilon))\). For width-\(3\) ROBPs, they improved the reduction from width-\(3\) to small-weight ROBPs in [1], and then applied the reduction to their WPRG in Theorem 1.8, which also works for small-weight ROBPs. For unbounded-width permutation ROBPs, they adapted their proof of Theorem 1.7 to the setting of WPRGs for permutation ROBPs, similar to what [12] did for the proof in [1]. We note that for both of these results the Theorem 1.7 and Theorem 1.8 parts can be replaced with our proofs too._
### Organization
In Section 2 we introduce some general definitions that are used in both the proofs of Theorem 1.8 and Theorem 1.7, and give a brief overview of our proofs. In Section 3 we formally prove Theorem 1.8. In Section 4 we prove Theorem 1.7.
## 2 General Setup and Proof Overview
Notation. For \(n\in\mathbb{N}\), denote \([n]=\{1,2,\ldots,n\}\). We write matrices in boldface and use \(\mathbf{M}[i,j]\) to denote the entry of matrix \(\mathbf{M}\) on the \(i\)-th row and the \(j\)-th column. We use \(\mathbf{I}_{w}\) to denote the \(w\times w\) identity matrix.
For a column vector \(x\), we denote the \(i\)-th entry of \(x\) by \(x[i]\). For every matrix \(\mathbf{M}\in\mathbb{R}^{w\times w}\), \(\left\|\mathbf{M}\right\|_{\infty}\) denotes the infinity norm \(\sup_{\left\|v\right\|_{\infty}=1}\left\|\mathbf{M}v\right\|_{\infty}\) and \(\left\|\mathbf{M}\right\|\) denotes the \(2\)-norm \(\sup_{\left\|v\right\|=1}\left\|\mathbf{M}v\right\|\). For any alphabet \(\Sigma\) and string \(x\in\Sigma^{*}\), we use \(\left|x\right|\) to denote the length of \(x\), \(x_{[i]}\) to denote the \(i\)-th symbol of \(x\) and \(x_{[\leq i]}\) to denote the prefix of \(x\) of length \(i\). For any two strings \(x,y\), we use \(x\circ y\) to denote the concatenation of \(x\) and \(y\).
### ROBPs and matrices
For the rest of this paper, we consider a fixed regular ROBP \(B\) of length \(n\) and width \(w\) specified by transition functions \(B_{1},\ldots,B_{n}\). For every \(i\in[n]\), and every \(b\in\{0,1\}\), define the matrix \(\mathbf{M}_{i}(b)\in\mathbb{R}^{w\times w}\) as
\[\forall u,v\in[w],\mathbf{M}_{i}(b)[u,v]:=\begin{cases}1\text{ if }B_{i}(u,b)=v,\\ 0\text{ otherwise.}\end{cases}\]
We refer to \(\mathbf{M}_{i}(b)\) as the _transition matrix of \(B_{i}\) on \(b\)_. In addition, for every \(0\leq\ell<r\leq n\) and a string \(s\in\{0,1\}^{r-\ell}\), we denote the transition matrix from layer \(\ell\) to layer \(r\) on input \(s\) as
\[\mathbf{M}_{\ell..r}(s):=\prod_{i=1}^{r-\ell}\mathbf{M}_{\ell+i}(s_{i})\]
In this paper we frequently use the following fact:
**Fact 1**.: _For every \(\ell<m<r\) and \(x\in\{0,1\}^{m-\ell},y\in\{0,1\}^{r-m}\), \(\mathbf{M}_{\ell..m}(x)\mathbf{M}_{m..r}(y)=\mathbf{M}_{\ell..r}(x\circ y)\)._
In addition, observe that for a start state \(v_{0}\in[w]\) and a set of accept states \(V_{\mathrm{acc}}\subseteq[w]\), \(B(s)=1\) if and only if there exists \(v_{n}\in V_{\mathrm{acc}}\) s.t. \(\mathbf{M}_{0..n}(s)[v_{0},v_{n}]=1\).
Given the definitions above, we further define \(\mathbf{M}_{i}:=\frac{1}{2}(\mathbf{M}_{i}(0)+\mathbf{M}_{i}(1))\) which we call the _random walk matrix_ of \(B_{i}\), and define \(\mathbf{M}_{\ell..r}:=\prod_{i=\ell+1}^{r}\mathbf{M}_{i}\) which is the random walk matrix from layer \(\ell\) to layer \(r\). Note that \(\left\|\mathbf{M}_{\ell..r}\right\|_{\infty}\leq 1\) because \(\mathbf{M}_{\ell..r}\) is right-stochastic,2 and we also have \(\left\|\mathbf{M}_{\ell..r}\right\|\leq 1\) because \(\mathbf{M}_{\ell..r}\) is doubly-stochastic by the regularity.
Footnote 2: This still holds when \(B\) is not regular.
Finally, we define \(v_{\mathsf{st}}\) to be the "start vector" s.t. \(v_{\mathsf{st}}[v_{0}]=1\) and \(v_{\mathsf{st}}[i]=0\) for every \(i\neq v_{0}\), and \(v_{\mathsf{ed}}\) to be the "accept vector" s.t. \(v_{\mathsf{ed}}[i]=1\) if \(i\in V_{\mathrm{acc}}\) and \(v_{\mathsf{ed}}[i]=0\) otherwise. Then observe that
\[B(s)=v_{\mathsf{st}}^{\top}\mathbf{M}_{0..n}(s)v_{\mathsf{ed}}\]
and
\[\operatorname*{\mathbb{E}}_{s\in\{0,1\}^{n}}\left[B(s)\right]=v_{\mathsf{st}} ^{\top}\mathbf{M}_{0..n}v_{\mathsf{ed}}.\]
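As a small illustration of these identities, the following Python/numpy sketch builds the transition and random walk matrices of a toy permutation ROBP and checks that \(v_{\mathsf{st}}^{\top}\mathbf{M}_{0..n}v_{\mathsf{ed}}\) agrees with the brute-force average of \(B\) over all inputs. The toy program and all names and parameters here are our own, chosen only for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, w = 6, 4

# Random permutation ROBP: layer i maps (state, bit) -> perms[i][bit][state].
perms = [[rng.permutation(w) for _ in range(2)] for _ in range(n)]

def M_bit(i, b):                      # transition matrix M_i(b): entry [u, B_i(u, b)] = 1
    A = np.zeros((w, w))
    A[np.arange(w), perms[i][b]] = 1
    return A

M = [0.5 * (M_bit(i, 0) + M_bit(i, 1)) for i in range(n)]  # random walk matrices

v_st = np.zeros(w); v_st[0] = 1       # start state 0
v_ed = np.zeros(w); v_ed[0] = 1       # accept iff the walk ends back in state 0

def B(x):                             # evaluate the ROBP on input x
    v = 0
    for i, b in enumerate(x):
        v = perms[i][b][v]
    return 1.0 if v == 0 else 0.0

brute = np.mean([B(x) for x in itertools.product((0, 1), repeat=n)])
walk = v_st @ np.linalg.multi_dot(M) @ v_ed
assert abs(brute - walk) < 1e-12
print(brute, walk)
```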
Given these facts, our goal is to find a "good approximation" of \(\mathbf{M}_{0..n}\), denoted by \(\widetilde{\mathbf{M}_{0..n}}\), s.t.
\[\left|v_{\mathsf{st}}^{\top}\mathbf{M}_{0..n}v_{\mathsf{ed}}-v_{\mathsf{st}} ^{\top}\widetilde{\mathbf{M}_{0..n}}v_{\mathsf{ed}}\right|\leq\varepsilon.\]
For Theorem 1.8 we want \(\widetilde{\mathbf{M}_{0..n}}\) to correspond to the output of a WPRG with short seed length, while for Theorem 1.7 we want to make sure that \(\widetilde{\mathbf{M}_{0..n}}\) can be implemented in \(\widetilde{O}(\log(nw)\log\log(1/\varepsilon))\) space. Because of the different goals, the notions of approximation would also be different in the proofs of Theorem 1.8 and Theorem 1.7.
### Recursion
In this section, we introduce a recursive definition from [20] which we use in both the proofs of Theorem 1.8 and Theorem 1.7. Without loss of generality, we assume that \(n\) is a power of \(2\) for the rest of this paper. Then define the set of "binary splitting points" to be
\[\mathsf{BS}_{n}=\{(\ell,r):\exists i,k\in\mathbb{N}\cup\{0\}\text{ s.t. }\ell=i \cdot 2^{k}\wedge r=\ell+2^{k}\}.\]
Suppose for every \((\ell_{0},r_{0})\in\mathsf{BS}_{n}\), we have defined a matrix \(\mathbf{M}^{(0)}_{\ell_{0}\dots r_{0}}\) that is a "mild approximation" of \(\mathbf{M}_{\ell_{0}\dots r_{0}}\). Then consider the following recursive definition of matrices for every \((\ell,r)\in\mathsf{BS}_{n}\) and every \(k\in\mathbb{N}\):
\[\mathbf{M}^{(k)}_{\ell\dots r}=\begin{cases}\mathbf{M}_{r}&\text{ if }r-\ell=1,\\ \sum_{i+j=k}\mathbf{M}^{(i)}_{\ell\dots m}\cdot\mathbf{M}^{(j)}_{m\dots r}- \sum_{i+j=k-1}\mathbf{M}^{(i)}_{\ell\dots m}\cdot\mathbf{M}^{(j)}_{m\dots r}& \text{ otherwise, where }m=(\ell+r)/2.\end{cases} \tag{1}\]
The WPRG construction in [10] is exactly a derandomization of the matrix \(\mathbf{M}^{(\log(1/\varepsilon))}_{0\dots n}\), where the base cases \(\mathbf{M}^{(0)}_{\ell_{0}\dots r_{0}}\) are generated by Nisan's PRG with error \(1/n\). In this paper, we also prove Theorem 1.8 and Theorem 1.7 by showing that \(\mathbf{M}^{(k)}_{0\dots n}\) is a good enough approximation of \(\mathbf{M}_{0\dots n}\) (with different choices of the parameter \(k\) and base case matrices \(\mathbf{M}^{(0)}_{\ell_{0}\dots r_{0}}\)).
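The following Python/numpy sketch implements recursion (1) on a toy regular ROBP, with artificial base-case approximations \(\mathbf{M}^{(0)}_{\ell_{0}..r_{0}}\) obtained by adding small noise to the exact products; this noise is our own stand-in for the output of a base PRG and is not how any actual generator behaves. For small enough base error, the printed error of \(\mathbf{M}^{(k)}_{0..n}\) should shrink rapidly as \(k\) grows, in line with the \(O(n\varepsilon_{0})^{k+1}\) bound sketched below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, w = 8, 4        # length (a power of 2) and width of a toy regular ROBP
eps0 = 0.05        # base-case error

# Exact random walk matrices M_i of a random permutation (hence regular) ROBP.
def random_regular_layer():
    p0 = np.eye(w)[rng.permutation(w)]
    p1 = np.eye(w)[rng.permutation(w)]
    return 0.5 * (p0 + p1)

M = {i: random_regular_layer() for i in range(1, n + 1)}

def exact(l, r):                       # M_{l..r}
    out = np.eye(w)
    for i in range(l + 1, r + 1):
        out = out @ M[i]
    return out

# Artificial base cases M^{(0)}_{l..r}: exact product plus noise of infinity-norm eps0
# (the noise has zero row sums, so row sums of the approximation stay equal to 1).
base_cache = {}
def base(l, r):
    if (l, r) not in base_cache:
        E = rng.standard_normal((w, w))
        E -= E.mean(axis=1, keepdims=True)
        base_cache[(l, r)] = exact(l, r) + eps0 * E / np.linalg.norm(E, np.inf)
    return base_cache[(l, r)]

def approx(l, r, k):                   # M^{(k)}_{l..r} following recursion (1)
    if r - l == 1:
        return M[r]
    if k == 0:
        return base(l, r)
    m = (l + r) // 2
    out = sum(approx(l, m, i) @ approx(m, r, k - i) for i in range(k + 1))
    out -= sum(approx(l, m, i) @ approx(m, r, k - 1 - i) for i in range(k))
    return out

for k in range(4):
    print(k, np.linalg.norm(approx(0, n, k) - exact(0, n), np.inf))
```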
Now for every \(i\geq 0\), define \(\boldsymbol{\Delta}^{(i)}_{\ell\dots r}:=\mathbf{M}^{(i)}_{\ell\dots r}- \mathbf{M}_{\ell\dots r}\). The correctness of both [10] and our results relies on the following identity.
**Lemma 2.1** ([10]).: _For every \((\ell,r)\in\mathsf{BS}_{n}\) s.t. \(r-\ell>1\) and \(m=(\ell+r)/2\),_
\[\boldsymbol{\Delta}^{(k)}_{\ell\dots r}:=\sum_{i+j=k}\boldsymbol{\Delta}^{(i)} _{\ell\dots m}\cdot\boldsymbol{\Delta}^{(j)}_{m\dots r}-\sum_{i+j=k-1} \boldsymbol{\Delta}^{(i)}_{\ell\dots m}\cdot\boldsymbol{\Delta}^{(j)}_{m\dots r }+\boldsymbol{\Delta}^{(k)}_{\ell\dots m}\mathbf{M}_{m\dots r}+\mathbf{M}_{ \ell\dots m}\boldsymbol{\Delta}^{(k)}_{m\dots r}.\]
We briefly sketch how the correctness in [10] was proved based on the lemma above. Suppose the "base PRG" has error \(\varepsilon_{0}\) so that \(\left\|\boldsymbol{\Delta}^{(0)}_{\ell_{0}\dots r_{0}}\right\|_{\infty}\leq \varepsilon_{0}\). Then one can prove by induction that \(\left\|\boldsymbol{\Delta}^{(k)}_{0\dots n}\right\|_{\infty}\leq O(n\varepsilon _{0})^{k+1}\), i.e. \(\mathbf{M}^{(k)}_{0\dots n}\) is a \(O(n\varepsilon_{0})^{k+1}\)-approximation of \(\mathbf{M}_{0\dots n}\), using the fact that \(\left\|\mathbf{M}_{\ell\dots r}\right\|_{\infty}\leq 1\) for every \(\ell<r\).
Now observe that the \(O(n\varepsilon_{0})^{k+1}\) bound is only non-trivial when \(\varepsilon_{0}<1/n\). As discussed in the introduction, the seed length of [1] is not better than Nisan's PRG in this parameter regime. Therefore, in the regular setting, even if we can take the base PRG to be the improved PRG in [1], we do not get a better WPRG directly. The main contribution of this work is to give an improved analysis of the error of \(\mathbf{M}^{(k)}_{0\dots n}\) in the regular setting.
### Proof overview
Similar to [10], the reason why we can get an improvement in the regular setting is that a regular ROBP has a bounded "total amount of mixing", no matter how large \(n\) is. Our goal is to inductively prove an approximation guarantee that the error of \(\mathbf{M}^{(k)}_{\ell\dots r}\) is _proportional to the amount of mixing_ from layer \(\ell\) to layer \(r\). For the proof of the WPRG construction (Theorem 1.8), this statement is formalized based on the "weight" defined in [1]. For the proof of the non-black-box derandomization (Theorem 1.7), this statement is formalized with SV approximation [1]. We defer the formal definitions to later sections, and focus on why this statement gives a better bound.
The first observation is that the last two error terms in Lemma 2.1 combine nicely. That is, by induction hypothesis we can show that the second last error term \(\boldsymbol{\Delta}^{(k)}_{\ell\dots m}\mathbf{M}_{m\dots r}\) is proportional to the amount of mixing from layer \(\ell\) to layer \(m\), and the last error term \(\mathbf{M}_{\ell\dots m}\boldsymbol{\Delta}^{(k)}_{m\dots r}\) is proportional to the amount of mixing from layer \(m\) to layer \(r\). Therefore, their sum is proportional to the total amount of mixing from layer \(\ell\) to layer \(r\). Furthermore, we observe that with a proper choice of parameters, the error terms in the first two summations are actually very small compared to the last two terms, and hence do not affect the total error too much.
Specifically, suppose we want to prove by induction that the error \(\boldsymbol{\Delta}^{(i)}_{\ell\dots r}\) is roughly bounded by the "level-\(i\) error" \(\varepsilon^{(i)}\). We properly choose \(\varepsilon^{(i)}\) as in the following lemma, so that the error terms in the first two summations sum up to roughly \(\varepsilon^{(k)}/\log(n)\). We defer the proof to Appendix A.3
Footnote 3: One can also choose \(\varepsilon^{(i)}=\gamma^{i+1}/((2K+1)\log(n))\) where \(K\) is an upper bound for \(k\). Then the proof of Lemma 2.2 becomes straightforward, and it turns out that this does not affect the final result.
**Lemma 2.2**.: _Let \(\gamma<1/2\), and define \(\varepsilon^{(i)}=\frac{(\gamma)^{i+1}}{10\log(n)(i+1)^{2}}\). Then for every \(k\in\mathbb{N}\) we have_
\[\sum_{i+j=k}\varepsilon^{(i)}\varepsilon^{(j)}+\sum_{i+j=k-1}\varepsilon^{(i)} \varepsilon^{(j)}\leq\varepsilon^{(k)}/\log(n).\]
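For intuition, the inequality in Lemma 2.2 is easy to check numerically for concrete parameters. The small script below is our own sanity check and is not a substitute for the proof in Appendix A; the names and the chosen constants are illustrative.

```python
# Numerical check of the inequality in Lemma 2.2 for gamma = 0.49 < 1/2.
# (The log(n) factor cancels on both sides, so its concrete value is irrelevant here.)
def eps(i, gamma, logn):
    return gamma ** (i + 1) / (10 * logn * (i + 1) ** 2)

gamma, logn = 0.49, 20.0
for k in range(1, 60):
    lhs = sum(eps(i, gamma, logn) * eps(k - i, gamma, logn) for i in range(k + 1))
    lhs += sum(eps(i, gamma, logn) * eps(k - 1 - i, gamma, logn) for i in range(k))
    assert lhs <= eps(k, gamma, logn) / logn, k
print("Lemma 2.2 inequality verified for k = 1, ..., 59")
```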
With the choice of parameters above, we can prove that the error of the "level-\(k\) approximation" \(\mathbf{M}_{\ell..r}^{(k)}\) only grows by a factor of \((1+1/\log(n))\) after each recursion. After \(\log(n)\) levels of recursion, the error only grows by a constant factor. Therefore, the "base-case error" \(\varepsilon^{(0)}\) only needs to be \(O(1/\log(n))\). This allows us to choose base cases with small seed length or space complexity. For the proof of Theorem 1.8, we choose the base case to be the [1] PRG with error \(2^{-\sqrt{\log(1/\varepsilon)}}\). For the proof of Theorem 1.7 the base cases are generated using derandomized squaring [10, 1].
### Small-space computation
Finally, before we start the formal proofs, we briefly discuss the model of space-bounded computation. We consider the standard model which is a Turing machine with a read-only input tape, a constant number of work tapes, and a write-only output tape. We say an algorithm runs in space \(s\) if it uses at most \(s\) cells on the _work tapes_ throughout the computation. Note that the input length and output length can be larger than \(s\).
Next we recall some basic facts that we will use in space complexity analysis. For parallel composition of algorithms \(\mathcal{A}_{1},\ldots,\mathcal{A}_{t}\) we can reuse the work tape and get the following lemma.
**Lemma 2.3**.: _Let \(\mathcal{A}_{1},\ldots,\mathcal{A}_{t}\) be algorithms that on input \(x\) run in space \(s_{1},\ldots,s_{t}\) respectively. Then there exists an algorithm \(\mathcal{A}\) that on input \(x\) outputs \((\mathcal{A}_{1}(x),\mathcal{A}_{2}(x),\ldots,\mathcal{A}_{t}(x))\) and runs in space \(\max_{i\in[t]}(s_{i})+O(\log(t))\)._
Furthermore, for sequential composition \(\mathcal{A}_{1}(\mathcal{A}_{2}(x))\), while we cannot fully store \(\mathcal{A}_{2}(x)\) in the work tape, we can still simulate an input tape containing \(\mathcal{A}_{2}(x)\) by computing the mapping \((x,i)\to\mathcal{A}_{2}(x)_{[i]}\) instead. (See, e.g., [1, Lemma 4.15].) This implies the following lemma.
**Lemma 2.4**.: _Let \(\mathcal{A}_{2}\) be an algorithm that runs in space \(s_{2}\) on input \(x\), and \(\mathcal{A}_{1}\) be an algorithm that runs in space \(s_{1}\) on input \(\mathcal{A}_{2}(x)\). Then there exists an algorithm \(\mathcal{A}\) that on input \(x\) outputs \(\mathcal{A}_{1}(\mathcal{A}_{2}(x))\) in space \(s_{1}+s_{2}+O(\log(s_{1}+s_{2}+|\mathcal{A}_{2}(x)|))\)._
We also use the following lemma that can be found in [14, 1].
**Lemma 2.5**.: _Let \(\mathbf{M}_{1},\ldots,\mathbf{M}_{t}\) be \(w\times w\) real matrices where each entry has bit length at most \(T\). Then \(\prod_{i=1}^{t}\mathbf{M}_{i}\) can be computed in space \(O(\log(t)\log(twT))\)._
## 3 WPRG for regular ROBPs
Using the matrix notation, the weight defined in [1] can be written as follows.4
Footnote 4: The original results in [1] only consider \(y\in[0,1]^{w}\), but one can easily generalize them to \(\mathbb{R}^{w}\) by shifting and scaling.
**Definition 3.1**.: _For every vector \(y\in\mathbb{R}^{w}\) and every \(i\in[n]\), define the layer-\(i\) weight on \(y\) as_
\[W(i,y):=\sum_{u\in[w]}\sum_{b\in\{0,1\}}\left|(\mathbf{M}_{i}y)[u]-y[B_{i}(u, b)]\right|.\]
_For every \(0\leq\ell<r\leq n\), the total weight between layer \(\ell\) and \(r\) on \(y\) is defined as_
\[W(\ell,r,y):=\sum_{i=\ell+1}^{r}W(i,\mathbf{M}_{i..r}y).\]
Footnote 5: For the degenerate case \(i=r\), let \(\mathbf{M}_{r..r}\) denote the identity matrix.
**Remark 3.2**.: _To interpret \(W(\ell,r,y)\) with the original description in [1], consider the graph view of ROBPs, and consider \(y\) to be the values on the nodes in layer \(r\). Then for every \(i\leq r\), \(\mathbf{M}_{i..r}y\) corresponds to the values on layer \(i\). Observe that each term in the definition of \(W(i,\mathbf{M}_{i..r}y)\) corresponds to the "weight" on an edge between layer \(i-1\) and \(i\). In consequence, \(W(\ell,r,y)\) corresponds to the total weight of the sub-program between layer \(\ell\) and \(r\) (i.e. the ROBP specified by transition functions \((B_{\ell+1},\ldots,B_{r})\))._
The following identity is straightforward by definition:
**Fact 2**.: _For every \(0\leq\ell<m<r\leq n\) and every vector \(y\in\mathbb{R}^{w}\), \(W(\ell,r,y)=W(\ell,m,\mathbf{M}_{m\ldots r}y)+W(m,r,y)\). This also implies \(\max(W(\ell,m,\mathbf{M}_{m\ldots r}y),W(m,r,y))\leq W(\ell,r,y)\)._
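The following numpy sketch, a toy example with names of our own choosing, computes the weight of Definition 3.1 for a random permutation ROBP and confirms the decomposition in Fact 2 on a random vector \(y\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, w = 8, 5

# Random permutation layers: B[i][b] maps state u -> B_i(u, b); M[i] is the walk matrix.
B = {i: [rng.permutation(w) for _ in range(2)] for i in range(1, n + 1)}
M = {i: 0.5 * sum(np.eye(w)[B[i][b]] for b in range(2)) for i in range(1, n + 1)}

def walk(l, r):                         # M_{l..r}, the identity when l = r
    out = np.eye(w)
    for i in range(l + 1, r + 1):
        out = out @ M[i]
    return out

def W_layer(i, y):                      # layer-i weight on y, as in Definition 3.1
    My = M[i] @ y
    return sum(abs(My[u] - y[B[i][b][u]]) for u in range(w) for b in range(2))

def W(l, r, y):                         # total weight between layers l and r on y
    return sum(W_layer(i, walk(i, r) @ y) for i in range(l + 1, r + 1))

y = rng.standard_normal(w)
l, m, r = 0, 3, 8
assert np.isclose(W(l, r, y), W(l, m, walk(m, r) @ y) + W(m, r, y))
print("Fact 2 verified on a random instance")
```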
Given the definition of weight, the main results in [1] imply the following lemmas.6
Footnote 6: To get Lemma 3.4, we use the fact that \(\mathbf{M}_{\ell\ldots r}(\cdot)\) corresponds to the transition matrices of a ROBP of length \((r-\ell)\) and width \(w\), and one can extend it to length \(n\) by adding more identity transitions which do not affect the total weight.
**Lemma 3.3**.: _For every \(\ell<r\) and every vector \(y\in\mathbb{R}^{w}\), \(W(\ell,r,y)\leq w^{2}\left\|y\right\|_{\infty}\)._
**Lemma 3.4**.: _For every \(\delta,W^{*}>0\), there exists an explicit PRG \(G_{0}:\{0,1\}^{d_{0}}\to\{0,1\}^{n}\) s.t. for every \(0\leq\ell<r\leq n\),_
\[\left\|\left(\operatorname*{\mathbb{E}}_{s\sim\{0,1\}^{d_{0}}}\left[\mathbf{M}_{\ell\ldots r}(G_{0}(s)_{[\leq r-\ell]})\right]-\mathbf{M}_{\ell\ldots r}\right)y\right\|_{\infty}\leq\delta W(\ell,r,y).\]
_In addition, the seed length is \(d_{0}=O\left(\log(n)\left(\log\log(n)+\log(w/\delta)\right)\right)\)._
Now define \(W^{*}:=w^{2}\), which by Lemma 3.3 implies \(W^{*}\geq\max_{y:\left\|y\right\|_{\infty}=1}\left(W(0,n,y)\right)\). To simplify notation, we define _weight approximation_ as follows.
**Definition 3.5**.: _For every \(0\leq\ell<r\leq n\), we say \(\widetilde{\mathbf{M}_{\ell\ldots r}}\) is a \(\varepsilon_{0}\)-weight approximation of \(\mathbf{M}_{\ell\ldots r}\) if_
\[\left\|\left(\widetilde{\mathbf{M}_{\ell\ldots r}}-\mathbf{M}_{\ell\ldots r} \right)y\right\|_{\infty}\leq\varepsilon_{0}\cdot\frac{W(\ell,r,y)}{W^{*}}.\]
Note that \(\widetilde{\mathbf{M}_{\ell\ldots r}}\) being a \(\delta\)-weight approximation of \(\mathbf{M}_{\ell\ldots r}\) also implies \(\left\|\widetilde{\mathbf{M}_{\ell\ldots r}}-\mathbf{M}_{\ell\ldots r} \right\|_{\infty}\leq\delta\). Now fix a parameter \(\gamma>0\) to be specified later, and define \(\varepsilon^{(i)}=\frac{\gamma^{i+1}}{10(i+1)^{2}\log(n)}\) as in Lemma 2.2. Let \(G_{0}\) be the PRG in Lemma 3.4 with parameter \(\delta=\varepsilon^{(0)}/(3W^{*})\), and for every \((\ell,r)\in\mathsf{BS}_{n}\) such that \(r-\ell>1\), define
\[\mathbf{M}_{\ell\ldots r}^{(0)}:=\operatorname*{\mathbb{E}}_{s\sim\{0,1\}^{d _{0}}}\left[\mathbf{M}_{\ell\ldots r}(G_{0}(s)_{[\leq r-\ell]})\right],\]
which is a \((\varepsilon^{(0)}/3)\)-weight approximation by Lemma 3.4. Then define \(\mathbf{M}_{\ell\ldots r}^{(k)}\) recursively as in Equation (1). The following is our main lemma for proving Theorem 1.8:
**Lemma 3.6** (main).: _For every \(k\in\mathbb{N}\), every \(y\in\mathbb{R}^{w}\) and every \((\ell,r)\in\mathsf{BS}_{n}\), \(\mathbf{M}_{\ell\ldots r}^{(k)}\) is a \(C_{t}\varepsilon^{(k)}\)-weight approximation of \(\mathbf{M}_{\ell\ldots r}\), where \(t=\log(r-\ell)\) and \(C_{t}=(1+1/\log(n))^{t}/3\)._
Proof.: We prove the lemma by induction over \(t\) and \(k\). The first base case \(t=0\) is trivial since \(\mathbf{M}_{\ell\ldots r}^{(k)}=\mathbf{M}_{\ell\ldots r}\). The second base case \(k=0\) is also true by definition. For the general case, first we note that the lemma also implies \(\left\|\mathbf{\Delta}_{\ell\ldots r}^{(k)}\right\|_{\infty}\leq C_{t} \varepsilon^{(k)}\leq\varepsilon^{(k)}\). Then observe that by Lemma 2.1 and sub-additivity/sub-multiplicativity of infinity norm, we have
\[\left\|\mathbf{\Delta}_{\ell\ldots r}^{(k)}y\right\|_{\infty}\leq\sum_{i+j\in\{k-1,k\}}\left\|\mathbf{\Delta}_{\ell\ldots m}^{(i)}\mathbf{\Delta}_{m\ldots r}^{(j)}y\right\|_{\infty}+\left\|\mathbf{M}_{\ell\ldots m}\mathbf{\Delta}_{m\ldots r}^{(k)}y\right\|_{\infty}+\left\|\mathbf{\Delta}_{\ell\ldots m}^{(k)}\mathbf{M}_{m\ldots r}y\right\|_{\infty}\] \[\leq\sum_{i+j\in\{k-1,k\}}\left\|\mathbf{\Delta}_{\ell\ldots m}^{(i)}\right\|_{\infty}\left\|\mathbf{\Delta}_{m\ldots r}^{(j)}y\right\|_{\infty}+\left\|\mathbf{M}_{\ell\ldots m}\right\|_{\infty}\left\|\mathbf{\Delta}_{m\ldots r}^{(k)}y\right\|_{\infty}+\left\|\mathbf{\Delta}_{\ell\ldots m}^{(k)}\mathbf{M}_{m\ldots r}y\right\|_{\infty}\] \[\leq\sum_{i+j\in\{k-1,k\}}\left(C_{t-1}^{2}\varepsilon^{(i)}\varepsilon^{(j)}\cdot\frac{W(m,r,y)}{W^{*}}\right)+C_{t-1}\varepsilon^{(k)}\cdot\frac{W(m,r,y)+W(\ell,m,\mathbf{M}_{m\ldots r}y)}{W^{*}}\] (induction) \[\leq C_{t}\varepsilon^{(k)}\cdot\frac{W(\ell,r,y)}{W^{*}}.\] (by Lemma 2.2 and Fact 2) In other words, \(\mathbf{M}_{\ell\ldots r}^{(k)}\) is a \(C_{t}\varepsilon^{(k)}\)-weight approximation of \(\mathbf{M}_{\ell\ldots r}\).
Lemma 3.6 shows that \(\mathbf{M}^{(k)}_{0..n}\) is a \(\varepsilon^{(k)}\)-weight approximation, which also implies \(\left\|\mathbf{M}^{(k)}_{0..n}-\mathbf{M}_{0..n}\right\|_{\infty}\leq\varepsilon ^{(k)}\). It remains to construct a WPRG that actually "implements" \(\mathbf{M}^{(k)}_{0..n}\). This step is rather standard and is essentially the same as the corresponding step in [10]: to get the seed length as claimed in Theorem 1.8, we need to further "derandomize" \(\mathbf{M}^{(k)}_{0..n}\) using the technique in [11, 12]. In short, we expand the recursive formula for \(\mathbf{M}^{(k)}_{0..n}\) and get an "error reduction polynomial" over matrices \(\mathbf{M}^{(0)}_{i..j}\). One can show that there are at most \(K=n^{O(k)}\) terms in the polynomial, and each term has at most \(h=k\log(n)\) factors. Then we can use the INW generator [13] for length \(h\) and width \(w\) to approximate each term with error \(\varepsilon/2K\), which gives us a \(\varepsilon/2\)-approximation to \(\mathbf{M}^{(k)}_{0..n}\). We discuss the details in Section 3.1.
**Remark 3.7**.: _Note that our construction and proof also works for "small-weight ROBPs" in general, if we define \(W^{*}=\max_{B,y:\left\|y\right\|_{\infty}=1}\left(W(0,n,y)\right)\) instead. Similar to [10], this only costs additional \(O(\log(n)\log(W^{*}))\) bits of seed length._
### Final WPRG construction
To simplify notation, we assume without loss of generality that the first output bit of \(G_{0}\) is unbiased, i.e. \(\Pr_{s\in\{0,1\}^{d_{0}}}\left[G_{0}(s)_{[1]}=1\right]=1/2\).7 Then we can merge the two different base cases by defining \(\mathbf{M}^{(0)}_{r-1..r}:=\mathbf{M}^{(k)}_{r-1..r}=\mathbb{E}_{s\in\{0,1\}^{d_{0}}}\left[\mathbf{M}_{r-1..r}(G_{0}(s)_{[\leq 1]})\right]\). Now consider the following notation.
Footnote 7: To get such a PRG, we can simply take a PRG \(G_{0}^{\prime}\) from [1] with \((n-1)\)-bit output and define \(G_{0}(b\circ s)=b\circ G_{0}^{\prime}(s)\), where \(b\) is the first bit.
**Definition 3.8**.: _For every \(0\leq\ell<r\leq n\), let \(\mathsf{IS}_{\ell..r}\) denote the set of increasing sequences \(\mathsf{sq}=(i_{0},i_{1},\ldots,i_{h})\) s.t. \(\ell=i_{0}<i_{1}<\ldots<i_{h}=r\). We say \(h\) is the length of \(\mathsf{sq}\). For every \(\mathsf{sq}\in\mathsf{IS}_{\ell..r}\), define_
\[\mathbf{M}^{(0)}_{\mathsf{sq}}:=\prod_{j=1}^{h}\mathbf{M}^{(0)}_{i_{j-1}..i_ {j}}.\]
Given the notation above, we get the following lemma regarding the expansion of \(\mathbf{M}^{(k)}_{0..n}\), which is not hard to prove by induction. For completeness we include a proof in Appendix B.
**Lemma 3.9**.: _For every \(k\in\mathbb{N}\) and every \((\ell,r)\in\mathsf{BS}_{n}\), there is a (multi)set \(S\subseteq\mathsf{IS}_{\ell..r}\times\{-1,+1\}\) which satisfies that_
* \(\mathbf{M}^{(k)}_{\ell..r}=\sum_{(\mathsf{sq},\sigma)\in S}\sigma\mathbf{M}^{( 0)}_{\mathsf{sq}}\)_._
* \(|S|\leq(r-\ell)^{2k}\)_._
* _For every_ \((\mathsf{sq},\sigma)\in S\)_, the length of_ \(\mathsf{sq}\) _is at most_ \(k\log(r-\ell)+1\)_._
In addition, we would need to derandomize each term \(\mathbf{M}^{(0)}_{\mathsf{sq}}\) using the following matrix view of INW generator [13], which can be found in, e.g., [1]:
**Lemma 3.10**.: _Let \(\Sigma\) be a finite set of symbols. Suppose for every \(i\in[h]\), there is a matrix-valued function \(\mathbf{A}_{i}:\Sigma\to\mathbb{R}^{w\times w}\) which on every input in \(\Sigma\) outputs a stochastic matrix. Then for every \(\varepsilon_{\mathrm{INW}}>0\) there exists an explicit function \(G_{\mathrm{INW}}:\{0,1\}^{d}\to\Sigma^{h}\) such that_
\[\left\|\underset{s\in\{0,1\}^{d}}{\mathbb{E}}\left[\prod_{i=1}^{h}\mathbf{A}_{ i}(G_{\mathrm{INW}}(s)_{[i]})\right]-\prod_{i=1}^{h}\underset{x\in\Sigma}{ \mathbb{E}}\left[\mathbf{A}_{i}(x)\right]\right\|_{\infty}\leq\varepsilon_{ \mathrm{INW}},\]
_and \(d=O(\log|\Sigma|+\log(h)\log(hw/\varepsilon_{\mathrm{INW}}))\)._
Now we are ready to prove Theorem 1.8.
Proof of Theorem 1.8.: Let \(S\subseteq\mathsf{IS}_{0..n}\times[-1,1]\) be the set defined in Lemma 3.9 s.t. \(\mathbf{M}_{0..n}^{(k)}=\sum_{(\mathsf{sq},\sigma)\in S}\sigma\mathbf{M}_{\mathsf{sq}}^{(0)}\). Without loss of generality we can assume that \(S\) has size exactly \(2^{2k\log(n)}\) by adding dummy sequences with weight \(\sigma=0\). In addition, note that there is an enumeration function \(E_{S}:\{0,1\}^{2k\log(n)}\to S\) that can be implemented in space \(O(\log(k)\log(n))\) following recursive formula (1).8 Then consider \(G_{\mathrm{INW}}:\{0,1\}^{d_{\mathrm{INW}}}\to\Sigma^{h}\) in Lemma 3.10 with \(\Sigma=\{0,1\}^{d_{0}},h=k\log(n)+1\) and \(\varepsilon_{\mathrm{INW}}=\varepsilon/(2|S|)\), then define \(d=2k\log(n)+d_{\mathrm{INW}}\). The final WPRG construction \((\rho,G):\{0,1\}^{d}\to\mathbb{R}\times\{0,1\}^{n}\) is as follows. On any input \(s\),
Footnote 8: That is, we use the first \(\lceil\log(2k+1)\rceil\) bits as index to determine a term in the recursive formula, then discard these \(\lceil\log(2k+1)\rceil\) bits and recurse. If there’s any undefined index, simply return a dummy sequence with weight \(0\).
1. Parse \(s\) as \((s_{\mathrm{enum}},s_{\mathrm{INW}})\in\{0,1\}^{2k\log(n)}\times\{0,1\}^{d_{ \mathrm{INW}}}\)
2. Define \(((i_{0},i_{1},\ldots,i_{h}),\sigma):=E_{S}(s_{\mathrm{enum}})\).
3. For \(j\in[h]\), define \(r_{j}:=G_{0}(G_{\mathrm{INW}}(s_{\mathrm{INW}})_{[j]})\in\{0,1\}^{n}\).
4. Output \((\rho(s),G(s)):=(\sigma|S|,(r_{1})_{[\leq i_{1}-i_{0}]}\circ(r_{2})_{[\leq i_ {2}-i_{1}]}\circ\ldots\circ(r_{h})_{[\leq i_{h}-i_{h-1}]})\).
Next we prove the correctness of \(G\). Observe that
\[\operatorname*{\mathbb{E}}_{s}\left[\rho(s)\mathbf{M}_{0..n}(G(s))\right]= \sum_{((i_{0},\ldots,i_{h}),\sigma)\in S}\sigma\cdot\operatorname*{\mathbb{E} }_{s_{\mathrm{INW}}}\left[\prod_{j=1}^{h}\mathbf{M}_{i_{j-1}..i_{j}}(G_{0}(G_{ \mathrm{INW}}(s_{\mathrm{INW}})_{[j]}))\right].\]
For every term in the above equation, consider the matrix-valued functions \(\mathbf{A}_{j}:\{0,1\}^{d_{0}}\to\mathbb{R}^{w\times w}\) s.t. \(\mathbf{A}_{j}(r)=\mathbf{M}_{i_{j-1}..i_{j}}(G_{0}(r)_{[\leq i_{j}-i_{j-1}]})\). Note that \(\operatorname*{\mathbb{E}}_{r}\left[\mathbf{A}_{j}(r)\right]=\mathbf{M}_{i_{ j-1}..i_{j}}^{(0)}\). Then by Lemma 3.10 we have
\[\left\|\operatorname*{\mathbb{E}}_{s_{\mathrm{INW}}}\left[\prod_{j=1}^{h}\mathbf{A}_{j}(G_{\mathrm{INW}}(s_{\mathrm{INW}})_{[j]})\right]-\mathbf{M}_{(i_{0},i_{1},\ldots,i_{h})}^{(0)}\right\|_{\infty}\leq\varepsilon/(2|S|),\]
which by the sub-additivity of \(\left\|\cdot\right\|_{\infty}\) implies
\[\left\|\operatorname*{\mathbb{E}}_{s}\left[\rho(s)\mathbf{M}_{0..n}(G(s)) \right]-\mathbf{M}_{0..n}^{(k)}\right\|_{\infty}\leq\varepsilon/2.\]
We pick suitable \(\gamma,k\) (to be specified later) so that \(\varepsilon^{(k)}\leq\varepsilon/2\). Then by Lemma 3.6 we have
\[\left\|\operatorname*{\mathbb{E}}_{s}\left[\rho(s)\mathbf{M}_{0..n}(G(s))\right]-\mathbf{M}_{0..n}\right\|_{\infty}\leq\varepsilon.\]
Then consider the vectors \(v_{\mathbf{s}\mathbf{t}},v_{\mathbf{ed}}\in\mathbb{R}^{w}\) corresponding to the start and end states as discussed in Section 2. Observe that \(\left\|v_{\mathbf{s}\mathbf{t}}\right\|_{1}=1\) and \(\left\|v_{\mathbf{ed}}\right\|_{\infty}\leq 1\) by definition. Therefore we have
\[\left|\operatorname*{\mathbb{E}}_{s\in\{0,1\}^{d}}\left[\rho(s)B( G(s))\right]-\operatorname*{\mathbb{E}}_{x\in\{0,1\}^{n}}[B(x)]\right| =\left|v_{\mathbf{s}\mathbf{t}}^{\top}\left(\operatorname*{ \mathbb{E}}_{s}\left[\rho(s)\mathbf{M}_{0..n}(G(s))\right]-\operatorname*{ \mathbb{E}}_{x}\left[\mathbf{M}_{0..n}(x)\right]\right)v_{\mathbf{ed}}\right|\] \[\leq\left\|v_{\mathbf{s}\mathbf{t}}\right\|_{1}\cdot\left\| \operatorname*{\mathbb{E}}_{s}\left[\rho(s)\mathbf{M}_{0..n}(G(s))\right]- \mathbf{M}_{0..n}\right\|_{\infty}\cdot\left\|v_{\mathbf{ed}}\right\|_{\infty}\] \[\leq\varepsilon.\]
Finally we analyze the seed length with an unspecified parameter \(0<\gamma<1/\log(n)\). Take \(k\) to be the minimum integer s.t. \(\varepsilon^{(k)}\leq\varepsilon/2\). Observe that \(\log(1/\varepsilon^{(0)})=O(1/\gamma)\) and \(k=O(\log(1/\varepsilon)/\log(1/\gamma))\). This implies
\[d=d_{0}+O(\log(h)\log(hw/\varepsilon)+k\log(n))=d_{0}+\tilde{O}(\log(w/ \varepsilon)+k\log(n)).\]
By Lemma 3.4, we have
\[d_{0}=O(\log(n)\log(wW^{*}/\gamma)+\log(n)\log\log(n))=\tilde{O}(\log(n)\log(w/ \gamma)).\]
Therefore,
\[d=\tilde{O}\left(\log(n)\log(w/\gamma)+\frac{\log(n)\log(1/\varepsilon)}{\log(1/ \gamma)}+\log(1/\varepsilon)\right).\]
Taking \(\gamma=2^{-\sqrt{\log(n)}}\), we get
\[d=\tilde{O}\left(\log(n)\left(\log(w)+\sqrt{\log(1/\varepsilon)}\right)+\log( 1/\varepsilon)\right).\]
Finally, observe that the space complexity is \(O(d_{0}+\log(k)\log(n)+d_{\text{INW}})=O(d)\). Therefore the WPRG is explicit.
## 4 Non-black-box Derandomization for Regular ROBPs
We prove Theorem 1.7 in this section. Inspired by [10], we use the notion of SV approximation to capture the "amount of mixing". To simplify notation, we define the function \(D:\mathbb{R}^{w\times w}\times\mathbb{R}^{w}\to\mathbb{R}\) to be \(D(\mathbf{A},y):=\left\|y\right\|^{2}-\left\|\mathbf{A}y\right\|^{2}\), which plays the same role as the weight measure in the proof of Theorem 1.8. The following fact is straightforward from the definition.
**Fact 3**.: \(D(\mathbf{B},y)+D(\mathbf{A},\mathbf{B}y)=D(\mathbf{A}\mathbf{B},y)\)_._
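Fact 3 is a telescoping identity that holds for arbitrary matrices; the following tiny numpy check is purely illustrative and only confirms it numerically on random inputs.

```python
import numpy as np

rng = np.random.default_rng(3)
w = 5
A, B = rng.standard_normal((w, w)), rng.standard_normal((w, w))
y = rng.standard_normal(w)

def D(Mat, v):                       # D(M, y) = ||y||^2 - ||My||^2
    return np.dot(v, v) - np.dot(Mat @ v, Mat @ v)

# D(B, y) + D(A, By) telescopes to D(AB, y).
assert np.isclose(D(B, y) + D(A, B @ y), D(A @ B, y))
print("Fact 3 verified on a random instance")
```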
The notion of singular value approximation (SV approximation) is defined as follows.
**Definition 4.1** (SV approximation [1]).: _Let \(\mathbf{W}\in\mathbb{R}^{w\times w}\) be a doubly stochastic matrix. We say \(\widetilde{\mathbf{W}}\) is an \(\varepsilon\)-SV approximation of \(\mathbf{W}\) if for every \(x,y\in\mathbb{R}^{w}\),_
\[\left|x^{\top}(\widetilde{\mathbf{W}}-\mathbf{W})y\right|\leq\varepsilon\cdot \left(\frac{D(\mathbf{W}^{\top},x)+D(\mathbf{W},y)}{2}\right).\]
_Equivalently, for every \(x,y\in\mathbb{R}^{w}\),_
\[\left|x^{\top}(\widetilde{\mathbf{W}}-\mathbf{W})y\right|\leq\varepsilon\cdot \left(\sqrt{D(\mathbf{W}^{\top},x)\cdot D(\mathbf{W},y)}\right).\]
The proof of Theorem 1.7 is very similar to our proof of Theorem 1.8 in the previous section. First we also need a base case for the different approximation notion. As proved in [1, 10], there is a space-efficient implementation of SV approximation of random walk matrices based on derandomized squaring [14]:
**Lemma 4.2** ([1, 10]).: _For every \((\ell,r)\in\mathsf{BS}_{n}\), there is an algorithm that computes a \(\delta\)-SV approximation of \(\mathbf{M}_{\ell..r}\) in space \(\tilde{O}(\log(nw)\log(1/\delta))\). Further, each entry of this approximation matrix has bit length at most \(O(\log(n)\log(1/\delta))\)._
We also need the following simple lemma, which can be found in [10]. We include its (short) proof for completeness.
**Lemma 4.3**.: _Suppose \(\widetilde{\mathbf{W}}\) is a \(\delta\)-SV-approximation of \(\mathbf{W}\), and let \(\mathbf{\Delta}=\widetilde{\mathbf{W}}-\mathbf{W}\). Then_
\[\left\|\mathbf{\Delta}y\right\|_{2}\leq\delta\sqrt{D(\mathbf{W},y)}.\]
Proof.: Observe that
\[\left\|\mathbf{\Delta}y\right\|_{2}^{2}=(\mathbf{\Delta}y)^{\top}\mathbf{ \Delta}y\leq\delta\sqrt{D(\mathbf{W},y)\cdot\left(\left\|\mathbf{\Delta}y \right\|_{2}^{2}-\left\|\mathbf{W}^{\top}\mathbf{\Delta}y\right\|_{2}^{2} \right)}\leq\delta\sqrt{D(\mathbf{W},y)}\cdot\left\|\mathbf{\Delta}y\right\|.\]
Now for every \((\ell,r)\in\mathsf{BS}_{n}\), define \(\mathbf{M}_{\ell..r}^{(0)}\) to be a \((\varepsilon^{(0)}/3)\)-SV approximation of \(\mathbf{M}_{\ell..r}\), and define \(\mathbf{M}_{\ell..r}^{(k)}\) using the recursion (Equation (1)). We prove the following lemma which is analogous to Lemma 3.6:
**Lemma 4.4** (main).: _For every \(k\in\mathbb{N}\) and every \((\ell,r)\in\mathsf{BS}_{n}\), \(\mathbf{M}_{\ell..r}^{(k)}\) is a \(C_{t}\varepsilon^{(k)}\)-SV approximation of \(\mathbf{M}_{\ell..r}\), where \(t=\log(r-\ell)\) and \(C_{t}=(1+1/\log(n))^{t}/3\)._
Proof.: We again prove the lemma by induction. The base cases \(t=0\) or \(k=0\) are trivial by definition. For the general case, observe that by Lemma 2.1, for every \(x,y\in\mathbb{R}^{w}\) we have
\[x^{\top}\mathbf{\Delta}_{\ell..r}^{(k)}y=\sum_{i+j=k}x^{\top}\mathbf{\Delta}_ {\ell..m}^{(i)}\mathbf{\Delta}_{m..r}^{(j)}y-\sum_{i+j=k-1}x^{\top}\mathbf{ \Delta}_{\ell..m}^{(i)}\mathbf{\Delta}_{m..r}^{(j)}y+x^{\top}\mathbf{M}_{\ell..m}\mathbf{\Delta}_{m..r}^{(k)}y+x^{\top}\mathbf{\Delta}_{\ell..m}^{(k)} \mathbf{M}_{m..r}y. \tag{2}\]
To bound the first two summations in Equation (2), observe that
\[\sum_{i+j=k}x^{\top}\mathbf{\Delta}_{\ell..m}^{(i)}\mathbf{\Delta}_{m..r}^{(j)}y-\sum_{i+j=k-1}x^{\top}\mathbf{\Delta}_{\ell..m}^{(i)}\mathbf{\Delta}_{m..r}^{(j)}y\] \[\leq\sum_{i+j\in\{k-1,k\}}\left\|(\mathbf{\Delta}_{\ell..m}^{(i)})^{\top}x\right\|_{2}\left\|\mathbf{\Delta}_{m..r}^{(j)}y\right\|_{2}\] (Cauchy-Schwarz) \[\leq C_{t-1}^{2}\sum_{i+j\in\{k-1,k\}}\varepsilon^{(i)}\varepsilon^{(j)}\sqrt{D\left(\mathbf{M}_{\ell..m}^{\top},x\right)D\left(\mathbf{M}_{m..r},y\right)}\] (Lemma 4.3) \[\leq C_{t-1}^{2}\cdot\frac{\varepsilon^{(k)}}{\log(n)}\cdot\frac{D\left(\mathbf{M}_{\ell..m}^{\top},x\right)+D\left(\mathbf{M}_{m..r},y\right)}{2}\] (by Lemma 2.2 and AM-GM) \[\leq C_{t-1}\cdot\frac{\varepsilon^{(k)}}{\log(n)}\cdot\frac{D\left(\mathbf{M}_{\ell..r}^{\top},x\right)+D\left(\mathbf{M}_{\ell..r},y\right)}{2}\] (since \(C_{t-1}\leq 1\) and \(\left\|\mathbf{M}_{\ell..m}\right\|_{2},\left\|\mathbf{M}_{m..r}^{\top}\right\|_{2}\leq 1\))
To bound the last two terms in Equation (2), observe that
\[x^{\top}\mathbf{M}_{\ell..m}\mathbf{\Delta}_{m..r}^{(k)}y+x^{\top}\mathbf{\Delta}_{\ell..m}^{(k)}\mathbf{M}_{m..r}y\] \[\leq C_{t-1}\cdot\varepsilon^{(k)}\cdot\frac{D(\mathbf{M}_{m..r},y)+D(\mathbf{M}_{m..r}^{\top},\mathbf{M}_{\ell..m}^{\top}x)+D(\mathbf{M}_{\ell..m},\mathbf{M}_{m..r}y)+D(\mathbf{M}_{\ell..m}^{\top},x)}{2}\] \[=C_{t-1}\cdot\varepsilon^{(k)}\cdot\frac{D(\mathbf{M}_{\ell..r},y)+D(\mathbf{M}_{\ell..r}^{\top},x)}{2}.\] (Fact 3)
By summing up the two inequalities, we can conclude that
\[x^{\top}\mathbf{\Delta}_{\ell..r}^{(k)}y\leq C_{t}\cdot\varepsilon^{(k)} \cdot\frac{D(\mathbf{M}_{\ell..r},y)+D(\mathbf{M}_{\ell..r}^{\top},x)}{2}.\]
Because negating \(y\) does not change the bound above, we get
\[\left|x^{\top}(\mathbf{M}_{\ell..r}^{(k)}-\mathbf{M}_{\ell..r})y\right|\leq C _{t}\cdot\varepsilon^{(k)}\cdot\frac{D(\mathbf{M}_{\ell..r},y)+D(\mathbf{M}_{ \ell..r}^{\top},x)}{2},\]
i.e. \(\mathbf{M}_{\ell..r}^{(k)}\) is a \(C_{t}\varepsilon^{(k)}\)-SV approximation of \(\mathbf{M}_{\ell..r}\).
Finally, let \(\gamma=1/\log(n)\) and \(k=O(\log(1/\varepsilon)/\log(1/\gamma))\) be the minimum integer s.t. \(\varepsilon^{(k)}\leq\varepsilon\). We claim that \(\mathbf{M}_{0..n}^{(k)}\) can be implemented in space \(\tilde{O}(\log(k)\log(nw))\), which implies the following result:
**Theorem 4.5**.: _There is an algorithm which can compute an \(\varepsilon\)-SV approximation of the random walk matrix \(\mathbf{M}_{0..n}\) in space \(\tilde{O}(\log(nw)\log\log(1/\varepsilon))\)._
Note that Theorem 1.7 is also a direct corollary of this theorem:
Proof of Theorem 1.7.: Compute a \((\varepsilon/\sqrt{w})\)-SV approximation of \(\mathbf{M}_{0..n}\), denoted by \(\widetilde{\mathbf{M}_{0..n}}\). By Theorem 4.5 this takes space \(\tilde{O}(\log(nw)\log\log(1/\varepsilon))\). Then consider \(v_{\mathsf{st}}\), \(v_{\mathsf{ed}}\) as defined in Section 2, and output \(v_{\mathsf{st}}^{\top}\widetilde{\mathbf{M}_{0..n}}v_{\mathsf{ed}}\). To prove the correctness, recall that \(v_{\mathsf{st}}^{\top}\mathbf{M}_{0..n}v_{\mathsf{ed}}=\mathbb{E}_{x}\left[B( x)\right]\), which implies
\[\left|v_{\mathsf{st}}^{\top}\widetilde{\mathbf{M}_{0..n}}v_{\mathsf{ed}}- \mathop{\mathbb{E}}_{x\in\{0,1\}^{n}}\left[B(x)\right]\right|=\left|v_{\mathsf{ st}}^{\top}(\widetilde{\mathbf{M}_{0..n}}-\mathbf{M}_{0..n})v_{\mathsf{ed}} \right|\leq\varepsilon/\sqrt{w}\left\|v_{\mathsf{st}}\right\|\left\|v_{ \mathsf{ed}}\right\|\leq\varepsilon\]
by the fact that \(\left\|v_{\mathsf{st}}\right\|=1\) and \(\left\|v_{\mathsf{ed}}\right\|\leq\sqrt{w}\).
**Remark 4.6**.: _Note that \(\mathbf{M}^{(k)}_{0..n}\) does not satisfy the original definition of SV approximation in [1] because it is not necessarily doubly stochastic. While every row and column in \(\mathbf{M}^{(k)}_{0..n}\) does sum up to \(1\), some of its entries might be negative._
### Space-efficient implementation
Finally we prove that \(\mathbf{M}^{(k)}_{0..n}\) can be implemented in space \(\tilde{O}(\log(k)\log(nw))\). Note that a naive implementation of the recursion (Equation (1)) takes at least \(O(\log(n)\log(wk))\) space. Furthermore, we cannot naively enumerate each term of \(\mathbf{M}^{(k)}_{0..n}\) in its expansion (Lemma 3.9) either, because there are \(n^{O(k)}\) terms in total, which takes at least \(O(k\log(n))\) bits to enumerate.
To reduce the space complexity, we will compute \(\mathbf{M}^{(k)}_{0..n}\) with a different recursive formula. The intuition of the new recursion is as follows. First observe that each term in the expansion of \(\mathbf{M}^{(k)}_{0..n}\) corresponds to a way to put \(k\) balls into \(2n-1\) bins (indexed by \([2n-1]\)), under the constraint that each odd-indexed bin contains at most one ball. To see why this is the case, observe that the original recursion corresponds to the following way to recursively enumerate all the ball-to-bin combinations: first we put \(b\in\{0,1\}\) balls in the middle bin (which corresponds to the sign \((-1)^{b}\)), then choose \(i,j\) s.t. \(i+j=k-b\), and then recursively put \(i\) balls in the left \((n-1)\) bins (which corresponds to \(\mathbf{M}^{(i)}_{0..n/2}\)) and \(j\) balls in the right \((n-1)\) bins (which corresponds to \(\mathbf{M}^{(j)}_{n/2..n}\)).
Then observe that there is a different way to enumerate all the combinations with only \(\lceil\log(k)\rceil\) levels of recursion as follows. First decide where the \(h\)-th ball is located, where \(h=\lceil k/2\rceil\). If it is in an even-indexed bin, also decide how many balls are on the left and how many balls are on the right. Otherwise, there can be only one ball in the selected odd-indexed bin, and the numbers of balls on the left and right are fixed. Then for each choice, recursively enumerate the combinations on the left and right respectively. We claim that there is a corresponding recursive formula for \(\mathbf{M}^{(k)}_{0..n}\) which can be implemented in only \(\tilde{O}(\log(k)\log(nw))\) space.
To define this recursive formula, first we generalize the definition of \(\mathbf{M}^{(k)}_{\ell..r}\) to \((\ell,r)\not\in\mathsf{BS}_{n}\). For any \((\ell,r)\) s.t. \(0\leq\ell<r\leq n\), define \(\mathsf{LCA}(\ell,r)\) as follows. Let \(t\) be the largest integer such that there exists a multiple of \(2^{t}\) in the range \((\ell,r)\). Then we define \(\mathsf{LCA}(\ell,r)\) to be the unique multiple of \(2^{t}\) in \((\ell,r)\).9 Observe that for \((\ell,r)\in\mathsf{BS}_{n}\) s.t. \(r-\ell>1\), \(\mathsf{LCA}(\ell,r)=(\ell+r)/2\). Therefore, we can generalize the recursion (Equation (1)) to any \(r-\ell>1\) by defining \(m=\mathsf{LCA}(\ell,r)\). We also generalize the same recursion to \(k=0,(\ell,r)\not\in\mathsf{BS}_{n}\), so that \(\mathbf{M}^{(0)}_{\ell..r}=\mathbf{M}^{(0)}_{\ell..m}\mathbf{M}^{(0)}_{m..r}\).10 For the degenerate case \(\ell=r\) we define \(\mathbf{M}^{(k)}_{\ell..r}=\mathbf{I}_{w}\). Next we want to prove the following identity which naturally gives a recursive algorithm for \(\mathbf{M}_{0..n}\). This identity is essentially the recursive enumeration we described above, except that we utilize \(\mathbf{M}^{(k)}_{s-1..s}=\mathbf{M}_{s}\) to get some cancellation which simplifies the recursion.
Footnote 9: If there are two consecutive multiples of \(2^{t}\) in \((\ell,r)\), then \(2^{t+1}\) divides one of them, which violates the definition of \(t\).
**Lemma 4.7**.: _For every \((\ell,r)\) s.t. \(r>\ell\) and every \(h,k\in\mathbb{N}\) s.t. \(h\leq k\), we have the following identity:_
\[\mathbf{M}^{(k)}_{\ell..r}=\sum_{s=\ell+1}^{r}\mathbf{M}^{(h-1)}_{\ell..s-1}\mathbf{M}_{s}\mathbf{M}^{(k-h)}_{s..r}-\sum_{s=\ell+1}^{r-1}\mathbf{M}^{(h-1)}_{\ell..s}\mathbf{M}^{(k-h)}_{s..r}. \tag{3}\]
Surprisingly, this new recursion coincides with the recursion of Richardson iteration from the inverse Laplacian perspective. This shows that the construction in [11] is actually equivalent to the [12] construction that we use in this paper. We briefly discuss this equivalence in Appendix D.
Before we prove the identity, we show that the new recursion does imply an algorithm that runs in space \(\tilde{O}(\log(k)\log(nw))\). Consider the algorithm which recursively computes \(\mathbf{M}^{(k)}_{\ell..r}\) using the formula in Lemma 4.7 with \(h=\lceil k/2\rceil\). Observe that the right hand side of Equation (3) involves at most \(O(n)\) matrices. In addition, given all the matrices on the right hand side, the computation of \(\mathbf{M}^{(k)}_{\ell..r}\) takes only \(O(\log(nkwT))\) additional bits, where \(T\) is the maximum bit length of all the matrix entries. From Lemma 3.9 and Lemma 4.2 we can see that \(T\) is at most \(\tilde{O}(k\log^{2}(n))\), so \(O(\log(nwkT))=\tilde{O}(\log(nwk))\). Finally, observe that each matrix on the right hand side has precision parameter at most \(\max(h-1,k-h)\leq\lfloor k/2\rfloor\). Therefore the recursion reaches
the \(\mathbf{M}^{(0)}_{\ell..r}\) base cases after at most \(\lceil\log(k)\rceil\) levels. By repeatedly applying Lemma 2.4 and Lemma 2.3 we can conclude that the space complexity is at most \(\tilde{O}(\log(k)\log(nw))+O(s_{\text{base}})\), where \(s_{\text{base}}\) is the maximum space complexity of computing \(\mathbf{M}^{(0)}_{\ell..r}\). The base case complexity \(s_{\text{base}}\) is indeed \(\tilde{O}(\log(nw))\), which we prove in Appendix C.
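As a concrete illustration of the recursion, the following self-contained sketch (our own, in Python with dense matrices; it is not the space-bounded implementation discussed above) evaluates Lemma 4.7 with \(h=\lceil k/2\rceil\). As stand-ins for the base approximations \(\mathbf{M}^{(0)}_{\ell..r}\) it uses entrywise-rounded exact products, and it keeps single steps exact in line with \(\mathbf{M}^{(k)}_{s-1..s}=\mathbf{M}_{s}\); all names and parameter choices are ours.

```python
# Illustrative sketch only: Lemma 4.7 with h = ceil(k/2) on small dense matrices.
# The "base" approximations below are rounded exact products, a stand-in for the
# paper's low-space approximations M^(0).
import numpy as np

rng = np.random.default_rng(0)
n, w, digits = 8, 3, 4            # walk length, width, precision of the base case

# random row-stochastic single-step matrices M_1, ..., M_n (index 0 unused)
M = [None] + [rng.random((w, w)) for _ in range(n)]
for s in range(1, n + 1):
    M[s] /= M[s].sum(axis=1, keepdims=True)

def exact(l, r):
    """Exact product M_{l+1} M_{l+2} ... M_r."""
    P = np.eye(w)
    for s in range(l + 1, r + 1):
        P = P @ M[s]
    return P

def base(l, r):
    """Stand-in for M^(0)_{l..r}: the exact product, rounded entrywise.
    Length-one intervals are kept exact, matching M^(k)_{s-1..s} = M_s."""
    return exact(l, r) if r - l <= 1 else np.round(exact(l, r), digits)

def Mk(l, r, k):
    """M^(k)_{l..r} via the recursion of Lemma 4.7 with h = ceil(k/2)."""
    if l == r:                       # degenerate case
        return np.eye(w)
    if r - l == 1:                   # single step: M^(k)_{s-1..s} = M_s
        return M[r]
    if k == 0:
        return base(l, r)
    h = (k + 1) // 2
    out = sum(Mk(l, s - 1, h - 1) @ M[s] @ Mk(s, r, k - h) for s in range(l + 1, r + 1))
    out -= sum(Mk(l, s, h - 1) @ Mk(s, r, k - h) for s in range(l + 1, r))
    return out

for k in range(4):
    err = np.abs(Mk(0, n, k) - exact(0, n)).max()
    print(f"k = {k}: max entrywise error = {err:.2e}")
```

With exact base cases the recursion reproduces the exact product (the correction terms cancel), and with rounded base cases the printed error should shrink rapidly as \(k\) grows, which is the error-reduction behaviour the construction aims for.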
Finally, we prove the identity in Lemma 4.7.
Proof of Lemma 4.7.: We prove the claim by induction on \(r-\ell\). Let \(m=\mathsf{LCA}(\ell,r)\). For the base case \(r-\ell=1\), the lemma says \(\mathbf{M}^{(k)}_{\ell..r}=\mathbf{M}^{(h-1)}_{\ell..\ell}\mathbf{M}_{r}\mathbf{M}^{(k-h)}_{r..r}\), which is trivially true. Next we prove the general case by induction. For each matrix \(\mathbf{M}^{(k^{\prime})}_{\ell^{\prime}..r^{\prime}}\) on the right hand side s.t. \(\ell^{\prime}<m<r^{\prime}\), we expand \(\mathbf{M}^{(k^{\prime})}_{\ell^{\prime}..r^{\prime}}\) using (1). Note that \(\mathsf{LCA}(\ell^{\prime},r^{\prime})\) is also \(m\). In addition, for the \(s=m\) term in the first summation we apply the dummy expansion \(\mathbf{M}^{(k-h)}_{m..r}=\sum_{a+b=k-h}\mathbf{M}^{(a)}_{m..m}\mathbf{M}^{(b)}_{m..r}-\sum_{a+b=k-h-1}\mathbf{M}^{(a)}_{m..m}\mathbf{M}^{(b)}_{m..r}\). Similarly, for the \(s=m+1\) term in the first summation we also expand \(\mathbf{M}^{(h-1)}_{\ell..m}=\sum_{a+b=h-1}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(b)}_{m..m}-\sum_{a+b=h-2}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(b)}_{m..m}\). After rearranging we get that the right hand side equals
\[\sum_{s=\ell+1}^{m}\sum_{a+b=k-h}\mathbf{M}^{(h-1)}_{\ell..s-1}\mathbf{M}_{s}\mathbf{M}^{(a)}_{s..m}\mathbf{M}^{(b)}_{m..r}-\sum_{s=\ell+1}^{m-1}\sum_{a+b=k-h}\mathbf{M}^{(h-1)}_{\ell..s}\mathbf{M}^{(a)}_{s..m}\mathbf{M}^{(b)}_{m..r}\]
\[-\sum_{s=\ell+1}^{m}\sum_{a+b=k-h-1}\mathbf{M}^{(h-1)}_{\ell..s-1}\mathbf{M}_{s}\mathbf{M}^{(a)}_{s..m}\mathbf{M}^{(b)}_{m..r}+\sum_{s=\ell+1}^{m-1}\sum_{a+b=k-h-1}\mathbf{M}^{(h-1)}_{\ell..s}\mathbf{M}^{(a)}_{s..m}\mathbf{M}^{(b)}_{m..r}\]
\[+\sum_{s=m+1}^{r}\sum_{a+b=h-1}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(b)}_{m..s-1}\mathbf{M}_{s}\mathbf{M}^{(k-h)}_{s..r}-\sum_{s=m+1}^{r-1}\sum_{a+b=h-1}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(b)}_{m..s}\mathbf{M}^{(k-h)}_{s..r}\]
\[-\sum_{s=m+1}^{r}\sum_{a+b=h-2}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(b)}_{m..s-1}\mathbf{M}_{s}\mathbf{M}^{(k-h)}_{s..r}+\sum_{s=m+1}^{r-1}\sum_{a+b=h-2}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(b)}_{m..s}\mathbf{M}^{(k-h)}_{s..r}\]
\[-\mathbf{M}^{(h-1)}_{\ell..m}\mathbf{M}^{(k-h)}_{m..r}.\]
Note that the first summation in Equation (3) expands to the terms on the left, and the second summation in Equation (3) expands to the terms on the right.
Now we classify all the terms in the first line by \(b\), and take out the right factor \(\mathbf{M}^{(b)}_{m..r}\). For any fixed \(b\leq k-h\) we get the sum
\[\left(\sum_{s=\ell+1}^{m}\mathbf{M}^{(h-1)}_{\ell..s-1}\mathbf{M}_{s} \mathbf{M}^{(k-b-h)}_{s..m}-\sum_{s=\ell+1}^{m-1}\mathbf{M}^{(h-1)}_{\ell..s} \mathbf{M}^{(k-b-h)}_{s..m}\right)\mathbf{M}^{(b)}_{m..r}.\]
Observe that this is exactly \(\mathbf{M}^{(k-b)}_{\ell..m}\mathbf{M}^{(b)}_{m..r}\) by induction. Therefore, we can see that the first line is exactly \(\sum_{b=0}^{k-h}\mathbf{M}^{(k-b)}_{\ell..m}\mathbf{M}^{(b)}_{m..r}\). Similarly, we can get that the second line is \(-\sum_{b=0}^{k-h-1}\mathbf{M}^{(k-b-1)}_{\ell..m}\mathbf{M}^{(b)}_{m..r}\). For the third and fourth lines, we can also classify the terms by \(a\) and take out the left factor \(\mathbf{M}^{(a)}_{\ell..m}\) to get \(\sum_{a=0}^{h-1}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(k-a)}_{m..r}\) and \(-\sum_{a=0}^{h-2}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(k-a-1)}_{m..r}\) respectively. Finally, collect all the simplified terms (including the only term in the fifth line in the expansion), and we get
\[\sum_{b=0}^{k-h}\mathbf{M}^{(k-b)}_{\ell..m}\mathbf{M}^{(b)}_{m..r}+\sum_{a=0}^{h-1}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(k-a)}_{m..r}-\sum_{b=0}^{k-h-1}\mathbf{M}^{(k-b-1)}_{\ell..m}\mathbf{M}^{(b)}_{m..r}-\sum_{a=0}^{h-2}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(k-a-1)}_{m..r}-\mathbf{M}^{(h-1)}_{\ell..m}\mathbf{M}^{(k-h)}_{m..r}\] \[=\sum_{a+b=k}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(b)}_{m..r}-\sum_{a+b=k-1}\mathbf{M}^{(a)}_{\ell..m}\mathbf{M}^{(b)}_{m..r}\] \[=\mathbf{M}^{(k)}_{\ell..r}.\]
## Acknowledgements
We want to thank Xin Lyu, Edward Pyne, Salil Vadhan and Hongxun Wu for helpful discussions. |
2309.04237 | Interband scattering- and nematicity-induced quantum oscillation
frequency in FeSe | Understanding the nematic phase observed in the iron-chalcogenide materials
is crucial for describing their superconducting pairing. Experiments on
FeSe$_{1-x}$S$_x$ showed that one of the slow Shubnikov--de Haas quantum
oscillation frequencies disappears when tuning the material out of the nematic
phase via chemical substitution or pressure, which has been interpreted as a
Lifshitz transition [Coldea et al., npj Quant Mater 4, 2 (2019), Reiss et al.,
Nat. Phys. 16, 89-94 (2020)]. Here, we present a generic, alternative scenario
for a nematicity-induced sharp quantum oscillation frequency which disappears
in the tetragonal phase and is not connected to an underlying Fermi surface
pocket. We show that different microscopic interband scattering mechanisms -
for example, orbital-selective scattering - in conjunction with nematic order
can give rise to this quantum oscillation frequency beyond the standard Onsager
relation. We discuss implications for iron-chalcogenides and the interpretation
of quantum oscillations in other correlated materials. | Valentin Leeb, Johannes Knolle | 2023-09-08T09:58:25Z | http://arxiv.org/abs/2309.04237v1 | # Interband scattering- and nematicity-induced quantum oscillation frequency in FeSe
###### Abstract
Understanding the nematic phase observed in the iron-chalcogenide materials is crucial for describing their superconducting pairing. Experiments on FeSe\({}_{1-x}\)S\({}_{x}\) showed that one of the slow Shubnikov-de Haas quantum oscillation frequencies disappears when tuning the material out of the nematic phase via chemical substitution or pressure, which has been interpreted as a Lifshitz transition [Coldea _et al._, npj Quant Mater 4, 2 (2019), Reiss _et al._, Nat. Phys. 16, 89-94 (2020)]. Here, we present a generic, alternative scenario for a nematicity-induced sharp quantum oscillation frequency which disappears in the tetragonal phase and is not connected to an underlying Fermi surface pocket. We show that different microscopic interband scattering mechanisms - for example, orbital-selective scattering - in conjunction with nematic order can give rise to this quantum oscillation frequency beyond the standard Onsager relation. We discuss implications for iron-chalcogenides and the interpretation of quantum oscillations in other correlated materials.
_Introduction.-_ The availability of experimental methods, which are able to correctly identify the low energy electronic structure of quantum materials, is critical for understanding their emergent phenomena like superconductivity, various density waves or nematic orders. For example, angle-resolved photoemission spectroscopy (ARPES) on the cuprate materials confirmed that a single band Hubbard-like description is a reasonable starting point for modelling their low energy structure [1], but iron-based superconductors require a multi-band, multi-orbital description [2; 3; 4]. Beyond ARPES, quantum oscillation (QO) measurements are an exceptionally sensitive tool for measuring Fermi surface (FS) geometries as well as interaction effects via extracting the effective masses from the temperature dependence [5]. For example, QO studies famously confirmed the presence of a closed FS pocket in underdoped cuprates in a field [6; 7] or observed the emergence of small pockets in the spin density wave parent phase of iron-based superconducting compounds [8; 9; 10].
The interpretation of QOs, as measured in transport or thermodynamic observables, is based on the famous Onsager relation, which ascribes each QO frequency to a semi-classical FS orbit [5; 13]. In the past years, this canonical description has been challenged by the observation of _anomalous_ QOs in correlated insulators [14; 15] which motivated a number of works revisiting the basic theory of QOs [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. Very recently, forbidden QO frequencies have been reported in the multi-fold semi-metal CoSi [29], which generalize so-called magneto-intersubband oscillations known in coupled 2D electron gases [30; 31; 32] to generic bulk metals [33]. In Ref. [29] it was proposed that QO of the quasiparticle lifetime in systems with multiple allowed FS orbits can lead to new combination frequencies without a corresponding semi-classical FS trajectory.
Here, we propose a new explanation for the QO spectra measured in the iron-chalcogenide superconductor FeSe\({}_{1-x}\)S\({}_{x}\) which leads to an alternative identification of its low energy electronic structure with direct implications for the superconducting pairing. Iron-chalcogenides are unique among the iron-based superconductors as they show an orthorombic distortion without stripe magnetism, i.e. pristine FeSe is already in a nematic phase [34; 35; 36]. Recently it was reported that one of the observed slow QO frequencies (labeled as \(\lambda\) in the experimental data) vanishes when tuning out of the nematic into the tetragonal phase, via pressure in FeSe\({}_{0.89}\)S\({}_{0.11}\)[12] or via isoelectronic substitution in FeSe\({}_{1-x}\)S\({}_{x}\)[11]. Following Onsager's standard theory it has been interpreted as a Lifshitz transition, i.e. a FS pocket present in the nematic phase which disappears at the nematic quantum critical point [12]. As an alternative scenario, we show here that an additional slow QO frequency without an underlying FS orbit can naturally appear in an electronic nematic phase.
Our scenario requires the following features of iron-chalcogenides [37; 38; 39; 40; 41]: (i) The FS consists of several pockets, in particular two electron pockets (labeled here as \(\beta_{x}\) and \(\beta_{y}\)) around the \(Y\) and \(X\) point of the Brillouin zone (BZ), see Fig. 1 panel (b). \(\beta_{x}\) (\(\beta_{y}\)) has almost pure \(d_{xz}(d_{yz})\) orbital character with some \(d_{xy}\) content. They are related to each other via a C\({}_{4}\) rotation in the tetragonal phase. (ii) When tuning into nematic phase with broken rotational symmetry (reduced to C\({}_{2}\)) one of the pockets spontaneously increases in size, whereas the other one shrinks, see panel (a). In the QO spectrum, this is visible by the split up of one formerly degenerate QO frequency into 2 frequencies. (iii) A strong inter-pocket scattering between the \(\beta_{x}\) and \(\beta_{y}\) pocket exists [34; 42; 43; 44]. It can be caused either by orbital selective impurity scattering over the \(d_{xy}\)-channel, low-momentum scattering, collective fluctuations or, most likely, a combination of all. As a result, we will show that a new slow QO frequency, set by the difference of the \(\beta_{x}\) and \(\beta_{y}\) frequencies, emerges.
We argue that our theory not only explains the slow SdH QO frequency observed in iron-chalcogenides, but also provides further support for the robustness of \(s_{\pm}\) superconducting pairing, as discussed below.
We note that we do not aim towards a full quantitative description of the complicated QO spectrum of FeSe but rather focus on presenting a new theory for the additional slow QO frequency appearing in the nematic phase, thus, concentrating on model descriptions with the minimal ingredients of the electronic structure (e.g. neglecting aspects of three-dimensionality).
The paper is organized as follows: We first introduce a basic two-band model which captures the minimal features of an electronic nematic phase transition. We then show that inter-pocket scattering leads to a new QO frequency in a full lattice calculation of the SdH effect, including the orbital magnetic field via Peierls substitution. Next, we discuss a more microscopic multi-orbital description of iron-chalcogenides and identify different scattering mechanisms leading to strong inter-electron pocket coupling. Again, we confirm the emergence of a slow nematicity-induced frequency in a full lattice calculation. We close with a summary and outlook.
_Minimal two-pocket model.-_ First, we consider a minimal model with two electron pockets and the Hamiltonian
\[H_{0}=\sum_{\mathbf{k}}(\epsilon_{x,\mathbf{k}}-\delta\mu)d^{\dagger}_{x,\mathbf{k}}d_{x, \mathbf{k}}+(\epsilon_{y,\mathbf{k}}+\delta\mu)d^{\dagger}_{y,\mathbf{k}}d_{y,\mathbf{k}} \tag{1}\]
with the dispersion \(\epsilon_{x,\mathbf{k}}=-2t_{1}\cos k_{x}+2t_{2}\cos k_{y}\) and \(\epsilon_{y,\mathbf{k}}=2t_{2}\cos k_{x}-2t_{1}\cos k_{y}\). It consists of a \(\beta_{x}\)-FS pocket around the \(Y\)-point and a \(\beta_{y}\)-Fermi pocket around the \(X\)-point, see Fig. 1 (b). For \(\delta\mu=0\) the Hamiltonian is invariant under the \(C_{4}\) rotation \((k_{x},k_{y})\rightarrow(k_{y},-k_{x}),(d_{x},d_{y})\rightarrow(d_{y},-d_{x})\). Additional density-density interactions \(\sum_{\mathbf{r},\alpha,\beta}d^{\dagger}_{\alpha,\mathbf{r}}d_{\alpha,\mathbf{r}}d^{ \dagger}_{\beta,\mathbf{r}}d_{\beta,\mathbf{r}}\) can induce a nematic transition with a finite orbital asymmetry \(\delta\mu\neq 0\) breaking the \(C_{4}\) rotation symmetry. Mean-field calculations confirm that \(\delta\mu\) becomes non-zero for interactions above a critical threshold [45]. Thus, \(\delta\mu\) serves as an order parameter for a nematic phase transition, which is manifest in the band structure by the spontaneous growth/shrinking of the two inequivalent pockets, see Fig. 1 (a). We note that additional FS pockets are present in FeSe and change properties of the nematic phase quantitatively but are not relevant for our purpose.
In practice an external parameter \(\lambda\) tunes the effective interaction strength, e.g. via a change of applied pressure [12] or chemical substitution [11]. Again, the precise relation between \(\delta\mu(\lambda)\) and \(\lambda\) depends on microscopic details but we assume in the following the generic form of a second order phase transition \(\delta\mu\propto(\lambda_{c}-\lambda)^{\alpha}\theta(\lambda_{c}-\lambda)\) and fix, for simplicity, the exponent to be of the standard mean-field behaviour \(\alpha=1/2\).
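To make the connection between pocket areas and QO frequencies explicit, the short sketch below (ours; it only uses the dispersions of Eq. (1) and the parameters quoted in the caption of Fig. 1, \(t_{1}/2=t_{2}=t\), \(\mu=-3t\)) estimates the two pocket areas on a \(k\)-grid and prints the corresponding frequencies, in units of the BZ area, together with their difference:

```python
# Sketch (ours): Onsager frequencies of the two electron pockets of Eq. (1),
# quoted as fractions of the Brillouin-zone area, for a few values of the
# nematic order parameter delta_mu.  Parameters as in Fig. 1: t1 = 2t, t2 = t,
# mu = -3t (with t = 1).
import numpy as np

t1, t2, mu = 2.0, 1.0, -3.0
k = np.linspace(-np.pi, np.pi, 800, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing="ij")

eps_x = -2 * t1 * np.cos(kx) + 2 * t2 * np.cos(ky)   # beta_x pocket around Y
eps_y = 2 * t2 * np.cos(kx) - 2 * t1 * np.cos(ky)    # beta_y pocket around X

for dmu in (0.0, 0.2, 0.4):
    F_x = np.mean(eps_x - dmu < mu)   # occupied fraction of the BZ = pocket area
    F_y = np.mean(eps_y + dmu < mu)
    print(f"delta_mu = {dmu:.1f}:  F(beta_x) = {F_x:.4f}, "
          f"F(beta_y) = {F_y:.4f}, difference = {F_x - F_y:.4f}")
```

At \(\delta\mu=0\) the two frequencies coincide; for \(\delta\mu>0\) they split, and their difference sets the scale of the slow combination frequency discussed below.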
Following our recent works [29; 33], we introduce a scattering contribution between the two electron pockets via impurities
\[H_{\text{imp}}=\sum_{\mathbf{r}}\Lambda_{xy,\mathbf{r}}d^{\dagger}_{x,\mathbf{r}}d_{y,\bm {r}}+h.c. \tag{2}\]
where \(\Lambda_{xy,\mathbf{r}}\) are drawn randomly, independently and uniformly in space from the interval \([-\Lambda_{xy}/2,\Lambda_{xy}/2]\). On average the system retains its translation and rotation symmetry. For simplicity we set the intraorbital part of the impurities, i.e. \(\Lambda_{xx}\) and \(\Lambda_{yy}\) to zero, as they will only suppress the amplitude of all QO frequencies [33].
We include a magnetic field by standard Peierls substitution, effectively inserting a flux \(\Phi\) in each plaquette of the square lattice. We have implemented the hopping Hamiltonian with magnetic field and impurities for system sizes up to \(300\times 300\) lattice sites. We determined
Figure 1: (b) FS of a minimal model including only 2 electron pockets \(\beta_{x},\beta_{y}\) with different orbital character. (a) In the nematic phase the \(\beta_{x}\) pocket spontaneously grows whereas \(\beta_{y}\) shrinks, see (c) and (d) for representative numerical SdH QO spectra for the different phases. (e) In the nematic phase (gray background) the degenerate frequency of the C\({}_{4}\) symmetric phase splits up into two frequencies (blue, orange), each associated with one FS. When taking interband coupling from impurities (\(\Lambda_{xy}\)), see (a), (b) into account a third frequency (red) which is exactly the difference of the basis frequencies \(\beta_{x}-\beta_{y}\) appears in the nematic phase. We fixed \(t_{1}/2=t_{2}=t\), \(\mu=-3t\). The inset of panel (e) shows the experimentally detected peak frequencies in FeSe\({}_{1-x}\)S\({}_{x}\)[11]. The red dashed line (\(\propto\sqrt{\lambda_{c}-\lambda}\)) is a guide to the eye highlighting the emergent frequency in the nematic phase, identified as \(\lambda\) in Ref. [11; 12].
the conductance through the Landauer-Büttiker algorithm using the python package kwant [46] and observed SdH oscillations of the conductance as a function of \(1/\Phi\). We then analyzed the Fourier transformation in \(2\pi/\Phi\) with standard QO techniques, which include subtraction of a polynomial background, zero padding and windowing, see SM. Representative Fourier spectra for the tetragonal (C\({}_{4}\) symmetric) and nematic phase are shown in Fig. 1 (c) and (d), where the frequencies are shown in units of the area of the BZ.
The Fourier spectrum of the SdH oscillations features, as expected from Onsager's relation, peaks at frequencies \(F_{\beta}=S_{\beta}/2\pi e\), which correspond to the area of the respective FSs \(S_{\beta}\) and higher harmonics thereof. As our main finding, the spectrum has clear peaks at combination frequencies in the nematic phase, most dominantly \(\beta_{x}-\beta_{y}\). Crucially, this frequency does not have an underlying FS or semiclassical orbit of any kind but is a consequence of QO of the quasi-particle lifetime. We note that this is in accordance with our recent analytical work [33], which we confirm here for the first time in a numerical lattice calculation.
In Fig. 1 (e), we plot the frequencies of the 3 strongest signals for weak inter-orbit scattering as a function of the external parameter \(\lambda\) tuning through the nematic transition. When increasing the nematic order, the main frequency peak splits into two, and the additional low-frequency \(\beta_{x}-\beta_{y}\) oscillation emerges similar to the experimental data, see inset.
_Multi-orbital model.-_ After studying a minimal two-band model, we next want to understand the possible origin of a strong inter-pocket scattering. Therefore, we need to take the multi-orbital character of iron-chalcogenides into account. In order to keep the numerical lattice calculations tractable we focus on the following key features, see Fig. 2 (a): (i) Two electron-like elliptical pockets \(\beta_{x}\) and \(\beta_{y}\) around the \(Y\) and \(X\) points which have mainly \(d_{xz}\) and \(d_{yz}\) orbital character but in addition also an admixture of \(d_{xy}\) orbitals; (ii) One (or depending on the precise model and parameter regime also two) hole-like circular pockets \(\gamma\) around the \(\Gamma\) point which have mixed \(d_{xz}\) and \(d_{yz}\) orbital character; (iii) Only the electron pockets \(\beta_{x}\) and \(\beta_{y}\) have additional \(d_{xy}\) orbital character.
All features (i)-(iii) are captured by a three orbital model [45] with \(d_{xz},d_{yz}\), and \(d_{xy}\) orbitals (denoted by \(xz,yz,xy\)). Introducing \(\mathbf{\Psi_{k}}=(d_{\mathbf{k},xz},d_{\mathbf{k},yz},d_{\mathbf{k},xy})\), the Hamiltonian reads
\[H_{0}=\sum_{\mathbf{k}}\mathbf{\Psi_{k}^{\dagger}}(T(\mathbf{k})-\mu)\mathbf{ \Psi_{k}}+\delta\mu\begin{pmatrix}d_{\mathbf{k},xz}\\ d_{\mathbf{k},yz}\end{pmatrix}^{\dagger}\sigma^{z}\begin{pmatrix}d_{\mathbf{k},xz}\\ d_{\mathbf{k},yz}\end{pmatrix} \tag{3}\]
where \(T(\mathbf{k})\) is a \(3\times 3\) matrix which depends on the electronic hopping strengths between the orbitals. The real-space form of the Hamiltonian, \(T(\mathbf{k})\) and the parameters are given in the SM.
In the tetragonal phase, with \(\delta\mu=0\), the Hamiltonian is again invariant under the \(C_{4}\) rotation \((k_{x},k_{y})\to(k_{y},-k_{x}),(d_{x},d_{y})\to(d_{y},-d_{x})\). Similar to the toy model from above, a nematic phase is characterized by a finite \(\delta\mu\) where the rotation symmetry is reduced to a \(\mathbb{Z}_{2}\) reflection symmetry / \(C_{2}\) rotation symmetry.
The parameter \(\delta\mu\) is again an effective, emergent parameter but now we can relate its microscopic origin to orbital ordering. For example the interorbital density interaction between \(xz\) and \(yz\) orbitals
\[H_{\text{int}}=U\sum_{r}d_{r,xz}^{\dagger}d_{\mathbf{r},xz}d_{\mathbf{r},yz}^{\dagger }d_{\mathbf{r},yz}. \tag{4}\]
can be decoupled in mean-field to obtain a self-consistent order parameter for the nematic (now orbital ordering) transition leading to \(\delta\mu=U\left(\langle d_{r,xz}^{\dagger}d_{\mathbf{r},xz}\rangle-\langle d_{r, yz}^{\dagger}d_{\mathbf{r},yz}\rangle\right)/2\). A typical FS within the nematic phase is shown in Fig. 2 (a).
We note that this role of orbital ordering, or an imbalance of the orbital occupation, in the nematic phase has been confirmed in a number of experiments [38; 39] most recently via X-ray linear dichroism [40]. While our minimal three-orbital model captures the key features, the precise asymmetry of the \(\gamma\) hole pocket(s) in the nematic phase of FeSe is more complicated however its shape does not affect our new findings.
_Impurities and orbital selective scattering.-_ As confirmed in our two-band model numerically and expected from analytical calculations [33], a nematicity induced difference frequency requires a sizeable coupling of the pockets \(\beta_{x}\) and \(\beta_{y}\). The absence of other frequency combinations points towards a negligible coupling of \(\beta_{i}\) and
Figure 2: Panel (a): Typical FS of the three-orbital model in the nematic phase. Colors indicate the orbital character. Panel (b): Buckling enlarges the unit cell which leads to a backfolded FS in the reduced Brillouin zone (black dashed). Panel (c): FS integrated inter- and intrapocket scattering strength for \((\Lambda)_{\mu\nu}=\delta_{\mu\nu}\) showing that the coupling \(W_{\beta_{x},\beta_{y}}\) is dominant. It increases for small momentum scattering, i.e. \(1/q_{0}\to\infty\). Any other type of interorbit scattering enlarges the coupling of \(\beta_{x}\) and \(\beta_{y}\) even further, see panel (d) where \((\Lambda)_{\mu\nu}=1\).
\(\gamma\). We next investigate the origin of this coupling in terms of the \(d\)-orbital dependent scattering. Therefore, we consider impurities in the orbital basis
\[H_{\text{imp}}=\sum_{\mathbf{r}}\sum_{\mathbf{r}_{i}}V(\mathbf{r}-\mathbf{r}_{i})\mathbf{\Psi}_{\mathbf{r }}^{\dagger}\Lambda_{\mathbf{r}_{i}}\mathbf{\Psi}_{\mathbf{r}} \tag{5}\]
with the scattering vertex \(\Lambda_{\mathbf{r}_{i}}\) a random Hermitian matrix with mean \(0\) and variance \(\Lambda^{2}\). Note that impurities respect the \(\pi/2\)-rotation symmetry only on average. Similarly, impurities located at \(\mathbf{r}_{i}\) are distributed randomly and uniformly such that the system remains on average translationally invariant. We model the interaction of electrons with impurities by a screened Coulomb interaction \(V_{\ell}\) of Yukawa type with screening length \(\ell\) [47].
We quantify the coupling \(W_{\alpha,\alpha^{\prime}}\) of FS orbits \(\alpha\) and \(\alpha^{\prime}\) by integrating the scattering amplitudes of all possible processes between them
\[W_{\alpha,\alpha^{\prime}}=\oint_{\mathbf{k}\in\alpha}\oint_{\mathbf{k}^{\prime}\in\alpha^{\prime}}\big{|}\tilde{V}_{\ell}(\mathbf{k}^{\prime}-\mathbf{k})\,\mathcal{U}(\mathbf{k}^{\prime})^{\dagger}\Lambda\,\mathcal{U}(\mathbf{k})\big{|}. \tag{6}\]
Here, \(\mathcal{U}(\mathbf{k})\) is the transformation which diagonalizes \(H_{0}\) for each momentum. The Fourier transform of the screened Coulomb interaction \(\tilde{V}_{\ell}=\mathcal{N}_{\ell}/(\mathbf{k}^{2}+1/\ell^{2})\) allows only scattering up to a maximal momentum \(q_{0}=1/\ell\) (\(\mathcal{N}_{\ell}\) is a normalization constant).
Iron-chalcogenides have a two-site unit cell [37], which leads to a folding of the \(T(\mathbf{k}+(\pi,\pi))\) bands onto the \(T(\mathbf{k})\) bands. The FS in the reduced Brillouin zone is shown in Fig. 2 (b), where now the pockets \(\beta_{x}\) and \(\beta_{y}\) lie on top of each other. This allows for strong scattering between the \(\beta_{x}\) and \(\beta_{y}\) pockets because the screened Coulomb interaction favors low-momentum scattering. In Fig. 2 (c) and (d) we show quantitatively that for diagonal or uniform scattering vertices \(\Lambda\) in the orbital components, the coupling \(W_{\beta_{x},\beta_{y}}\) is the biggest inter-pocket coupling for a sizeable screening length \(\ell\gtrsim 0.5\) and of the same size as the intra-orbit couplings.
There are several additional mechanisms which increase \(W_{\beta_{x},\beta_{y}}\) even further. Crucially, orbital-selective scattering, i.e. a dominating \(\Lambda_{xy,xy}\) component of the vertex, leads to a large coupling of exclusively \(\beta_{x}\) and \(\beta_{y}\) pockets. Additionally, any off-diagonal element of \(\Lambda\), i.e. \(xz/yz\) to \(xy\) and \(xz\) to \(yz\) scattering, strongly enhances the inter-pocket coupling \(W_{\beta_{x},\beta_{y}}\). Overall, there is generically a sizeable coupling between the electron pockets.
An exclusive coupling of the electron pockets \(\beta_{x},\beta_{y}\) can be modelled by orbital selective scattering over the \(\Lambda_{xy,xy}\) channel. The analysis above suggests that this coupling is indeed dominating. For our numerical simulation of the SdH effect we, therefore, focus on short-ranged impurities \(V(\mathbf{r})\propto\delta(\mathbf{r})\) with an orbital selective scattering vertex \((\Lambda)_{ij}=\delta_{i3}\delta_{j3}\Lambda_{xy}\) with only the \(xy\) component \(\Lambda_{xy,xy}\) being non-zero. We note that experiments indeed suggest that the \(xy\)-orbital part of the FS is heavy, leading to a large dominating density of \(d_{xy}\)-states for scattering [37].
_Slow QO frequency from orbital selective scattering.-_ Finally, we evaluate the conductance in orbital magnetic fields through samples of sizes up to \(400\times 400\) sites with orbital selective impurities within the nematic phase. The dominant SdH peaks in the Fourier spectrum, see Fig. 3 panel (a), are set by the FSs \(\beta_{x},\beta_{y},\gamma\) and higher harmonics thereof. The combination frequencies \(\beta_{x}-\beta_{y}\) and \(\beta_{x}+\beta_{y}\) are clearly visible and, additionally, a variety of subleading higher order terms appear whose strength depends on the strength of the impurity scattering. In the lower panel (b) we show the spectrum of the density of states, which corresponds to QO of thermodynamic observables like the de Haas-van Alphen (dHvA) effect. In contrast to the SdH effect, the slow difference frequency is absent in the dHvA effect. The reason is that the latter only depends on the scattering via the Dingle factor whereas scattering dominates transport [29; 33], which is also confirmed by the strong (weak) dependence of the QO signals for the upper (lower) panels. Thus, a careful comparison between QO frequencies of SdH and dHvA can confirm our unusual QO without a FS orbit.
_Discussion and Conclusion.-_ We have shown that a
Figure 3: Numerically computed QO spectra for the parameters regime generating the FSs shown in Fig. 2 (a). In (a) we analyzed the conductance whereas in (b) we analyzed the density of states \(\rho(\mu)\). The theoretical prediction for the three basis frequencies and the sum- and difference frequency, based on the area of the FSs, are indicated as grey dashed lines.
robust slow QO frequency emerges in minimal models of iron-chalcogenides. The key ingredients were the broken rotational symmetry between the electron pockets in the nematic phase and an efficient coupling between these pockets. The latter can originate from an orbital selective scattering, e.g. a dominating impurity contribution of the \(d_{xy}\) orbital. We provided full numerical lattice calculations with orbital magnetic fields, which also confirm recent analytical works on difference frequency QOs without semiclassical orbits beyond the Onsager relation [29; 33]. Further supporting evidence of our scenario is that the experimentally extracted masses from the temperature dependence of the QOs [11] are in accordance with our analytical predictions [33]: the mass of the slow frequency roughly equals the difference of those of the electron pockets.
Of course, neither our effective two-band nor the three-orbital model (which is already challenging numerically) captures all details of the complicated electronic structure of iron-chalcogenides [37]. In fact, we have neglected any correlation effects, which could further increase scattering between the electron pockets, e.g. by collective spin fluctuations. However, our scenario requires no preconditions except a finite coupling of the electron pockets via scattering. Therefore, we expect our scenario to be reproducible in any microscopic model of iron-chalcogenides. In summary, we argue that our results are a robust feature of the nematic phase of iron-chalcogenides and elucidate that no additional pocket of a nematic Lifshitz transition is required to explain the QO experiments [11; 12].
The correct assignment of QO frequencies with putative FS orbits is crucial for correctly identifying the electronic structure in iron-chalcogenides and beyond. Alas, our scenario of sharp QOs without FS orbits further complicates the interpretation of QO data. However, it also provides novel insights into subtle details of quasiparticle scattering otherwise inaccessible in experiments.
We showed that the slow QO frequency of iron-chalcogenides can be explained by the presence of orbital selective impurity scattering, which has implications for the SC pairing symmetry. It is normally expected that impurities, as necessarily present in heavily disordered FeSe\({}_{1-x}\)S\({}_{x}\)[48], suppress s\({}^{\pm}\) superconductivity [49; 50]. However, the orbital selective scattering does _not_ couple the electron and hole pockets, which would be detrimental for s\({}^{\pm}\) pairing. Thus, the new QO mechanism possibly explains the robustness of superconductivity in the iron-chalcogenides.
We hope that the observation and quantification of similar QO frequencies can lead to a more precise identification of the electronic structure of other correlated electron materials.
_Data and code availability.-_ Code and data related to this paper are available on Zenodo [51] from the authors upon reasonable request.
###### Acknowledgements.
We acknowledge helpful discussions and related collaborations with N. Huber, M. Wilde and C. Pfleiderer. We thank A. Chubukov, A. Coldea and T. Shibauchi for helpful discussions and comments on the manuscript. V. L. acknowledges support from the Studienstiftung des deutschen Volkes. J. K. acknowledges support from the Imperial-TUM flagship partnership. The research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus.
## Appendix A 3-orbital tight-binding model
The Hamiltonian features nearest- and next-nearest-neighbor hoppings:
\[H_{0}= \sum_{\mathbf{r}}\mathbf{\Psi}_{\mathbf{r}+\hat{\mathbf{x}}}^{\dagger}\begin{pmatrix} t_{2}&0&t_{7}\\ 0&t_{1}&0\\ -t_{7}&0&t_{5}\end{pmatrix}\mathbf{\Psi}_{\mathbf{r}}+\mathbf{\Psi}_{\mathbf{r}+\hat{\mathbf{y}}}^{ \dagger}\begin{pmatrix}t_{1}&0&0\\ 0&t_{2}&t_{7}\\ 0&-t_{7}&t_{5}\end{pmatrix}\mathbf{\Psi}_{\mathbf{r}}+\mathbf{\Psi}_{\mathbf{r}+\hat{\mathbf{x}}+ \hat{\mathbf{y}}}^{\dagger}\begin{pmatrix}t_{3}&-t_{4}&t_{8}\\ -t_{4}&t_{3}&t_{8}\\ -t_{8}&-t_{8}&t_{6}\end{pmatrix}\mathbf{\Psi}_{\mathbf{r}}\] \[+\mathbf{\Psi}_{\mathbf{r}+\hat{\mathbf{x}}-\hat{\mathbf{y}}}^{\dagger}\begin{pmatrix} t_{3}&t_{4}&t_{8}\\ t_{4}&t_{3}&-t_{8}\\ -t_{8}&t_{8}&t_{6}\end{pmatrix}\mathbf{\Psi}_{\mathbf{r}}+\text{h.c.}+\mathbf{\Psi}_{\mathbf{ r}}^{\dagger}\begin{pmatrix}-\mu-\delta\mu&-\text{i}h_{y}&0\\ \text{i}h_{y}&-\mu+\delta\mu&0\\ 0&0&\Delta_{xy}-\mu\end{pmatrix}\mathbf{\Psi}_{\mathbf{r}} \tag{10}\]
Defining \(\mathbf{\Psi}_{\mathbf{r}}=\frac{1}{\sqrt{N}}\sum_{\mathbf{k}}\mathbf{e}^{-\text{i}\mathbf{k }\mathbf{r}}\mathbf{\Psi}_{\mathbf{k}}\) we obtain
\[H_{0}=\sum_{\mathbf{k}}\mathbf{\Psi}_{\mathbf{k}}^{\dagger}(T(\mathbf{k})-\mu)\mathbf{\Psi}_{\mathbf{ k}}+\delta\mu\begin{pmatrix}d_{\mathbf{k},xz}\\ d_{\mathbf{k},yz}\end{pmatrix}^{\dagger}\sigma^{z}\begin{pmatrix}d_{\mathbf{k},xz}\\ d_{\mathbf{k},yz}\end{pmatrix} \tag{11}\]
where
\[T_{11}(k) =2t_{2}\cos k_{x}+2t_{1}\cos k_{y}+4t_{3}\cos k_{x}\cos k_{y} \tag{12}\] \[T_{22}(k) =2t_{1}\cos k_{x}+2t_{2}\cos k_{y}+4t_{3}\cos k_{x}\cos k_{y}\] (13) \[T_{33}(k) =2t_{5}(\cos k_{x}+\cos k_{y})+4t_{6}\cos k_{x}\cos k_{y}+\Delta _{xy}\] (14) \[T_{12}(k) =T_{21}(k)^{*}=4t_{4}\sin k_{x}\sin k_{y}+\text{i}h_{y}\] (15) \[T_{13}(k) =T_{31}(k)^{*}=2\text{i}t_{7}\sin k_{x}+4\text{i}t_{8}\sin k_{x} \cos k_{y}\] (16) \[T_{23}(k) =T_{32}(k)^{*}=2\text{i}t_{7}\sin k_{y}+4\text{i}t_{8}\cos k_{x} \sin k_{y}. \tag{17}\]
The values of the hopping parameters are shown in Tab. 1.
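For completeness, the small script below (ours; parameter names and grid sizes are our choices) assembles \(T(\mathbf{k})\) from Eqs. (12)-(17) with the parameters of Tab. 1, adds the nematic splitting \(\delta\mu\) as in Eq. (11), and reports how much of the BZ each band fills; the zero-energy contours of the three bands can then be used to trace the Fermi pockets of Fig. 2 (a).

```python
# Sketch (ours): build T(k) from Eqs. (12)-(17) with the parameters of Tab. 1
# (t = 1), add the nematic splitting delta_mu as in Eq. (11), and report the
# occupied fraction of the BZ for each band.  Fermi surfaces are the
# zero-energy contours of the three bands.
import numpy as np

t = 1.0
t1, t2, t3, t4 = 0.2 * t, 0.6 * t, 0.3 * t, -0.1 * t
t5, t6, t7, t8 = 2 * t, 3 * t, -2 * t, t
Dxy, mu, hy, dmu = 4 * t, 2.7 * t, 0.0, 0.1

def H(kx, ky):
    T11 = 2*t2*np.cos(kx) + 2*t1*np.cos(ky) + 4*t3*np.cos(kx)*np.cos(ky)
    T22 = 2*t1*np.cos(kx) + 2*t2*np.cos(ky) + 4*t3*np.cos(kx)*np.cos(ky)
    T33 = 2*t5*(np.cos(kx) + np.cos(ky)) + 4*t6*np.cos(kx)*np.cos(ky) + Dxy
    T12 = 4*t4*np.sin(kx)*np.sin(ky) + 1j*hy
    T13 = 2j*t7*np.sin(kx) + 4j*t8*np.sin(kx)*np.cos(ky)
    T23 = 2j*t7*np.sin(ky) + 4j*t8*np.cos(kx)*np.sin(ky)
    Tk = np.array([[T11, T12, T13],
                   [np.conj(T12), T22, T23],
                   [np.conj(T13), np.conj(T23), T33]])
    # measure energies from the Fermi level; delta_mu enters with opposite sign
    # on the xz and yz orbitals, following the sign convention of Eq. (11)
    return Tk - mu * np.eye(3) + dmu * np.diag([1.0, -1.0, 0.0])

ks = np.linspace(-np.pi, np.pi, 200, endpoint=False)
bands = np.array([[np.linalg.eigvalsh(H(kx, ky)) for ky in ks] for kx in ks])
for b in range(3):
    print(f"band {b}: fraction of the BZ below E_F = {np.mean(bands[:, :, b] < 0):.3f}")
```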
## Appendix B Numerical implementation and QO analysis
We have implemented the tight-binding models and calculated the conductance and the density of states for finite magnetic fields using the python package kwant [46]. The main methodological steps are shown in Fig. 4.
We compute the conductance \(G\) with a 2-point measurement, see Fig. 4 (a), via the built-in Landauer-Büttiker algorithm, which is based on an S-matrix approach. For the 2-orbital model, we used a system size of \(300\times 300\) lattice sites and for the 3-orbital model a system size of \(400\times 400\) lattice sites.
In this work, we fit the \(N_{\text{data}}(I_{\mathbf{\Phi}})\) data points inside an interval \(2\pi/\Phi\in I_{\mathbf{\Phi}}\) with a 4th order polynomial. After subtracting the polynomial we scale the signal with a window and pad it symmetrically with \(4N_{\text{data}}(I_{\mathbf{\Phi}})\) zeros. Then the signal is Fourier transformed and we always show the absolute value of the Fourier-transformed signal. For Fig. 1 (c) and (d) we used \(I_{\mathbf{\Phi}}=[20,250]\) and a Hamming window. For Fig. 1 (e) we used \(I_{\mathbf{\Phi}}=[40,300]\) and a Blackman-Harris window.
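The steps above can be summarised in a few lines; the snippet below (ours, with made-up frequencies and background rather than the actual simulation data) applies the same chain (polynomial background subtraction, windowing, symmetric zero padding, Fourier transform in \(2\pi/\Phi\)) to a synthetic trace:

```python
# Sketch (ours) of the QO analysis chain: subtract a polynomial background,
# apply a window, zero-pad symmetrically and Fourier transform with respect
# to 2*pi/Phi.  Frequencies and background are synthetic.
import numpy as np

x = np.linspace(40, 300, 2000)            # values of 2*pi/Phi
f1, f2 = 0.30, 0.24                       # two "pocket" frequencies (arbitrary units)
signal = (0.002 * x**2 - 0.1 * x          # smooth background
          + np.cos(2*np.pi*f1*x) + np.cos(2*np.pi*f2*x)
          + 0.3 * np.cos(2*np.pi*(f1 - f2)*x))   # weak difference frequency

# 1) subtract a 4th-order polynomial background
osc = signal - np.polyval(np.polyfit(x, signal, 4), x)
# 2) window and 3) pad symmetrically with zeros
win = osc * np.hamming(len(x))
padded = np.concatenate([np.zeros(2*len(x)), win, np.zeros(2*len(x))])
# 4) Fourier transform; frequencies are conjugate to 2*pi/Phi
freqs = np.fft.rfftfreq(len(padded), d=x[1] - x[0])
amp = np.abs(np.fft.rfft(padded))
for f in (f1 - f2, f2, f1):
    print(f"expected peak near F = {f:.2f}; "
          f"amplitude there = {amp[np.argmin(np.abs(freqs - f))]:.1f}")
```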
We compute the density of states \(\rho\) with the kernel polynomial method [52]. For sampling the spectral density, we use 30 randomly chosen vectors such that only the bulk of the system is sampled. Cutting off the edges suppresses edge state effects over bulk effects, simulating the thermodynamic limit. We define the bulk by the set of all lattice points at least 50 sites away from the edges. We used 7000 Chebyshev moments. In Fig. 3 (b) we analyzed \(\rho(\omega=0,\Phi)\) inside the interval \(I_{\mathbf{\Phi}}=[90,720]\) using a Hamming window.
Note for experimental comparison that \(B=\frac{\Phi}{2\pi}\frac{\hbar}{ea^{2}}=\frac{\Phi}{2\pi}\times 4.63\)kT for a lattice constant \(a=3.77\) Å of iron selenides [53]. We work in units where \(e=\hbar=a=1\). Hence, the analyzed \(I_{\mathbf{\Phi}}\) intervals translate to roughly 6 - 46 T.
\begin{table}
\begin{tabular}{c|c c c c c c c c c} \(\mathcal{M}\) & \(t_{1}\) & \(t_{2}\) & \(t_{3}\) & \(t_{4}\) & \(t_{5}\) & \(t_{6}\) & \(t_{7}\) & \(t_{8}\) & \(\Delta_{xy}\) & \(\mu\) & \(h_{y}\) \\ \hline \(\mathcal{C}\) & \(0.2t\) & \(0.6t\) & \(0.3t\) & \(-0.1t\) & \(2t\) & \(3t\) & \(-2t\) & \(t\) & \(4t\) & \(2.7t\) & \(0\) \\ \end{tabular}
\end{table}
Table 1: The tight-binding parameters used throughout this manuscript. |
2309.14024 | Early proofs of Hilbert's Nullstellensatz | By Rabinowitsch' trick Hilbert's Nullstellensatz follows from the weak
Nullstellensatz (Rabinowitsch 1929). The weak version can be shown with
elimination theory. Hilbert's original proof is also based on successive
elimination. Lasker obtained a new proof using primary decomposition. We
describe these early proofs and place them in the development of commutative
algebra up to the appearance of van der Waerden's Moderne Algebra. We also
explain Hentzelt's Nullstellensatz. | Jan Stevens | 2023-09-25T10:41:14Z | http://arxiv.org/abs/2309.14024v1 | # Early proofs of Hilbert's Nullstellensatz
###### Abstract.
By Rabinowitsch' trick Hilbert's Nullstellensatz follows from the weak Nullstellensatz (Rabinowitsch 1929). The weak version can be shown with elimination theory. Hilbert's original proof is also based on successive elimination. Lasker obtained a new proof using primary decomposition. We describe these early proofs and place them in the development of commutative algebra up to the appearance of van der Waerden's Moderne Algebra. We also explain Hentzelt's Nullstellensatz.
Key words and phrases: Polynomial ideals, primary decomposition, Nullstellensatz, elimination theory, resultant, Hilbert, Lasker. 2020 Mathematics Subject Classification: 14A05, 14-03, 13-03, 01A60, 01A55.
## Introduction
Hilbert's theorem of zeros, or Nullstellensatz as it is usually called, states that if a polynomial \(f\in P=k[X_{1},\ldots,X_{n}]\), where \(k\) is an algebraically closed field, vanishes in all common zeros of an ideal \(I\subset P\), then \(f^{r}\in I\) for some natural number \(r\). Usually the proof is reduced to a special case, the weak Nullstellensatz, that an ideal without zeros is the whole ring, by an argument due to Rabinowitsch [14]. The weak Nullstellensatz follows by elimination. Hilbert's original proof [15] is also based on elimination. A different proof based on primary decomposition is due to Lasker [13]. We place these proofs in the early development of commutative algebra.
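For the reader's orientation we sketch the trick in today's notation (this is the standard modern formulation, not a quotation from the sources discussed below). If \(f\) vanishes on all common zeros of \(I=(f_{1},\ldots,f_{m})\), then the ideal \((f_{1},\ldots,f_{m},1-yf)\) in \(k[X_{1},\ldots,X_{n},y]\) has no zeros at all, so by the weak Nullstellensatz
\[1=\sum_{i=1}^{m}g_{i}(X,y)f_{i}+g_{0}(X,y)(1-yf)\;;\]
substituting \(y=1/f\) and clearing denominators yields \(f^{r}\in I\) for a suitable \(r\).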
Rabinowitsch's proof [14] appeared just in time to be included in the second volume of van der Waerden's Moderne Algebra [12]. This book can be seen as marking the end of the early period. It made the subject widely known, and in fact it is still a good introduction to the results we discuss in this paper. Afterwards new proofs of the weak Nullstellensatz appeared with a totally different flavour, like Zariski's proof, based on his lemma that if a finite integral domain over a field \(K\) is a field then it is an algebraic extension of \(K\)[26]. The most common modern proofs are variations of this proof.
Rabinowitsch's half page paper claims in fact to give a (complete) proof of the Nullstellensatz and does not use the term weak Nullstellensatz. It refers to Emmy Noether's paper [13] for the statement that an ideal without zeros is the whole ring, with a footnote that it
also follows from Kronecker's elimination theory. Both the Hentzelt-Noether and the Kronecker theory are based on successive elimination of variables. This is also the technique Hilbert uses in his proof; he adapts Kronecker's device to the homogeneous case. In line with his intended application in invariant theory Hilbert formulates and proves in [10] the Nullstellensatz for homogeneous polynomials.
The Nullstellensatz was instrumental to the creation of the concept of primary ideal [11]. Lasker's definition is different from the modern one, which is due to Emmy Noether [12]. Macaulay paraphrases Lasker as follows: if the product of two ideals is contained in a primary ideal, and if one does not contain its zero set the other is contained in the ideal [13]. To be able to work with this definition it is essential to know that a prime ideal consists of all polynomials vanishing on its zero set. This is a special case of the Nullstellensatz. It is also possible to show the result directly and use it in turn to prove the Nullstellensatz. Both Lasker [11, 12] and Macaulay [13] do this. Their methods are different. Macaulay uses more specific computations and he uses Kronecker's theory of the resolvent to describe the zero set of ideals. Lasker reasons more abstractly.
The Moderne Algebra [14] contains a second proof, in Chapter 13 on the theory of polynomial ideals, based on van der Waerden's earlier paper [14]. The proof uses specialisation in fields of algebraic functions and avoids elimination theory. In the paperback edition Algebra II [14] of the Moderne Algebra the chapter on elimination theory is eliminated; only the resultant of two polynomials in one variable has been retained and moved to the first volume [14].
We witness here the rise of commutative algebra as a separate discipline, in a period which starts with Noether and ends with Noether, to borrow from the title of the essay [1]. An important motivation for Lasker and Macaulay was the generalisation of Max Noether's "fundamental theorem on algebraic functions". The first proved special case of the Nullstellensatz (Netto's theorem [15]) follows from Bertini's refinement of Noether's theorem [1]. The most far-reaching generalisation is Hentzelt's Nullstellensatz, proved by Emmy Noether's first official PhD student Grete Hermann [1]. Only after this theorem Hilbert's theorem is referred to as Nullstellensatz. The influence of Emmy Noether on van der Waerden is well known. She also influenced Rabinowitsch: Noether spent the winter 1928/29 in Moscow and led a seminar on algebraic geometry at the Communist Academy [1].
From the early proofs we first give Rabinowitsch's proof, which does not prove the weak Nullstellensatz. We describe Kronecker's elimination theory, from which the result does follow. It also gives ingredients for Hilbert's original proof of the theorem. These proofs use induction, but otherwise not more than the basic properties of the resultant of two binary forms. We recall those in the Appendix. Less elementary are
the proofs using primary decomposition. We describe the background needed. The last proof is van der Waerden's proof by specialisation. Finally we formulate but do not prove Hentzelt's Nullstellensatz. Before discussing the proofs we place them in a historic context and describe the main characters and main developments, from Noether's fundamental theorem until Noether's modern algebra.
## 1. Notation
We use modern terminology, in particular the word ideal for the older term module, or modular system. Lasker [10] uses the term module for ideals in \(\mathbb{C}[x_{1},\ldots,x_{n}]\), whereas his ideals are ideals in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\). The use of ideal was propagated by Emmy Noether, see [11, 12]. The name modular system is explained by the notation introduced by Kronecker [13, SS20]. He writes
\[G\equiv 0\bmod(F_{1},F_{2},\ldots,F_{k})\]
to express that a polynomial \(G\) can be written in the form
\[P_{1}F_{1}+P_{2}F_{2}+\cdots+P_{k}F_{k}\;,\]
that is, \(G\in(F_{1},F_{2},\ldots,F_{k})\). This is an extension of the notation \(a\equiv b\bmod c\) introduced by Gauss for integers to express that \(a-b\) is divisible by \(c\) without remainder.
We do not use the arithmetic terminology that an ideal \(a\) divides \(b\) if \(b\subset a\). Macaulay [10] says in this case that \(b\) contains \(a\).
The term polynomial is used for the older term whole rational function. Homogeneous polynomials will also be called forms. The coefficients will be taken in \(K\), an algebraically closed field. In the older literature the base field is tacitly assumed to be that of the complex numbers. Most proofs work in the general case.
We have tried to use a uniform notation. In particular, following Emmy Noether we use a Fraktur font to denote ideals. Hermann Weyl addressed Noether at her funeral service: "Mit vielen kleinen deutschen Buchstaben hast Du Deinen Namen in die Geschichte der Mathematik geschrieben"1 (this is the version Grete Hermann gives in a letter to van der Waerden in 1982 [14, p. 20]; the original text of the speech is also in loc. cit.).
Footnote 1: With many German lower-case letters you wrote your name in the history of mathematics.
Throughout this paper we use the name Nullstellensatz, although it is of relatively recent date, see the MathOverflow question [11]. Before [10] the theorem was referred to as a known theorem of Hilbert, or an explicit reference was given to [15, SS3]. Macaulay [10] calls it the Hilbert-Netto Theorem. Interestingly van der Waerden refers to it as a well known theorem of Hilbert in the paper [10] (an extract of a letter to J. F. Ritt), which has the footnote
"See Macaulay, Modular Systems, p. 46. (J. F. R.)". The first mention of a Nullstellensatz concerns that of Hentzelt, in Noether's report on the thesis of Grete Hermann (Noether, 1.2.1925, Promotionsakte Hermann) [12, p. 320].
The use of the German name originates in the US. Emmy Noether advised her students "don't bother to translate, just read the German", according to Ruth McKee [13]. Hodge and Pedoe [14] use the term Hilbert's zero-theorem, but Zariski states in his review in Math Reviews [15] that they prove Hilbert's Nullstellensatz. Miles Reid explains the name as theorem of zeros, adding: "but stick to the German if you don't want to be considered as ignorant peasant" [16].
## 2. From Noether to Noether
### Noether's fundamental theorem
Lasker spends nine pages of his paper [11] to give an overview of the development of ideal theory. It was Max Noether who with his fundamental theorem "clearly and sharply apprehended and demonstrated the central position, which the method of modular systems has in all questions of algebra" [11, p. 44].
Max Noether's fundamental theorem on algebraic functions [13] concerns plane curves. The equation of an algebraic curve \(f=0\) passing through the intersection of two curves \(\varphi=0\), \(\psi=0\) can be written in the form \(f=A\varphi+B\psi\), if the intersections are simple, but as Noether noticed, the statement ceases to be valid otherwise. An easy example is provided by two conics, tangent in two points. Then the line passing through these two points is not contained in the pencil spanned by the two conics, but twice the line (given by the square of a linear function) is. The correct condition for \(f\) to lie in the ideal \((\varphi,\psi)\) is that in each of the intersection points \(f\) can be written as \(f=a\varphi+b\psi\) with \(a\) and \(b\) power series; it suffices to check this identity up to a certain order \(\rho\) only depending on the ideal \((\varphi,\psi)\), as was first observed by Bertini [1].
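To make the conic example explicit (the equations are our own choice of illustration, not taken from Noether), consider
\[\varphi=x^{2}+y^{2}-1\,,\qquad\psi=\tfrac{1}{4}x^{2}+y^{2}-1\,,\]
two conics tangent to each other at \((0,\pm 1)\). The line through the two points of tangency is \(x=0\). Here \(\varphi-\psi=\tfrac{3}{4}x^{2}\), so \((\varphi,\psi)=(x^{2},y^{2}-1)\) and \(x^{2}\) lies in the ideal, but \(x\) does not: locally at \((0,1)\) the ideal is \((x^{2},y-1)\), which contains no element with linear term \(x\).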
In a paper "Zur Theorie der Elimination" [12] Netto gives a different answer for the case of non-simple intersection, without making a connection to Noether's theorem; this is done by F. Meyer in his Jahrbuch review [JFM 17.0096.01]. Netto expresses his result geometricallyas follows: if an algebraic curve \(f(x,y)=0\) passes through all intersection points of two other algebraic curves \(\varphi(x,y)=0\), \(\psi(x,y)=0\), then some power of the polynomial \(f(x,y)\) can be expressed as linear homogeneous function of \(\varphi(x,y)\) and \(\psi(x,y)\), i.e.
\[f(x,y)^{\rho}=A(x,y)\varphi(x,y)+B(x,y)\psi(x,y)\;,\]
where \(A(x,y)\) and \(B(x,y)\) are also polynomials. As Hilbert [11] remarks, this is the special case of his Nullstellensatz for two inhomogeneous variables. Netto's proof gives that the required power is bounded by the highest intersection multiplicity.
To generalise Noether's fundamental theorem to \(n\) dimensions was one of the problems van der Waerden worried about when he came to Gottingen in 1924. In his paper [10] on the sources of his Moderne Algebra he says that a new world opened for him. Already in Amsterdam van der Waerden "discovered that the real difficulties of algebraic geometry cannot be overcome by calculating invariants and covariants" [10, p. 32]. In Gottingen he learned from Emmy Noether that suitable tools had been developed by Dedekind and Weber, by Hilbert, Lasker and Macaulay, by Steinitz and by Emmy Noether herself.
Noether's theorem was generalised by Konig, Lasker and Macaulay. The most general form is Hentzelt's Nullstellensatz, which provides a criterion indicating how much a polynomial has to vanish in the zeros of an ideal in order to belong to it. It was proved by Emmy Noether's first PhD student Grete Hermann. Van der Waerden gives a non-constructive proof in [10].
### Kronecker's elimination theory
The word ideal has its origin in Kummer's work on the factorisation of algebraic numbers. His theory has been developed by Kronecker, and in a different way by Dedekind. The story is told in detail by Edwards [1]. A more elementary account is in Klein [14, Ch. 7]. Kronecker's role in development of the algebraic tools in the period from Noether to Noether is discussed in [1].
Kronecker's theory of algebraic quantities applies not only to number fields but also to polynomial rings, and in particular to Elimination Theory. It gives a method to find all solutions of a set of algebraic equations by successive eliminations. We describe it in Section 4.
Kronecker lectured on his theory and disseminated it in private conversations, but published his results only in 1882 in a Festschrift for Kummer's doctor jubilee [15]. A part of Kronecker's theory was treated in detail in a long paper in Acta Mathematica by his student Jules Molk [16]. In this paper he mentions the result of Netto, which prompted Netto to publish his theorem also in Acta Mathematica [17]. Netto gives in the second volume of his book [17] an extensive account of elimination theory, also of the older work of Bezout, Liouville, Poisson and Cayley. He generalises his theorem from [17] to the case of a complete intersection of \(n\) variables, adding that for \(n=2\) Noether had shown this in a bit different, for his purposes less suitable form. Indeed, by Bertini's bound on Noether's theorem it follows that a sufficiently high power of the function satisfies the condition [1]. With a flawed argument Netto goes on to show that
the result still holds if there are more equations than variables, and derives the Nullstellensatz with Hilbert's argument for zero sets of higher dimensions (see Section 5).
Kronecker's theory (and much more) was presented in the 564 page book [14] by Julius (Gyula) Konig, published simultaneously in Hungarian [15] and German. His goal (as stated in the Introduction of [14]) was to popularise Kronecker's ideas ("if this expression for this difficult area of mathematics is allowed"). He indeed succeeded, as shown by the many contemporary references. A footnote to the published paper [13] of Macaulay's talk at the ICM in Heidelberg (1904) states that Professor Noether and Professor Brill have kindly drawn his attention to the recently published book by Konig. This work is "remarkable for its precision and comprehensiveness and the large additions it makes to the subject". Konig's book is still mentioned in a footnote in [12, p. 167]. Nowadays it is almost completely forgotten. As [10] puts it, a useful book on a new field will, if successful, draw others to discover better results, simpler and more general methods and if it does not become a classic the work will gradually be covered up and forgotten. For a modern reader Konig's book is hard to read.
Konig gives applications to geometry, notably a generalisation of Noether's theorem to higher dimensions. He also treats several fundamental results of Hilbert. In particular, he gives a new, simpler proof of the Nullstellensatz; but his proof is flawed, as he "makes an absurdly false assumption concerning divisibility" [13, p. 35]. Actually, Konig gives a faulty proof of an absurd divisibility statement [14, p. 399].
### Hilbert
The next important development comes from Hilbert's work on invariant theory, in his two famous papers in Mathematische Annalen [15, 16]. Klein [17, p. 329] writes that Hilbert takes up ideas from Kronecker with Dedekind's way of thinking, and applies them brilliantly on the problems of invariant theory. Indeed, Hilbert states explicitly in the Introduction of [15] that he uses methods of the general theory of polynomial ideals, so that the theory of invariants becomes a striking example of that theory, just as cyclotomic fields constitute a striking example in number theory, where the most important theorems about general number fields have first been found and proven. In the first paper [15], where he proves the basis theorem (stating that every ideal is finitely generated) and his syzygy theorem and introduces the Hilbert polynomial, many geometric examples are given, and only in the last section the results are applied to prove the finiteness of the system of invariants. One of the examples solves a problem of Salmon. Lasker points to the influence of the work of Salmon and Cayley and comments:
Man hat Salmons Werk unterschatzt, weil seinen Methoden die Strenge der Beweisfuhrung abging. Wie gross dieser Fehler auch sein mag, so darf man niemals die Bedeutung Salmons als des grossen Problemstellers und Wegweisers vergessen.2 ([14, p. 44])
Footnote 2: Salmon’s work has been underestimated because his methods lacked rigor of proof. However great this error may be, one must never forget the importance of Salmon as the great problem poser and guide.
At several places Hilbert [13] stresses the importance of generalising Noether's fundamental theorem to higher dimensions.
Hilbert formulates the basis theorem in a different way from what is usual nowadays.
**Theorem 2.1**.: _Given a non-terminating sequence of forms in the \(n\) variables \(x_{1},\ldots,x_{n}\), say \(F_{1},F_{2},F_{3},\ldots\), there always exists a number \(m\) such that every form of the sequence can be written as_
\[F=A_{1}F_{1}+A_{2}F_{2}+\cdots+A_{m}F_{m}\]
_where \(A_{1},A_{2},\ldots,A_{m}\) are suitable forms in the same \(n\) variables._
Hilbert also gives a second version of the basis theorem, for forms with integral coefficients. Hilbert's formulation seems natural if one thinks about the explicit computation of invariants in special cases, which leads to lists. Moreover, Hilbert treats only homogeneous polynomials, or forms, whereas the modern formulation works with inhomogeneous polynomials. The theorem can be extended to the inhomogeneous case by making all polynomials homogeneous with a new variable of homogeneity [15, p. 38].
Hilbert explicitly states that the basis theorem applies in particular to homogeneous ideals in polynomial rings; he uses Dedekind's term module. Hilbert makes the connection with Kronecker's concept of modular systems, but stresses that his syzygy theory and the characteristic function (in modern terms the Hilbert polynomial) use homogeneity in an essential way.
Conversely the basis theorem for ideals, that every ideal in the polynomial ring \(K[x_{1},\ldots,x_{n}]\) is finitely generated, implies the theorem in Hilbert's formulation [12, § 80], [12, § 115]: the ideal generated by the \(F_{i}\) has a finite set of generators, each of which is a linear combination of only finitely many \(F_{j}\).
The basis theorem is the first step in proving that the ring of invariants is finitely generated. The invariants in question concern in modern terms the action of the group \(G=SL_{n}(\mathbb{C})\) on a vector space \(V\) which is a direct sum of the type \(S^{d_{1}}\mathbb{C}^{n}\oplus\cdots\oplus S^{d_{k}}\mathbb{C}^{n}\). The result is that the ring \(\mathbb{C}[V]^{G}\) of \(G\)-invariant polynomials on \(V\) is finitely generated. By the basis theorem there are finitely many invariants \(i_{1},\ldots,i_{m}\) such that every invariant \(i\) can be expressed as \(i=A_{1}i_{1}+\cdots+A_{m}i_{m}\). By an averaging procedure the \(A_{i}\) can themselves
be taken as invariants, of lower degree than \(i\). By applying the same reasoning to these invariants one finally obtains that \(i\) is a sum of products of the \(i_{j}\). Nowadays one uses the Reynolds operator, which is a \(G\)-invariant projection \(\mathbb{C}[V]\to\mathbb{C}[V]^{G}\), but Hilbert had to construct it using Cayley's \(\Omega\)-process; for details we refer to [1].
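For a finite group the averaging procedure is easy to make explicit. The following sketch (Python with SymPy) is our own toy illustration for the symmetric group \(S_{2}\) permuting two variables, not Hilbert's situation of \(SL_{n}(\mathbb{C})\), where producing the same idempotent projection is exactly the role of Cayley's \(\Omega\)-process.

```python
from sympy import symbols, Rational, expand, simplify

x, y = symbols('x y')

# Reynolds operator for the finite group S2 acting on K[x, y] by permuting
# the variables: average a polynomial over the group.
def reynolds(f):
    swapped = f.subs({x: y, y: x}, simultaneous=True)
    return expand(Rational(1, 2) * (f + swapped))

p = x**2 + 3*x*y                                       # not invariant
print(reynolds(p))                                     # x**2/2 + 3*x*y + y**2/2, an S2-invariant
print(simplify(reynolds(reynolds(p)) - reynolds(p)))   # 0: the operator is a projection
```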
Hilbert's proof was criticised for its nonconstructive character. The goal of [10] is to give a method to find the generators (in principle). By the homogeneous version of Noether normalisation (proved by Hilbert for this purpose) the ring of invariants is an integral extension of a polynomial ring \(k[J_{1},\ldots,J_{\kappa}]\) with the \(J_{i}\) invariants (of the same degree). The quotient field of the ring of invariants is the field of rational invariants and by the theorem of the primitive element it is an extension of the field \(L=K(J_{1},\ldots,J_{\kappa})\) of the form \(L(J)\) with \(J\) a polynomial invariant; Hilbert shows how to construct such a \(J\). To find the ring of invariants Hilbert gives three steps, of which the first is the most difficult, namely to find the system \(\{J_{1},\ldots,J_{\kappa}\}\) of invariants, such that every other invariant is integral over the \(J_{1},\ldots,J_{\kappa}\), that is, satisfies a monic equation with coefficients which are polynomials in the \(J_{1},\ldots,J_{\kappa}\). The second step is to find \(J\), such that all invariants are rational functions of \(J,J_{1},\ldots,J_{\kappa}\). The third step is to find the integral elements of the field \(L(J)\), which can be done according to a general theory of Kronecker: "If the invariants \(J,J_{1},\ldots,J_{\kappa}\) are known, finding the full invariant system only requires the solution of an elementary problem from the arithmetic theory of algebraic functions" [10, p. 320].
The system \(\{J_{1},\ldots,J_{\kappa}\}\) has the property that all invariants vanish if the \(J_{1},\ldots,J_{\kappa}\) vanish. Of fundamental importance for the whole theory is that the converse can be proved: if invariants \(I_{1},\ldots,I_{\mu}\) have the property that their vanishing implies the vanishing of all other invariants, then every invariant is integral over the \(I_{1},\ldots,I_{\mu}\). To prove this Hilbert first shows the Nullstellensatz (see Theorem 5.1 for Hilbert's version). This gives that powers of the generators of the ring of invariants lie in the ideal \((I_{1},\ldots,I_{\mu})\), and therefore so does every invariant of degree at least some fixed \(\rho\). The coefficients can again be taken as invariants. A finite number of invariants of lower degree therefore form a basis of the ring of invariants as \(K[I_{1},\ldots,I_{\mu}]\)-module, so this ring is integral over \(K[I_{1},\ldots,I_{\mu}]\). Hilbert shows this with the now standard determinant trick.
A form for which all invariants vanish is called a null-form. In the space of all forms the null-forms constitute an algebraic subset, and knowing it helps in determining the invariants \((I_{1},\ldots,I_{\mu})\). For binary forms Hilbert determines the null-forms with elementary means: a form \(f(x,y)\) of degree \(d\) is a null-form if and only if \(f\) has a zero of multiplicity bigger than \(\frac{d}{2}\). This can easily be shown with the Hilbert-Mumford criterion, see [1, Example 2.5.4]; Hilbert proved the criterion later in his paper to
handle forms of more variables. In fact, this part of the theory was only taken up 70 years later by Mumford in his Geometric Invariant Theory [14]. For these developments and their relation to Hilbert's text we refer to the comments by V.L. Popov to his Russian translation of Hilbert's paper [13].
### Lasker
Little is known about the origins of the highly original paper "Zur Theorie der Moduln und Ideale" [15], by the world chess champion Emanuel Lasker. Van der Waerden [16] states that Lasker took his Ph.D. degree under Hilbert's guidance in 1905, but that is not correct.
Lasker (1868-1941) studied mathematics from 1888 in Berlin and later in Gottingen (when Hilbert was still in Konigsberg), but in 1891 he interrupted his studies and concentrated on chess, becoming world champion in 1894. He took up his studies again in 1897, first in Heidelberg (taking courses with Landsberg) and later in Berlin (courses with Hensel) [17].
Lasker submitted a manuscript for the Grand Prix des sciences mathematiques in 1898, where the question was "Chercher a etendre le role que peuvent jouer en analyse les series divergentes" (to seek to extend the role that divergent series can play in analysis), but it was considered to be a bit beside the question [18]. He used the first 23 pages of this manuscript to get a doctoral degree. Max Noether in Erlangen was prepared to help him. Staying at the Hotel Erlanger Hof, Lasker wrote to the dean on Monday January 29, 1900, and the dean convened the examining committee for the next Wednesday. On the same Monday Noether already delivered his report. Lasker passed magna cum laude [19]. Lasker submitted the paper to the Philosophical Transactions of the Royal Society of London, where it was published in 1901 in German (Uber Reihen auf der Convergenzgrenze) [15]. So Lasker was neither a student of Hilbert nor of Noether.
Lasker wrote a small paper [15] on the theory of canonical forms, dated New York May 1903. His main mathematical work [15] might have been an attempt to start an academic career; he never had a permanent academic position in mathematics. The paper is dated Charlottenburg March 1904. Right after, Lasker travelled to the US to play the chess tournament at Cambridge Springs.
Albert Einstein came to know Lasker in later life. He wrote on occasion of Lasker's sixtieth birthday:
"Emanuel Lasker ist einer der starksten Geister, denen ich auf meinem Lebenswege begegnet bin. Renaissance-Mensch, mit einem unbandigen Freiheitsdrang begabt, jeder sozialen Bindung abhold. So wurde er Schachmeister, wohl weniger aus besonderer hingebender Liebe zum Spiel. Letztere galt vielmehr der Philosophie, dem Verstehen uberhaupt. Er liebt als achter Eigenbrodler und
Eigenwilliger die Deduktion und steht der induktiven Forschung fremder gegenuber. Kein Wunder es liegt ihm nicht, im Objekt den Richter uber die Kinder seines Geistes zu sehen, sondern die Schonheit des Gedankens geht ihm uber jene Wahrheit, die ihren Anspruch aus der Beobachtung des Objektes ableitet. Der Amor dei intellektualis ist sein einziger Gott, verkorpert in Mathematik und spekulativer Philosophie. Ich liebe seine Schriften unabhangig von ihrem Wahrheitsgehalt als die Fruchte eines grossen originalen und freien Geistes.3 ([1])
Footnote 3: Emanuel Lasker is one of the strongest minds I have encountered in the course of my life. A Renaissance man, gifted with a boundless desire for freedom, averse to any social obligation. Thus he became a chess master, probably not so much because of any particular devoted love for the game. What he loves, rather, is philosophy, understanding in general. As a true maverick with a mind of his own, he loves deduction, and inductive research is foreign to him. That is not surprising: he does not see the object as the judge of his mind’s offspring; instead, for him the beauty of the idea is more important than the truth, which derives its claim from the observation of the object. The amor dei intellectualis is his sole god, embodied in mathematics and speculative philosophy. I love his writings independently of their truth content, as the product of a great original and free mind.
Lasker's paper [14] is famous for the introduction of primary ideals, but contains much more. It does not have an Introduction, but from the "final remarks about some applications of the Theorems" we can conclude that the main objective of the theory is the extension of Noether's fundamental theorem to the case of several variables. The paper contains a new approach to the Hilbert polynomial, based on multiplication with non-zero divisors; the Hilbert polynomial is used for a new proof of the Nullstellensatz. There is also an extension of the theory to the ring of convergent power series in several variables, which is used to prove Lasker's generalisation of Noether's theorem. The last application sketched in the paper concerns Plucker formulas for curves with arbitrary singularities.
### Macaulay
Lasker proved that an ideal is the intersection of primary ideals, but gave no methods to compute these. This was the goal of Macaulay's paper [14]. F. S. Macaulay (1862-1937) was a school teacher until his retirement in 1911. For his mathematical work see [1].
While in [14] Macaulay uses the theories of Kronecker, Hilbert and Lasker, the goal of the first three chapters of his 1916 Tract [14] is to present them. In the preface Macaulay writes:
The present state of our knowledge of the properties of Modular Systems is chiefly due to the fundamental theorems and processes of L. Kronecker, M. Noether,
D. Hilbert, and E. Lasker, and above all to J. Konig's profound exposition and numerous extensions of Kronecker's theory. ([16, Preface])
In this slim volume Macaulay only treats the case of polynomial ideals in \(\mathbb{C}[x_{1},\dots,x_{n}]\), what he calls the algebraic theory of modular systems; the "absolute theory" concerns the case of integer coefficients. This is the same distinction as Lasker makes between modules and ideals. The last chapter of the Tract introduces the Inverse system, according to Paul Roberts (in his Introduction to the 1994 reprint) one of the most original ideas in the book. A simplified treatment is given in Macaulay's last paper [16] (not mentioned by Roberts). In her Zentralblatt review [21] Emmy Noether writes, among other things, that again the many examples and counter-examples are important.
Macaulay's Tract was one of the works Emmy Noether advised B. L. van der Waerden to study when he came to Gottingen in 1924. It is the direct source for several sections in Moderne Algebra, according to [23]. Elsewhere van der Waerden recollects:
Most important work on the theory of Polynomial Ideals was done by Lasker, the famous chess champion, who had got his problem from Hilbert, and by Macaulay, a schoolmaster who lived near Cambridge, England, but who was nearly unknown to the Cambridge mathematicians when I visited Cambridge in 1933. I guess the importance of Macaulay's work was known only in Gottingen. ([23])
### Noether
Meanwhile a different treatment of explicit elimination theory was given by Kurt Hentzelt in his 1914 Ph.D. thesis "Zur Theorie der Polynomideale und Resultanten" under E. Fischer. Kurt Hentzelt (1889-1914; he went missing in action near Diksmuide) studied first in Berlin and then 1909-1913 in Erlangen [24, p. 320]. Presumably he was also a student of Emmy Noether [24, p. 15]. She acknowledges him in her paper on "fields and systems of rational functions" (dated May 1914, before the start of the First World War) [25]. In [25] she gives in a footnote the simplest example of a non-unique decomposition in primary ideals, adding that she got it from K. Hentzelt. Noether published a conceptual version [10] of Hentzelt's thesis, which she characterises:
Diese ganz auf Grund eigener Ideen verfasste Dissertation ist luckenlos aufgebaut; aber Hilfssatz reiht sich an Hilfssatz, alle Begriffe sind durch Formeln mit vier und funf Indizes umschrieben, der Text fehlt fast vollstandig,
so dass dem Verstandnis die grossten Schwierigkeiten bereitet werden:4 ([12, p. 53])
Footnote 4: This dissertation, based entirely on his own ideas, is constructed without gaps; but lemma follows lemma, all concepts are described by formulas with four and five indices, the text is almost completely absent, so that understanding it presents the greatest difficulties.
The part concerning computation in a finite number of steps was reserved for a later publication. Noether gave this problem to her first PhD student Grete Hermann.
In [13] van der Waerden gives a new foundation for the theory of zeros of polynomial ideals, independent of elimination theory. Even though he later pleaded for the use of elimination theory in algebraic geometry [13], van der Waerden contributed to the elimination of elimination theory. In [13] he also gives a new proof of Hilbert's Nullstellensatz. Van der Waerden [13] recalls his use of generic points:
I wrote a paper [13] based upon this simple idea and showed it Emmy Noether. She at once accepted it for the Mathematische Annalen, without telling me that she had presented the same idea in a course of lectures just before I came to Gottingen. I heard it later from Grell, who had attended her course. ([13])
The above quote shows a greater participation of Emmy Noether in the development of algebraic geometry than is visible from her published papers. Emmy Noether spent the winter 1928/29 in Moscow. She gave a course on abstract algebra at Moscow University and led a seminar on algebraic geometry at the Communist Academy (in 1936 merged with the Academy of Sciences of the Soviet Union) [1]. It is probable that J. L. Rabinowitsch was one of the participants. In her report on the thesis of Hans Fitting [13, p. 316] Noether mentions unpublished work of "Rabinowitsch-Moskau" on the subject of [12], which her Moscow lectures probably took up. Not much is known about him, but he may well be Juli Lasarewitsch Rabinowitsch (in German transliteration), who was born in 1904 and graduated from Moscow State University in 1924, where he worked from 1932 and was awarded a Ph.D. and the title of Docent in 1935 [14]. He later mainly worked on differential equations, but one can imagine Noether attracting a rather wide audience. Curiously, van der Waerden always writes A. Rabinowitsch, both in his Moderne Algebra and in [13]. Zariski repeats this mistake in [15], which shows that his source is van der Waerden's book.
It is not unlikely that Noether took Rabinowitsch' paper back to Germany and arranged for its publication in Mathematische Annalen, and that she provided the references.
**On Hilbert's Nullstellensatz.**
By
J. L. Rabinowitsch in Moscow.
Theorem: _If the polynomial \(f(x_{1},x_{2},\ldots,x_{n})\) vanishes in all zeros -- in an algebraically closed field -- of a polynomial ideal \(\mathfrak{a}\), then there is a power \(f^{\rho}\) of \(f\) belonging to \(\mathfrak{a}\)._
Proof: Let \(\mathfrak{a}=(f_{1},\ldots,f_{m})\), where \(f_{i}\) contain the variables \(x_{1},\ldots,x_{n}\). Let \(x_{0}\) be an auxiliary variable. We form the ideal \(\bar{\mathfrak{a}}=(f_{1},\ldots,f_{m},x_{0}f-1)\). As by assumption \(f=0\) whenever all \(f_{i}\) vanish, the ideal \(\bar{\mathfrak{a}}\) has no zeros.
Therefore \(\bar{\mathfrak{a}}\) has to coincide with the unit ideal. (Cf. for example K. Hentzelt, "Eigentliche Eliminationstheorie", § 6, Math. Annalen **88**.1) If then \(1=\sum_{i=1}^{m}F_{i}(x_{0},x_{1},\ldots,x_{n})f_{i}+F_{0}(x_{0}f-1)\) and we put \(x_{0}=\frac{1}{f}\) in this identity, there results:
Footnote 1: Follows also already from Kronecker’s elimination theory.
\[1=\sum_{i=1}^{m}F_{i}\left(\frac{1}{f},x_{1},\ldots,x_{n}\right)f_{i}=\frac{ \sum_{i=1}^{m}\bar{F_{i}}f_{i}}{f^{\rho}}\.\]
Therefore \(f^{\rho}\equiv 0\pmod{\mathfrak{a}}\), q.e.d.
## 3. Rabinowitsch' paper
The text of [10] consists of only 13 lines. Figure 1 shows it in translation.
The statement that an ideal different from the unit ideal has zeros is nowadays called the weak Nullstellensatz, but in the text above and in the earlier literature it is not singled out as a special case. The proof of the result can be found in many places (e.g. [1, p. 21]), but the connection with Hilbert's theorem is not made. Only in [16] does Macaulay give a footnote to the statement that the only ideal without zeros is the unit ideal, where he writes that this follows as a particular case of the theorem in [11, §3] but was known earlier from the theory of the resultant. In the accounts of Kronecker's elimination theory by Kronecker himself [14], Molk [15], Netto [17] and Konig [18] the conclusion is not drawn. In [19] we find in §6 on p. 76 the statement that Theorem XII of that paper in particular shows that every ideal without zeros becomes the unit ideal, again without mentioning the Nullstellensatz.
Figure 1. Rabinowitsch’s paper [10]
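A minimal computational illustration of the trick may be useful here; the following sketch (Python with SymPy) uses the ideal \(\mathfrak{a}=(x^{2}+y^{2},xy)\) and the polynomial \(f=x+y\), which are our own toy choices and are not taken from Rabinowitsch. Adjoining \(tf-1\) produces an ideal without zeros, whose reduced Groebner basis is therefore \(\{1\}\), and indeed a power of \(f\) already lies in \(\mathfrak{a}\).

```python
from sympy import symbols, groebner, reduced, expand

x, y, t = symbols('x y t')

# Toy ideal a = (x**2 + y**2, x*y); over C its only zero is the origin,
# so f = x + y vanishes on the zero set of a although f itself is not in a.
f1, f2 = x**2 + y**2, x*y
f = x + y

# Rabinowitsch's trick: the enlarged ideal (f1, f2, t*f - 1) has no zeros,
# hence its reduced Groebner basis is just {1} (the weak Nullstellensatz).
G_ext = groebner([f1, f2, t*f - 1], t, x, y, order='lex')
print(G_ext.exprs)                                   # [1]

# Consequently some power of f lies in a; here already f**2 = f1 + 2*f2 does.
G = groebner([f1, f2], x, y, order='lex')
_, r1 = reduced(f, G.exprs, x, y, order='lex')
_, r2 = reduced(expand(f**2), G.exprs, x, y, order='lex')
print(r1, r2)                                        # x + y (f not in a), 0 (f**2 in a)
```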
## 4. Kronecker's elimination theory
In this section we explain Kronecker's theory, following the account given by Macaulay [16]. We show in particular how it implies the weak Nullstellensatz.
Let \(F_{1},\ldots,F_{k}\in K[x_{1},\ldots,x_{n}]\) with \(K\) an algebraically closed field and consider the equations \(F_{1}=\cdots=F_{k}=0\). The problem is to find all solutions. Coincidences due to special values of the coefficients, like equations not being regular (a polynomial of degree \(l\) is regular in \(x_{1}\) if the monomial \(x_{1}^{l}\) occurs with non-zero coefficient), can be avoided by a (general) linear change of coordinates. As Macaulay remarks this transformation is seldom needed in specific examples, but always assumed in theoretical reasoning.
If the polynomials \(F_{i}\) have a non-constant greatest common divisor \(D\), then the hypersurface \(D=0\) gives solutions of the equations. This greatest common divisor can be found with the Euclidean algorithm, by considering the polynomials as elements of \(K(x_{2},\ldots,x_{n})[x_{1}]\). If \(D\) is a constant we take it equal to \(1\). We divide the \(F_{i}\) by \(D\) and write \(F_{i}=D\phi_{i}\). We eliminate \(x_{1}\) from the equations \(\phi_{1}=\cdots=\phi_{k}=0\), using the following device. We form the expressions
\[\Phi_{1}=u_{1}\phi_{1}+\cdots+u_{k}\phi_{k},\qquad\Phi_{2}=v_{1}\phi_{1}+\cdots+v_{k}\phi_{k}\]
with the \(u_{i}\) and \(v_{i}\) indeterminates, and write the resultant \(R(\Phi_{1},\Phi_{2})\) (see Appendix A) as
\[w_{1}F_{1}^{(1)}+w_{2}F_{2}^{(1)}+\cdots+w_{k_{1}}F_{k_{1}}^{(1)}\;,\]
where the \(w_{i}\) are monomials in the \(u_{j}\) and \(v_{j}\) and \(F_{i}^{(1)}\in K[x_{2},\ldots,x_{n}]\). Any solution of \(F_{1}=\cdots=F_{k}=0\) is a solution of \(D=0\) or of \(\phi_{1}=\cdots=\phi_{k}=0\), and any solution of \(\phi_{1}=\cdots=\phi_{k}=0\) is a solution of \(F_{1}^{(1)}=\cdots=F_{k_{1}}^{(1)}=0\), since by the property A.4 of the resultant \(\sum w_{i}F_{i}^{(1)}=A_{1}\Phi_{1}+A_{2}\Phi_{2}\); by equating coefficients of the \(w_{i}\) we conclude that \(F_{i}^{(1)}\in(\phi_{1},\ldots,\phi_{k})\). Therefore \(D\,F_{i}^{(1)}\in(F_{1},\ldots,F_{k})\). Conversely, if \((\xi_{2},\ldots,\xi_{n})\) is a solution of \(F_{1}^{(1)}=\cdots=F_{k_{1}}^{(1)}=0\), then the resultant \(R(\Phi_{1},\Phi_{2})\) vanishes for \((x_{2},\ldots,x_{n})=(\xi_{2},\ldots,\xi_{n})\) and the equations \(\Phi_{1}=\Phi_{2}=0\) have a solution \((\xi_{1},\xi_{2},\ldots,\xi_{n})\) (and we find all solutions). As \((x_{1}-\xi_{1})\) is a factor of \(\Phi_{1}(x_{1},\xi_{2},\ldots,\xi_{n})\), it does not depend on the \(v_{i}\); nor does it depend on the \(u_{i}\), being a factor of \(\Phi_{2}\). Therefore \((\xi_{1},\xi_{2},\ldots,\xi_{n})\) is a solution of \(\phi_{1}=\cdots=\phi_{k}=0\), so also of \(F_{1}=\cdots=F_{k}=0\).
We may assume that the \(F_{i}^{(1)}\) are regular in \(x_{2}\); the needed linear transformation could have been performed at the start. We apply the same procedure and find the greatest common divisor \(D^{(1)}\) of the \(F_{i}^{(1)}=D^{(1)}\phi_{i}^{(1)}\) considered as polynomials in \(x_{2}\), and eliminate \(x_{2}\) to
get polynomials \(F_{1}^{(2)},\ldots,F_{k_{2}}^{(2)}\) in \(x_{3},\ldots,x_{n}\). Any solution of \(F_{1}=\cdots=F_{k}=0\) is a solution of \(D\,D^{(1)}=0\) or of \(F_{1}^{(2)}=\cdots=F_{k_{2}}^{(2)}=0\) and \(D\,D^{(1)}F_{i}^{(2)}\in(F_{1},\ldots,F_{k})\)
We continue and successively find \(D^{(j)}\) and eliminate \(x_{j+1}\). After eliminating \(x_{n-1}\) we have polynomials \(F_{i}^{(n-1)}\) in one variable \(x_{n}\), with greatest common divisor \(D^{(n-1)}\) and after dividing with this common factor the polynomials \(\phi_{i}^{(n-1)}\) have no common root. We find that any solution of \(F_{1}=\cdots=F_{k}=0\) is a solution of the single equation \(DD^{(1)}\ldots D^{(n-1)}=0\). Conversely, from the solutions of \(DD^{(1)}\ldots D^{(n-1)}=0\) we can find all solutions of \(F_{1}=\cdots=F_{k}=0\). As the \(DD^{(1)}\cdots D^{(n-2)}F_{i}^{(n-1)}=DD^{(1)}\cdots D^{(n-1)}\phi_{i}^{(n-1)}\) lie in the ideal \((F_{1},\ldots,F_{k})\) and \(1\in(\phi_{1}^{(n-1)},\ldots,\phi_{k}^{(n-1)})\), we conclude that
\[DD^{(1)}\cdots D^{(n-1)}\in(F_{1},\ldots,F_{k})\;.\]
**Definition 4.1**.: \(DD^{(1)}\cdots D^{(n-1)}\) is the complete (total) resolvent of the equations \(F_{1}=\cdots=F_{k}=0\), and \(D^{(i-1)}\) is the complete partial resolvent of rank \(i\). Any factor of \(D^{(i-1)}\) is a partial resolvent of rank \(i\).
The weak Nullstellensatz follows.
**Proposition 4.2** ([22, p. 21]).: _If the equations \(F_{1}=\cdots=F_{k}=0\) have no solution then the complete resolvent is equal to \(1\) and consequently \(1\in(F_{1},\ldots,F_{k})\)._
Macaulay continues to show by examples that the resolvent does not always detect embedded components or may indicate such when they do not exist. This problem does not occur with Hentzelt's elimination theory. Noether [10] describes how to form a resultant form with better properties, depending only on the ideal. We refer for details to Krull's report in the 1939 edition of the Enzyklopadie [11]. Whereas Kronecker's method seeks to solve the equations \(F_{1}=\cdots=F_{k}=0\), Hentzelt looks for zeros of the ideal \(\mathfrak{a}=(F_{1},\ldots,F_{k})\). We may suppose that \(\mathfrak{a}\) contains a polynomial \(F\), which is regular in \(x_{1}\) of order \(r\). Let \(R_{1},\ldots,R_{t}\) denote the remainders obtained by dividing the polynomials \(x_{1}^{j}F_{i}\), \(j=0,\ldots,r-1\), \(i=1,\ldots,k\), by \(F\), as polynomials in \(x_{1}\). Let \(M\) be the set of all polynomials in \(\mathfrak{a}\) with degree less than \(r\) in \(x_{1}\). It is a submodule of the free \(K[x_{2},\ldots,x_{n}]\)-module with basis \(1,\ldots,x_{1}^{r-1}\), with \(R_{1},\ldots,R_{t}\) as generators of \(M\). The rank of \(M\) is less than \(r\) if and only if the polynomials in \(\mathfrak{a}\) have a common factor of positive degree in \(x_{1}\); this holds for indeterminate \(x_{2},\ldots,x_{n}\) but also for specialised values. Let \(\mathfrak{a}_{1}\) be the ideal of the minors of size \(r\) of the coefficient matrix of \(M\). Then \(\mathfrak{a}_{1}\subset\mathfrak{a}\) and \(\mathfrak{a}_{1}\) depends only on \(\mathfrak{a}\). If \(\mathfrak{a}_{1}\) is not the zero ideal, then we can proceed in the same way. This process stops if some ideal \(\mathfrak{a}_{r}=0\), or with \(\mathfrak{a}_{n}\). If \(\mathfrak{a}_{r}=0\), then \(x_{r+1},\ldots,x_{n}\) can be chosen arbitrarily, and the value of the other variables can be found by successively solving equations. As \(\mathfrak{a}_{n}\) does not depend on the variables,
it can only be the zero or unit ideal. In particular, if \(\mathfrak{a}\) has no zeroes, then it is the unit ideal [10, p. 76].
**Example 4.3**.: To illustrate the difference in the elimination procedures we consider Macaulay's example iii in section 17 [14, p. 23]. He considers the ideal \(\mathfrak{a}=(x_{1}^{3},x_{2}^{3},x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}x_{3})\). A less symmetric, but more convenient basis of \(\mathfrak{a}\) is
\[(x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}x_{3},x_{1}x_{2}^{2}(1-x_{3}^{2}),x_{2}^{3})\.\]
The ideal \(\mathfrak{a}\) has one isolated component, \(\mathfrak{a}^{\prime}=(x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}x_{3},x_{1}x_{2}^{2},x_{2 }^{3})\) and two embedded components \(\mathfrak{a}^{\prime\prime}=(x_{3}-1,x_{1}^{2}+x_{2}^{2}+x_{1}x_{2},x_{2}^{3})\) and \(\mathfrak{a}^{\prime\prime\prime}=(x_{3}+1,x_{1}^{2}+x_{2}^{2}-x_{1}x_{2},x_{ 2}^{3})\). As the polynomial \(f_{1}=x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}x_{3}\) is regular in \(x_{1}\), we may in Kronecker's method take the resultant of \(f_{1}\) and \(v_{2}f_{2}+v_{3}f_{3}\), where \(f_{2}\) and \(f_{3}\) are the other two generators of \(\mathfrak{a}\) [12, §73, Remark 1]. We get the determinant
\[\begin{vmatrix}1&x_{2}x_{3}&x_{2}^{2}\\ v_{2}x_{2}^{2}(1-x_{3}^{2})&v_{3}x_{2}^{3}&0\\ 0&v_{2}x_{2}^{2}(1-x_{3}^{2})&v_{3}x_{2}^{3}\end{vmatrix}\]
which equals
\[x_{2}^{6}\left(v_{2}^{2}(1-x_{3}^{2})^{2}-v_{2}v_{3}x_{3}(1-x_{3}^{2})+v_{3}^{ 2}\right)\.\]
It follows that the complete resolvent is \(x_{2}^{6}\). For \(\mathfrak{a}^{\prime}\) the computation is almost the same, except that the factors \((1-x_{3}^{2})\) are to be removed from the determinant. Although \(\mathfrak{a}\varsubsetneq\mathfrak{a}^{\prime}\) both ideals have the same complete resolvent.
To eliminate according to Hentzelt-Noether we divide \(f_{2},x_{1}f_{2},f_{3},x_{1}f_{3}\) by \(f_{1}=x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}x_{3}\) and find that \(f_{3},x_{1}f_{3},f_{2}\) form a basis of the module of polynomials in \(\mathfrak{a}\) of degree at most \(1\) in \(x_{1}\). The coefficient matrix is
\[\begin{bmatrix}x_{2}^{3}&0&0\\ 0&x_{2}^{3}&x_{2}^{2}(1-x_{3}^{2})\end{bmatrix}\]
with minors \(x_{2}^{6}\) and \(x_{2}^{5}(1-x_{3}^{2})\). For \(\mathfrak{a}^{\prime}\) we find the ideal generated by \(x_{2}^{5}\).
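The Kronecker half of the example is easy to check by machine. The following sketch (Python with SymPy) recomputes the elimination of \(x_{1}\) with SymPy's built-in resultant, which reproduces the determinant displayed above.

```python
from sympy import symbols, resultant, factor

x1, x2, x3, v2, v3 = symbols('x1 x2 x3 v2 v3')

f1 = x1**2 + x2**2 + x1*x2*x3
f2 = x1*x2**2*(1 - x3**2)
f3 = x2**3

# Kronecker's device: eliminate x1 via the resultant of f1 with the
# generic combination v2*f2 + v3*f3 of the remaining generators.
R = resultant(f1, v2*f2 + v3*f3, x1)
print(factor(R))
# equals x2**6 times (v2**2*(1 - x3**2)**2 - v2*v3*x3*(1 - x3**2) + v3**2),
# in agreement with the determinant above; the x2**6 is the resolvent factor.
```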
Successive elimination requires that the variables are general. This is achieved by Noether [15] by adjoining the coefficients \(u_{ij}\) of an indeterminate \(n\times n\) matrix \(U\) to the field \(K\) and the change of variables \(y_{i}=\sum u_{ij}x_{j}\). The same device is used by Hermann [11].
_Remark 4.4_.: Proposition 4.2 is here a consequence of a more general theory. A direct proof of the weak Nullstellensatz is shorter [12, §74]. We use induction on the number of variables. For one variable the result is true by the extended Euclidean algorithm. Suppose the ideal \(\mathfrak{a}=(F_{1},\ldots,F_{k})\) has no common zeros, and that \(\mathfrak{a}\) contains a polynomial \(F\) regular in \(x_{1}\). We eliminate \(x_{1}\) as above (with Kronecker's or Hentzelt's method) and find polynomials
\(F_{1}^{(1)},\dots,F_{k_{1}}^{(1)}\), again without common zeros. By the induction hypothesis \(1\in(F_{1}^{(1)},\dots,F_{k_{1}}^{(1)})\subset(F_{1},\dots,F_{k})\).
## 5. Hilbert's proof
Hilbert's original proof is also based on elimination. The theorem is formulated with the application to invariants in mind. It looks different from the theorem stated by Rabinowitsch.
**Theorem 5.1**.: _Given \(m\) homogeneous polynomials \(f_{1},\dots,f_{m}\) in \(n\) variables \(x_{1},\dots,x_{n}\), let \(F,F^{\prime},F^{\prime\prime},\dots\) be homogeneous polynomials in the same variables with the property that they vanish for all those values of these variables for which the given \(m\) polynomials \(f_{1},\dots,f_{m}\) all are equal to zero: then it is always possible to determine an integer \(r\) such that any product \(\Pi^{(r)}\) of \(r\) arbitrary polynomials of the sequence \(F,F^{\prime},F^{\prime\prime},\dots\) can be expressed in the form_
\[\Pi^{(r)}=a_{1}f_{1}+a_{2}f_{2}+\dots+a_{m}f_{m}\;,\]
_where \(a_{1},a_{2},\dots,a_{m}\) are suitably chosen polynomials in the variables \(x_{1},\dots,x_{n}\)._
_Remark 5.2_.:
1. Hilbert formulates the projective Nullstellensatz. As the polynomials are homogeneous, their zeros are taken in projective space. The inhomogeneous version follows by making all polynomials in the sequence homogeneous with a new variable \(x_{0}\), and applying the homogeneous theorem to a set of generators of the homogenisation of the ideal \((f_{1},\dots,f_{m})\). Putting then \(x_{0}=1\) leads to the sought relation.
2. The Theorem in the above form implies in particular that for any polynomial among the \(F,F^{\prime},F^{\prime\prime},\dots\) the \(r\)-th power lies in the ideal \((f_{1},\dots,f_{m})\). Hilbert remarks that this fact was stated and proved for inhomogeneous polynomials of two variables by Netto [14]. The special case that the \(r\)-th power lies in the ideal implies the general case. Firstly, by Hilbert's basis theorem (Theorem I of [13]), the polynomials in the sequence are expressible in finitely many of them, say \(F^{(1)},\dots,F^{(k)}\). Any product of \(r\) polynomials \(F,F^{\prime},F^{\prime\prime},\dots\) becomes a sum of products of \(r\) polynomials \(F^{(1)},\dots,F^{(k)}\) with polynomial coefficients. If \((F^{(i)})^{r_{i}}\in(f_{1},\dots,f_{m})\), then put \(r=(r_{1}-1)+(r_{2}-1)+\dots+(r_{k}-1)+1\). Every product \((F^{(1)})^{l_{1}}\dots(F^{(k)})^{l_{k}}\) with \(\sum l_{i}=r\) contains at least one factor \((F^{(i)})^{r_{i}}\); otherwise \(\sum l_{i}\leq\sum(r_{i}-1)=r-1\). [12, §75], [12, §130].
3. The statement that \(r\) is independent of \(f\), but only depends on the \(f_{i}\), is nowadays normally not included. It follows from the fact that the ideal of polynomials vanishing at the zero set of the \(f_{i}\) is finitely generated.
For the proof of the theorem we first reduce to the case that there are only finitely many \(F^{(i)}\). By the basis theorem every polynomial in the
sequence is a linear combination of say \(F^{(1)},\ldots,F^{(k)}\), and a product of \(r\) polynomials is a sum of products of \(r\) polynomials \(F^{(1)},\ldots,F^{(k)}\). Hilbert writes this reduction at the end of his proof, and starts by assuming that there are only finitely many polynomials in the sequence.
The proof then splits in two parts. In the first it is assumed that the polynomials \(f_{1},\ldots,f_{m}\) have only finitely many common zeros. The main tool is elimination using the resultant of two binary forms (see Appendix A). Substituting in the \(f_{i}\) the expressions \(x_{1}\xi_{1},\ldots,x_{n-1}\xi_{1},\xi_{2}\) for the variables \(x_{1},\ldots,x_{n}\) makes them binary forms in the variables \(\xi_{1}\), \(\xi_{2}\), of degrees \(\nu_{1},\ldots,\nu_{m}\). Let \(\nu=\max_{j}\{\nu_{j}\}\).
To eliminate \(\xi_{1},\xi_{2}\) Hilbert uses Kronecker's device [10] and forms the expressions
\[F_{1} =u_{1}f_{1}+\cdots+u_{m}f_{m}\] \[F_{2} =v_{1}f_{1}+\cdots+v_{m}f_{m}\]
where the \(u_{i}\) and \(v_{i}\) are binary forms in \(\xi_{1}\), \(\xi_{2}\) of degree \(\nu-\nu_{i}\) with indetermined coefficients, making \(F_{1}\) and \(F_{2}\) homogeneous of degree \(\nu\). The resultant \(R(F_{1},F_{2})\) is a polynomial in the indeterminates occurring in the \(u_{i}\) and \(v_{i}\), whose coefficients are polynomials \(f^{\prime}_{1},\ldots,f^{\prime}_{m^{\prime}}\) depending only on the variables \(x_{1},\ldots,x_{n-1}\), and by putting \(\xi_{1}=1\), \(\xi_{2}=x_{n}\) one sees (from Proposition A.3) that the \(f^{\prime}_{i}\) lie in the ideal \((f_{1},\ldots,f_{m})\).
Hilbert does not consider the possibility that all \(f^{\prime}_{i}\) are identically zero. This happens if one of the common zeros is the point \((0:\cdots:0:1)\), for then \((\xi_{1}:\xi_{2})=(0:1)\) is a common zero of \(F_{1}\) and \(F_{2}\). The standard trick is to apply a general linear transformation. We may therefore assume that no common zero of the transformed system lies in a coordinate hyperplane, simplifying somewhat Hilbert's argument.
If the polynomials \(f_{i}\) have the common zeros \((\alpha_{1}:\ldots:\alpha_{n-1}:\alpha_{n})\), \((\beta_{1}:\ldots:\beta_{n-1}:\beta_{n})\),..., \((\kappa_{1}:\ldots:\kappa_{n-1}:\kappa_{n})\), then the \(f^{\prime}_{i}\) have only the common zeros \((\alpha_{1}:\ldots:\alpha_{n-1})\),..., \((\kappa_{1}:\ldots:\kappa_{n-1})\).
In the same way the variable \(x_{n-1}\) can be eliminated from the \(f^{\prime}_{i}\), leading to polynomials \(f^{\prime\prime}_{1},\ldots,f^{\prime\prime}_{m^{\prime\prime}}\), and so on until a system of binary forms \(f^{(n-2)}_{1},\ldots,f^{(n-2)}_{m^{(n-2)}}\) in the variables \(x_{1}\), \(x_{2}\) is reached.
Hilbert uses this procedure to prove the result by induction on the number of common zeros. The base of the induction is the case that the \(f_{i}\) have no common zeros at all. Then the forms \(f^{(n-2)}_{1},\ldots,f^{(n-2)}_{m^{(n-2)}}\) have no common zeros. This implies that every binary form of sufficiently high degree lies in the ideal \((f^{(n-2)}_{1},\ldots,f^{(n-2)}_{m^{(n-2)}})\) and therefore in the ideal \((f_{1},\ldots,f_{m})\); in particular \(x^{r_{1}}_{1}\) and \(x^{r_{2}}_{2}\) lie in the ideal for some \(r_{1}\) and \(r_{2}\). In the same way it follows that \(x^{r_{3}}_{3}\),..., \(x^{r_{n}}_{n}\) lie in the ideal for sufficiently large \(r_{3}\),..., \(r_{n}\). Therefore every homogeneous polynomial in \(x_{1},\ldots,x_{n}\) of degree at least \(\sum(r_{i}-1)+1\) lies in the ideal, proving the base case.
This result can be called the weak projective Nullstellensatz, and we formulate it separately.
**Proposition 5.3**.: _A homogeneous ideal \(\mathfrak{a}\) has no zeros if and only if there exists an integer \(r\) such that every form of degree at least \(r\) lies in \(\mathfrak{a}\)._
We continue with the proof of the theorem. The induction step is that the statement holds if the polynomials have a given number of common zeros, say \((\beta_{1}:\ldots:\beta_{n-1}:\beta_{n})\),..., \((\kappa_{1}:\ldots:\kappa_{n-1}:\kappa_{n})\). Suppose that there is an additional common zero \((\alpha_{1}:\ldots:\alpha_{n-1}:\alpha_{n})\). Then every \(F^{(i)}\) can be written in the form
\[(\alpha_{2}x_{1}-\alpha_{1}x_{2})F^{(i)}_{12}+(\alpha_{3}x_{1}-\alpha_{1}x_{3} )F^{(i)}_{13}+\cdots+(\alpha_{n}x_{n-1}-\alpha_{n-1}x_{n})F^{(i)}_{n-1,n}.\]
By our assumption \(\alpha_{1}\) and \(\alpha_{2}\) are both non-zero, so elimination as above leads to forms \(f^{(n-2)}_{1},\ldots,f^{(n-2)}_{m^{(n-2)}}\) in the variables \(x_{1}\), \(x_{2}\) lying in the ideal \((f_{1},\ldots,f_{m})\), which only have the common zeros \((\alpha_{1}:\alpha_{2})\),..., \((\kappa_{1}:\kappa_{2})\). Choose one of these forms and write it in the form \((\alpha_{2}x_{1}-\alpha_{1}x_{2})^{r_{12}}\varphi_{12}\) with \(\varphi_{12}\) a binary form not vanishing for \(x_{1}=\alpha_{1}\), \(x_{2}=\alpha_{2}\). In the same way one finds \(r_{ij}\) and \(\varphi_{ij}\) for the other \(1\leq i<j\leq n\).
Put \(r^{\prime}=r_{12}+r_{13}+\cdots+r_{n-1,n}\) and \(\Phi=\varphi_{12}\,\varphi_{13}\cdots\varphi_{n-1,n}\). Then
\[\Phi\,\Pi^{(r^{\prime})}\in(f_{1},\ldots,f_{m})\;,\]
where \(\Phi\) is a polynomial that does not vanish in the point \((\alpha_{1}:\ldots:\alpha_{n})\) and \(\Pi^{(r^{\prime})}\) is an \(r^{\prime}\)-fold product of \(F^{(i)}\). The polynomials \(\Phi,f_{1},\ldots,f_{m}\) have only the common zeros \((\beta_{1}:\ldots:\beta_{n})\),..., \((\kappa_{1}:\ldots:\kappa_{n})\). Therefore there exists a number \(r^{\prime\prime}\) such that \(\Pi^{(r^{\prime\prime})}\in(\Phi,f_{1},\ldots,f_{m})\). Then \(\Pi^{(r)}\in(f_{1},\ldots,f_{m})\) for \(r=r^{\prime}+r^{\prime\prime}\), which proves the induction step.
In the second step the theorem is proved in general by induction on the number of variables. The induction hypothesis is that the result holds for \(n-1\) variables and that the number \(r\) can be chosen below a bound only depending on the degrees and the (finite) number of the forms \(f_{1},\ldots,f_{m},F,F^{\prime},\ldots\), but not on their coefficients. The base of the induction is \(n=2\), where the result is true by the first part of the proof, as binary forms can only have a finite number of zeros.
For the induction step put \(x_{1}=tx_{2}\). The polynomials \(f_{1},\ldots,f_{m}\), \(F,F^{\prime},\ldots\) become polynomials \(g_{1},\ldots,g_{m},G,G^{\prime},\ldots\) in the \(n-1\) variables \(x_{2},\ldots,x_{n}\) with coefficients polynomials in the parameter \(t\). If \(t\) takes a specific value then every \(G^{(i)}\) vanishes whenever all \(g_{i}\) vanish (as polynomials in \(x_{2},\ldots,x_{n}\)). By the induction hypothesis there is a number \(r_{12}\) such that every product \(\Pi^{(r_{12})}\) of polynomials \(G^{(i)}\) for every special value of \(t\) has a representation
\[\Pi^{(r_{12})}=b_{1}g_{1}+\cdots+b_{m}g_{m}\]
with the \(b_{i}\) polynomials in \(x_{2},\ldots,x_{n}\). Considering the coefficients of the \(b_{i}\) as indeterminates \(u_{j}\) and taking in this equation coefficients of the monomials in \(x_{2},\ldots,x_{n}\) yields a system of linear inhomogeneous equations for the \(u_{j}\). The coefficients of these linear equations are polynomials in \(t\), and for every value of \(t\) solutions exist.
At this point Hilbert uses an "easily proved" lemma: if a system of linear equations
\[c_{11}u_{1}+\cdots+c_{1p}u_{p} =c_{1},\] \[\vdots\] \[c_{q1}u_{1}+\cdots+c_{qp}u_{p} =c_{q}\]
with \(c_{ij}\), \(c_{k}\in K[t]\), has solutions for every value of \(t\) (\(K\) being an infinite field) then there exists a solution with \(u_{i}\in K(t)\).
Indeed, as Popov remarks in his comments [10], a solution exists if and only if the rank of the coefficient matrix is equal to the rank of the augmented matrix. As \(K\) is infinite, one can find a \(t_{0}\in K\) such that the rank of these matrices over \(K(t)\) is the same as the rank over \(K\) with \(t_{0}\) substituted for \(t\). Applying this lemma to the equations for the coefficients of the \(b_{i}\) and substituting \(t=\frac{x_{1}}{x_{2}}\) gives after clearing denominators that
\[\psi_{12}\Pi^{(r_{12})}\in(f_{1},\ldots,f_{m})\]
with \(\psi_{12}\) a binary form in \((x_{1},x_{2})\), and \(\Pi^{(r_{12})}\) the product of \(r_{12}\) polynomials \(F^{(i)}\) corresponding to the chosen \(G^{(i)}\). In the same way one finds \(r_{ij}\) and a binary form \(\psi_{ij}\) in \((x_{i},x_{j})\) with \(\psi_{ij}\Pi^{(r_{ij})}\in(f_{1},\ldots,f_{m})\). Now put \(r^{\prime}=\max\{r_{ij}\}\) and choose \(r^{\prime}\) polynomials \(F^{(i)}\). The corresponding binary forms \(\psi_{12},\ldots,\psi_{n-1,n}\) have only finitely many common zeros, so also the polynomials \(\psi_{12}\ldots\psi_{n-1,n},f_{1},\ldots,f_{m}\). By the first part of the proof there exists a number \(r^{\prime\prime}\) such that
\[\Pi^{(r^{\prime\prime})}\in(\psi_{12}\ldots\psi_{n-1,n},f_{1},\ldots,f_{m})\;.\]
As \(\psi_{ij}\Pi^{(r_{ij})}\in(f_{1},\ldots,f_{m})\) one has that for \(r=r^{\prime}+r^{\prime\prime}\) that
\[\Pi^{(r)}\in(f_{1},\ldots,f_{m})\;.\]
This concludes the induction step, and with that the proof of the Nullstellensatz.
## 6. Proofs using primary decomposition
Primary ideals were introduced by Lasker [11] in the setting of polynomial rings. He used primary decomposition to give a new proof of the Nullstellensatz. Macaulay [11] follows this strategy, but with different proofs.
The modern definition of primary ideals is due to Emmy Noether [14], and applies to all Noetherian rings.
**Definition 6.1**.: An ideal \(\mathfrak{q}\) in a Noetherian ring \(R\) is primary if whenever \(ab\in\mathfrak{q}\) but \(a\notin\mathfrak{q}\) it follows that \(b^{k}\in\mathfrak{q}\) for some \(k>0\).
The radical \(\sqrt{\mathfrak{q}}\) of \(\mathfrak{q}\), that is \(\{a\in R\mid a^{k}\in\mathfrak{q}\text{ for some }k>0\}\), is a prime ideal \(\mathfrak{p}\) and \(\mathfrak{q}\) is said to be \(\mathfrak{p}\)-primary.
By the Lasker-Noether Theorem (which Lasker proposed to call the Noether-Dedekind Theorem) every ideal \(\mathfrak{a}\) has an irredundant primary decomposition into primary ideals \(\mathfrak{a}=\mathfrak{q}_{1}\cap\cdots\cap\mathfrak{q}_{n}\) (for a proof see van der Waerden's Algebra II [22, Ch. 15]). The ideals \(\mathfrak{p}_{i}=\sqrt{\mathfrak{q}_{i}}\) are the associated primes of \(\mathfrak{a}\).
The decomposition is not unique. The simplest example is the ideal \((x^{2},xy)\subset K[x,y]\), which can be written as \((x)\cap(x^{2},xy,y^{2})\) but also as \((x)\cap(x^{2},y)\), and even \((x)\cap(x^{2},y+\lambda x)\). The number of components is always the same, and also the associated prime ideals; in the example the ideals \((x)\) and \((x,y)\). According to [19, footnote 10] Noether learned this example from K. Hentzelt. This means that Emmy Noether occupied herself with primary ideals already in 1913/14. Macaulay [18, 19] has more complicated examples. Maybe Noether knew about the manuscript for [18] through her father; the paper has footnotes on p. 71 and p. 86, mentioning Max Noether, saying among other things: "I am also indebted to Professor Noether for kindly suggesting other alterations which I have carried out". This sounds like a reaction to a (non-anonymous) referee report.
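The equality of the two decompositions above is easy to verify by computing the intersection \((x)\cap(x^{2},y)\). The following sketch (Python with SymPy) uses the standard auxiliary-variable trick \(I\cap J=\bigl(tI+(1-t)J\bigr)\cap K[x,y]\), eliminating \(t\) with a lexicographic Groebner basis; this modern computation is our own illustration, not part of the historical sources.

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

# Intersect I = (x) and J = (x**2, y) via t*I + (1 - t)*J and elimination of t.
G = groebner([t*x, (1 - t)*x**2, (1 - t)*y], t, x, y, order='lex')
intersection = [g for g in G.exprs if not g.has(t)]
print(intersection)        # the t-free part: generators x**2 and x*y

# This agrees with the original ideal (x**2, x*y).
print(groebner([x**2, x*y], x, y, order='lex').exprs)
```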
By the definition of a primary ideal and the fact that the associated prime ideal is finitely generated one immediately obtains
**Proposition 6.2**.: _If \(\mathfrak{q}\) is a primary ideal and \(\mathfrak{p}\) the associated prime ideal, then some finite power of \(\mathfrak{p}\) is contained in \(\mathfrak{q}\)._
Noether [19] applies her results to polynomial ideals; in this paper she still only considers complex coefficients. The connection with elimination theory and ideal theory, say as described by Macaulay in his Tract [18], is given by the following special case of the Nullstellensatz.
**Proposition 6.3**.: _A prime ideal \(\mathfrak{p}\) consists of all polynomials vanishing on its zero set._
Conversely, the Nullstellensatz follows from this Proposition. Let \(\mathfrak{q}_{i}\) be a primary ideal in the decomposition of \(\mathfrak{a}\) with associated prime ideal \(\mathfrak{p}_{i}\). If \(\mathfrak{b}\) is an ideal vanishing on the zero set of \(\mathfrak{a}\), then it vanishes on the zero set of \(\mathfrak{p}_{i}\) and therefore \(\mathfrak{b}\subset\mathfrak{p}_{i}\) and \(\mathfrak{b}^{k_{i}}\subset\mathfrak{p}_{i}^{k_{i}}\subset\mathfrak{q}_{i}\). Let \(k\) be the maximum of the \(k_{i}\), then \(\mathfrak{b}^{k}\subset\bigcap\mathfrak{q}_{i}=\mathfrak{a}\).
Lasker stated Proposition 6.3 in [17], adding that it follows, say, from the Nullstellensatz, but in the addendum [17] he explained how to prove it directly. It seems that Macaulay [18] did not notice this, as he criticises Lasker's proof of Proposition 6.2, saying that Lasker first assumes the result and then proves it.
Macaulay and Lasker have a different definition of primary ideals, which makes the proof of Proposition 6.2 non-trivial. Macaulay [18] defines a primary ideal by the property that no product of two ideals is contained in it without one of them contained in it or both containing
its zero set. Hence if one does not contain the zero set the other is contained in the ideal. By the Nullstellensatz Macaulay's definition is equivalent to Noether's.
Lasker's original definition was stated for homogeneous ideals in \(S=K[x_{1},\dots,x_{n}]\), making the statements about primary decomposition more complicated. A primary ideal \(\mathfrak{q}\) and the associated prime ideal \(\mathfrak{p}\) occur both in his definition: whenever \(ab\in\mathfrak{q}\) and \(a\notin\mathfrak{p}\) it follows that \(b\in\mathfrak{q}\). To make the zero set \(C\) of \(\mathfrak{p}\) an irreducible component of the zero set of \(\mathfrak{q}\) it is required that the dimension of \(\mathfrak{q}\) is at most that of \(\mathfrak{p}\). According to Lasker an algebraic set \(C\) has dimension \(m\) if it has a finite number of points in common with \(m\) general linear forms, and if the forms in an ideal \(\mathfrak{a}\) vanish on sets of dimension \(m\), but not on sets of higher dimension, then \(\mathfrak{a}\) has dimension \(m\). Actually Lasker uses the quantity \(m+1\), which he calls "Mannigfaltigkeit" and Macaulay [10] translates with manifoldness or dimensionality. The value \(0\) is allowed for dimensionality, and it occurs in [11] in an essential way, although it is not defined what it means.
Lasker's approach to primary decomposition is as follows. Let \(C_{1}\),..., \(C_{j}\) be the irreducible components of the zero set of \(\mathfrak{a}\) of highest dimension and let \(\mathfrak{p}_{i}\) be the prime ideal corresponding to \(C_{i}\). Define \(\mathfrak{a}_{i}\) as the set of all \(f\in S\) such that \(C_{i}\) is not a component of the zero set of the ideal quotient \(\mathfrak{a}:(f)=\{g\in S\mid gf\in\mathfrak{a}\}\), so there exists a \(\phi\) not vanishing on \(C_{i}\) with \(f\phi\in\mathfrak{a}\). Then \(\mathfrak{a}_{i}\) is an ideal, whose zero set only consists of \(C_{i}\). Furthermore \(\mathfrak{a}_{i}\) is primary, for if \(ab\in\mathfrak{a}_{i}\) with \(a\notin\mathfrak{p}_{i}\), then \(ab\phi\in\mathfrak{a}\) for a \(\phi\) not vanishing on \(C_{i}\), and as \(a\phi\notin\mathfrak{p}_{i}\), we have \(b\in\mathfrak{a}_{i}\) by the definition of \(\mathfrak{a}_{i}\).
The set \(C_{i}\) is not a component of the zero set of the ideal quotient \(\mathfrak{a}^{\prime}_{i}=\mathfrak{a}:\mathfrak{a}_{i}\). Let \(\psi\in\mathfrak{a}^{\prime}_{1}+\dots+\mathfrak{a}^{\prime}_{j}\) be a form which does not vanish on any of the \(C_{i}\). Then we claim that \(\mathfrak{a}=\mathfrak{a}_{1}\cap\dots\cap\mathfrak{a}_{j}\cap(\mathfrak{a},\psi)\). If \(f\) is an element of the right hand side, then \(f\in(\mathfrak{a},\psi)\) so \(f-g\psi\in\mathfrak{a}\subset\mathfrak{a}_{i}\) for some form \(g\), and as \(f\in\mathfrak{a}_{i}\), we get \(g\psi\in\mathfrak{a}_{i}\); because \(\mathfrak{a}_{i}\) is primary, and \(\psi\) does not vanish on \(C_{i}\), we get \(g\in\mathfrak{a}_{i}\) for all \(i\). As \(\psi\in\mathfrak{a}^{\prime}_{1}+\dots+\mathfrak{a}^{\prime}_{j}\) we find that \(g\psi\in\mathfrak{a}\) and therefore \(f\in\mathfrak{a}\). The dimension of \((\mathfrak{a},\psi)\) is lower, and we can repeat the process.
In this proof essential use is made of Proposition 6.3. Lasker proves it in [11], Macaulay in [10, Section 31] and formulates it as follows: there is only one prime ideal with a given (irreducible) zero set, viz. the ideal consisting of all polynomials vanishing on the zero set. In [10, Section 32] he shows Proposition 6.2. With primary decomposition the Hilbert-Netto Theorem (Macaulay's name for the Nullstellensatz) then follows.
Macaulay proves Propositions 6.3 and 6.2 with classical methods of elimination theory, which are unfamiliar to the modern reader. The main ingredient is the so-called \(u\)-resolvent. The goal is to describe the irreducible components of the zero set of an ideal. Consider
\((F_{1},\ldots,F_{k})\), which is as always supposed to be prepared by a general linear coordinate change. Macaulay briefly writes "The solutions of \(F_{1}=F_{2}=\cdots=F_{k}=0\) are obtained in the most useful way by introducing a general unknown \(x\) standing for \(u_{1}x_{1}+u_{2}x_{2}+\cdots+u_{n}x_{n}\), where \(u_{1},u_{2},\ldots,u_{n}\) are undetermined coefficients". This is known as the Liouville substitution [10], and its use is explained in detail in Netto's book [14]. One substitutes
\[x_{1}=\frac{x-u_{2}x_{2}-\cdots-u_{n}x_{n}}{u_{1}}\]
in the equations and multiplies with suitable powers of \(u_{1}\) to make the new equations \(f_{1}=f_{2}=\cdots=f_{k}=0\) in \(x,x_{2},\ldots,x_{n}\) polynomial. The solutions of the first system determine those of the second and vice versa. The complete resolvent \(D_{u}D_{u}^{(1)}\cdots D_{u}^{(n-1)}(=F_{u})\) of \((f_{1},\ldots,f_{k})\) obtained by eliminating \(x_{2},x_{3},\ldots,x_{n}\) in this order is called the complete \(u\)-resolvent of \((F_{1},\ldots,F_{k})\).
The \(u\)-resolvent \(F_{u}\) is a polynomial in \(x,x_{2},\ldots,x_{n},u_{1},\ldots,u_{n}\). As a polynomial in \(x\), when the \(x_{i}\) and \(u_{i}\) have specific values, it splits in linear factors. Such a linear factor of \(D_{u}^{(r-1)}\) has the form
\[x-u_{1}\xi_{1}-\cdots-u_{r}\xi_{r}-u_{r+1}x_{r+1}-\cdots-u_{n}x_{n}\]
where \(\xi_{1},\ldots,\xi_{r},x_{r+1},\ldots,x_{n}\) is a solution of \(F_{1}=F_{2}=\cdots=F_{k}=0\). Macaulay calls a linear factor a true linear factor if \(\xi_{1},\ldots,\xi_{r}\) are independent of \(u_{1},\ldots,u_{n}\), that is, if it is linear in \(x,u_{1},\ldots,u_{n}\).
**Example 6.4**.: Consider the ideal \((x_{1}^{2}+x_{2}^{2}-2,x_{1}^{2}-x_{2}^{2})\). Substituting \(x=u_{1}x_{1}+u_{2}x_{2}\) and eliminating \(x_{2}\) using the Sylvester determinant gives \(F_{u}=4u_{1}^{2}(x-u_{2}-u_{1})(x-u_{2}+u_{1})(x+u_{2}-u_{1})(x+u_{2}+u_{1})\). From the factor \(x-u_{2}-u_{1}\) one finds the solution \((x_{1},x_{2})=(1,1)\) by substituting the values \((1,0)\) and \((0,1)\) for \((u_{1},u_{2})\). Using the same substitution for the principal ideal \((x_{1}^{2}+x_{2}^{2}-2)\) gives \(F_{u}=(x-u_{1}\sqrt{2-x_{2}^{2}}-u_{2}x_{2})(x+u_{1}\sqrt{2-x_{2}^{2}}-u_{2}x _{2})\), which is indeed of the form \((x-u_{1}\xi_{1}-u_{2}x_{2})(x-u_{1}\xi_{2}-u_{2}x_{2})\).
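The first computation in the example can be repeated by machine. In the following sketch (Python with SymPy, our own verification) the Liouville substitution is made explicit, and the resultant reproduces, up to an extra power of \(u_{1}\) coming from clearing denominators, the product of true linear factors given above.

```python
from sympy import symbols, resultant, factor, expand

x, x1, x2, u1, u2 = symbols('x x1 x2 u1 u2')

f1 = x1**2 + x2**2 - 2
f2 = x1**2 - x2**2

# Liouville substitution x = u1*x1 + u2*x2, i.e. x1 = (x - u2*x2)/u1,
# followed by multiplication with u1**2 to clear denominators.
g1 = expand(f1.subs(x1, (x - u2*x2)/u1) * u1**2)
g2 = expand(f2.subs(x1, (x - u2*x2)/u1) * u1**2)

# Eliminate x2: up to a power of u1 this is the u-resolvent F_u.
print(factor(resultant(g1, g2, x2)))
# 4*u1**4 times the four true linear factors (x ± u1 ± u2), up to ordering.
```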
Macaulay goes on to prove that the solution supplied by a factor which is not a true linear one is an embedded solution. In particular all the linear factors of the first complete partial \(u\)-resolvent are true linear factors. According to Macaulay Kronecker states without proof that all linear factors of \(F_{u}\) are true linear factors, while Konig's proof contains an error, and Macaulay doubts whether the statement is true. In fact, Kronecker's claim is true for the resultant form in the elimination theory of Hentzelt-Noether [13, Satz XIII].
An irreducible factor \(R_{u}\) of rank \(r\) of \(F_{u}\) having a true linear factor leads to a parametrisation of the corresponding irreducible component of the solution set. One can take \(x_{r+1},\ldots,x_{n}\) arbitrary and for each set of values there are solutions \(x_{1i},\ldots,x_{ri}\), \(i=1,\ldots,d\), with \(d\) the
degree of the component. Therefore we can (formally) write
\[R_{u}=A\,\Pi_{i=1}^{d}(x-u_{1}x_{1i}-\cdots-u_{r}x_{ri}-u_{r+1}x_{r+1}-\cdots-u_{n }x_{n})\]
so
\[(R_{u})_{x=u_{1}x_{1}+\cdots+u_{n}x_{n}}=A\,\Pi_{i=1}^{d}(u_{1}(x_{1}-x_{1i})+ \cdots+u_{r}(x_{r}-x_{ri}))\]
The last expression is independent of \(u_{r+1},\ldots,u_{n}\) and vanishes identically at all points of the solution set and at no other points, that is, irrespective of \(u_{1},\ldots,u_{r}\). The coefficients of the monomials in the \(u_{i}\) in \((R_{u})_{x=u_{1}x_{1}+\cdots+u_{n}x_{n}}\) are polynomials in the \(x_{i}\) which all vanish at all points of the solution set and do not all vanish at other points. This gives equations for the solution set. We single out some of them. The coefficient of \(u_{r}^{d}\) is \(\phi(x_{r},x_{r+1},\ldots,x_{n})=A\,\Pi(x_{r}-x_{ri})\). The coefficient of \(u_{1}u_{r}^{d-1}\) is \(\phi\sum\frac{x_{1}-x_{1i}}{x_{r}-x_{ri}}\), which we write as \(x_{1}\phi^{\prime}-\phi_{1}\), where \(\phi^{\prime}\) is the derivative of \(\phi\) w.r.t. \(x_{r}\) and \(\phi_{1}=\phi\sum\frac{x_{1i}}{x_{r}-x_{ri}}\). Similarly we have \(x_{2}\phi^{\prime}-\phi_{2}\),..., \(x_{r-1}\phi^{\prime}-\phi_{r-1}\). Outside \(\phi^{\prime}=0\) we have therefore the equations
\[\phi=0,\quad x_{i}=\frac{\phi_{i}}{\phi^{\prime}},\;i=1,\ldots,r-1\;.\]
With these preparations we can give the proof of Proposition 6.3.
#### Macaulay's proof of Proposition 6.3
The zero set is irreducible, otherwise the complete \(u\)-resolvent would contain at least two factors corresponding to different irreducible components, contradicting that the ideal is prime.
Let \(\mathfrak{p}=(F_{1},\ldots,F_{k})\) be a prime ideal. It will be sufficient to prove that \(F\in\mathfrak{p}\) for every polynomial \(F\) that vanishes on the zero set of \(\mathfrak{p}\). The first complete partial \(u\)-resolvent of \(\mathfrak{p}\) will be a power \(R_{u}^{m}\) of an irreducible polynomial \(R_{u}\) in \(x,x_{r+1},\ldots,x_{n}\). The complete \(u\)-resolvent lies in the prime ideal \((f_{1},\ldots,f_{k})\), and for dimension reasons the other factors do not vanish on the zero set of \((f_{1},\ldots,f_{k})\). Hence \(R_{u}^{m}\) and therefore \(R_{u}\) itself belongs to \((f_{1},\ldots,f_{k})\). This gives that \((R_{u})_{x=u_{1}x_{1}+\cdots+u_{n}x_{n}}\in(F_{1},\ldots,F_{k})=\mathfrak{p}\), and the same holds for the polynomial coefficients of the monomials in the \(u_{i}\). In particular \(\phi\in\mathfrak{p}\) and \(\psi_{i}:=x_{i}\phi^{\prime}-\phi_{i}\in\mathfrak{p}\), \(i=1,\ldots,r-1\).
Let now \(F\) vanish on the zero set of \(\mathfrak{p}\) and substitute \(x_{i}=\phi_{i}/\phi^{\prime}\), \(i=1,\ldots,r-1\); then \(F\) becomes a rational function of \(x_{r},x_{r+1},\ldots,x_{n}\) with denominator \(\phi^{\prime l}\), where \(l\) is the degree of \(F\). This rational function vanishes for all points of the zero set of \(\mathfrak{p}\) where \(\phi^{\prime}\) does not vanish and its numerator is therefore divisible by \(\phi\). We conclude that
\[\phi^{\prime l}F(\tfrac{\phi_{1}}{\phi^{\prime}},\ldots,\tfrac{\phi_{r-1}}{ \phi^{\prime}},x_{r},\ldots,x_{n})=G\phi\]
for some polynomial \(G\) in \(x_{r},\ldots,x_{n}\). Therefore \(\phi^{\prime l}F(x_{1},\ldots,x_{n})\in(\psi_{1},\ldots,\psi_{r-1},\phi)\subset \mathfrak{p}\) and hence \(F\in\mathfrak{p}\).
Macaulay's proof of Proposition 6.2 follows the same steps, but now we can only conclude that \(R_{u}^{m}\in(f_{1},\ldots,f_{k})\). Taking suitable coefficients of \((R_{u}^{m})_{x=u_{1}x_{1}+\cdots+u_{n}x_{n}}\) we find that \(\phi^{m}\in\mathfrak{q}=(F_{1},\ldots,F_{k})\) and \(\psi_{i}^{m}-G\phi\in\mathfrak{q}\) for some polynomial \(G\), so \(\psi_{i}^{m^{2}}\in\mathfrak{q}\). If \(F\in\mathfrak{p}\) with \(\mathfrak{p}\) the associated prime ideal then we have just seen that \(\phi^{\prime l}F\in(\psi_{1},\ldots,\psi_{r-1},\phi)\). Therefore \((\phi^{\prime l}F)^{rm^{2}}\in(\psi_{1}^{m^{2}},\ldots,\psi_{r-1}^{m^{2}},\phi ^{m^{2}})\subset\mathfrak{q}\) and since \(\mathfrak{q}\) is primary and no power of \(\phi^{\prime}\) is contained in \(\mathfrak{q}\) we conclude that \(F^{rm^{2}}\in\mathfrak{q}\).
Lasker gives in [10] a totally different proof of Proposition 6.3, based on properties of the Hilbert polynomial [12]. Let \(\mathfrak{a}\) be a homogeneous ideal in \(S=K[x_{1},\ldots,x_{n}]\). Define the Hilbert function \(H_{\mathfrak{a}}(\nu)=\dim(S/\mathfrak{a})_{\nu}\), where \((S/\mathfrak{a})_{\nu}\) is the degree \(\nu\) part of \(S/\mathfrak{a}\). Lasker proves from scratch the now familiar properties of this function [10, Kap. II]. For large \(\nu\) the function \(H_{\mathfrak{a}}(\nu)\) is a polynomial in \(\nu\). Lasker shows that \(H_{\mathfrak{a}}(\nu)=0\) for sufficiently large \(\nu\) if the ideal \(\mathfrak{a}\) has no zeros. This follows from the fact that under this assumption all monomials of large degree belong to \(\mathfrak{a}\), that is, the weak projective Nullstellensatz 5.3. Lasker proves it using his version of the resultant of \(n-1\) forms in \(n\) variables. Furthermore, if \(u\) is a form of degree \(d\), which is not a zero divisor in \(S/\mathfrak{a}\), then \(H_{(\mathfrak{a},u)}(\nu)=\Delta_{d}H_{\mathfrak{a}}(\nu)\), where \(\Delta_{d}\) is defined by \(\Delta_{d}f(\nu)=f(\nu)-f(\nu-d)\) for a function \(f\).
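Lasker's degree-by-degree point of view is easy to imitate by machine: in each degree \(\nu\) the space \(\mathfrak{a}_{\nu}\) is spanned by the products of the generators with monomials, and \(H_{\mathfrak{a}}(\nu)\) is obtained by linear algebra. The following sketch (Python with SymPy; the conic \(xz-y^{2}\) is our own choice of example) computes the first values and exhibits the polynomial behaviour \(H(\nu)=2\nu+1\).

```python
from itertools import combinations_with_replacement
from sympy import symbols, Poly, Matrix, Mul

x, y, z = symbols('x y z')
gens = (x, y, z)

def monomials(deg):
    """All monomials of total degree `deg` in the variables `gens`."""
    return [Mul(*c) for c in combinations_with_replacement(gens, deg)]

def hilbert_function(ideal_gens, nu):
    """dim_K (S/a)_nu for a homogeneous ideal a, by linear algebra in degree nu."""
    basis = monomials(nu)
    rows = []
    for f in ideal_gens:
        d = Poly(f, *gens).total_degree()
        if d > nu:
            continue
        for m in monomials(nu - d):        # the products m*f span the degree-nu part of a
            p = Poly(m * f, *gens)
            rows.append([p.coeff_monomial(b) for b in basis])
    rank = Matrix(rows).rank() if rows else 0
    return len(basis) - rank

a = [x*z - y**2]                                       # a conic in the projective plane
print([hilbert_function(a, nu) for nu in range(6)])    # [1, 3, 5, 7, 9, 11]
```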
In Lasker's version of Proposition 6.3 (the zero set \(C\) of a prime ideal is irreducible and every form vanishing on \(C\) belongs to \(\mathfrak{p}\)) irreducibility has to be proved. Suppose that the zero set of the prime ideal \(\mathfrak{p}\) consists only of points. If \(u\notin\mathfrak{p}\) is a linear form then \(H_{(\mathfrak{p},u)}=\Delta_{1}H_{\mathfrak{p}}\). If \(u\) does not vanish in any of the points, then \(H_{(\mathfrak{p},u)}=0=\Delta_{1}H_{\mathfrak{p}}\) and \(H_{\mathfrak{p}}\) is constant. If there existed a form \(u\notin\mathfrak{p}\) vanishing in one of the points, then \(H_{(\mathfrak{p},u)}\neq 0\), so such a form does not exist. We conclude that every linear form vanishing in one of the points vanishes in all, which is only possible if there is only one point, showing irreducibility. And every form vanishing in the point belongs to \(\mathfrak{p}\).
If \(\mathfrak{p}\) has dimension \(1\) and \(u\) does not contain any \(1\)-dimensional component of the zero set, then again \(H_{(\mathfrak{p},u)}=\Delta_{1}H_{\mathfrak{p}}\), so \(H_{\mathfrak{p}}\) is a linear function of \(\nu\). If there exists a form \(u\notin\mathfrak{p}\), vanishing on a \(1\)-dimensional component \(C\), then \(H_{(\mathfrak{p},u)}\) is independent of \(\nu\). The forms in \((\mathfrak{p},u)\) all vanish on \(C\) and therefore are all contained in the prime ideal \(\Pi\) of forms vanishing on \(C\). The Hilbert polynomial of \(\Pi\) is linear, so \(H_{(\mathfrak{p},u)}\) cannot be constant. This shows that every form vanishing on the zero set of \(\mathfrak{p}\) belongs to \(\mathfrak{p}\). Irreducibility also follows: suppose on the contrary that there exist forms \(a,b\) with \(ab\) vanishing on the zero set while neither \(a\) nor \(b\) vanishes in all points. Let \(a\) contain a \(1\)-dimensional component, but not all zeros. Then \(a\notin\mathfrak{p}\), so such a form cannot exist, and therefore \(a\) vanishes in all points.
In this way the Proposition can be shown by induction.
The proof of Proposition 6.2 uses (or rather proves) general properties of primary ideals, and is in this way closer to the modern approach than Macaulay's.
Lasker's proof of Proposition 6.2.: Let \(F_{1},\ldots,F_{h}\) be a basis of \(\mathfrak{p}\). Form \(F=p_{1}F_{1}+\cdots+p_{h}F_{h}\) with indeterminate coefficients, of suitable degree. The ideal quotient \(\mathfrak{q}^{\prime}=\mathfrak{q}:(F)=\{a\in S\mid aF\in\mathfrak{q}\}\) can be found despite the fact that \(F\) has indeterminate coefficients, as the condition \(aF\in\mathfrak{q}\) leads in each degree to only linear equations, which can be solved with indeterminate coefficients. Put \(\mathfrak{q}^{\prime\prime}=\mathfrak{q}^{\prime}:(F)\), \(\mathfrak{q}^{\prime\prime\prime}=\mathfrak{q}^{\prime\prime}:(F)\) and so on. In this sequence every ideal is contained in the next, so there is a number \(k\) with \(\mathfrak{q}^{(k)}=\mathfrak{q}^{(k+1)}\), by the Ascending Chain Condition (proved from the Basis Theorem by Lasker [14, p. 56]). Every ideal \(\mathfrak{q}^{(i)}\) is \(\mathfrak{p}\)-primary. Lasker proves this by working out the case of \(\mathfrak{q}^{\prime}\). Let \(a\notin\mathfrak{p}\) and \(ab\in\mathfrak{q}^{\prime}\), so \(Fab\in\mathfrak{q}\) and because \(\mathfrak{q}\) is primary, \(Fb\in\mathfrak{q}\), giving \(b\in\mathfrak{q}^{\prime}\). Also the dimension of \(\mathfrak{q}^{\prime}\) is at most that of \(\mathfrak{p}\). Therefore \(\mathfrak{q}^{\prime}\) is \(\mathfrak{p}\)-primary (according to Lasker's definition).
Moreover, according to Lasker's definition, if an ideal \(\mathfrak{q}\) is \(\mathfrak{p}\)-primary then the zero set of \(\mathfrak{q}\) contains the zero set of \(\mathfrak{p}\) or \(\mathfrak{q}\) is the whole ring: if \(a\in\mathfrak{q}\), but \(a\notin\mathfrak{p}\) and if \(f\) is an arbitrary form, then \(af\in\mathfrak{q}\) so \(f\in\mathfrak{q}\). Now the above constructed \(F\) is not a zero divisor on \(S/\mathfrak{q}^{(k)}\), so by the properties of the Hilbert polynomial the dimension of \((\mathfrak{q}^{(k)},F)\) should be less than that of \(\mathfrak{q}^{(k)}\). The conclusion is that \(\mathfrak{q}^{(k)}\) is the whole ring, so \(1\in\mathfrak{q}^{(k)}\).
As \(\mathfrak{q}^{(k)}=\mathfrak{q}^{(k-1)}:(F)\), we get \(F\in\mathfrak{q}^{(k-1)}\), and then \(F^{2}\in\mathfrak{q}^{(k-2)}\), until finally \(F^{k}\in\mathfrak{q}\). As the coefficients of the \(p_{i}\) are indeterminates we conclude that \(F_{1}^{k}\), \(F_{1}^{k-1}F_{2}\),..., \(F_{h}^{k}\) lie in \(\mathfrak{q}\). Therefore \(f^{k}\in\mathfrak{q}\) for any form \(f=q_{1}F_{1}+\cdots+q_{h}F_{h}\) in \(\mathfrak{p}\).
## 7. Modern algebra
In Moderne Algebra II [21] van der Waerden gives two proofs of the Nullstellensatz, the first one using Rabinowitsch' trick and proving the weak version by elimination theory. The second proof is based on [21] and therefore belongs to the proofs before Rabinowitsch. It proves Proposition 6.3, and using Noether's definition of primary ideals the Nullstellensatz follows as above. In later editions, in Algebra II [21], the elimination theory proof is removed, and the weak version is shown with the same type of ideas as in the second proof, but avoiding primary decomposition. This proof was first described in [21].
Whereas Noether in [13] still considered complex coefficients in the application to algebraic geometry, in later papers she always takes an arbitrary field as base field. In [21] this point is stressed by a footnote (Footnote 13), stating that the new definition holds for unusual spaces, like those where the fourth harmonic point always coincides with the third; this happens in characteristic two: if the first
three points on the line are normalised to be \(0,\infty\) and \(1\), then the fourth harmonic has coordinate \(-1\).
Let \(K\) be a field and \(R=K[x_{1},\dots,x_{n}]\) the polynomial ring in \(n\) variables over \(K\). Consider points in the affine space \(\mathbb{A}^{n}(L)\) with coordinates in an algebraic extension \(L\) of \(K\). Besides such points one has also to consider 'undetermined' points, where the coordinates are indeterminates or algebraic functions of parameters, that is elements in a transcendental extension \(\Omega\) of \(K\).
Let therefore \(\Omega=K(\xi_{1},\dots,\xi_{n})\) be a field extension. The polynomials \(f\in R\) for which \(f(\xi_{1},\dots,\xi_{n})=0\) form a prime ideal \(\mathfrak{p}\) in \(R\): if
\[f(\xi_{1},\dots,\xi_{n})g(\xi_{1},\dots,\xi_{n})=0\]
and \(g(\xi_{1},\dots,\xi_{n})\neq 0\), then \(f(\xi_{1},\dots,\xi_{n})=0\), as a field does not contain zero divisors. Van der Waerden [26] gives a simple example: let \(\xi_{1},\dots,\xi_{n}\) be linear functions of one indeterminate \(t\) with coefficients in \(K\):
\[\xi_{i}=\alpha_{i}+\beta_{i}t\;.\]
Then \(\mathfrak{p}\) consists of all polynomials vanishing on the line given by the above parametrisation. This example is not contained in Moderne Algebra II [26], but occurs again in Algebra II [26].
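A quick sympy check of this example, with concrete values \(\alpha=(3,1)\), \(\beta=(2,-1)\) chosen here purely for illustration: a generator of the prime ideal of the line vanishes identically after substituting \(\xi_{i}=\alpha_{i}+\beta_{i}t\).

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
xi1, xi2 = 3 + 2*t, 1 - t            # xi_i = alpha_i + beta_i * t
f = x1 + 2*x2 - 5                    # generator of the prime ideal of this line
print(sp.expand(f.subs({x1: xi1, x2: xi2})))   # 0: f(xi_1, xi_2) vanishes identically
```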
The field \(\Omega\) is isomorphic to the quotient field \(\Pi\) of \(R/\mathfrak{p}\), in such a way that the \(\xi_{i}\) correspond to the \(x_{i}\). Conversely, for every prime ideal \(\mathfrak{p}\neq 0\) there exists a field \(\Omega=K(\xi_{1},\dots,\xi_{n})\) such that \(\mathfrak{p}\) consists of all polynomials \(f\in R\) for which \(f(\xi_{1},\dots,\xi_{n})=0\). The point \((\xi_{1},\dots,\xi_{n})\) is the general zero of \(\mathfrak{p}\).
The dimension of \(\mathfrak{p}\) is the transcendence degree of \(\Omega\) over \(K\). Let \(t_{1},\dots,t_{r}\) be a transcendence basis of \(\Omega\), so \(\Omega\) is an algebraic extension of the field of rational functions \(K(t_{1},\dots,t_{r})\). Let \(f_{1},\dots,f_{s}\) be elements of the function field \(\Omega\). For given values \(\tau_{1},\dots,\tau_{r}\) of the arguments one can solve and find values \(\varphi_{1},\dots,\varphi_{s}\) in a suitable extension of \(K\), but only those systems of values are allowed for which all relations \(F(f_{1},\dots,f_{s},t_{1},\dots,t_{r})=0\) also hold for the specific values, that is \(F(\varphi_{1},\dots,\varphi_{s},\tau_{1},\dots,\tau_{r})=0\). For example, if \(f_{1}=\sqrt{t}\), \(f_{2}=-f_{1}\), then we have the equations \(f_{1}^{2}=t\) and \(f_{2}^{2}=t\), giving for \(t=1\) the values \(\varphi_{1}=\pm 1\) and \(\varphi_{2}=\pm 1\), but it is not allowed to combine \(\varphi_{1}=1\) with \(\varphi_{2}=1\), as this violates the relation \(f_{1}+f_{2}=0\) [26, §88]. The existence of such systems is shown by adjoining the \(f_{i}\) successively. The denominators in the resulting monic equations for the \(f_{i}\) can be taken to depend only on the \(t_{j}\). Let \(V(t_{1},\dots,t_{r})\) be the lowest common multiple of the denominators. Consider only parameter values for which \(V(\tau_{1},\dots,\tau_{r})\neq 0\). But also the converse is valid: if a relation \(F(\varphi_{1},\dots,\varphi_{s},\tau_{1},\dots,\tau_{r})=0\) holds for all regular systems of values for the arguments and all admissible corresponding function values, then the relation \(F(f_{1},\dots,f_{s},t_{1},\dots,t_{r})=0\) also holds in the function field [26, §88].
Every system of algebraic functions \(\xi_{1},\dots,\xi_{n}\) of \(t_{1},\dots,t_{r}\) can be specialised in the above way to \(\xi_{1}^{\prime},\dots,\xi_{n}^{\prime}\) and this determines a point \(\xi^{\prime}\) in affine space over a suitable algebraic extension of \(k\). Let \(V\) be the Zariski closure of these points, that is the zero set of all polynomials \(F\) for which \(F(\xi_{1}^{\prime},\dots,\xi_{n}^{\prime})=0\). This means that \(\xi_{1},\dots,\xi_{n}\) determines the algebraic variety \(V\) in parameter form and its prime ideal \(\mathfrak{p}\) has the general zero \((\xi_{1},\dots,\xi_{n})\). As every prime ideal \(\mathfrak{p}\) has a general zero \((\xi_{1},\dots,\xi_{n})\), where the \(\xi_{i}\) are algebraic functions of parameters \(t_{1},\dots,t_{r}\), Proposition 6.3 follows: every prime ideal is the ideal of its zero set. In particular, the only prime ideal without zeros is the whole ring.
As an application van der Waerden first proves the generalisation of Noether's fundamental theorem to zero-dimensional ideals in arbitrary dimension. König proved the theorem for the case of a complete intersection [14, p. 385], and Macaulay observed that the general case follows easily using primary decomposition [11, p. 61].
**Theorem 7.1**.: _Let \(\mathfrak{a}\) be an ideal in \(R=K[x_{1},\dots,x_{n}]\) with finitely many zeros \(P_{i}\) in \(\mathbb{A}^{n}(K)\), \(K\) algebraically closed. For a zero \(P=(\xi_{1},\dots,\xi_{n})\) let \(\mathfrak{m}_{P}=(x_{1}-\xi_{1},\dots,x_{n}-\xi_{n})\). There is an integer \(\rho\) depending only on \(\mathfrak{a}\) such that \(f\in\mathfrak{a}+\mathfrak{m}_{P_{i}}^{\rho}\) for all \(i\) implies \(f\in\mathfrak{a}\)._
Proof.: Let \(\mathfrak{a}=\bigcap_{i}\mathfrak{q}_{i}\) be the primary decomposition of \(\mathfrak{a}\). The associated prime ideal of \(\mathfrak{q}_{i}\) is \(\mathfrak{m}_{P_{i}}\). For each \(i\) there is an exponent \(\rho_{i}\) such that \(\mathfrak{m}_{P_{i}}^{\rho_{i}}\subset\mathfrak{q}_{i}\) and then \(\mathfrak{q}_{i}=\mathfrak{a}+\mathfrak{m}_{P_{i}}^{\rho_{i}}\). With \(\rho=\max\rho_{i}\) the condition in the theorem implies that \(f\in\mathfrak{q}_{i}\) for all \(i\) and therefore \(f\in\mathfrak{a}\).
Lasker generalised Noether's theorem [15, 16] to what Macaulay calls the Lasker-Noether Theorem [11, p. 61]. He formulates it roughly as follows.
**Theorem 7.2**.: _If \(\mathfrak{a}=(F_{1},F_{2},\dots,F_{k})\) and \(F\) can be written as \(F=P_{1}F_{1}+P_{2}F_{2}+\dots+P_{k}F_{k}\), where the \(P_{i}\) are power series, then there exists a polynomial \(\phi\) not vanishing at the origin such that \(F\phi\in\mathfrak{a}\)._
It follows that \(F\) lies in every primary component containing the origin. For a criterion that \(F\in\mathfrak{a}\) it suffices to impose the power series condition in a finite number of points.
According to van der Waerden both Lasker's and Macaulay's proofs are insufficient; he adds a note in proof that the gaps in Macaulay's proof are filled in correspondence between them [13]. That proof still needs convergence of the power series involved, a condition not needed in the proof van der Waerden claims to have. The easiest proof seems to be due to Krull [15], and it is this proof which Macaulay gives in [11] and refers to as a hitherto unpublished result. This makes it probable that Macaulay learned it from van der Waerden.
A different generalisation is due to Hentzelt, and elaborated by Hermann [10]. We give it in the formulation of Krull [15, Nr. 20].
**Theorem 7.3**.: _For every ideal \(\mathfrak{a}=(F_{1},\ldots,F_{k})\) in \(R=K[x_{1},\ldots,x_{n}]\) there exists an exponent \(\rho\) depending only on \(n\) and \(k\) and the degrees of the \(F_{i}\) such that \(F\in\mathfrak{a}\), if \(F\in\mathfrak{a}+\mathfrak{p}_{i}^{\rho}\) for all associated prime ideals \(\mathfrak{p}_{i}\) of \(\mathfrak{a}\)._
In this formulation the hard part is to establish that the bound only depends on the stated quantities. To make clear that it is a Nullstellensatz, the condition can be formulated as \(F\in\mathfrak{a}R_{i}+(x_{1}-\xi_{1},\ldots,x_{n}-\xi_{n})^{\rho}\), where \((\xi_{1},\ldots,\xi_{n})\) is the general zero of the prime ideal \(\mathfrak{p}_{i}\) and \(R_{i}=K(\xi_{1},\ldots,\xi_{n})[x_{1},\ldots,x_{n}]\). Hentzelt originally formulated the condition for all (infinitely many) geometric zeros \((\xi_{1},\ldots,\xi_{n})\) of \(\mathfrak{a}\), that \(F\in\mathfrak{a}+(x_{1}-\xi_{1},\ldots,x_{n}-\xi_{n})^{\rho}\).
A non-constructive proof, not establishing the degree bound, was given by van der Waerden [26]. It uses reduction to the zero dimensional case. It is explained in [26, §133].
## Appendix A The resultant
Let \(A\) be a unique factorisation domain. We are interested in the question when two binary forms \(F(X,Y),G(X,Y)\in A[X,Y]\) have a common factor.
**Proposition A.1**.: _The binary forms \(F\) and \(G\) in \(A[X,Y]\) have a non-constant factor \(H\) in common, if and only if there exist forms \(U\) and \(V\) of degree less than \(\deg F\), resp. \(\deg G\), not both vanishing, such that \(VF+UG=0\)._
Proof.: Suppose \(VF=-UG\). All irreducible factors of \(F\) have to occur in \(UG\), and not all can occur in \(U\), because \(\deg U<\deg F\); therefore \(F\) and \(G\) have a factor in common. Conversely, given \(H\) one finds a \(U\) and a \(V\) such that \(F=-UH\) and \(G=VH\), so the equation \(VF+UG=0\) is satisfied, with \(\deg U<\deg F\) and \(\deg V<\deg G\).
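As a present-day sanity check of Proposition A.1 (ours, not part of the original text), sympy's resultant, which is the determinant of the Sylvester matrix constructed below, vanishes exactly when the two polynomials share a non-constant factor; here we dehomogenise by setting \(Y=1\).

```python
import sympy as sp

X = sp.symbols('X')
F = sp.expand((X - 1) * (X - 2))   # plays the role of a binary form with Y = 1
G = sp.expand((X - 1) * (X + 3))   # shares the factor (X - 1) with F
H = sp.expand((X + 5) * (X + 3))   # has no factor in common with F

print(sp.resultant(F, G, X))   # 0   -> a common non-constant factor exists
print(sp.resultant(F, H, X))   # 840 -> no common factor
```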
Suppose \(\deg F=m\) and \(\deg G=n\) and consider the free module \(A[X,Y]_{n+m-1}\) of forms of degree \(m+n-1\).
The existence of a relation \(VF+UG=0\) is equivalent to the fact that the forms \(X^{n-1}F\), \(X^{n-2}YF\),..., \(Y^{n-1}F\), \(X^{m-1}G\),..., \(Y^{m-1}G\) are linearly dependent in vector space \(Q(A)[X,Y]_{n+m-1}\) of dimension \(m+n\), where \(Q(A)\) is the quotient field of \(A\). We represent a form \(c_{0}X^{n+m-1}+\cdots+c_{n+m-1}Y^{n+m-1}\) by the row vector \((c_{0},\ldots,c_{n+m-1})\); multiplying with the column vector \(\mathcal{X}=(X^{n+m-1},\ldots,Y^{n+m-1})^{t}\) gives back the form.
Put
\[F =a_{0}X^{m}+a_{1}X^{m-1}Y+\cdots+a_{m}Y^{m},\] \[G =b_{0}X^{n}+b_{1}X^{n-1}Y+\cdots+b_{n}Y^{n}.\]
Writing out the forms \(X^{n-1}F\),..., \(Y^{n-1}F\), \(X^{m-1}G\),..., \(Y^{m-1}G\) in the basis \(X^{n+m-1},\ldots,Y^{n+m-1}\) leads in this way to a matrix equation
\(S_{F,G}\mathcal{X}=\mathcal{F}\), with \(\mathcal{F}=(X^{n-1}F,\dots,Y^{m-1}G)^{t}\) and \(S_{F,G}\) the Sylvester matrix
\[S_{F,G}=\begin{pmatrix}a_{0}&a_{1}&a_{2}&\dots&a_{m}&&&\\ &a_{0}&a_{1}&a_{2}&\dots&a_{m}&&\\ &&\ddots&\ddots&&&\ddots&\\ &&&a_{0}&a_{1}&a_{2}&\dots&a_{m}\\ b_{0}&b_{1}&b_{2}&\dots&b_{n}&&&\\ &b_{0}&b_{1}&b_{2}&\dots&b_{n}&&\\ &&\ddots&\ddots&&&\ddots&\\ &&&b_{0}&b_{1}&b_{2}&\dots&b_{n}\end{pmatrix},\]
where the first \(n\) rows are formed from the coefficients of \(F\) and the last \(m\) rows from the coefficients of \(G\).
|
2309.12455 | LongDocFACTScore: Evaluating the Factuality of Long Document Abstractive
Summarisation | Maintaining factual consistency is a critical issue in abstractive text
summarisation, however, it cannot be assessed by traditional automatic metrics
used for evaluating text summarisation, such as ROUGE scoring. Recent efforts
have been devoted to developing improved metrics for measuring factual
consistency using pre-trained language models, but these metrics have
restrictive token limits, and are therefore not suitable for evaluating long
document text summarisation. Moreover, there is limited research and resources
available for evaluating whether existing automatic evaluation metrics are fit
for purpose when applied in long document settings. In this work, we evaluate
the efficacy of automatic metrics for assessing the factual consistency of long
document text summarisation. We create a human-annotated data set for
evaluating automatic factuality metrics, LongSciVerify, which contains
fine-grained factual consistency annotations for long document summaries from
the scientific domain. We also propose a new evaluation framework,
LongDocFACTScore, which is suitable for evaluating long document summarisation.
This framework allows metrics to be efficiently extended to any length document
and outperforms existing state-of-the-art metrics in its ability to correlate
with human measures of factuality when used to evaluate long document
summarisation data sets. We make our code and LongSciVerify data set publicly
available: https://github.com/jbshp/LongDocFACTScore. | Jennifer A Bishop, Qianqian Xie, Sophia Ananiadou | 2023-09-21T19:54:54Z | http://arxiv.org/abs/2309.12455v2 | # LongDocFACTScore: Evaluating the Factuality of Long Document Abstractive Summarisation
# LongDocFACTScore: Evaluating the Factuality of Long Document Abstractive Summarisation
Jennifer A Bishop, Qianqian Xie, Sophia Ananiadou
Department of Computer Science, The University of Manchester
jennifer.bishop-2@postgrad.manchester.ac.uk, xqq.sincere@gmail.com
sophia.ananiadou@manchester.ac.uk
###### Abstract
Maintaining factual consistency is a critical issue in abstractive text summarisation, however, it cannot be assessed by traditional automatic metrics used for evaluating text summarisation, such as ROUGE scoring. Recent efforts have been devoted to developing improved metrics for measuring factual consistency using pre-trained language models, but these metrics have restrictive token limits, and are therefore not suitable for evaluating long document text summarisation. Moreover, there is limited research evaluating whether existing automatic evaluation metrics are fit for purpose when applied to long document data sets. In this work, we evaluate the efficacy of automatic metrics at assessing factual consistency in long document text summarisation and propose a new evaluation framework LongDocFACTScore. This framework allows metrics to be extended to any length document. This framework outperforms existing state-of-the-art metrics in its ability to correlate with human measures of factuality when used to evaluate long document summarisation data sets. Furthermore, we show LongDocFACTScore has performance comparable to state-of-the-art metrics when evaluated against human measures of factual consistency on short document data sets. We make our code and annotated data publicly available: [https://github.com/jbship/LongDocFACTScore](https://github.com/jbship/LongDocFACTScore).
## 1 Introduction
Factual inconsistency, i.e., when a generated summary is not entailed by its source document, is a well-documented limitation of modern neural summarisation methods Maynez et al. (2020); Wallace et al. (2021). Although Large Language Models (LLMs) have shown greatly superior performance on a range of NLP tasks, including summarisation Zhang et al. (2023), there remain outstanding issues with their ability to remain factually consistent Bang et al. (2023).
Human evaluation is generally regarded as the gold standard for evaluating generative models, yet Krishna et al. (2023) found that 73\(\%\) of long document summarisation studies do not perform a human evaluation on long document data sets, thus highlighting the need for effective automatic evaluation metrics. Automatic evaluation metrics provide an alternative option to human evaluation, which is time-consuming and costly to conduct. However, although ROUGE scoring Lin (2004) is the traditional metric for automatic evaluation of text summarisation, it is flawed and does not correlate well with human judgement Yuan et al. (2021); Huang et al. (2020); Kryscinski et al. (2019) due to not effectively capturing semantic, grammatical, and factual errors.
Although there have been efforts to develop improved metrics for measuring factual consistency Scialom et al. (2021); Yuan et al. (2021); Kryscinski et al. (2020); Qin et al. (2022); Fu et al. (2023), the studies proposing these metrics only conduct evaluation on short document summarisation data sets Hermann et al. (2015); Grusky et al. (2018); Narayan et al. (2018) and there is limited research evaluating the efficacy of automatic metrics for assessing factuality on long document data sets. Recent automatic evaluation metrics intended for evaluating factuality are reference-free, that is they use the source document, rather than a gold summary, in their calculation of factual consistency. However, as these metrics make use of pre-trained language models Scialom et al. (2021); Yuan et al. (2021); Kryscinski et al. (2020); Qin et al. (2022); Fu et al. (2023), they are only able to process a limited number of tokens at a time and must truncate, on average, over half of the tokens of a source document in long document data sets in their calculations Koh et al. (2022). Thus, they do not perform well when applied to long document summarisation Koh et al. (2022).
Despite the prevalence of LLMs for summarisation of increasingly long documents in the real world, they remain flawed in their factual consistency. Therefore, there is a growing need for metrics which can assess the factual consistency of long document text summarisation. In this work, we aim to address these concerns and our main contributions are:
* A reference-free evaluation framework, LongDocFACTScore, intended for assessing the factual consistency of abstractive summarisation of long documents, which we show to outperform all other automatic methods evaluated in its correlation with human annotations of factuality on long document data sets.
* An evaluation of the efficacy and efficiency of LongDocFACTScore and other automatic evaluation metrics for the evaluation of the factual consistency of summarisation on a range of long and short document data sets.
* A long document data set of the scientific domain with fine-grained human annotations of factual consistency, which is made available alongside our code.
## 2 Methods
In this section, we describe the LongDocFACTScore framework. This framework can be used to extend existing metrics by comparing each sentence in the generated summary with the most similar sections of the source document, making use of sentence embeddings and their cosine similarity to scale efficiently through a source document.
To calculate LongDocFACTScore, both the source document \(D=\left\langle{s_{i},i\in I}\right\rangle\) and its generated summary \(S=\left\langle{s_{j},j\in J}\right\rangle\) are split into sentences using the nltk library1. For each of these sentences, sentence embeddings (Reimers and Gurevych, 2019) are generated using the sentence-transformers library2 initialised with the bert-base-nli-mean-tokens model3. For each sentence in the predicted summary \(s_{j}\), the cosine similarity between its sentence embedding and the sentence embedding of each sentence in the source document \(s_{i}\) is calculated. \(D\) is then re-indexed by the cosine similarity scores, so that the new index \(k\) is sorted by:
Footnote 1: [https://www.nltk.org](https://www.nltk.org)
Footnote 2: [https://github.com/UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers)
Footnote 3: [https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens)
\[arg\max_{i\in I}\left(cosine\_similarity\left(s_{j},s_{i}\right)\right). \tag{1}\]
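A minimal sketch of this sentence-selection step, assuming only the nltk and sentence-transformers packages named in the footnotes above (this is illustrative and not the authors' released implementation):

```python
import numpy as np
from nltk.tokenize import sent_tokenize
from sentence_transformers import SentenceTransformer

# Sentence-embedding model named in the paper.
encoder = SentenceTransformer("bert-base-nli-mean-tokens")

def top_k_source_indices(source_doc: str, summary_sentence: str, k: int = 3):
    """Return the indices of the k source sentences most similar to one summary sentence."""
    src_sents = sent_tokenize(source_doc)
    src_emb = encoder.encode(src_sents)              # shape (num_source_sentences, dim)
    sum_emb = encoder.encode([summary_sentence])[0]  # shape (dim,)
    # Cosine similarity between the summary sentence and every source sentence.
    sims = src_emb @ sum_emb / (
        np.linalg.norm(src_emb, axis=1) * np.linalg.norm(sum_emb) + 1e-12
    )
    return list(np.argsort(-sims)[:k]), src_sents
```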
The \(K\) most similar source document sentences are then selected and are each concatenated with their preceding and following sentences, thus giving \(s_{k}^{*}=s_{k-1}+s_{k}+s_{k+1}\), to create the sequence of slightly longer text snippets. The metric score is then calculated between each of the text snippets \(s_{k}^{*}\) and the summary sentence \(s_{j}\). In this work, we set \(K=3\), a decision which we justify in Section 4.5.

Figure 1: Calculation of the LongDocFACTScore.
For each sentence, \(s_{j}\) in \(S\), of the generated summary, the process is repeated, resulting in one score per generated summary sentence. The mean of these scores is then calculated, providing an overall summary score given by the equation:
\[\frac{1}{J}\sum_{j=1}^{J}\max_{k=\{1,2,3\}}(metric(s_{j}\,|s_{k}^{*})). \tag{2}\]
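Continuing the sketch above, Eq. (2) can be assembled as follows; `metric_fn` is a placeholder for whichever underlying metric (for instance a BARTScore-style log-likelihood) is being extended, and is not a name from the original text:

```python
from nltk.tokenize import sent_tokenize

def longdoc_fact_score(source_doc: str, summary: str, metric_fn, k: int = 3) -> float:
    """Mean over summary sentences of the best metric score over k source snippets."""
    sentence_scores = []
    for s_j in sent_tokenize(summary):
        idxs, src_sents = top_k_source_indices(source_doc, s_j, k)
        snippet_scores = []
        for i in idxs:
            # Concatenate the selected sentence with its neighbours: s*_k = s_{k-1} + s_k + s_{k+1}.
            snippet = " ".join(src_sents[max(i - 1, 0): i + 2])
            snippet_scores.append(metric_fn(snippet, s_j))
        sentence_scores.append(max(snippet_scores))
    return sum(sentence_scores) / len(sentence_scores)
```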
Figure 1 illustrates the calculation of this framework, showing for a single sentence in the generated summary, the similarity scores being calculated for every sentence in the source document, and the resulting three highest scoring sentences being concatenated with their surrounding sentences. Using the metric that is being extended, a score is then calculated between these three source document text snippets and the summary sentence. Figure 1 indicates that this process is repeated for every sentence in the generated summary and that the scores are averaged. In contrast, Figure 2 shows the method for directly applying an automatic scoring metric, designed to evaluate short document summaries, without the LongDocFACTScore framework, to a long document. The entire generated summary and the truncated long source document are directly input to the metric, resulting in one score. Consequently, there are two fundamental differences between LongDocFACTScore and an automatic metric designed for short document evaluation:
* The first difference is that LongDocFACTScore considers sections of text from the full length of the source document in its calculation (using sentence embeddings to select the most relevant from across the document) whereas other metrics truncate the source document. For metrics applied without the LongDocFACTScore framework, if a generated summary includes content from the latter part of a long document, it will be ignored, which is a problem when assessing factual consistency of long document summarisation.
* The second significant difference is that LongDocFACTScore calculates a metric score on short sections of text at one time, comparing one sentence in the predicted summary to a short section of the source document, rather than long, truncated sections.
## 3 Experimental data sets
We evaluate the automatic metrics in their ability to assess factual consistency on two long document data sets and several short document data sets. We collected one long document data set, consisting of documents from the biomedical and scientific domains annotated by six expert human annotators with fine-grained factual consistency labels. We refer to this data set as the LongSciVerify data set and provide further details of its curation in Section 3.1. The data set will be made available alongside our code. We further evaluate our methods on the LongEval PubMed data set Krishna et al. (2023), another long document data set with fine-grained factual consistency annotations. Finally, we conduct an evaluation on a range of short document data sets with human annotations of factuality, which have been used to evaluate automatic metrics in prior works Yuan et al. (2021).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Doc. & Doc. & Sum. & Sum. \\ & tokens & sentences & tokens & sentences \\ \hline PM & 3209 & 124 & 208 & 9 \\ AX & 6515 & 249 & 279 & 11 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average number of tokens and sentences in the evaluated data sets. PM denotes the PubMed data set and AX denotes the ArXiv data set.
Figure 2: Calculation of a traditional automatic metric for assessing factual consistency.
### The LongSciVerify Data Set
To support the evaluation of factuality metrics for long documents, we build a new data set called LongSciVerify, with multiple summaries generated from long documents, and fine-grained human annotation scores of their factual correctness. This data set consists of 270 annotated summaries generated from the long document, English-language PubMed and ArXiv data sets Cohan et al. (2018). A description of the PubMed and ArXiv data sets can be found in Table 1.
From each of the PubMed and ArXiv data sets, fifteen articles were randomly sampled. Summaries were generated for these data sets using three different abstractive methods which were all able to consider the entire long document in the generation of their summaries. These methods were selected to enable an effective evaluation of the performance of the automatic metrics in long document settings. Details of the abstractive methods used to generate the summaries can be found in Appendix A.
As the PubMed and ArXiv data sets included in this data set are highly domain specific, we recruited six expert annotators, three per data set, to review the automatically generated summaries. At the time of evaluation, all of the expert annotators reviewing the PubMed data set were, or were in the final years of study to be, qualified clinicians. The expert annotators for the ArXiv data set had all achieved a minimum of an undergraduate degree in a physical science. The annotators who participated in our study were colleagues of the authors and therefore volunteered to participate in the study without payment. It was made clear to the annotators that this human evaluation was for scientific research on abstractive summarisation with the intention for use in a scientific publication.
The definition of factual consistency we provided to annotators was taken from Fabbri et al. (2021): _"Factual consistency: The factual alignment between the summary and the summarised source. A factually consistent summary contains only statements that are entailed by the source document. Annotators are also asked to penalise summaries that contain redundancy."_. We additionally made annotators aware of the main types of consistency errors which they should expect, taking the definitions from Huang et al. (2021): _"Intrinsic errors: A fact that is contradicted to the source document, which is also referred to as "intrinsic hallucination"_, e.g., a numerical value in the source document being repeated in the wrong fact in the summary"_ and _"Extrinsic errors: A fact that is neutral to the source document (i.e., the content that is neither supported nor contradicted by the source document), i.e., a statement which seems to have been completely made up"_.
We opted to capture a fine-grained binary classification metric (entailed vs not entailed), due to this having been shown to be effective in prior work Krishna et al. (2023) and asked annotators to mark a sentence as 'not entailed' if there were any factual inconsistencies. We make use of their finding that partial, fine-grained annotations (i.e., marking a subset of sentences from the generated summary as entailed or not entailed) improve the efficiency of the human annotation without compromising accuracy. For each generated summary included in the study, we sampled three summary sentences. We selected the most similar two text snippets (1-3 sentences) from the source document to each sampled sentence by comparing the cosine similarity of their sentence embeddings Reimers and Gurevych (2019). The human annotators were then given the three sentences sampled from the generated summary and the corresponding two text snippets from the source document and were asked to decide, given the text snippets, whether each sentence was entailed or not. We provide an example screenshot of the factuality scoring for the three summaries in a PubMed sample in Figure 3.
For each of the PubMed and ArXiv samples, each human annotator evaluated the same three summaries generated from the same fifteen randomly sampled documents. To calculate the correlation between the human factuality scores and the automatic metrics, the fine-grained human scores were averaged per summary, thus resulting in 270 annotated summaries. During evaluation, the annotators were unaware of which method was used to create each summary.
Table 2 shows the inter-annotator agreement (IAA) of the fine-grained human annotated data, calculated using the Krippendorff's alpha metric4Krippendorff (2011). The IAA of the fine-grained factual consistency annotations is relatively high, averaging at 0.65 across the two data sets. The IAA in the ArXiv data set is lower than for PubMed. We hypothesise this could be due to the noise in the ArXiv data set Koh et al. (2022) and the highly domain-specific nature of the data set.
Footnote 4: [https://github.com/grrrr/krippendorff-alpha](https://github.com/grrrr/krippendorff-alpha)
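The IAA figures above were computed with the implementation cited in the footnote; purely as an illustration, the same statistic can be obtained with the pypi krippendorff package on toy labels (the ratings below are invented, not the study's data):

```python
import krippendorff

# One row per annotator, one column per annotated summary sentence; 1 = entailed, 0 = not entailed.
ratings = [
    [1, 1, 0, 1, 0, 1],   # annotator A
    [1, 1, 0, 1, 1, 1],   # annotator B
    [1, 0, 0, 1, 0, 1],   # annotator C
]
alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
print(round(alpha, 2))
```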
### The LongEval Data Set
We additionally evaluated LongDocFACTScore and the other automatic metrics included in our study on the publicly available long document PubMed LongEval data set Krishna et al. (2023). This data set consists of summaries generated from two abstractive models: LongT5-large Guo et al. (2022) and BigBird-PEGASUS Zaheer et al. (2020). Three non-expert annotators were hired to give fine-grained annotations of factuality on forty summaries. An average standard deviation across the different annotators of 7.3 points on a 100-point rating scale was reported. As in the LongSciVerify data set, the fine-grained annotations were averaged per summary to give a total of 120 annotated summaries.
## 4 Experimental results
### Computational set-up
As baselines, ROUGE5Lin (2004) and BERTScore Zhang et al. (2020) were implemented. ROUGE scores measure the overlap in sequences of words between two texts, whilst BERTScore uses measures of cosine similarity between BERT-based Devlin et al. (2019) token embeddings to assess the similarity. We used, implemented and evaluated the following state-of-the-art reference-free metrics, which have previously shown improved correlation with the human judgement of factual consistency for short document summarisation data sets: (i) FactCC Kryscinski et al. (2019), which uses a fine-tuned BERT-based classifier to predict, for each sentence of a summary, whether it is correct or incorrect, given its source document6, (ii) QuestEval7 Scialom et al. (2021), which uses T5-based models Raffel et al. (2020) for a question generation and answering approach, and (iii) BARTScore Yuan et al. (2021), a method which uses BART Lewis et al. (2020) to calculate the log probability of generating a sequence of text, given a second sequence, implemented using the 'bart-large' model8. In this work, we apply the LongDocFACTScore framework to extend the state-of-the-art metric BARTScore. LongDocFACTScore was also implemented with the 'bart-large' model. All experiments were run on a single NVIDIA v100 GPU and all metrics, apart from ROUGE, made use of the GPU compute.
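For concreteness, a BARTScore-style scorer of the kind that could be plugged in as `metric_fn` in the earlier sketch might look as follows; this is a hedged approximation using the 'facebook/bart-large' checkpoint, and the published BARTScore implementation differs in its details:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-large").eval()

def bart_log_likelihood(source_text: str, target_text: str) -> float:
    """Average log-probability of generating target_text conditioned on source_text."""
    with torch.no_grad():
        inputs = tokenizer(source_text, return_tensors="pt", truncation=True, max_length=1024)
        labels = tokenizer(target_text, return_tensors="pt", truncation=True, max_length=1024).input_ids
        loss = bart(**inputs, labels=labels).loss   # mean token-level cross-entropy on the target
    return -loss.item()                             # higher is better
```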
\begin{table}
\begin{tabular}{c c c} \hline \hline
**ArXiv** & **PubMed** & **Average** \\ \hline
0.76 & 0.54 & 0.65 \\ \hline \hline \end{tabular}
\end{table}
Table 2: IAA of the human-annotated data for the LongSciVerify data set.
Figure 3: Example of a summary annotated for factual consistency by an expert human annotator. “E” indicates that a sentence is “Entailed” and “NE” indicates a score is “Not Entailed”.
For the long document data set evaluation, all metrics were applied in a reference-free setting, i.e., comparing the predicted summary to the source document.
### Long document data set results
To calculate the correlations between the human measure of factual consistency and automatic metrics, the human-annotated factuality scores were averaged over the three different annotators for each unique summary, thus giving a single score for each unique summary. The scores for each metric were then compared per each unique summary. Consequently, for each pair of metrics, the correlation is calculated between 45 summaries (3 unique summaries generated by different methods, created for 15 source documents) for each of the PubMed and ArXiv subsets of the LongSciVerify data set, and 40 summaries (2 unique summaries generated by different methods for 20 source documents) for the LongEval data set.
Table 3 gives Kendall's tau [11] correlations9 between the human measures of factuality and the automatic metrics for the LongSciVerify data sets. Table 4 gives the results of the same evaluation on the LongEval data set. Kendall's tau correlations were calculated, rather than Spearman correlations, due to being more robust for data sets with smaller sample sizes. A pairwise correlation matrix across all documents (130 pair-wise correlations per metric) of the LongEval and LongSciVerify data sets, and all metrics included in the study, is given in Figure 4. In Table 3, Table 4, and Figure 4, LongDocFACTScore, implemented by extending BARTScore, can be seen to correlate better with the human judgement of factual consistency than any other metric. Comparatively, we find that both FactCC and QuestEval show a low correlation with human judgement. BARTScore has a reasonable correlation with the human factual consistency annotations, however, since it is required to truncate the source document, we expect that it would become decreasingly correlated with human judgement as it is used to score texts of increasing length. ROUGE-2 and BERTScore perform best out of the baseline metrics evaluated, but no baseline metrics show a strong correlation with human measures of factual consistency. Interestingly, Figure 4 shows that several automatic metrics have strong correlations with each other, suggesting that there is overlap in what they are measuring, but there is lower correlation between LongDocFACTScore and the other automatic metrics, suggesting that by providing coverage of a long document, LongDocFACTScore captures new information which the other metrics miss.
Footnote 9: [https://scipy.org](https://scipy.org)
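The correlations reported in this section can be reproduced with scipy (footnote 9); the numbers below are toy values, shown only to illustrate the call:

```python
from scipy.stats import kendalltau

human_factuality = [0.9, 0.4, 0.7, 1.0, 0.3]       # averaged per-summary human scores (toy values)
metric_scores    = [-1.2, -2.5, -1.8, -1.0, -2.9]  # e.g. LongDocFACTScore outputs (toy values)
tau, p_value = kendalltau(human_factuality, metric_scores)
print(tau, p_value)
```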
We additionally evaluate LongDocFACTScore and the other automatic metrics on a variety of human annotated, short document, abstractive summarisation data sets, to validate its performance in this setting. We repeat the analysis conducted by Yuan et al. (2021), on the data sets containing human measures of factuality and use their human annotated data and code10, to report the Spearman correlation results for the SummEval data set's factuality measure (Fabbri et al., 2021), the accuracy scores for the Rank19 data set's factuality annotations (Falke et al., 2019), and the Pearson correlation between the automatic metrics and the human factuality annotations for the two QAGS data sets (Wang et al., 2020). We used the same measures of correlation for each data set as in the original analysis conducted by Yuan et al. (2021), rather than Kendall's tau correlations, to enable a direct comparison to their reported scores. Table 6 gives the results of this analysis. In this table, we can see that LongDocFACTScore performs comparably in its ability to measure factual consistency to the original BARTScore model for short document summaries, indicating that the metric can be used in both long and short document settings.
Footnote 10: [https://github.com/neulab/BARTScore](https://github.com/neulab/BARTScore)
### Parameter Study
We study effects that different parameter settings have on the LongDocFACTScore metric. We report the impact of different parameter settings on the Kendall's tau correlations when evaluating the LongSciVerify data set.
Table 7 shows the effect of varying \(K\), the maximum number of candidate similar source sentences considered per summary sentence, on the Kendall's tau correlation with the human measures of factual consistency. In LongDocFACTScore, the maximum metric score over the \(K\) candidate source document text snippets is used as the score for a given summary sentence. Row 1 of Table 7 gives the correlation for BARTScore, and row 2 gives the correlation for a baseline, denoted as LongDocFACTScore\(*\), which is implemented by calculating the metric being extended (in this case, BARTScore) between each sentence of the predicted summary and the truncated original source article11 and averaging the scores over each sentence of the predicted summary. The correlations of both BARTScore and LongDocFACTScore\(*\) are relatively low, highlighting the benefit of using the similarity metric to identify relevant text snippets from across the length of the document. The last row, \(K=I\), gives the Kendall's tau correlation when all sentences in the source document are considered. \(K=3\) is shown to be the best parameter, however, the effects of varying \(K\) are seen to be small. This is somewhat expected as the maximum BARTScore value of the \(K\) text snippets is carried forward in the LongDocFACTScore metric, and it is likely that the highest scoring sentences with BARTScore correlate well with the most similar
\begin{table}
\begin{tabular}{c c c c c} \hline & SE & R19 & QAGS & QAGS \\ & Fact & Acc & CNN & XSum \\ \hline ROUGE-1 & 0.17 & 0.59 & 0.34 & -0.01 \\ ROUGE-2 & 0.16 & 0.63 & 0.46 & 0.10 \\ ROUGE-L & 0.13 & 0.57 & 0.36 & 0.02 \\ MoverScore & 0.18 & 0.71 & 0.41 & 0.05 \\ BERTScore & 0.28 & 0.71 & 0.58 & 0.02 \\ FactCC & - & 0.70 & - & - \\ QAGS & - & **0.72** & 0.55 & **0.18** \\ BARTScore & 0.31 & 0.68 & **0.66** & 0.01 \\ LongDocFACTScore & **0.36** & 0.68 & 0.65 & 0.04 \\ \hline \end{tabular}
\end{table}
Table 6: Correlation between human measures of factuality on short document data sets.
Figure 4: Pairwise Kendall’s tau correlations of metrics. LongDocFACTScore is denoted ‘LDFACTS’, human annotations are denoted ‘factuality’.
\begin{table}
\begin{tabular}{c c} \hline
**Metric** & **Time taken (s)** \\ \hline FactCC & 24 \\ QuestEval & 160 \\ BARTScore & **1** \\ LongDocFACTScore & 8 \\ \hline \end{tabular}
\end{table}
Table 5: Time (s) to run each metric on 15 samples.
sentence embeddings.
Furthermore, by selecting \(K=3\) candidate sentences rather than cycling through all sentences in the source document, although sentence embeddings and cosine similarities are required to be calculated, the underlying metric score is only calculated for around 1-\(2\%\) of sentences from the source articles in the PubMed and ArXiv data sets. Therefore, increasing the number of candidate similar sentences \(K\) not only slightly decreases performance but also makes LongDocFACTScore increasingly less efficient and, by extension, less suitable for use on long documents. To illustrate this point, in Table 8 we give the results of the repeated efficiency calculation from Table 5, where LongDocFACTScore is implemented with \(K=3\) and \(K=I\). If \(K=I\), there is no need to calculate sentence embeddings or perform the sentence similarity calculation, therefore we additionally report the time taken without the similarity calculation. Table 8 shows that, for the LongSciVerify PubMed long document data set, performing the sentence similarity calculation to select the \(K=3\) most similar text snippets speeds up the metric over 15x.
In Table 9, the number of candidate sentences is kept constant at \(K=3\) and the effect of concatenating the source sentence with the previous and following sentence(s) to generate a text snippet is examined on the documents from the LongSciVerify data set. Table 9 shows that although concatenating one sentence either side of a selected sentence performs best, there is little variation in the Kendall's tau correlation between the different settings.
## 5 Conclusion
The prevalence of LLMs and other neural methods for abstractive summarisation of long documents in real world settings is rapidly increasing, however, the abstractive methods used to generate these summaries have known issues with factual inconsistency and hallucination. In this work, we begin to address the lack of research on the suitability of automatic evaluation metrics for assessing factual consistency of long document summarisation, and make the following contributions: (i) we show that existing automatic metrics for assessing factual consistency, which have previously shown good performance on short document data sets, do not perform well in long document settings, (ii) we propose a new framework, LongDocFACTScore, which is able to consider an entire source document in its calculation, without the need to truncate it, and outperforms existing state-of-the-art metrics in its ability to evaluate factual consistency of long document summarisation data sets whilst still being more efficient than many state-of-the-art automatic evaluation metrics, (iii) we release our code and the LongSciVerify human-annotated data set. We hope that this work promotes further research into automatic metrics for evaluating abstractive summarisation of long documents. In future work, we hope to apply the LongDocFACTScore framework to extend other automatic metrics for measuring factual consistency to long document settings and evaluate their performance.
\begin{table}
\begin{tabular}{c c} \hline
**LongDocFACTScore setting** & **Score** \\ \hline BARTScore & 0.440 \\ LongDocFACTScore* & 0.405 \\ LongDocFACTScore \(K=1\) & 0.605 \\ LongDocFACTScore \(K=3\) & **0.610** \\ LongDocFACTScore \(K=5\) & 0.600 \\ LongDocFACTScore \(K=7\) & 0.600 \\ LongDocFACTScore \(K=9\) & 0.595 \\ LongDocFACTScore \(K=11\) & 0.590 \\ LongDocFACTScore \(K=I\) & 0.575 \\ \hline \end{tabular}
\end{table}
Table 7: The effect of varying \(K\), the number of similar sentences considered for the LongDocFACTScore calculation, on the Kendall’s tau correlation with human judgements of factuality.
\begin{table}
\begin{tabular}{c c} \hline
**LongDocFACTScore setting** & **Time taken (s)** \\ \hline \(K=3\) & 8 \\ \(K=I\) & 134 \\ \(K=I\) & 125 \\ (no similarity calculation) & \\ \hline \end{tabular}
\end{table}
Table 8: Time taken (s) to run LongDocFACTScore on 15 samples, when implemented with different settings.
\begin{table}
\begin{tabular}{c c} \hline
**Method** & **Score** \\ \hline \(s_{k}^{*}=s_{k}\) & 0.605 \\ \(s_{k}^{*}=s_{k-1}+s_{k}+s_{k+1}\) & **0.610** \\ \(s_{k}^{*}=s_{k-2}+s_{k-1}+s_{k}+s_{k+1}+s_{k+2}\) & 0.595 \\ \hline \end{tabular}
\end{table}
Table 9: The effect of varying the number of source document sentences concatenated for the LongDocFACTScore calculation on the Kendall’s tau correlation with human judgement of factuality.
### Limitations
Firstly, we review the limitations of our human evaluation study. In our study, we recruited expert annotators as the long document data sets are domain specific. It is difficult to recruit large numbers of expert annotators and therefore an improvement on this work would be to conduct a larger human evaluation study with more annotators evaluating more documents. We also note that two out of three annotators of the ArXiv data set have a first language which is not English, although they are both fluent in English. Furthermore, although the annotators of the ArXiv data set had all achieved a minimum of an undergraduate degree in a physical science, they did not necessarily study physics, which was the domain of most articles randomly sampled for human evaluation.
Secondly, we comment on the limitations of the LongDocFACTScore metric. One issue with this metric, and other state-of-the-art factuality metrics, is that they favour extractive summaries. Therefore, although this metric is shown to be effective at measuring the factual consistency of long document abstractive summaries, we suggest that this metric is used in conjunction with other metrics such as abstractiveness, fluency, coherence, and relevance to assess the overall quality of a summary.
Lastly, we discuss the computational cost of our work. We were able to monitor our GPU usage and found that for all experiments run in this period, we used approximately 1200 GPU hours. Despite our metric, LongDocFACTScore, being comparably efficient (see Table 5 and Table 8), we acknowledge that working with large neural models, as well as having environmental implications, is not economically possible for many researchers.
### Ethics Statement
Throughout our research, we complied with our institution's ethical guidelines. We used open-source data and software, for which no ethical approvals were required.
In our study, we conduct a human evaluation. As detailed in Section 3, we were fortunate enough to be able to recruit colleagues, who are domain-experts in the field of the data sets. They volunteered to participate in the study without payment, so we did not need to consider the ethics of crowd-worker payment.
Our work proposes a metric for assessing the factual consistency of abstractive summaries generated for long documents. This metric can be used to help researchers assess the performance of their summarisation methods, however, to minimize any harm which may be caused by deploying an abstractive summarisation model in a live setting, we suggest that the method should be thoroughly evaluated by humans in the setting it is intended to be deployed.
|
2309.05129 | On correlation functions of higher-spin currents in arbitrary dimensions
$d>3$ | We revisit the problem of classification and explicit construction of the
conformal three-point correlation functions of currents of arbitrary integer
spin in arbitrary dimensions. For the conserved currents, we set up the
equations for the conservation conditions and solve them completely for some
values of spins, confirming the earlier counting of the number of independent
structures matching them with the higher-spin cubic vertices in one higher
dimension. The general solution for the correlators of conserved currents we
delegate to a follow-up work. | Melik Karapetyan, Ruben Manvelyan, Karapet Mkrtchyan | 2023-09-10T20:29:51Z | http://arxiv.org/abs/2309.05129v2 | # On correlation functions of higher-spin currents in arbitrary dimensions \(d>3\)
###### Abstract
We revisit the problem of classification and explicit construction of the conformal three-point correlation functions of currents of arbitrary integer spin in arbitrary dimensions. For the conserved currents, we set up the equations for the conservation conditions and solve them completely for some values of spins, confirming the earlier counting of the number of independent structures matching them with the higher-spin cubic vertices in one higher dimension. The general solution for the correlators of conserved currents we delegate to a follow-up work.
* 1 Introduction
* 2 General setup and two-point function
* 3 Three-point function: the structure of the ansatz
* 4 Three-point function: conservation condition
* 5 Conservation condition as a differential equation
* 6 Conclusions
* A Short review of Osborn-Petkou formulation and adaptation to higher spin case
* B Examples
## 1 Introduction
The holographic duality [1; 2] remains one of the most promising approaches to Quantum Gravity. Particular interest is attracted by Higher-Spin (HS) Gravity [3; 4; 5] as the AdS dual candidate [6; 7; 8; 9] of the simplest CFT -- \(O(N)\) vector model [10; 11]. Lagrangian formulation of Vasiliev's HS Gravity is not available so far. However, the classification of interaction vertices between symmetric HS fields in arbitrary dimensions has been an impressive collective effort. See [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68] for some key references.
The holographic dictionary relates interaction vertices in AdS space-time to the conformal correlators on the boundary. Massless HS fields in AdS correspond to conserved currents on the boundary. The classification of the correlators of the (conserved) currents of arbitrary spin has been an independent parallel program. See [69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101] for some key references.
Generally, in conformal field theory, two- and three-point correlation functions are fixed by conformal symmetry, leaving no functional freedom. While the two-point function is fixed up to a normalization constant for any spin of the conformal operator (or traceless current of any rank), the three-point function depends on several constants for each triplet of currents. It is natural to expect that the number of independent structures here should match the number of independent cubic interaction vertices in the bulk AdS gravity, via the AdS/CFT dictionary. Moreover, the cubic vertices in AdS are uniquely determined from the flat-space cubic vertices, by adding curvature corrections fixed by the requirement of AdS covariance [32; 33; 34; 45; 46; 52; 56; 59]. Hence, there should be a one-to-one correspondence between cubic vertices in \(d+1\)-dimensional Minkowski space and conformal correlators in \(d\) dimensions. At least, the number of structures on both sides should match. This one-to-one correspondence between three-point correlators of conserved currents of arbitrary spin in \(d>3\) dimensions and cubic vertices of massless symmetric fields in \(d+1\)-dimensional Minkowski space [25; 38] was conjectured and elaborated upon in [89] (see also [90; 94]).
Four-dimensional bulk spacetime corresponding to three-dimensional CFT has some peculiarities (see, e.g., [57; 88; 98]), while similar correspondence has been established in \(d=2\) (with three-dimensional bulk) not only at cubic order but also for arbitrary higher-order interactions [102] with the help of the full classification of cubic [103; 104] and higher-order [105] independent vertices involving massless bosonic HS fields.
The holographic reconstruction of HS Gravity has also progressed in the last decades: see [106; 107; 108; 109; 110; 111; 11; 112; 113; 114; 115; 116; 117; 118] for some key references.
In this work, we revisit the construction and investigation of two- and three-point correlation functions for HS conformal currents in arbitrary dimensions via the Osborn-Petkou general formulation [77]. In Appendix A we briefly review this formulation, adapted to the higher-spin case. Here we would like to note that the main advantage of the formulation developed in [77] is that it reduces the problem from the correlation function, which depends on three space-time points, to a tensor carrying three sets of symmetrized indices but depending on only one variable, roughly the difference of the two coordinates inverted around the third point. In this way we obtain a much simpler object to investigate: a polynomial in one variable with certain symmetry properties, subject to the conservation conditions.
In this work, we present a general Ansatz for the local object that defines the correlation functions* of arbitrary-spin currents. This Ansatz is a sum of the most general tensorial polynomials in _one_ space-time variable and Kronecker symbols. Then we apply the symmetry conditions described in [77] (see also Appendix A) to the general three-point correlation function with different spins \(s_{1},s_{2},s_{3}\). Natural triangle inequalities stem from the locality of our Ansatz. The solution of the latter is not simple, as expected (the approach of [77] is known to lead to complications). However, we present the general solution in Section 3, reproducing all low-spin examples presented in [77].
Footnote *: We work with symmetric currents in arbitrary dimensions and do not consider lower-dimensional aspects like Schouten identities (relevant in \(d\leq 3\)) and parity-odd correlators (relevant in \(d\leq 4\)).
Then, in Section 4, we derive the conservation conditions for our general ansatz. This allows us to investigate, by computer calculation, the rank of an equivalent linear system of equations and thereby the number of independent parameters of the ansatz. One obtains a general restriction on the number of independent parameters of the three-point function. Our results align with those of [89] (establishing a one-to-one correspondence with the Minkowski vertices of massless fields [25, 38]): _The number of independent parameters of the parity-even three-point function of three conserved currents depends only on the minimal spin of the involved currents and is equal to \(\min(s_{1},s_{2},s_{3})+1\)._
We further formulate the conservation condition in the form of a differential equation on the generating function of the correlators instead of a recursion relation for coefficients of the ansatz. We leave the full solution of these relations to future work.
Some technical details and derivations are delegated to Appendices.
## 2 General setup and two-point function
We briefly present the key points of our technical setup and the construction of the two-point function as a preliminary exercise before our main task: the three-point function. As customary when dealing with HS fields, we introduce auxiliary vector variables \(a_{\mu},b_{\mu},\dots\) to handle an arbitrary number of symmetrized indices. As usual, instead of symmetric tensors such as \(h^{(s)}_{\mu_{1}\mu_{2}\dots\mu_{s}}(x)\), we work with the homogeneous polynomials
in a vector \(a^{\mu}\) of degree \(s\) at the base point \(x\):
\[h^{(s)}(x;a)=h^{(s)}_{\mu_{1}\mu_{2}\ldots\mu_{s}}(x)a^{\mu_{1}}a^{\mu_{2}}\ldots a ^{\mu_{s}}. \tag{1}\]
Then the symmetrized gradient, divergence, and trace operations are given as†
Footnote †: To distinguish easily between “a” and “x” spaces we introduce the notation \(\nabla_{\mu}\) for space-time derivatives \(\frac{\partial}{\partial x^{\mu}}\).
\[Grad:h^{(s)}(x;a)\Rightarrow(Grad\,h)^{(s+1)}(x;a)=(a\nabla)h^ {(s)}(x;a)\,, \tag{2}\] \[Div:h^{(s)}(x;a)\Rightarrow(Div\,h)^{(s-1)}(x;a)=\frac{1}{s}( \nabla\partial_{a})h^{(s)}(x;a)\,,\] (3) \[Tr:h^{(s)}(x;a)\Rightarrow(Tr\,h)^{(s-2)}(x;a)=\frac{1}{s(s-1)} \Box_{a}h^{(s)}(x;a)\,. \tag{4}\]
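As a concrete illustration of the auxiliary-vector operations (2)-(4), the following is a minimal sympy sketch; the dimension \(d=3\) and the particular components of the sample field \(h^{(2)}(x;a)\) are our own arbitrary choices, not taken from the paper.

```python
import sympy as sp

d = 3                                     # illustrative space-time dimension
x = sp.symbols(f"x0:{d}")                 # base point x^mu
a = sp.symbols(f"a0:{d}")                 # auxiliary vector a^mu
s = 2

# sample h^{(2)}(x;a) = h_{mu nu}(x) a^mu a^nu with arbitrary components
h = x[0]*a[0]**2 + sp.sin(x[1])*a[0]*a[1] + x[2]**2*a[2]**2

grad = sum(a[m]*sp.diff(h, x[m]) for m in range(d))                        # (a.grad) h, eq. (2)
div  = sp.Rational(1, s)*sum(sp.diff(h, x[m], a[m]) for m in range(d))     # eq. (3)
tr   = sp.Rational(1, s*(s-1))*sum(sp.diff(h, a[m], 2) for m in range(d))  # eq. (4)

print(sp.expand(div))   # a polynomial of degree s-1 in a
print(sp.expand(tr))    # a polynomial of degree s-2 in a
```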
Moreover we introduce the notation \(*_{a},*_{b},\dots\) for a full contraction of \(s\) symmetric indices:
\[*_{a}^{(s)} = \frac{1}{(s!)^{2}}\prod_{i=1}^{s}\overleftarrow{\partial}{}_{a}^{\,\mu_{i}}\,\overrightarrow{\partial}{}_{\mu_{i}}^{\,a}. \tag{5}\]
These operators, together with their duals‡, will be the building blocks of the correlation functions of higher-spin currents. As mentioned before, we use the formulation of [77] reviewed in Appendix A. Here we just extend this formulation of the two-point correlation function to the case of general spin-\(s\) conformal conserved (traceless-transverse) currents.
Footnote ‡: It is easy to see that the operators \((a\partial_{b}),a^{2},b^{2}\) are dual (or adjoint) to \((b\partial_{a}),\Box_{a},\Box_{b}\) with respect to the “star” product of tensors with two sets of symmetrized indices (5)
\[\frac{1}{n}(a\partial_{b})f^{(m-1,n)}(a,b)*_{a,b}g^{(m,n-1)}(a,b )=f^{(m-1,n)}(a,b)*_{a,b}\frac{1}{m}(b\partial_{a})g^{(m,n-1)}(a,b),\] \[a^{2}f^{(m-2,n)}(a,b)*_{a,b}g^{(m,n)}(a,b)=f^{(m-2,n)}(a,b)*_{a,b }\frac{1}{m(m-1)}\Box_{a}g^{(m,n)}(a,b).\]
In the same fashion gradients and divergences are dual with respect to the full scalar product in the space \((x,a,b)\), where we allow for integration by parts: \[(a\nabla)f^{(m-1,n)}(x;a,b)*_{a,b}g^{(m,n)}(x;a,b) = -f^{(m-1,n)}(x;a,b)*_{a,b}\frac{1}{m}(\nabla\partial_{a})g^{(m,n )}(x;a,b).\]
Analogous equations can be formulated for the operators \(b^{2}\) or \(b\nabla\).
Starting from the ansatz
\[\mathcal{E}^{(s)}(a,b)=\sum_{p=0}^{s/2}\lambda_{p}(ab)^{s-2p}(a^{2}b^{2})^{p}, \quad\lambda_{0}=1 \tag{7}\]
and solving the tracelessness condition
\[\Box_{a}\mathcal{E}^{(s)}(a,b)=\Box_{b}\mathcal{E}^{(s)}(a,b)=0 \tag{8}\]
we arrive at a set of coefficients \(\{\lambda_{p}\}_{p=0}^{s/2}\) which satisfy the recursion relation:
\[\lambda_{p}=-\frac{(s-2p+2)(s-2p+1)}{4p(d/2+s-p-1)}\lambda_{p-1} \tag{9}\]
with solution corresponding to the initial condition from (7):
\[\lambda_{p}=\frac{(-1)^{p}[s]_{2p}}{2^{2p}p![d/2+s-2]_{p}} \tag{10}\]
Here we use the notation \([a]_{n}\) for falling factorials (Pochhammer symbols):
\[[a]_{n}=\frac{a!}{(a-n)!}=\frac{\Gamma(a+1)}{\Gamma(a-n+1)} \tag{11}\]
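As a quick sanity check (an illustrative snippet of our own, not part of the original derivation), one can verify numerically that the closed form (10) solves the recursion (9); the values \(s=6\), \(d=4\) below are arbitrary sample choices.

```python
from math import prod, factorial

def falling(a, n):                      # [a]_n = a(a-1)...(a-n+1), eq. (11)
    return prod(a - k for k in range(n))

def lam_closed(p, s, d):                # eq. (10)
    return (-1)**p * falling(s, 2*p) / (2**(2*p) * factorial(p) * falling(d/2 + s - 2, p))

def lam_recursion(p, s, d):             # eq. (9) with lambda_0 = 1
    lam = 1.0
    for q in range(1, p + 1):
        lam *= -(s - 2*q + 2)*(s - 2*q + 1) / (4*q*(d/2 + s - q - 1))
    return lam

s, d = 6, 4
for p in range(s//2 + 1):
    assert abs(lam_closed(p, s, d) - lam_recursion(p, s, d)) < 1e-12
print("closed form (10) reproduces recursion (9) for s=6, d=4")
```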
Then it is easy to construct the spin-\(s\) representation of the inversion matrix given by:
\[I(a,b;x)=(ab)-2(a\hat{x})(b\hat{x}),\quad\hat{x}_{\mu}=\frac{x_{\mu}}{\sqrt{x^ {2}}} \tag{12}\]
To do that we just take the traceless part of the \(s\)-th power of the inversion matrix:
\[\mathcal{I}^{(s)}(a,b;x) = \big{(}I(a,c;x)\big{)}^{s}*_{c}^{s}\mathcal{E}^{(s)}(c,b)= \mathcal{E}^{(s)}(a,c)*_{c}^{s}(I(c,b;x))^{s} \tag{13}\] \[\Box_{a,b}\mathcal{I}^{(s)}(a,b;x) = 0 \tag{14}\]
The result is easy to handle
\[\mathcal{I}^{(s)}(a,b;x)=\sum_{p=0}^{s/2}\lambda_{p}\big{(}I(a,b;x)\big{)}^{s- 2p}(a^{2}b^{2})^{p}\,,\quad\lambda_{0}=1\,. \tag{15}\]
Then we search for the two-point function of conformal conserved currents of spin \(s\):
\[\mathcal{J}^{(s)}(a;x)=\mathcal{J}^{(s)}_{\mu_{1}\mu_{2}\dots\mu _{s}}(x)a^{\mu_{1}}a^{\mu_{2}}\dots a^{\mu_{s}} \tag{16}\] \[(\nabla\partial_{a})\mathcal{J}^{(s)}(a;x)=0\] (17) \[\Box_{a}\mathcal{J}^{(s)}(a;x)=0 \tag{18}\]
The natural proposal is
\[\left\langle\mathcal{J}^{(s)}(a;x_{1})\mathcal{J}^{(s)}(b;x_{2}) \right\rangle=\frac{C_{\mathcal{J}}}{(x_{12}^{2})^{\Delta_{(s)}}}\mathcal{I}^{(s )}(a,b;x_{12}) \tag{19}\]
This expression is traceless by construction due to (14). The scaling dimension \(\Delta_{(s)}\) can be obtained from the conservation condition (17) applied to (19):
\[0=(\nabla_{1}\partial_{a})\frac{\mathcal{I}^{(s)}(a,b;x_{12})}{ (x_{12}^{2})^{\Delta_{(s)}}}\] \[=\frac{2(\Delta_{(s)}-s-d+2)}{(x_{12}^{2})^{\Delta_{(s)}+1}} \sum_{k=0}^{s/2-1}\lambda_{k}(s-2k)\big{(}I(a,b;x_{12})\big{)}^{s-2k-1}(b\hat{ x}_{12})(a^{2}b^{2})^{k} \tag{20}\]
So we see that we should choose the standard value for the conformal dimension of the spin-\(s\) field:
\[\Delta_{(s)}=s+d-2 \tag{21}\]
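As an explicit illustration (a sketch with sample values of our own, not taken from the paper), one can check that for \(s=1\), \(d=3\) the divergence of the structure \(\mathcal{I}^{(1)}_{\mu\nu}(x)/(x^{2})^{\Delta_{(s)}}\) indeed vanishes identically once \(\Delta_{(s)}=s+d-2\) from (21) is used.

```python
import sympy as sp

d, s = 3, 1
x = sp.symbols(f"x0:{d}", real=True)
x2 = sum(xi**2 for xi in x)
Delta = s + d - 2                                  # eq. (21)

# spin-1 structure I_{mu nu}(x)/(x^2)^Delta, cf. (12) and (19)
def two_pt(mu, nu):
    delta = 1 if mu == nu else 0
    return (delta - 2*x[mu]*x[nu]/x2) / x2**Delta

for nu in range(d):
    div = sum(sp.diff(two_pt(mu, nu), x[mu]) for mu in range(d))
    assert sp.simplify(div) == 0                   # conservation (17) holds
print("the two-point function (19) is conserved for Delta_(s) = s + d - 2")
```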
Equivalently, we can say that the conservation of the two-point function (19) comes from the following relation:
\[\big{[}(\nabla_{x}\partial_{a})-2(d+s-2)\,\frac{(\hat{x}\partial_{a})}{\sqrt{x^{2}}}\big{]}\mathcal{I}^{(s)}(a,b;x)=0 \tag{22}\]
The interesting point here is that if we start with expression (19), taking the correct conformal dimension (21) but leaving the set of coefficients \(\lambda_{k}\) in expression (15) general and undetermined, then after imposing the conservation condition we arrive at the same recursion for the \(\lambda_{k}\) in (15) that we obtained before from the tracelessness condition, i.e. (9), or equivalently (14).
For the odd-spin case, the generalization is straightforward: we should just replace \(s/2\) in the summation limit by the integer part \([s/2]\), which means that the highest trace in this case produces a vector instead of a scalar.
## 3 Three-point function: the structure of the ansatz
For the construction of the three-point function we should investigate the structure, symmetry, and conservation conditions for the object \(t^{j_{1}j_{2}i_{3}}(X)\), which lives in three different spin representations but depends locally on one point in space-time (see [77] or Appendix A for details). New important restrictions on the correlators enter the game for conserved currents: the corresponding conservation conditions should be implemented independently, restricting the correlators further. We consider these in the next section.
First note that restricting our structure to the
\[t^{i_{1}i_{2}i_{3}}(X)=t^{i_{1}i_{2}i_{3}}(\hat{X}), \tag{10}\]
where
\[\hat{X}_{\mu}=\frac{X_{\mu}}{\sqrt{X^{2}}}\,,\qquad X_{12\mu}=-X_{21\mu}= \frac{x_{13\mu}}{x_{13}^{2}}-\frac{x_{23\mu}}{x_{23}^{2}}\,, \tag{11}\]
is a unit vector, we have \(q=0\) in (14) and (15)-(16). Taking into account that the nonsingular, tensorial part of the two-point function is given by the inversion matrix, which is a function of the same unit vector (17), we see from (13), (14) that the scaling behavior of the conformal correlators depends only on the dimensions of the fields.
Now we formulate the general three-point function for the case of correlation functions of three different higher-spin traceless currents. Rewriting (13) for different spins \(s_{1},s_{2},s_{3}\), we get:
\[\langle\mathcal{J}^{(s_{1})}(a;x_{1})\,\mathcal{J}^{(s_{2})}(b; x_{2})\,\mathcal{J}^{(s_{3})}(c;x_{3})\rangle=\] \[=\frac{\mathcal{I}^{(s_{1})}(a,a^{\prime};x_{13})\mathcal{I}^{(s _{2})}(b,b^{\prime};x_{23})\ast_{a^{\prime}}^{(s_{1})}\ast_{b^{\prime}}^{(s_{ 2})}t^{(s_{3})}(a^{\prime},b^{\prime};c;\hat{X}_{12})}{x_{12}^{\Delta_{(s_{1}) }+\Delta_{(s_{2})}-\Delta_{(s_{3})}}x_{23}^{\Delta_{(s_{2})}+\Delta_{(s_{3})} -\Delta_{(s_{1})}}x_{31}^{\Delta_{(s_{1})}+\Delta_{(s_{3})}-\Delta_{(s_{2})}}} \tag{12}\]
where for \(t^{(s_{3})}(a,b;c;\hat{X}_{12})\) we should propose a general ansatz. For that we note that this object is traceless in all three sets of symmetrized indices, therefore we can define it as a "kernel" object \(\tilde{t}^{(s_{3})}(a,b;c;\hat{X})\) enveloped by three traceless projectors
\[t^{(s_{3})}(\tilde{a},\tilde{b};\tilde{c};\hat{X})=\mathcal{E}^{(s_{1})}( \tilde{a},a)\ast_{a}\mathcal{E}^{(s_{2})}(\tilde{b},b)\ast_{b}\tilde{t}^{(s_{ 3})}(a,b;c;\hat{X})\ast_{c}\mathcal{E}^{(s_{3})}(c,\tilde{c}) \tag{13}\]
Then for \(\tilde{t}^{(s_{3})}(a,b;c;\hat{X})\) we propose the following ansatz:
\[\tilde{t}^{(s_{3})}(a,b;c;\hat{X})=I^{s_{3}}(c,c^{\prime};\hat{X})\ast_{c^{ \prime}}\tilde{H}(a,b,c^{\prime};\hat{X}) \tag{14}\]
where
\[\tilde{H}(a,b,c;\hat{X})=\sum_{\ell_{1},\ell_{2},\ell_{3}\in \mathcal{A}}\tilde{C}_{\ell_{1}\ell_{2}\ell_{3}}(\hat{X}a)^{\ell_{1}}(\hat{X} b)^{\ell_{2}}(\hat{X}c)^{\ell_{3}}(ab)^{\alpha}(bc)^{\beta}(ca)^{\gamma} \tag{15}\]
To define the scope of indices \(\mathcal{A}\) we note that the natural restrictions:
\[\alpha+\gamma+\ell_{1}=s_{1}\] \[\alpha+\beta+\ell_{2}=s_{2}\] \[\gamma+\beta+\ell_{3}=s_{3} \tag{16}\]
completely fix \(\alpha,\beta,\gamma\) for any choice of \(\ell_{1},\ell_{2},\ell_{3}\):
\[2\alpha=s_{1}+s_{2}-s_{3}+\ell_{3}-\ell_{1}-\ell_{2} \tag{11}\] \[2\beta=s_{2}+s_{3}-s_{1}+\ell_{1}-\ell_{2}-\ell_{3}\] (12) \[2\gamma=s_{1}+s_{3}-s_{2}+\ell_{2}-\ell_{1}-\ell_{3}\] (13) \[2(\alpha+\beta+\gamma)=\sum s_{i}-\sum\ell_{i} \tag{14}\]
So introducing:
\[n_{i}=s_{i}-\ell_{i},\quad i=1,2,3. \tag{15}\]
we have:
\[2\alpha=n_{1}+n_{2}-n_{3} \tag{16}\] \[2\beta=n_{2}+n_{3}-n_{1}\] (17) \[2\gamma=n_{1}+n_{3}-n_{2} \tag{18}\]
and therefore from the positivity of \(\alpha,\beta,\gamma\) we have the triangle inequalities:
\[n_{i}+n_{j}\geq n_{k},\quad i\neq j\neq k. \tag{19}\]
These inequalities completely fix the scope of \(\ell_{i}\) and define the number of nonzero independent parameters in our ansatz (10). For general conformal dimensions of our currents, these are the only restrictions on the number of structures. The short representations, corresponding to (partially-)conserved currents, will be discussed later.
We analyzed the inequalities given above for arbitrary triplets of spins and were able to guess the analytical expressions for the number of terms in the ansatz. Interestingly, this number is not a smooth function of the spins, which manifests itself in gaps when some spins coincide and in a different dependence on even and odd spins. In the following we will use the step function:
\[\eta(s)=\frac{1-(-1)^{s}}{2} \tag{20}\]
Then the number of allowed monomials in the case when all spins are equal, \(s_{1}=s_{2}=s_{3}=s\), is
\[N_{sss}=\frac{1}{24}(s+2-\eta(s))(s+3)(s+4+\eta(s)) \tag{21}\]
Then we turn to the case when two out of three spins are equal. There is a special point in this case: \(s_{1}=s_{2}=s,s_{3}=2s\). The number of structures in this case is:
\[N_{ss2s}=\frac{1}{6}(s+1)(s+2)(s+3) \tag{22}\]
There are two cases beyond this point:
* \(s_{3}>s=s_{1}=s_{2}\) \[N_{sss_{3}}^{s_{3}>s}=\frac{1}{6}(s+1)(s+2)(s+3)-\frac{1}{24}p(p+2)(p+4)-\frac{1} {8}(p+2)\eta(p)\] (3.20) where \(p=2s-s_{3}\), and
* \(s_{1}<s=s_{2}=s_{3}\) \[N_{s_{1}ss}^{s_{1}<s}=\frac{1}{8}[(s_{1}+2)^{2}-\eta(s_{1})](2s-s_{1}+2)\] (3.21)
The next observation from computer calculations concerns the case \(s_{1}+s_{2}=s_{3}\):
\[N_{s_{1}s_{2}s_{3}}^{s_{1}+s_{2}=s_{3}}=\frac{1}{2}(s_{1}+1)(s_{1}+2)(s_{2}- \frac{1}{3}(s_{1}-3)) \tag{3.22}\]
Finally, the last observation concerns the number of monomials for the case of general ordering \(s_{1}<s_{2}<s_{3}\):
\[N_{s_{1}s_{2}s_{3}}^{s_{1}<s_{2}<s_{3}}=N_{s_{1}s_{2}s_{3}}^{s_{1} +s_{2}=s_{3}}-\frac{1}{24}P(P+2)(2P+5)-\frac{1}{8}\eta(P) \tag{3.23}\] \[P=s_{1}+s_{2}-s_{3}\]
So we see that (3.18)-(3.23) completely cover the scope of indices \(\mathcal{A}\), and we have analytic formulas for the number of monomials in our ansatz with indices satisfying the triangle inequalities. One last question remains: what happens when the largest spin no longer satisfies the triangle inequality, i.e. \(s_{3}>s_{1}+s_{2}\)? The answer is that the number of monomials in this case stabilizes at the value of the last case satisfying the triangle inequality, \(N_{s_{1}s_{2}s_{3}}^{s_{1}+s_{2}<s_{3}}=N_{s_{1}s_{2}s_{3}}^{s_{1}+s_{2}=s_{3}}\).
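These counting formulas can be cross-checked by brute force. The sketch below is our own illustration (not the authors' program): it enumerates the triples allowed by (3.9)-(3.16), identifying triples related by permutations of the currents that carry equal spins, and reproduces the sample values \(N_{222}=5\), \(N_{224}=10\), \(N_{123}=8\).

```python
from itertools import product, permutations

def n_structures(s1, s2, s3):
    spins = (s1, s2, s3)
    # permutations of the three currents that preserve the spin assignment
    sym = [p for p in permutations(range(3)) if tuple(spins[i] for i in p) == spins]
    reps = set()
    for n in product(range(s1 + 1), range(s2 + 1), range(s3 + 1)):   # n_i = s_i - l_i
        if sum(n) % 2:                         # alpha, beta, gamma must be integers
            continue
        if any(n[i] > n[(i + 1) % 3] + n[(i + 2) % 3] for i in range(3)):
            continue                           # triangle inequalities (3.16)
        reps.add(min(tuple(n[i] for i in p) for p in sym))
    return len(reps)

print(n_structures(2, 2, 2))   # 5,  cf. (3.18)
print(n_structures(2, 2, 4))   # 10, cf. (3.19)
print(n_structures(1, 2, 3))   # 8,  cf. (3.22)-(3.23)
```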
To finalize this consideration, we present some geometric arguments for the cubic behaviour and the discontinuities at points with coincident spins. Let us rewrite our inequalities (3.16) in the form of equations by introducing three new nonnegative variables \(\lambda_{i}\)
\[n_{i}+n_{j}=n_{k}+\lambda_{k},\quad i\neq j\neq k \tag{3.24}\]
then summing any pair of these equations we come to the important relation:
\[\lambda_{i}+\lambda_{j}=2n_{k},\qquad i\neq j\neq k \tag{3.25}\] \[n_{i}\in[0,1,\ldots s_{i}] \tag{3.26}\]
So, replacing the r.h.s. with its maximal value, we see that the allowed indices are integers subject to the following restrictions:
* From (3.25) we see that the allowed \(\lambda_{i}\) are all even or all odd, so we have a separate even or odd lattice.
* These even or odd pairs are restricted by positivity and the inequality \[\lambda_{i}+\lambda_{j}\leq 2s_{k}\qquad i\neq j\neq k.\] (3.27)
The important point here is that the allowed points fill all the vertices of our lattice triangle in Figure 1. So the number of these points should be proportional to the area of this triangle. To get the general picture of the number of allowed monomials in our ansatz, we should extend our discrete triangle in the third direction, in the form of a triangular prism with its height along that direction. Then the full solution is the intersection of three different prisms constructed on the planes \((\lambda_{1},\lambda_{2})\), \((\lambda_{2},\lambda_{3})\) and \((\lambda_{3},\lambda_{1})\), with corresponding legs \(2s_{3},2s_{1},2s_{2}\) of the right-triangle bases (see Figure 2). This picture explains the non-smooth behavior of our formulas above as a consequence of the irregular intersection of these prisms for different spins \(s_{1},s_{2},s_{3}\).
Then we can understand that in coincident cases the geometrical figures we get as a result of intersections of our prisms are more symmetric. We illustrate this for the cases \(s_{1}=s_{2}\leq s_{3}\) and \(s_{1}\leq s_{2}=s_{3}\) (see Figure 3) and the most symmetric case \(s_{1}=s_{2}=s_{3}\) (Figure 4).
So we see that something like "phase transitions" happens in our formulas. On the other hand, this geometrical three-dimensional picture, together with the previous consideration of Figure 1, leads to the understanding that the full number of monomials allowed by the triangle inequalities is proportional to the volume of the intersection and therefore should be a _cubic function of the spins_.
In the end we note that all the examples considered in [77] can be exactly reproduced from our general formulas (3.4)-(3.6) with the corresponding choice of the values of the spins and solution of the triangle inequality. For illustration, we discuss the important case of coinciding spins in Appendix B.
Figure 4: Intersection of Prisms in the case \(s_{1}=s_{2}=s_{3}\)
Figure 2: Intersection of Prisms in the case \(s_{1}\leq s_{2}\leq s_{3}\)
## 4 Three-point function: conservation condition
Now we turn to the investigation of the conservation condition for the higher-spin three-point function. To formulate it for the higher-spin case, we first introduce a short notation for combinations of dimensions:
\[\Delta_{12} = \Delta_{(s_{1})}+\Delta_{(s_{2})}-\Delta_{(s_{3})} \tag{39}\] \[\Delta_{23} = \Delta_{(s_{2})}+\Delta_{(s_{3})}-\Delta_{(s_{1})}\] (40) \[\Delta_{31} = \Delta_{(s_{3})}+\Delta_{(s_{1})}-\Delta_{(s_{2})}\] (41) \[\Delta_{(s_{i})} = d+s_{i}-2,\qquad i=1,2,3 \tag{42}\]
The latter expressions are the dimensions of conserved currents. Then, referring the reader to the last part of Appendix A for the details of the derivation, we can write the conservation condition
\[(\nabla_{x_{1}}\partial_{a})\langle\mathcal{J}^{(s_{1})}(a;x_{1} )\,\mathcal{J}^{(s_{2})}(b;x_{2})\,\mathcal{J}^{(s_{3})}(c;x_{3})\rangle=0 \tag{43}\]
in the form:
\[(\nabla_{X}\partial_{a})t^{(s_{3})}(a,b;c;X) =\Delta_{12}\tfrac{(X\partial_{a})}{X^{2}}t^{(s_{3})}(a,b;c;X) \tag{44}\]
The latter is the equation for the structural tensor \(t^{(s_{3})}(a,b;c;X)\), which is completely equivalent to the conservation condition for the three-point function. Then, separating the traceless projector from the "kernel" part of (32) (see also Appendix A for details) and introducing the \(k\)-th trace of our ansatz:
\[\square_{a}^{k}\tilde{t}^{(s)}(a,b,c;\hat{X}) =\sum_{\begin{subarray}{c}\ell_{1}\in[2k,\dots s_{1}];\ell_{2}, \ell_{3}\in[0,\dots s_{2},s_{3}]\\ \{\ell_{i}\}\in\mathcal{A}\end{subarray}}T^{(k)}_{\ell_{1},\ell_{2},\ell_{3} }\begin{bmatrix}\ell_{1}-2k,\ell_{2},\ell_{3}\\ \alpha;\beta,\gamma\end{bmatrix} \tag{45}\]
where we shortened the formulas using the notation:
\[\begin{bmatrix}\ell_{1},\ell_{2},\ell_{3}\\ \alpha;\beta,\gamma\end{bmatrix} =(\hat{X}a)^{\ell_{1}}(\hat{X}b)^{\ell_{2}}(\hat{X}c)^{\ell_{3} }(ab)^{\alpha}I^{\beta}(b,c;\hat{X})I^{\gamma}(c,a;\hat{X}) \tag{46}\]
and \(T^{(k)}_{\ell_{1},\ell_{2},\ell_{3}}\) is the \(k\)-th trace map of \(\tilde{C}_{\ell_{1}\ell_{2}\ell_{3}}\) from (29). In this way, using the important formula (A.30) and expression (45), after long manipulations we write the conservation condition
(4.6) in terms of equations on \(T^{(k)}_{\ell_{1},\ell_{2},\ell_{3}}\):
\[(\ell_{1}-2k)(s_{3}-s_{2})T^{(k)}_{\ell_{1},\ell_{2},\ell_{3}}\] \[+(\alpha+1)(2\ell_{3}-2k-d-2s_{2}+2)T^{(k)}_{\ell_{1}-1,\ell_{2}-1,\ell_{3}}+(\gamma+1)(2\ell_{2}-2k-d-2s_{3}+2)T^{(k)}_{\ell_{1}-1,\ell_{2},\ell _{3}-1}\] \[+(\alpha+1)(\ell_{3}+1)T^{(k)}_{\ell_{1}-1,\ell_{2},\ell_{3}+1}+( \gamma+1)(\ell_{2}+1)T^{(k)}_{\ell_{1}-1,\ell_{2}+1,\ell_{3}}\] \[+\frac{1}{d+2s_{1}-2k-4}\left[2(\ell_{2}-\ell_{3})T^{(k+1)}_{\ell _{1},\ell_{2},\ell_{3}}+2(\beta+1)\big{(}T^{(k+1)}_{\ell_{1}+1,\ell_{2},\ell_{3 }-1}+T^{(k+1)}_{\ell_{1}+1,\ell_{2}-1,\ell_{3}}\big{)}\right.\] \[-\left.(\ell_{2}+1)T^{(k+1)}_{\ell_{1}+1,\ell_{2}+1,\ell_{3}}-( \ell_{3}+1)T^{(k+1)}_{\ell_{1}+1,\ell_{2},\ell_{3}+1}\right]=0 \tag{4.9}\]
where the traces themselves satisfy the following recursion relation:
\[T^{(k+1)}_{\ell_{1},\ell_{2},\ell_{3}}=(\ell_{1}-2k)(\ell_{1}-2k -1)T^{(k)}_{\ell_{1},\ell_{2},\ell_{3}}+2(\alpha+1)(\gamma+1)T^{(k)}_{\ell_{1} -2,\ell_{2},\ell_{3}}\] \[+2(\alpha+1)(\ell_{1}-2k-1)T^{(k)}_{\ell_{1}-1,\ell_{2}-1,\ell_{3 }}-2(\gamma+1)(\ell_{1}-2k-1)T^{(k)}_{\ell_{1}-1,\ell_{2},\ell_{3}-1} \tag{4.10}\]
That is not the whole story. The bad news here is that the equation (4.9) should be supplemented by a conservation condition for the second current in the correlation function when the latter is also conserved. This can be done in (4.6) by replacements of \(s_{1}\leftrightarrow s_{2}\) and \(x_{1}\leftrightarrow x_{2}\) and \(a_{\mu}\leftrightarrow b_{\mu}\), or directly in (4.9), (4.10) replacing \(s_{1}\leftrightarrow s_{2},\ell_{1}\leftrightarrow\ell_{2}\).
The good news here is that we do not need to solve the recursion equations (4.9) for all \(T^{(k)}_{\ell_{1},\ell_{2},\ell_{3}}\,(k=0,1\ldots[s_{1}/2])\). In fact, we need to solve only the first conservation condition, for \(k=0\); all others will be satisfied automatically because they are higher (\(k\)-th) traces of the first one.
Using the helpful ansatz-normalization:
\[T^{(0)}_{\ell_{1},\ell_{2},\ell_{3}}=\frac{(-1)^{\ell_{3}}}{ \alpha!\beta!\gamma!}C_{\ell_{1},\ell_{2},\ell_{3}} \tag{4.11}\] \[T^{(1)}_{\ell_{1},\ell_{2},\ell_{3}}=\frac{(-1)^{\ell_{3}}}{ \alpha!\beta!\gamma!}\Big{[}\ell_{1}(\ell_{1}-1)C_{\ell_{1},\ell_{2},\ell_{3}} +2\beta C_{\ell_{1}-2,\ell_{2},\ell_{3}}\] \[+2(\ell_{1}-1)C_{\ell_{1}-1,\ell_{2}-1,\ell_{3}}+2(\ell_{1}-1)C_{ \ell_{1}-1,\ell_{2},\ell_{3}-1}\Big{]}=\frac{(-1)^{\ell_{3}}}{\alpha!\beta! \gamma!}T_{\ell_{1},\ell_{2},\ell_{3}} \tag{4.12}\]
we obtain effective conservation condition:
\[\ell_{1}(s_{3}-s_{2})C_{\ell_{1},\ell_{2},\ell_{3}}\] \[+(2\ell_{3}-d-2s_{2}+2)C_{\ell_{1}-1,\ell_{2}-1,\ell_{3}}-(2\ell_ {2}-d-2s_{3}+2)C_{\ell_{1}-1,\ell_{2},\ell_{3}-1}\] \[+(\ell_{2}+1)C_{\ell_{1}-1,\ell_{2}+1,\ell_{3}}-(\ell_{3}+1)C_{ \ell_{1}-1,\ell_{2},\ell_{3}+1}\] \[+\frac{1}{d+2s_{1}-4}\left[2(\ell_{2}-\ell_{3})T_{\ell_{1},\ell_{2 },\ell_{3}}+2(\beta+1)\big{(}T_{\ell_{1}+1,\ell_{2},\ell_{3}-1}+T_{\ell_{1}+1, \ell_{2}-1,\ell_{3}}\big{)}\right.\] \[-\left.(\ell_{2}+1)T_{\ell_{1}+1,\ell_{2}+1,\ell_{3}}-(\ell_{3}+1 )T_{\ell_{1}+1,\ell_{2},\ell_{3}+1}\right]=0 \tag{4.13}\]
which we should amend with the same type of equation but now for \(s_{2}\), if the second current is also conserved:
\[\ell_{2}(s_{3}-s_{1})C_{\ell_{1},\ell_{2},\ell_{3}}\] \[+(2\ell_{3}-d-2s_{1}+2)C_{\ell_{1}-1,\ell_{2}-1,\ell_{3}}-(2\ell_ {1}-d-2s_{3}+2)C_{\ell_{1},\ell_{2}-1,\ell_{3}-1}\] \[+(\ell_{1}+1)C_{\ell_{1}+1,\ell_{2}-1,\ell_{3}}-(\ell_{3}+1)C_{ \ell_{1},\ell_{2}-1,\ell_{3}+1}\] \[+\frac{1}{d+2s_{2}-4}\left[2(\ell_{1}-\ell_{3})\bar{T}_{\ell_{1}, \ell_{2},\ell_{3}}+2(\gamma+1)\big{(}\bar{T}_{\ell_{1},\ell_{2}+1,\ell_{3}-1}+ \bar{T}_{\ell_{1}-1,\ell_{2}+1,\ell_{3}}\big{)}\right.\] \[-\left.(\ell_{1}+1)\bar{T}_{\ell_{1}+1,\ell_{2}+1,\ell_{3}}-( \ell_{3}+1)\bar{T}_{\ell_{1},\ell_{2}+1,\ell_{3}+1}\right]=0 \tag{4.14}\]
where \(T_{\ell_{1},\ell_{2},\ell_{3}},\bar{T}_{\ell_{1},\ell_{2},\ell_{3}}\) are corresponding trace maps:
\[T_{\ell_{1},\ell_{2},\ell_{3}}=\Big{[}\ell_{1}(\ell_{1}-1)C_{ \ell_{1},\ell_{2},\ell_{3}}+2\beta C_{\ell_{1}-2,\ell_{2},\ell_{3}}\] \[+2(\ell_{1}-1)C_{\ell_{1}-1,\ell_{2}-1,\ell_{3}}+2(\ell_{1}-1)C_{ \ell_{1}-1,\ell_{2},\ell_{3}-1}\Big{]} \tag{4.15}\] \[\bar{T}_{\ell_{1},\ell_{2},\ell_{3}}=\Big{[}\ell_{2}(\ell_{2}-1)C_ {\ell_{1},\ell_{2},\ell_{3}}+2\gamma C_{\ell_{1},\ell_{2}-2,\ell_{3}}\] \[+2(\ell_{2}-1)C_{\ell_{1}-1,\ell_{2}-1,\ell_{3}}+2(\ell_{2}-1)C_{ \ell_{1},\ell_{2}-1,\ell_{3}-1}\Big{]} \tag{4.16}\]
We do not yet have a full solution for this system of equations. But we analyzed these equations using a computer program and investigated the rank of this linear system for different triplets of spins, using our ansatz (3.4)-(3.6) and the normalization (4.11), (4.12). The unknowns \(C_{\ell_{1},\ell_{2},\ell_{3}}\) of this linear system are labelled by the indices satisfying the triangle inequalities described in the previous section. Computing the rank of the corresponding system for multiple cases, we obtain a universal answer: the number of independent solutions of the system (4.13), (4.14) depends only on the minimal spin:
* _The number of independent parameters of the three-point function (or linearly independent correlators) of conserved currents with spins \(s_{1},s_{2},s_{3}\) is equal to_ \[N_{s_{1},s_{2},s_{3}}=\min\{s_{1},s_{2},s_{3}\}+1\,.\]
We refer to Appendix B for some details on the special case of coincident spins.
## 5 Conservation condition as a differential equation
In this section, we first construct differential equations for the correlators of conserved currents in the case of coincident spins and then generalize them to the cases with different spins. First, we transform our recursion equation (B.20) to a differential
equation, multiplying it by the powers of formal variables \(x^{\ell_{1}-1}y^{\ell_{2}}z^{\ell_{3}}\) and summing over all possible values of \(\ell_{i}\):
\[D(\partial_{x},\partial_{y},\partial_{z};C(x,y,z))=\sum_{\{\ell_{i}\}}D_{\ell_{1 }\ell_{2}\ell_{3}}x^{\ell_{1}-1}y^{\ell_{2}}z^{\ell_{3}}=0 \tag{5.1}\]
In other words, we should obtain a differential equation for the functions
\[C(x,y,z)=\sum_{\{\ell_{i}\}}C_{\ell_{1}\ell_{2}\ell_{3}}x^{\ell_{1}}y^{\ell_{2} }z^{\ell_{3}} \tag{5.2}\]
and
\[T(x,y,z)=\sum_{\{\ell_{i}\}}T_{\ell_{1}\ell_{2}\ell_{3}}x^{\ell_{1}-2}y^{\ell_ {2}}z^{\ell_{3}} \tag{5.3}\]
In all these equations \(\{\ell_{i}\}\) denotes the values of the indices \(\ell_{i},\,i=1,2,3\), satisfying the triangle inequality
\[s+\ell_{i}\geq\ell_{j}+\ell_{k},\quad i\neq j\neq k \tag{5.4}\]
Comparing (5.3) with (B.21) we obtain:
\[T(x,y,z)=[\partial_{x}^{2}+(x+2y+2z)\partial_{x}-y\partial_{y}-z\partial_{z} +s+2]C(x,y,z) \tag{5.5}\]
Then we can obtain the differential-equation version of our conservation equation (B.20):
\[D(\partial_{x},\partial_{y},\partial_{z};C(x,y,z))\] \[=\Big{[}(\Delta_{s}+s)(z-y)+\frac{1}{2}(s+1-4yz+x\partial_{x}-y \partial_{y}-z\partial_{z})(\partial_{y}-\partial_{z})\Big{]}C(x,y,z)\] \[+\frac{1}{d+2s-4}\Big{[}(2x+y+z+\frac{1}{2}[\partial_{y}+\partial _{z}])(y\partial_{y}-z\partial_{z})\] \[+(s-x\partial_{x})(y-z-\frac{1}{2}[\partial_{y}-\partial_{z}]) \Big{]}T(x,y,z)=0 \tag{5.6}\]
We see that our differential operator is antisymmetric in \(z\) and \(y\), although the functions \(C(x,y,z)\) and \(T(x,y,z)\) are symmetric. A generalization to different spins is straightforward: instead of (5.6) we have an equation obtained by the same scheme from the recursion equation (4.13):
\[D^{(s_{1},s_{2},s_{3})}(\partial;C(x,y,z))=\big{[}(s_{3}-s_{2}) \partial_{x}+(\Delta_{s_{3}}+s_{3})z-(\Delta_{s_{2}}+s_{2})y\big{]}C(x,y,z)\] \[+\frac{1}{2}(s_{2}+s_{3}-s_{1}+1-4yz+x\partial_{x}-y\partial_{y}- z\partial_{z})(\partial_{y}-\partial_{z})C(x,y,z)\] \[+\frac{1}{d+2s_{1}-4}\Big{[}(2x+y+z+\frac{1}{2}[\partial_{y}+ \partial_{z}])(y\partial_{y}-z\partial_{z})\] \[+(s_{1}-x\partial_{x})(y-z-\frac{1}{2}[\partial_{y}-\partial_{z}] )+\frac{1}{2}(s_{3}-s_{2})[y+z+\partial_{y}+\partial_{z}]\Big{]}T(x,y,z)=0 \tag{5.7}\]
where \(T(x,y,z)\) in this case is
\[T(x,y,z)=[\partial_{x}^{2}+(x+2y+2z)\partial_{x}-y\partial_{y}-z\partial_{z}+s_{2 }+s_{3}-s_{1}+2]C(x,y,z) \tag{5.8}\]
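As a consistency check (an illustrative sketch of our own, with randomly chosen coefficients and sample spins), one can verify that the differential operator in (5.8) reproduces the trace map (4.15) coefficient by coefficient, with \(2\beta=s_{2}+s_{3}-s_{1}+\ell_{1}-\ell_{2}-\ell_{3}\).

```python
import random
import sympy as sp

x, y, z = sp.symbols("x y z")
s1, s2, s3 = 3, 4, 5                       # sample spins
random.seed(0)

# arbitrary test coefficients C_{l1 l2 l3} on a small lattice of monomials
C = {(l1, l2, l3): random.randint(1, 9)
     for l1 in range(4) for l2 in range(4) for l3 in range(4)}
Cpoly = sum(c * x**l1 * y**l2 * z**l3 for (l1, l2, l3), c in C.items())

# generating-function form, eq. (5.8)
Tpoly = sp.expand(sp.diff(Cpoly, x, 2) + (x + 2*y + 2*z)*sp.diff(Cpoly, x)
                  - y*sp.diff(Cpoly, y) - z*sp.diff(Cpoly, z)
                  + (s2 + s3 - s1 + 2)*Cpoly)

# coefficient form, eq. (4.15)
def T_coef(l1, l2, l3):
    two_beta = s2 + s3 - s1 + l1 - l2 - l3
    g = lambda k: C.get(k, 0)
    return (l1*(l1 - 1)*g((l1, l2, l3)) + two_beta*g((l1 - 2, l2, l3))
            + 2*(l1 - 1)*g((l1 - 1, l2 - 1, l3)) + 2*(l1 - 1)*g((l1 - 1, l2, l3 - 1)))

for l1 in range(2, 6):
    for l2 in range(5):
        for l3 in range(5):
            assert Tpoly.coeff(x, l1 - 2).coeff(y, l2).coeff(z, l3) == T_coef(l1, l2, l3)
print("(5.8) reproduces the trace map (4.15)")
```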
The equation (5.7) should be supplemented by a conservation condition for the second current, when the latter is conserved. This can be obtained from (5.7) and (5.8) by replacements \(s_{1}\leftrightarrow s_{2}\) and \(x\leftrightarrow y\). The solution to these general equations for the correlators of conserved currents will be addressed in an upcoming work.
## 6 Conclusions
We have established a general ansatz for the tensorial structure of the conformal three-point function for general spins and general dimensions. This allows us to calculate the exact numbers of conformal structures corresponding to all cases of AdS dual bulk interaction vertices. We present explicit formulas for three-point functions of conformal correlators of three non-conserved currents, corresponding to massive fields in the bulk. The number of structures for non-conserved currents is equivalent to the number of vertices with massive fields in the bulk, counting the number of contractions of three symmetric fields of ranks \(s_{1},s_{2},s_{3}\) with each other and derivatives acting on them, with a condition that the traces and divergences are excluded, and the derivatives do not contract between themselves (this latter condition, stemming from field-redefinition freedom, limits the possible Lorentz scalars to a finite number: see, e.g., [25; 48; 104]).
The special cases of (partially) conserved currents, corresponding to the short representations or (partially-)massless fields in the bulk, will be studied elsewhere: the extra constraints on the correlators stemming from the conservation of the currents imply non-trivial differential equations, for which the general solutions will be treated in future work. However, we worked out and further studied the structure of the constraints in the case of the conserved currents, both as differential equations and as recursion relations on the coefficients of the ansatz. The latter form allowed us to tackle a large number of cases numerically. Our results confirm the expectation from earlier works [89; 90; 92; 94] about the number of structures in the correlators of conserved currents, which, in turn, coincides with the number of massless vertices in the bulk [25; 38]. We hope to solve analytically the conservation conditions to fully classify the correlators of (partially-)conserved currents and make a match with the \(AdS\) vertices involving (partially-)massless fields [53]. The case of all massive fields is fully covered by our ansatz in one-to-one correspondence with the vertices in the bulk [25; 48].
The correlation functions of three conserved currents were derived earlier using different approaches in [92; 94]. In even dimensions, they were described by the correlators in free theories of so-called singletons -- conformal fields describing the short
conformal representations described by the (self-dual) multi-forms, corresponding to rectangular Young diagrams of the half-maximal height of the massless little group in even dimensions (see, e.g., [119]). In four dimensions, these are the spin-s massless fields, which are representations of the conformal algebra \(SO(4,2)\) despite the lack of conformal symmetry in their standard off-shell descriptions (see, e.g., [120; 121]).§ The situation is different in odd dimensions [94], where the singletons are missing or, presumably, correspond to some generalized free field theories lacking locality: free field equations containing the square root of the d'Alembertian operator (see, e.g., [123]).
Footnote §: Explicit descriptions of the singleton theories in terms of covariant Lagrangians are so far only well-studied for the spin-one case (see [122] for a review).
The formulation [77] and our generalization for higher spins are also suitable for the investigation of the singular part of the correlation function to get a route to the trace anomaly structure in the higher-spin case. We leave this to future investigations.
## Acknowledgements
R. M. would like to thank Stefan Theisen, Rubik Poghossian and Aleksey Isaev for many valuable discussions during the long period of preparation of this paper, and expresses special gratitude to Ruben Mkrtchyan for productive and result-oriented discussions. R. M. and M. K. were supported by the Science Committee of RA, in the frames of the research project # 21AG-1C060. K. M. was supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant No. 844265, UKRI and STFC Consolidated Grant ST/T000791/1.
## Appendix A Short review of Osborn-Petkou formulation and adaptation to higher spin case
In this appendix, we present a short review of useful formulas and constructions proposed in article [77] (see also [78; 79]).
### Conformal Transformations
The conformal transformations (combination of translation, rotation, scale transformation, and special conformal boosts) are diffeomorphisms preserving metric up to a local scale factor:
\[x_{\mu}\to x^{\prime}_{\mu}(x),\quad g_{\mu\nu}dx^{\prime\mu}dx^{\prime\nu} \rightarrow\Omega(x)^{-2}g_{\mu\nu}dx^{\mu}dx^{\nu} \tag{10}\]
Combining this transformation with local dilatation we arrive at local rotations:
\[R_{\mu}^{\;\alpha}(x)=\Omega(x)\frac{\partial x_{\mu}^{\prime}}{\partial x_{ \alpha}},\quad R_{\mu}^{\;\alpha}(x)R_{\alpha}^{\;\nu}(x)=\delta_{\mu}^{\nu}\,. \tag{10}\]
Adding inversion to this picture :
\[x_{\mu}^{\prime}=\frac{x_{\mu}}{x^{2}},\quad\Omega(x)=x^{2},\quad R_{\mu\nu}(x )=I_{\mu\nu}(x)=\delta_{\mu\nu}-2\frac{x_{\mu}x_{\nu}}{x^{2}} \tag{11}\]
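A one-line check of (A.2)-(A.3) (our own illustration; the dimension \(d=3\) is a sample choice): the inversion matrix \(I_{\mu\nu}(x)\) squares to the identity.

```python
import sympy as sp

d = 3
x = sp.Matrix(sp.symbols(f"x0:{d}", real=True))
x2 = (x.T * x)[0]

I = sp.eye(d) - 2 * (x * x.T) / x2          # I_{mu nu} = delta_{mu nu} - 2 x_mu x_nu / x^2
assert sp.simplify(I * I - sp.eye(d)) == sp.zeros(d, d)
print("I(x) is an involution: R R = 1, cf. (A.2)")
```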
we see that the rotation operator in this case is the inversion matrix \(I_{\mu\nu}\). A combination of inversion, rotation, and translation can describe any conformal transformation.
We will show below how conformal symmetry fixes the form of the two- and three-point correlation functions for arbitrary quasi-primary fields \(\mathcal{O}^{i}(x)\), where \(i\) is an index labelling the corresponding representation of the rotation group \(O(d)\) (see [77] for details). The symmetric representation of the conformal group is defined by two quantum numbers: the spin and the conformal dimension. The two-point function of two operators is fixed by conformal symmetry up to an overall constant:
\[<\mathcal{O}^{i}(x_{1})\bar{\mathcal{O}}_{j}(x_{2})>=\frac{C_{\mathcal{O}}}{ (x_{12}^{2})^{\eta}}D_{j}^{i}(I(x_{12})),\quad x_{12\mu}=x_{1\mu}-x_{2\mu} \tag{12}\]
Here \(\bar{\mathcal{O}}_{j}(x)\) is the conjugate representation of \(\mathcal{O}^{i}(x)\) with the same conformal dimension. Another important object here is \(D(I(x_{12}))\), the corresponding representation of the inversion matrix \(I_{\mu\nu}(x)=\delta_{\mu\nu}-2x_{\mu}x_{\nu}/x^{2}\).
#### Three point function
Since conformal transformations map any three points into any other three, the three-point function is also essentially fixed in general dimension \(d\). Our discussion for arbitrary representations of the fields \(\mathcal{O}_{1},\mathcal{O}_{2},\mathcal{O}_{3}\) with dimensions \(\eta_{1},\eta_{2},\eta_{3}\) is based on the following formula from [77]
\[\langle\mathcal{O}_{1}^{i_{1}}(x_{1})\,\mathcal{O}_{2}^{i_{2}}(x _{2})\,\mathcal{O}_{3}^{i_{3}}(x_{3})\rangle =\frac{1}{(x_{12}^{2})^{\delta_{12}}\,(x_{23}^{2})^{\delta_{23}} \,(x_{31}^{2})^{\delta_{31}}}\] \[\times D_{1\;j_{1}}^{i_{1}}(I(x_{13}))D_{2\;j_{2}}^{i_{2}}(I(x_{23 }))\,t^{j_{1}j_{2}i_{3}}(X_{12})\,, \tag{13}\]
where \(t^{i_{1}i_{2}i_{3}}(X)\) is a tensor living, in the general case, in three different spin representations. This object transforms properly with respect to local rotations and dilatations:
\[D_{1\;j_{1}}^{i_{1}}(R)D_{2\;j_{2}}^{i_{2}}(R)D_{3\;j_{3}}^{i_{3} }(R)\,t^{j_{1}j_{2}j_{3}}(X)=t^{i_{1}i_{2}i_{3}}(RX)\text{ for all }R\in O(d)\,\] \[t^{i_{1}i_{2}i_{3}}(\lambda X)=\lambda^{q}t^{i_{1}i_{2}i_{3}}(X) \tag{14}\]
\[X_{12\mu}=-X_{21\mu}=\frac{x_{13\mu}}{x_{13}^{\,2}}-\frac{x_{23\mu}}{x_{23}^{\,2}} \,,\quad X_{12}^{\,2}=\frac{x_{12}^{\,2}}{x_{13}^{\,2}x_{23}^{\,2}}\] (A.7)
The scaling dimensions of the fields should satisfy the following expressions
\[\delta_{12} =\frac{1}{2}(\eta_{1}+\eta_{2}-\eta_{3}+q)\,,\] (A.8) \[\delta_{23} =\frac{1}{2}(\eta_{2}+\eta_{3}-\eta_{1}-q)\,,\] (A.9) \[\delta_{31} =\frac{1}{2}(\eta_{3}+\eta_{1}-\eta_{2}-q)\,.\] (A.10)
So we see that for the construction of the two-point function of spin-\(s\) currents, we should construct the representation \(D(I(x_{12}))\) of the inversion matrix, where:
\[I_{\mu\nu}(x_{12})=\delta_{\mu\nu}-2\hat{x}_{12\mu}\hat{x}_{12\nu},\quad\hat{x }_{12}=\frac{x_{12}}{\sqrt{x_{12}^{2}}}\] (A.11)
which is more or less obvious and known. Another important property of this formulation is that in the three-point function we can rearrange all three representations, thanks to the following important properties [77] of the structural function (for \(q=0\)):
\[D_{1\,\,j_{1}}^{\,i_{1}}(I(\hat{x}_{13}))D_{2\,\,j_{2}}^{\,i_{2}} (I(\hat{x}_{23}))\,t^{j_{1}j_{2}i_{3}}(\hat{X}_{12})\] \[=D_{1\,\,j_{1}}^{\,i_{1}}(I(\hat{x}_{12}))D_{3\,\,j_{3}}^{\,i_{3} }(I(\hat{x}_{32}))\,\tilde{t}^{\,j_{1}i_{2}j_{3}}(\hat{X}_{13})=D_{2\,\,j_{2}}^ {\,i_{2}}(I(\hat{x}_{21}))D_{3\,\,j_{3}}^{\,i_{3}}(I(\hat{x}_{31}))\,\hat{t}^{ \,i_{1}j_{2}j_{3}}(\hat{X}_{32})\,,\] \[\tilde{t}^{\,i_{1}i_{2}i_{3}}(\hat{X})=D_{1\,\,j_{1}}^{\,i_{1}}(I (\hat{X}))\,t^{j_{1}i_{2}i_{3}}(\hat{X}),\quad\hat{t}^{\,i_{1}i_{2}i_{3}}( \hat{X})=D_{2\,\,j_{2}}^{\,i_{2}}(I(\hat{X}))\,t^{i_{1}j_{2}i_{3}}(\hat{X})\,.\] (A.12)
It then follows that, when all three representations are the same (i.e. currents of the same spin) and the three-point function is symmetric in all fields \(\mathcal{O}_{1},\,\mathcal{O}_{2},\,\mathcal{O}_{3}\):
\[t^{i_{2}i_{1}i_{3}}(X)=t^{i_{1}i_{2}i_{3}}(-X)\,,\quad D^{\,i_{1}} _{\,\,j_{1}}(I(X))\,t^{j_{1}i_{2}i_{3}}(X)=t^{i_{3}i_{1}i_{2}}(-X).\] (A.13)
The first relation contains \(-X\) on the r.h.s. because this object depends on the space-time coordinates through the difference between the inversions of the first and second coordinates around the third point (A.7), and when we exchange the first two operators we also exchange \(x_{1}\) with \(x_{2}\). We consider the importance of the minus sign in the second relation in detail when investigating our ansatz for \(t^{i_{1}i_{2}i_{3}}(X)\). Then, for irreducible representations, for which the two-point functions are fixed as (A.4), we see consistent scaling behavior and covariance with respect to inversions, rotations, and translations. All this means that \(D(I(x_{12}))\) behaves as a parallel transport between two space-time points for local conformal rotations. This fact is very important for understanding the analogous formula for three-point functions. The important property
of conformal transformations is that one can map any three points into any other three points. This leads to an essentially (almost) unique three-point function in general dimension \(d\). The general form of the three-point function is considered in [77] and presented here in (A.5). The essential point of this consideration is that the three-point function is described through the homogeneous tensor \(t^{i_{1}i_{2}i_{3}}(X)\) satisfying (A.6) and (A.12). More details can be found in [77], [78] and [79]; here we just note that if we restrict ourselves to a polynomial function of the unit vector:
\[t^{i_{1}i_{2}i_{3}}(X)=t^{i_{1}i_{2}i_{3}}(\hat{X}),\] (A.14)
where
\[\hat{X}_{\mu}=\frac{X_{\mu}}{\sqrt{X^{2}}},\] (A.15)
then in (A.8)-(A.10) we have
\[q=0\] (A.16)
and instead of
\[I_{\mu\alpha}(x_{23})\hat{X}_{12\,\alpha}=\frac{x_{12}^{2}}{x_{13}^{2}}\hat{X} _{13\,\mu}\,,\quad I_{\mu\alpha}(x_{13})\hat{X}_{12\,\alpha}=\frac{x_{12}^{2}} {x_{23}^{2}}\hat{X}_{32\,\mu}\,,\] (A.17)
we have
\[I_{\mu\alpha}(x_{23})\hat{X}_{12\,\alpha}=\hat{X}_{13\,\mu}\,, \quad I_{\mu\alpha}(x_{13})\hat{X}_{12\,\alpha}=\hat{X}_{32\,\mu}\,,\] (A.18)
and we see that the inversion operators \(I_{\mu\alpha}(x_{ij}),\,i\neq j,\,i,j=1,2,3\), really rotate the unit inverted vectors \(\hat{X}_{ij}\) into one another. This leads to the familiar expression for the three-point function:
\[\langle\mathcal{O}_{1}^{i_{1}}(x_{1})\,\mathcal{O}_{2}^{i_{2}}(x _{2})\,\mathcal{O}_{3}^{i_{3}}(x_{3})\rangle=\frac{1}{(x_{12}^{2})^{\delta_{1 2}}\,(x_{23}^{2})^{\delta_{23}}\,(x_{31}^{2})^{\delta_{31}}}\] \[\times D_{1\,\,j_{1}}^{i_{1}}(I(x_{13}))D_{2\,\,j_{2}}^{i_{2}}(I( x_{23}))\,t^{j_{1}j_{2}i_{3}}(\hat{X}_{12})\,,\]
where \(t^{i_{1}i_{2}i_{3}}(\hat{X})\) is a homogeneous and dimensionless tensor satisfying
\[D_{1\,\,j_{1}}^{i_{1}}(R)D_{2\,\,j_{2}}^{i_{2}}(R)D_{3\,\,j_{3}} ^{i_{3}}(R)\,t^{j_{1}j_{2}j_{3}}(\hat{X})=t^{i_{1}i_{2}i_{3}}(R\hat{X})\text{ for all }R\,\] (A.20) \[t^{i_{1}i_{2}i_{3}}(\lambda\hat{X})=t^{i_{1}i_{2}i_{3}}(\hat{X})\] (A.21)
\[\hat{X}_{12\mu}=-\hat{X}_{21\mu}=\sqrt{\frac{x_{13}^{2}x_{23}^{2}}{x_{12}^{2}}} \left[\frac{x_{13\mu}}{x_{13}^{2}}-\frac{x_{23\mu}}{x_{23}^{2}}\right]\] (A.22)
The scaling dimensions of the fields for \(q=0\) are
\[\delta_{12}= \frac{1}{2}(\eta_{1}+\eta_{2}-\eta_{3})\,,\] \[\delta_{23}= \frac{1}{2}(\eta_{2}+\eta_{3}-\eta_{1})\,,\] \[\delta_{31}= \frac{1}{2}(\eta_{3}+\eta_{1}-\eta_{2})\,.\] (A.23)
#### Conservation condition
For the derivation of the conservation conditions, we note that:
\[(\nabla_{x_{1}}\partial_{a})\langle{\cal J}^{(s_{1})}(a;x_{1}) \,{\cal J}^{(s_{2})}(b;x_{2})\,{\cal J}^{(s_{3})}(c;x_{3})\rangle\] \[=\nabla_{x_{1}^{\mu}}\left[\frac{1}{x_{12}^{\Delta_{12}}x_{23}^{ \Delta_{23}}x_{31}^{\Delta_{31}}}\partial_{a^{\mu}}{\cal I}^{(s_{1})}(a,a^{ \prime};x_{13})\right]{\cal I}^{(s_{2})}(b,b^{\prime};x_{23})*^{(s_{1})}_{a^{ \prime}}*^{(s_{2})}_{b^{\prime}}t^{(s_{3})}(a^{\prime},b^{\prime};c;\hat{X}_{12})\] \[+\frac{1}{x_{12}^{\Delta_{12}}x_{23}^{\Delta_{23}}x_{31}^{\Delta_ {31}}}\partial_{a^{\mu}}{\cal I}^{(s_{1})}(a,a^{\prime};x_{13}){\cal I}^{(s_{ 2})}(b,b^{\prime};x_{23})*^{(s_{1})}_{a^{\prime}}*^{(s_{2})}_{b^{\prime}}\nabla _{x_{1}^{\mu}}t^{(s_{3})}(a^{\prime},b^{\prime};c;\hat{X}_{12})\] (A.24)
Using the following relations:
\[\nabla_{x_{1}^{\mu}}\frac{1}{x_{12}^{\Delta_{12}}x_{31}^{\Delta_ {31}}}=-\frac{1}{x_{12}^{\Delta_{12}}x_{31}^{\Delta_{31}}}\left[\frac{\Delta_ {12}x_{12\mu}}{x_{12}^{2}}+\frac{\Delta_{31}x_{13\mu}}{x_{13}^{2}}\right]\] \[=-\frac{1}{x_{12}^{\Delta_{12}}x_{31}^{\Delta_{31}}}\left[\Delta_ {12}X_{32\mu}+(\Delta_{12}+\Delta_{31})\frac{x_{13\mu}}{x_{13}^{2}}\right]\] \[=-\frac{1}{x_{12}^{\Delta_{12}}x_{31}^{\Delta_{31}+2}}\left[\Delta _{12}I_{\mu\alpha}(x_{13})\frac{X_{12}^{\alpha}}{X_{12}^{2}}+(\Delta_{12}+ \Delta_{31})\frac{x_{13\mu}}{x_{13}^{2}}\right]\] (A.25)
\[(\nabla_{x_{1}}\partial_{a}){\cal I}^{(s_{1})}(a,a^{\prime};x_{13})=2(d+s_{1} -2)\frac{(x_{13}\partial_{a})}{x_{13}^{2}}{\cal I}^{(s_{1})}(a,a^{\prime};x_{ 13})\] (A.26)
\[\nabla_{x_{1}^{\mu}}t^{(s_{3})}(a,b;c;X_{12})=\nabla_{X_{12}^{ \alpha}}t^{(s_{3})}(a,b;c;X_{12})\frac{\partial X_{12}^{\alpha}}{\partial x_ {1}^{\mu}}\] \[=\nabla_{X_{12}^{\alpha}}t^{(s_{3})}(a,b;c;X_{12})\frac{I_{\mu}^{ \alpha}(x_{13})}{x_{13}^{2}}\] (A.27)
we see that the conservation condition is satisfied when:
\[\Delta_{12}+\Delta_{31}=2\Delta_{(s_{1})}=2(d+s_{1}-2)\] (A.28)
and:
\[(\nabla_{X}\partial_{a})t^{(s_{3})}(a,b;c;X)=\Delta_{12}\tfrac{(X \partial_{a})}{X^{2}}t^{(s_{3})}(a,b;c;X) \tag{110}\]
This is the equation for the structural tensor \(t^{(s_{3})}(a,b;c;X)\) which we use in the second section. The equation (110) (or (100)) is equivalent to the conservation condition for the first current in the three-point function.
Now we can separate the traceless projector from the "kernel" part and write (110) in the following form:
\[\Big{(}\nabla^{\mu}-\Delta_{12}\frac{\hat{X}^{\mu}}{\sqrt{X^{2}} }\Big{)}\partial_{\mu}^{a}\mathcal{E}^{(s_{1})}(a,a^{\prime})*_{a^{\prime}}^{ s_{1}}\tilde{t}^{(s_{3})}(a^{\prime},b,c;\hat{X})*_{b}^{s_{2}}*_{c}^{s_{3}} \mathcal{E}^{(s_{2})}(b,\tilde{b})\mathcal{E}^{(s_{3})}(c,\tilde{c})\] \[=\tfrac{1}{s_{1}!}\sum_{k=0}^{s_{1}/2-1}(s_{1}-2k)!\lambda_{k}^{s _{1}}[a^{2}]^{k}\Big{[}\Big{(}(\nabla\partial^{a})-\Delta_{12}\tfrac{(\hat{X} \partial^{a})}{\sqrt{X^{2}}}\Big{)}\Box_{a}^{k}\] \[-\tfrac{1}{d+2s_{1}-2k-4}\Big{(}(a\nabla)-\Delta_{12}\tfrac{(a \hat{X})}{\sqrt{X^{2}}}\Big{)}\Box_{a}^{k+1}\Big{]}\tilde{t}^{(s_{3})}(a,b,c; \hat{X})*_{b}^{s_{2}}*_{c}^{s_{3}}\mathcal{E}^{(s_{2})}(b,\tilde{b})\mathcal{ E}^{(s_{3})}(c,\tilde{c}) \tag{111}\]
where \(\tilde{t}^{(s_{3})}(a,b,c;\hat{X})\) now is:
\[\tilde{t}^{(s_{3})}(a,b,c;\hat{X})=I^{s_{3}}(c,c^{\prime};\hat{X} )*_{c^{\prime}}\tilde{H}^{(s_{123})}(a,b,c^{\prime};\hat{X})\] \[=\sum_{\begin{subarray}{c}s_{i}\in[0,\ldots s_{i}]\\ \{s_{i}\}\in\mathcal{A}\end{subarray}}(-1)^{\ell_{3}}\tilde{C}_{\ell_{1}\ell_ {2}\ell_{3}}(\hat{X}a)^{\ell_{1}}(\hat{X}b)^{\ell_{2}}(\hat{X}c)^{\ell_{3}}( ab)^{\alpha}I^{\beta}(b,c;\hat{X})I^{\gamma}(c,a;\hat{X}). \tag{112}\]
Then we compute the \(k\)-th trace as:
\[\Phi^{k}(a;b,c;\hat{X};\alpha,\gamma)=\Box_{a}^{k}(ab)^{\alpha}( ac)^{\gamma}(\hat{X}a)^{\ell_{1}}\] \[=\sum_{\begin{subarray}{c}p,q,n\\ p+q+n\leq k\end{subarray}}\rho\left(\begin{array}{c}k;p,q,n\\ \alpha,\gamma,\ell_{1}\end{array}\right)(ab)^{\alpha-k+n+q}(ac)^{\gamma-k+n+ p}(bc)^{k-n-p-q}(\hat{X}a)^{\ell_{1}-2n-p-q}(\hat{X}b)^{p}(\hat{X}c)^{q}, \tag{113}\]
where we neglected all terms of type \(O(b^{2},c^{2})\). From the equation
\[\Phi^{k+1}(a;b,c;\hat{X};\alpha,\gamma)=\Box_{a}\Phi^{k}(a;b,c; \hat{X};\alpha,\gamma) \tag{114}\]
we get the following recursion relation
\[\rho\left(\begin{array}{c}k+1;p,q,n\\ \alpha,\gamma,\ell_{1}\end{array}\right) =2\rho\left(\begin{array}{c}k;p,q,n\\ \alpha,\gamma,\ell_{1}\end{array}\right)(\alpha-k+n+q)(\gamma-k+n+p)\] \[+2\rho\left(\begin{array}{c}k;p-1,q,n\\ \alpha,\gamma,\ell_{1}\end{array}\right)(\alpha-k+n+q)(\ell_{1}-2n-p-q+1)\] \[+2\rho\left(\begin{array}{c}k;p,q-1,n\\ \alpha,\gamma,\ell_{1}\end{array}\right)(\gamma-k+n+p)(\ell_{1}-2n-p-q+1)\] \[+\rho\left(\begin{array}{c}k;p,q,n-1\\ \alpha,\gamma,\ell_{1}\end{array}\right)(\ell_{1}-2n-p-q+2)(\ell_{1}-2n-p-q+1) \tag{115}\]
This equation after substitution
\[\rho\left(\begin{matrix}k;p,q,n\\ \alpha,\gamma,\ell_{1}\end{matrix}\right)=2^{k-n}[\alpha]_{k-n-q}[\gamma]_{k-n-p}[ \ell_{1}]_{2n+p+q}\hat{\rho}(k;p,q,n)\] (A.35)
reduces to Pascal's identity for the multinomial coefficients:
\[\hat{\rho}(k+1;p,q,n) =\hat{\rho}(k;p,q,n)+\hat{\rho}(k;p,q,n-1)\] \[+\hat{\rho}(k;p-1,q,n)+\hat{\rho}(k;p,q-1,n)\] (A.36)
with obvious solution
\[\hat{\rho}(k;p,q,n)=\frac{[k]_{n+p+q}}{p!q!n!}=\binom{k}{p,q,n}\] (A.37)
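One can verify directly (an illustrative snippet of our own) that the multinomial coefficient (A.37) satisfies the Pascal-type identity (A.36); the ranges of \(k,p,q,n\) below are arbitrary sample values.

```python
from math import factorial

def rho_hat(k, p, q, n):            # eq. (A.37): [k]_{n+p+q} / (p! q! n!), zero outside range
    if min(p, q, n) < 0 or p + q + n > k:
        return 0
    return factorial(k) // (factorial(k - p - q - n) * factorial(p) * factorial(q) * factorial(n))

for k in range(6):
    for p in range(k + 2):
        for q in range(k + 2):
            for n in range(k + 2):
                lhs = rho_hat(k + 1, p, q, n)
                rhs = (rho_hat(k, p, q, n) + rho_hat(k, p, q, n - 1)
                       + rho_hat(k, p - 1, q, n) + rho_hat(k, p, q - 1, n))
                assert lhs == rhs                      # eq. (A.36)
print("(A.37) satisfies the Pascal identity (A.36)")
```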
Then we can easily derive the \(k\)th trace of our ansatz:
\[\Box_{a}^{k}\tilde{t}^{(s)}(a,b,c;\hat{X})=\sum_{\ell_{1}\in[2k, \ldots s_{1}];\ell_{2},\ell_{3}\in[0,\ldots s_{2},s_{3}]}T_{\ell_{1},\ell_{2},\ell_{3}}^{(k)}\left[\begin{matrix}\ell_{1}-2k,\ell_{2},\ell_{3}\\ \alpha;\beta,\gamma\end{matrix}\right]\] (A.38)
where we introduced notation:
\[\left[\begin{matrix}\ell_{1},\ell_{2},\ell_{3}\\ \alpha;\beta,\gamma\end{matrix}\right]=(\hat{X}a)^{\ell_{1}}(\hat{X}b)^{\ell_{ 2}}(\hat{X}c)^{\ell_{3}}(ab)^{\alpha}I^{\beta}(b,c;\hat{X})I^{\gamma}(c,a;\hat {X})\] (A.39)
and \(T_{\ell_{1},\ell_{2},\ell_{3}}^{(k)}\) is \(k\)th trace map of \(\tilde{C}_{\ell_{1}\ell_{2}\ell_{3}}\)
\[T_{\ell_{1},\ell_{2},\ell_{3}}^{(k)}=(-1)^{\ell_{3}}\sum_{ \begin{subarray}{c}p,q,n\\ p+q+n\leq k\end{subarray}}\tilde{C}_{\ell_{1}-2k+2n+p+q,\ell_{2}-p,\ell_{3}-q }\ \rho\left(\begin{matrix}k;p,q,n\\ \alpha,\gamma,\ell_{1}\end{matrix}\right)\] (A.40)
In this way substituting (A.38) in (A.30) one can straightforwardly derive the conservation condition on \(T_{\ell_{1},\ell_{2},\ell_{3}}^{(k)}\) given in (4.9).
## Appendix B Examples
#### Coincident spins \(s_{1}=s_{2}=s_{3}=s\)
We present examples for the most symmetric case of equal spins \(s_{1}=s_{2}=s_{3}=s\). It is enough to write a "kernel" term with the following symmetry properties:
\[\tilde{t}^{(s)}(a,b;c;\hat{X})=\tilde{t}^{(s)}(b,a;c;-\hat{X})\] (B.1) \[I^{s}(a,a^{\prime};\hat{X})*_{a^{\prime}}\tilde{t}^{(s)}(a^{\prime },b;c;\hat{X})=\tilde{t}^{(s)}(c,a;b;-\hat{X})\] (B.2)
From these conditions, we derive the most general polynomial ansatz for \(t^{(s)}(a,b;c;\hat{X})\):
\[\tilde{t}^{(s)}(a,b;c;\hat{X})= I^{s}(c,c^{\prime};\hat{X})*_{c^{\prime}}\tilde{H}^{(s)}(a,b,c^{ \prime};\hat{X}) \tag{114}\] \[\tilde{t}^{(s)}_{1}(a,b;c;\hat{X})= \big{[}\tilde{H}^{(s)}(a,b,c;\hat{X})+I^{s}(a,a^{\prime};\hat{X} )*_{a^{\prime}}\tilde{H}^{(s)}(a^{\prime},b,c;-\hat{X})\] \[+ I^{s}(b,b^{\prime};\hat{X})*_{b^{\prime}}\tilde{H}^{(s)}(a,b^{ \prime},c;\hat{X})\big{]} \tag{115}\]
where the main object, \(\tilde{H}\), is given by
\[\tilde{H}^{(s)}(a,b,c;\hat{X})=\sum_{\begin{subarray}{c}\ell_{1},\ell_{2},\ell _{3}\in[0,\ldots s]\\ \{\ell_{i}\}\in\bar{\mathcal{A}}\end{subarray}}\tilde{C}_{\ell_{1}\ell_{2} \ell_{3}}(\hat{X}a)^{\ell_{1}}(\hat{X}b)^{\ell_{2}}(\hat{X}c)^{\ell_{3}}(ab)^ {\alpha}(bc)^{\beta}(ca)^{\gamma}\,. \tag{116}\]
Here \(\bar{\mathcal{A}}\) is the range of indices defined by the following natural restrictions:
\[\alpha+\gamma+\ell_{1}=s \tag{117}\] \[\alpha+\beta+\ell_{2}=s\] (118) \[\gamma+\beta+\ell_{3}=s \tag{119}\]
These also can be resolved fixing \(\alpha,\beta,\gamma\) for any choice of \(\ell_{1},\ell_{2},\ell_{3}\):
\[2\alpha=s+\ell_{3}-\ell_{1}-\ell_{2} \tag{120}\] \[2\beta=s+\ell_{1}-\ell_{2}-\ell_{3} \tag{121}\] \[2\gamma=s+\ell_{2}-\ell_{1}-\ell_{3} \tag{122}\]
The positiveness of \(\alpha,\beta,\gamma\) for coincident spins leads to the triangle inequalities:
\[s+\ell_{i}\geq\ell_{j}+\ell_{k}\quad i\neq j\neq k \tag{123}\]
Another special point in consideration of equal spins is that conditions (109) and (110) force the coefficients \(T^{(0)}_{\ell_{1}\ell_{2}\ell_{3}}\) to be completely symmetric with respect to \(\ell_{1},\ell_{2},\ell_{3}\). Then:
\[I^{s}(a,a^{\prime};\hat{X})*_{a^{\prime}}I^{s}(b,b^{\prime};\hat {X})*_{b^{\prime}} \tilde{H}^{(s)}(a^{\prime},b^{\prime},c;\hat{X})=(-1)^{\ell_{1}+ \ell_{2}+\ell_{3}}I^{s}(c,c^{\prime};\hat{X})*_{c^{\prime}}\tilde{H}^{(s)}(a,b,c^{\prime};\hat{X})\] \[=I^{s}(c,c^{\prime};\hat{X})*_{c^{\prime}}\tilde{H}^{(s)}(a,b,c^{ \prime};-\hat{X}) \tag{124}\]
and we get even (odd) sum of \(\ell\)'s for even (odd) spin \(s\):
\[\sum_{i=1,2,3}\ell_{i}=3s-2(\alpha+\beta+\gamma)\,. \tag{125}\]
The relation (124) helps to explain the minus sign in condition (110) and we can make the following simple derivation showing that (115) is equivalent to (114):
\[\tilde{H}^{(s)}(a,b,c;\hat{X})+I^{s}(a,a^{\prime};\hat{X})*_{a^{ \prime}}\tilde{H}^{(s)}(a^{\prime},b,c;-\hat{X})+I^{s}(b,b^{\prime};\hat{X})* _{b^{\prime}}\tilde{H}^{(s)}(a,b^{\prime},c;\hat{X})\] \[=I^{s}(c,c^{\prime};\hat{X})*_{c^{\prime}}\Big{[}I^{s}(c^{\prime},c^{\prime\prime};\hat{X})*_{c^{\prime\prime}}H^{(s)}(a,b,c^{\prime\prime}; \hat{X})+I^{s}(b,b^{\prime};\hat{X})*_{b^{\prime}}\tilde{H}^{(s)}(a,b^{\prime},c^{\prime};\hat{X})\] \[+I^{s}(a,a^{\prime};\hat{X})*_{a^{\prime}}\tilde{H}^{(s)}(b,a^{ \prime},c^{\prime};\hat{X})\Big{]}=I^{s}(c,c^{\prime};\hat{X})*_{c^{\prime}} \bar{\tilde{H}}^{(s)}(a,b,c^{\prime};\hat{X}) \tag{126}\]
where
\[\bar{\bar{H}}^{(s)}(a,b,c^{\prime};\hat{X}) =\sum_{\begin{subarray}{c}\bar{\ell}_{1},\bar{\ell}_{2},\bar{\ell}_ {3}\in[0,\ldots,s]\\ \bar{\ell}_{1}+\bar{\ell}_{2}+\bar{\ell}_{3}=\text{even}\end{subarray}}\bar{T}^ {(0)}_{\bar{\ell}_{1}\bar{\ell}_{2}\bar{\ell}_{3}}(\hat{X}a)^{\bar{\ell}_{1}} (\hat{X}b)^{\bar{\ell}_{2}}(\hat{X}c)^{\bar{\ell}_{3}}(ab)^{\bar{\alpha}}(bc)^ {\bar{\beta}}(ca)^{\bar{\gamma}}\,,\] (B.16) \[\bar{T}^{(0)}_{\bar{\ell}_{1}\bar{\ell}_{2}\bar{\ell}_{3}} =\bar{T}^{(0)}(\bar{\ell}_{1}|\bar{\ell}_{2}\bar{\ell}_{3})+\bar{T}^ {(0)}(\bar{\ell}_{2}|\bar{\ell}_{3}\bar{\ell}_{1})+\bar{T}^{(0)}(\bar{\ell}_{ 3}|\bar{\ell}_{1}\bar{\ell}_{2})\,,\] (B.17)
where the coefficients \(\bar{T}^{(0)}_{\bar{\ell}_{1}\bar{\ell}_{2}\bar{\ell}_{3}}\) (symmetric in all \(\bar{\ell}_{i},\,i=1,2,3\)) are constructed as the cyclic combination (B.17) of an object that is symmetric in two indices only:
\[\bar{T}^{(0)}(\bar{\ell}_{1}|\bar{\ell}_{2}\bar{\ell}_{3})=(-1)^{\bar{\ell}_ {1}}\sum_{n_{2},n_{3}}^{\bar{\ell}_{2},\bar{\ell}_{3}}2^{n_{2}+n_{3}}T^{(0)}_ {\bar{\ell}_{1}-n_{2}-n_{3},\bar{\ell}_{2}-n_{2}\bar{\ell}_{3}-n_{3}}{\bar{ \alpha}+n_{2}\choose\bar{\alpha}}{\bar{\gamma}+n_{3}\choose\bar{\gamma}}\] (B.18)
The most general ansatz in this case is (B.3), with traceless projectors written as:
\[t^{(s)}(\tilde{a},\tilde{b};\tilde{c};\hat{X})=\mathcal{E}^{(s)}(\tilde{a},a)* _{a}\tilde{t}^{(s)}(a,b;c;\hat{X})*_{b}*_{c}\mathcal{E}^{(s)}(b,\tilde{b}) \mathcal{E}^{(s)}(c,\tilde{c}).\] (B.19)
#### Conservation condition for coincident spins
When all spins coincide, we need only one equation for fully symmetric coefficients:
\[(\alpha+1)(2\ell_{3}-2k-\Delta_{(s)}-s)T^{(k)}_{\ell_{1}-1,\ell_ {2}-1,\ell_{3}}+(\gamma+1)(2\ell_{2}-2k-\Delta_{(s)}-s)T^{(k)}_{\ell_{1}-1, \ell_{2},\ell_{3}-1}\] \[+(\alpha+1)(\ell_{3}+1)T^{(k)}_{\ell_{1}-1,\ell_{2},\ell_{3}+1}+ (\gamma+1)(\ell_{2}+1)T^{(k)}_{\ell_{1}-1,\ell_{2}+1,\ell_{3}}\] \[+\frac{1}{d+2s-2k-4}\left[2(\ell_{2}-\ell_{3})T^{(k+1)}_{\ell_{1 },\ell_{2},\ell_{3}}+2(\beta+1)\big{(}T^{(k+1)}_{\ell_{1}+1,\ell_{2},\ell_{3}- 1}+T^{(k+1)}_{\ell_{1}+1,\ell_{2}-1,\ell_{3}}\big{)}\right.\] \[-\left.(\ell_{2}+1)T^{(k+1)}_{\ell_{1}+1,\ell_{2}+1,\ell_{3}}-( \ell_{3}+1)T^{(k+1)}_{\ell_{1}+1,\ell_{2},\ell_{3}+1}\right]=0\] (B.20)
where
\[T^{(k+1)}_{\ell_{1},\ell_{2},\ell_{3}}=(\ell_{1}-2k)(\ell_{1}-2k -1)T^{(k)}_{\ell_{1},\ell_{2},\ell_{3}}+2(\alpha+1)(\gamma+1)T^{(k)}_{\ell_{1} -2,\ell_{2},\ell_{3}}\] \[+2(\alpha+1)(\ell_{1}-2k-1)T^{(k)}_{\ell_{1}-1,\ell_{2}-1,\ell_{3 }}-2(\gamma+1)(\ell_{1}-2k-1)T^{(k)}_{\ell_{1}-1,\ell_{2},\ell_{3}-1}\,,\] (B.21)
and we need to solve only the first conservation condition, for \(k=0\) (the rest follow from tracelessness). Using the helpful ansatz (4.11), (4.12) we arrive at the following recursion for the expressions \(C_{\ell_{1},\ell_{2},\ell_{3}}\) and \(T_{\ell_{1},\ell_{2},\ell_{3}}\) (both symmetric in \(\ell_{1},\ell_{2},\ell_{3}\)):
\[D_{\ell_{1},\ell_{2},\ell_{3}}=(2\ell_{3}-\Delta_{(s)}-s)C_{\ell_ {1}-1,\ell_{2}-1,\ell_{3}}-(2\ell_{2}-\Delta_{(s)}-s)C_{\ell_{1}-1,\ell_{2}, \ell_{3}-1}\] \[+\beta(\ell_{2}+1)C_{\ell_{1}-1,\ell_{2}+1,\ell_{3}}-\beta(\ell_ {3}+1)C_{\ell_{1}-1,\ell_{2},\ell_{3}+1}\] \[+\frac{1}{d+2s-4}\left[2(\ell_{2}-\ell_{3})T_{\ell_{1},\ell_{2}, \ell_{3}}+2\big{(}\gamma T_{\ell_{1}+1,\ell_{2}-1,\ell_{3}}-\alpha T_{\ell_{1} +1,\ell_{2},\ell_{3}-1}\big{)}\right.\] \[+\left.\gamma(\ell_{3}+1)T_{\ell_{1}+1,\ell_{2},\ell_{3}+1}- \alpha(\ell_{2}+1)T_{\ell_{1}+1,\ell_{2}+1,\ell_{3}}\right]=0\] (B.22)
where
\[T_{\ell_{1},\ell_{2},\ell_{3}}=\ell_{1}(\ell_{1}-1)C_{\ell_{1},\ell_{2},\ell_{3}}+2\beta C_{\ell_{1}-2,\ell_{2},\ell_{3}}\] \[+2(\ell_{1}-1)C_{\ell_{1}-1,\ell_{2}-1,\ell_{3}}+2(\ell_{1}-1)C_{\ell_{1}-1,\ell_{2},\ell_{3}-1} \tag{B.23}\]
Computer-assisted solutions have \(s+1\) independent parameters as they should.
#### Spin 2 case: energy-momentum tensor and connection with (100), (101)
First we review the construction in the case of spin two, following [77]. For the three-point function of energy-momentum tensors we have:
\[\langle T_{\mu\nu}(x_{1})\,T_{\sigma\rho}(x_{2})\,T_{\alpha\beta}(x_{3})\rangle =\frac{1}{x_{12}^{\;d}\,x_{13}^{\;d}\,x_{23}^{\;d}}\,\mathcal{I}_{\mu\nu,\mu^{\prime}\nu^{\prime}}(x_{13})\mathcal{I}_{\sigma\rho,\sigma^{\prime}\rho^{\prime}}(x_{23})\,t_{\mu^{\prime}\nu^{\prime}\sigma^{\prime}\rho^{\prime}\alpha\beta}(X_{12})\,, \tag{B.24}\]
with \(t_{\mu\nu\sigma\rho\alpha\beta}(X)\) homogeneous of degree zero in \(X\), symmetric and traceless on each pair of indices \(\mu\nu,\ \sigma\rho\) and \(\alpha\beta\), and satisfying
\[t_{\mu\nu\sigma\rho\alpha\beta}(X)=t_{\sigma\rho\mu\nu\alpha\beta}(X)\,, \tag{B.25}\] \[\mathcal{I}_{\mu\nu,\mu^{\prime}\nu^{\prime}}(X)t_{\mu^{\prime}\nu^{\prime}\sigma\rho\alpha\beta}(X)=t_{\alpha\beta\mu\nu\sigma\rho}(X)\,. \tag{B.26}\]
The conservation equations require just
\[\Big{(}\partial_{\mu}-d\,\frac{X_{\mu}}{X^{2}}\Big{)}t_{\mu\nu\sigma\rho\alpha\beta}(X)=0 \tag{B.27}\]
Defining
\[h^{1}_{\mu\nu}(\hat{X}) = \hat{X}_{\mu}\hat{X}_{\nu}-\frac{1}{d}\,\delta_{\mu\nu}\,,\quad \hat{X}_{\mu}=\frac{X_{\mu}}{\sqrt{X^{2}}} \tag{116}\] \[h^{2}_{\mu\nu\sigma\rho}(\hat{X}) = \hat{X}_{\mu}\hat{X}_{\sigma}\delta_{\nu\rho}+(\mu\leftrightarrow \nu,\sigma\leftrightarrow\rho)\] (117) \[- \frac{4}{d}\hat{X}_{\mu}\hat{X}_{\nu}\delta_{\sigma\rho}-\frac{4} {d}\hat{X}_{\sigma}\hat{X}_{\rho}\delta_{\mu\nu}+\frac{4}{d^{2}}\delta_{\mu \nu}\delta_{\sigma\rho}\] \[h^{3}_{\mu\nu\sigma\rho} = \delta_{\mu\sigma}\delta_{\nu\rho}+\delta_{\mu\rho}\delta_{\nu \sigma}-\frac{2}{d}\,\delta_{\mu\nu}\delta_{\sigma\rho}=2\mathcal{E}_{\mu\nu, \sigma\rho}\] (118) \[h^{4}_{\mu\nu\sigma\rho\alpha\beta}(\hat{X}) = h^{3}_{\mu\nu\sigma\alpha}\hat{X}_{\rho}\hat{X}_{\beta}+(\sigma \leftrightarrow\rho,\alpha\leftrightarrow\beta)\] (119) \[- \frac{2}{d}\,\delta_{\sigma\rho}h^{2}_{\mu\nu\alpha\beta}(\hat{X} )-\frac{2}{d}\,\delta_{\alpha\beta}h^{2}_{\mu\nu\sigma\rho}(\hat{X})-\frac{8}{ d^{2}}\,\delta_{\sigma\rho}\delta_{\alpha\beta}h^{1}_{\mu\nu}(\hat{X})\,,\] \[h^{5}_{\mu\nu\sigma\rho\alpha\beta} = \delta_{\mu\sigma}\delta_{\nu\alpha}\delta_{\rho\beta}+(\mu \leftrightarrow\nu,\sigma\leftrightarrow\rho,\alpha\leftrightarrow\beta)\] (120) \[- \frac{4}{d}\,\delta_{\mu\nu}h^{3}_{\sigma\rho\alpha\beta}-\frac{4} {d}\,\delta_{\sigma\rho}h^{3}_{\mu\nu\alpha\beta}-\frac{4}{d}\,\delta_{\alpha \beta}h^{3}_{\mu\nu\sigma\rho}-\frac{8}{d^{2}}\,\delta_{\mu\nu}\delta_{\sigma \rho}\delta_{\alpha\beta}\,,\]
a general expansion for \(t_{\mu\nu\sigma\rho\alpha\beta}(X)\) has the form
\[t_{\mu\nu\sigma\rho\alpha\beta}(X) =a\,h^{5}_{\mu\nu\sigma\rho\alpha\beta}+b\,h^{4}_{\alpha\beta\mu \nu\sigma\rho}(\hat{X})+b^{\prime}\big{(}h^{4}_{\mu\nu\sigma\rho\alpha\beta}( \hat{X})+h^{4}_{\sigma\rho\mu\nu\alpha\beta}(\hat{X})\big{)}\] \[\quad+c\,h^{3}_{\mu\nu\sigma\rho}h^{1}_{\alpha\beta}(\hat{X})+c^{ \prime}\big{(}h^{3}_{\sigma\rho\alpha\beta}h^{1}_{\mu\nu}(\hat{X})+h^{3}_{\mu \nu\alpha\beta}h^{1}_{\sigma\rho}(\hat{X})\big{)}\] \[\quad+e\,h^{2}_{\mu\nu\sigma\rho}(\hat{X})h^{1}_{\alpha\beta}( \hat{X})+e^{\prime}\big{(}h^{2}_{\sigma\rho\alpha\beta}(\hat{X})h^{1}_{\mu\nu} (\hat{X})+h^{2}_{\mu\nu\alpha\beta}(\hat{X})h^{1}_{\sigma\rho}(\hat{X})\big{)}\] \[\quad+f\,h^{1}_{\mu\nu}(\hat{X})h^{1}_{\sigma\rho}(\hat{X})h^{1}_ {\alpha\beta}(\hat{X})\,.\] (B.33)
From the symmetry condition (B.25), (B.26) we have
\[b+b^{\prime}=-2a\,,\quad c^{\prime}=c\,,\quad e+e^{\prime}=-4b^{\prime}-2c\,,\] (B.34)
so that \(a,b,c,e,f\) may be regarded as independent. Then using conservation condition (B.23) we have two additional constraints:
\[d^{2}a+2(b+b^{\prime})-(d-2)b^{\prime}-dc+e^{\prime} =0\,,\] (B.35) \[d(d+2)(2b^{\prime}+c)+4(e+e^{\prime})+f =0\,.\] (B.36)
Therefore, we have three undetermined independent coefficients, say, \(a,b,c\), which are the free parameters of the three-point function (in arbitrary dimension \(d\)):
\[f=(d+4)(d-2)(4a+2b-c),\] (B.37) \[e^{\prime}=-(d+4)(d-2)a-(d-2)b+dc,\] (B.38) \[e=(d+2)(da+b-c).\] (B.39)
Now we can compare these with our general formulation in the case of spin two, looking at the ansatz (B.3) and (B.5) for \(s=2\). First of all, putting \(s=2\) in the counting of solutions of the triangle inequality (3.18) we obtain \(N_{222}=5\), which is the correct number of parameters after applying the symmetry constraints (B.3). Then, investigating these five independent terms in the ansatz (B.5), identifying them with (B.33), and using the notation
\[\tilde{C}_{\ell_{1},\ell_{2},\ell_{3}}=(-1)^{\ell_{3}}C_{\ell_{1},\ell_{2}, \ell_{3}}\] (B.40)
where \(C_{\ell_{1},\ell_{2},\ell_{3}}\) is symmetric in \(\ell_{1},\ell_{2},\ell_{3}\), we obtain the following connections between the coefficients:
\[a =\frac{C_{000}}{8};\quad b=\frac{C_{110}}{8};\quad b^{\prime}=- \frac{C_{000}}{4}-\frac{C_{110}}{8};\] (B.41) \[c =c^{\prime}=\frac{C_{200}}{2};\quad e=C_{000}+C_{110}+\frac{C_{11 2}}{4};\] (B.42) \[e^{\prime} =-\frac{C_{110}}{2}-\frac{C_{112}}{4}-C_{200};\quad f=4C_{110}+4 C_{112}+8C_{200}+C_{222}.\] (B.43)
So we see that these eight coefficients \(a,b,b^{\prime},c,c^{\prime},e,e^{\prime},f\) from [77] are expressed through the five coefficients of our ansatz, \(C_{000},C_{110},C_{200},C_{112},C_{222}\), because the triangle inequality and the symmetry of \(C_{\ell_{1},\ell_{2},\ell_{3}}\) lead to the solution (B.25), (B.26) in the general case. We can then investigate the conservation condition (B.27). Taking into account that our normalization here differs slightly, we should insert in (B.5)
\[\tilde{C}_{\ell_{1},\ell_{2},\ell_{3}}=\alpha!\beta!\gamma!C_{\ell_{1},\ell_{2 },\ell_{3}}\] (B.44)
we see that for \(s=2\) we have only two nonzero independent equations:
\[D_{1,1,0} \sim(8-d^{2}-2d)C_{000}+(6-d)C_{110}+(4d+8)C_{200}+2C_{112}=0\] (B.45) \[D_{1,2,1} \sim(d^{2}-12)C_{110}-12C_{112}-2d(d-2)C_{200}-4C_{222}=0\] (B.46)
Now we see that it is possible to express \(C_{112}\) and \(C_{222}\) through the remaining three arbitrary parameters \(C_{000}\), \(C_{110}\) and \(C_{200}\), and these free parameters from (B.45), (B.46) are exactly equivalent to \(a,b,c\) (see (B.41) and (B.42)). Moreover, after some straightforward manipulation we can see that all relations (B.34)-(B.39) are also satisfied exactly.
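As an illustration of this "straightforward manipulation", the following sketch checks the statement symbolically with sympy: it solves (B.45), (B.46) for \(C_{112}\) and \(C_{222}\), substitutes the identifications (B.41)-(B.43), and verifies that the relations (B.34)-(B.39) vanish identically; the variable names are ours.

```python
# Symbolic check that (B.41)-(B.43) with the constraints (B.45)-(B.46)
# reproduce the relations (B.34)-(B.39); a sketch, variable names are ours.
import sympy as sp

d, C000, C110, C200, C112, C222 = sp.symbols('d C000 C110 C200 C112 C222')

# Conservation equations (B.45), (B.46), solved for C112 and C222.
sol = sp.solve(
    [(8 - d**2 - 2*d)*C000 + (6 - d)*C110 + (4*d + 8)*C200 + 2*C112,
     (d**2 - 12)*C110 - 12*C112 - 2*d*(d - 2)*C200 - 4*C222],
    [C112, C222], dict=True)[0]
C112, C222 = sol[C112], sol[C222]

# Identifications (B.41)-(B.43).
a, b, bp = C000/8, C110/8, -C000/4 - C110/8
c = cp = C200/2
e = C000 + C110 + C112/4
ep = -C110/2 - C112/4 - C200
f = 4*C110 + 4*C112 + 8*C200 + C222

# Relations (B.34)-(B.39); every expression should simplify to zero.
relations = [
    b + bp + 2*a, cp - c, e + ep + 4*bp + 2*c,      # (B.34)
    d**2*a + 2*(b + bp) - (d - 2)*bp - d*c + ep,    # (B.35)
    d*(d + 2)*(2*bp + c) + 4*(e + ep) + f,          # (B.36)
    f - (d + 4)*(d - 2)*(4*a + 2*b - c),            # (B.37)
    ep + (d + 4)*(d - 2)*a + (d - 2)*b - d*c,       # (B.38)
    e - (d + 2)*(d*a + b - c),                      # (B.39)
]
assert all(sp.simplify(r) == 0 for r in relations)
```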
#### Spin 3 case: solution of the conservation condition (B.22)
To finalize this appendix, we simply present the solution of the conservation condition for the spin-three case. Here we have eight different parameters in our ansatz, and the conservation equations express four of them through the four independent ones:
\[C_{3,0,0} =\frac{1}{9(d+2)}\Big{[}(d-2)(d+8)C_{1,0,0}+(d-14)C_{2,1,0}-2C_{1,1,1}-2C_{2,2,1}\Big{]}\] (B.47) \[C_{3,1,1} =\frac{1}{6(d+2)}\Big{[}(d+8)(d-2)^{2}C_{1,0,0}+(d(d+2)+8)C_{1,1,1}\] \[\qquad\qquad\qquad\qquad-4(d(d+8)-4)C_{2,1,0}+8C_{2,2,1}\Big{]}\] (B.48) \[C_{3,2,2} =\frac{1}{12(d+2)}\Big{[}-(d+6)(d+8)(d-2)^{2}C_{1,0,0}-2(d(d+10)+ 32)C_{1,1,1}\] \[\qquad\qquad\qquad+2(d^{3}+24d^{2}+60d-96)C_{2,1,0}+2(d(d-12)-44 )C_{2,2,1}\Big{]}\] (B.49) \[C_{3,3,3} =\frac{1}{54(d+2)}\Big{[}-(d+8)(d-2)^{2}(d^{2}-10d-60)C_{1,0,0}\] \[\quad+(640-d^{4}+2d^{3}-12d^{2}-200d)C_{1,1,1}+4(d^{4}+3d^{3}-124 d^{2}-300d+480)C_{2,1,0}\] \[\quad+(3d^{3}-16d^{2}+180d+736)C_{2,2,1}\Big{]}\] (B.50)
|
2305.20054 | UNSSOR: Unsupervised Neural Speech Separation by Leveraging
Over-determined Training Mixtures | In reverberant conditions with multiple concurrent speakers, each microphone
acquires a mixture signal of multiple speakers at a different location. In
over-determined conditions where the microphones out-number speakers, we can
narrow down the solutions to speaker images and realize unsupervised speech
separation by leveraging each mixture signal as a constraint (i.e., the
estimated speaker images at a microphone should add up to the mixture).
Equipped with this insight, we propose UNSSOR, an algorithm for
$\textbf{u}$nsupervised $\textbf{n}$eural $\textbf{s}$peech
$\textbf{s}$eparation by leveraging $\textbf{o}$ver-determined training
mixtu$\textbf{r}$es. At each training step, we feed an input mixture to a deep
neural network (DNN) to produce an intermediate estimate for each speaker,
linearly filter the estimates, and optimize a loss so that, at each microphone,
the filtered estimates of all the speakers can add up to the mixture to satisfy
the above constraint. We show that this loss can promote unsupervised
separation of speakers. The linear filters are computed in each sub-band based
on the mixture and DNN estimates through the forward convolutive prediction
(FCP) algorithm. To address the frequency permutation problem incurred by using
sub-band FCP, a loss term based on minimizing intra-source magnitude scattering
is proposed. Although UNSSOR requires over-determined training mixtures, we can
train DNNs to achieve under-determined separation (e.g., unsupervised monaural
speech separation). Evaluation results on two-speaker separation in reverberant
conditions show the effectiveness and potential of UNSSOR. | Zhong-Qiu Wang, Shinji Watanabe | 2023-05-31T17:28:02Z | http://arxiv.org/abs/2305.20054v2 | # UnSSOR: Unsupervised Neural Speech Separation by Leveraging Over-determined Training Mixtures
###### Abstract
In reverberant conditions with multiple concurrent speakers, each microphone acquires a mixture signal of multiple speakers at a different location. In over-determined conditions where the microphones out-number speakers, we can narrow down the solutions to speaker images and realize unsupervised speech separation by leveraging each mixture signal as a constraint (i.e., the estimated speaker images at a microphone should add up to the mixture). Equipped with this insight, we propose UNSSOR, an algorithm for unsupervised neural speech separation by leveraging over-determined training mixtures. At each training step, we feed an input mixture to a deep neural network (DNN) to produce an intermediate estimate for each speaker, linearly filter the estimates, and optimize a loss so that, at each microphone, the filtered estimates of all the speakers can add up to the mixture to satisfy the above constraint. We show that this loss can promote unsupervised separation of speakers. The linear filters are computed in each sub-band based on the mixture and DNN estimates through the forward convolutive prediction (FCP) algorithm. To address the frequency permutation problem incurred by using sub-band FCP, a loss term based on minimizing intra-source magnitude scattering is proposed. Although UNSSOR requires over-determined training mixtures, we can train DNNs to achieve under-determined separation (e.g., unsupervised monaural speech separation). Evaluation results on two-speaker separation in reverberant conditions show the effectiveness and potential of UNSSOR.
## 1 Introduction
In many machine learning and artificial intelligence applications, sensors, while recording, usually capture a mixture of desired and undesired signals. One example is the cocktail party problem (or speech separation) [1; 2], where, given a recorded mixture of the concurrent speech by multiple speakers, the task is to separate the mixture to individual speaker signals. Speech separation [3] has been dramatically advanced by deep learning, since deep clustering [4] and permutation invariant training (PIT) [5] solved the label permutation problem. They (and their subsequent studies [6; 7; 8; 9; 10; 11; 12; 13; 14]) are based on supervised learning, requiring paired clean speech and its corrupted signal generated via simulation, where clean speech is mixed with, for example, various noises and competing speakers at diverse energy and reverberation levels in simulated rooms [3]. The clean speech can provide an accurate, sample-level supervision for model training. Such simulated data, however, may not match the distribution of real-recorded test data in the target domain, and the resulting supervised learning based models would have generalization issues [15; 16]. How to train unsupervised neural speech separation systems on unlabelled target-domain mixtures is hence an important problem to study.
Training unsupervised speech separation models directly on monaural mixtures is an ill-posed task [2], since there is only one mixture signal observed but multiple speaker signals to reconstruct. The separation model would lack an accurate _supervision_ (or regularizer) to figure out what desired sound
objects (e.g., clean speaker signals) are, as there are infinite solutions where in each solution the estimated sources can sum up to the mixture. Supposing that the separation model does not separate well and outputs a clean speaker signal plus some competing speech, noise or reverberation, would this output be viewed as a desired sound object? This is clear to humans, clear to supervised learning based models (by comparing the outputs with training labels), but not really clear to an unsupervised model. On the other hand, many studies [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14] have observed that deep learning based supervised learning can achieve remarkable separation. In other words, with proper supervision, modern DNNs are capable of separating mixed speakers, but, in an unsupervised setup, there lacks an accurate supervision to unleash this capability. The key to successful unsupervised neural separation, we believe, is designing a clever _supervision_ that can inform the model what desired sound objects are, and penalize the model if its outputs are not good and reward otherwise.
Our insight is that, in multi-microphone over-determined conditions where the microphones outnumber speakers, the ill-posed problem can be turned into a well-posed one, where a unique solution to the speakers exists (up to speaker permutation). This well-posed property (that a unique solution exists) can be leveraged as a _supervision_ (or regularizer) to design loss functions that could inform the unsupervised separation model what desired sound objects are and promote separation of speakers.
Equipped with this insight, we perform unsupervised neural speech separation by leveraging multi-microphone over-determined training mixtures. Our DNNs can be trained directly on over-determined mixtures to realize over- and under-determined separation. The proposed algorithm, named UNSSOR, obtains strong separation performance on two-speaker separation. Our contributions include:
* We enforce a linear-filter constraint between each speaker's reverberant images at each microphone pair, turning the ill-posed problem into a well-posed one that can promote separation of speakers.
* We formulate unsupervised neural speech separation as a blind deconvolution problem, where both the speaker images and linear filters need to be estimated. We design loss functions motivated by the blind deconvolution problem, and propose a DNN approach to optimize the loss functions, where the speaker images are estimated via DNNs and the linear filters are estimated via a sub-band linear prediction algorithm named FCP [17] based on the mixture and DNN estimates.
* We propose a loss term, which minimizes a measure named intra-source magnitude scattering, to address the frequency permutation problem incurred when using sub-band FCP.
* Based on over-determined training mixtures, UNSSOR can be trained to perform under-determined separation (e.g., monaural unsupervised speech separation).
## 2 Related work
Various unsupervised neural separation algorithms, which do not require labelled mixtures, have been proposed. The most notable one is mixture invariant training (MixIT) [18; 19; 20; 21; 22], which first synthesizes training mixtures, each by mixing two existing mixtures, and then trains a DNN to separate the resulting mixed mixture to underlying sources such that the separated sources can be partitioned into two groups and the separated sources in each group can sum up to one of the two existing mixtures (used for mixing). Care needs to be taken when synthesizing mixture of mixtures. First, the sources in an existing mixture could have similar characteristics (e.g., similar reverberation patterns as the sources in an existing mixture are recorded in the same room) that are informative about which sources belong to the same existing mixture, and this would prevent MixIT from separating the sources [18; 23]. Second, it is unclear how to mix existing multi-channel mixtures, which are usually recorded by devices with different microphone geometry and number of microphones. Third, mixing existing mixtures with different reverberation characteristics would create unrealistic mixtures.
UNSSOR avoids the above issues by training unsupervised neural separation models directly on existing mixtures rather than on synthesized mixture of mixtures. An earlier study related to this direction is the reverberation as supervision (RAS) algorithm [24], which addresses monaural two-speaker separation given binaural (two-channel) training mixtures. RAS performs magnitude-domain monaural separation directly on the left-ear mixture and then linearly filters the estimates through time-domain Wiener filtering so that the filtered estimates can approximate the right-ear mixture. RAS essentially does monaural separation and is effective at separating speakers in a semi-supervised learning setup, where a supervised PIT-based model is first trained and then used to boot-start unsupervised training. It however fails completely in fully-unsupervised setup [24], unlike UNSSOR.
Conventional algorithms such as independent component analysis [25; 26; 27; 28; 29], independent vector analysis (IVA) [29; 30; 31; 32] and spatial clustering [33; 34; 35; 36] can perform unsupervised separation directly
on existing mixtures. They perform separation based on a single test mixture at hand and are not designed to learn speech patterns from large training data, while UNSSOR leverages DNNs to model speech patterns through unsupervised learning, which could result in better separation. Another difference is that UNSSOR can be configured for monaural, under-determined separation, while ICA, IVA and spatial clustering cannot. There are studies [37] training DNNs to approximate pseudo-labels produced by spatial clustering. Their performance is however limited by that of spatial clustering.
## 3 Problem formulation
Given a \(P\)-microphone mixture with \(C\) speakers in reverberation conditions, the physical model can be formulated using a system of linear equations in the short-time Fourier transform (STFT) domain:
\[Y_{p}(t,f)=\sum\nolimits_{c=1}^{C}X_{p}(c,t,f)+\varepsilon_{p}(t,f),\,\text{ for}\,p\in\{1,\ldots,P\}, \tag{1}\]
where \(t\) indexes \(T\) frames, \(f\) indexes \(F\) frequencies, and at microphone \(p\), time \(t\) and frequency \(f\), \(Y_{p}(t,f)\), \(X_{p}(c,t,f)\) and \(\varepsilon_{p}(t,f)\in\mathbb{C}\) respectively denote the STFT coefficients of the mixture, reverberant image of speaker \(c\), and non-speech signals. In the rest of this paper, we refer to the corresponding spectrogram when dropping the index \(c\), \(p\), \(t\) or \(f\). We assume that \(\varepsilon\) is weak and stationary (e.g., a time-invariant Gaussian noise or simply modelling errors). Without loss of generality, we designate microphone \(1\) as the reference microphone. Our goal is to, in an unsupervised way, estimate each speaker's image at the reference microphone (i.e., \(X_{1}(c)\) for each speaker \(c\)) given the input mixture. We do not aim at dereverberation, instead aiming to maintain the reverberation of each speaker.
Unsupervised separation based only on the observed mixture is difficult. There are infinite solutions to the above linear system where there are \(T\times F\times P\) equations (we have a mixture observation for each \(Y_{p}(t,f)\)) but \(T\times F\times P\times C\) unknowns (we have one unknown for each \(X_{p}(c,t,f)\)).
Our insight is that the number of unknowns can be dramatically reduced, if we enforce constraints to the speaker images at different microphones. Since \(X_{1}(c)\) and \(X_{p}(c)\) are both convolved versions of the dry signal of speaker \(c\), there exists a linear filter between them such that convolving \(X_{1}(c)\) with the filter would reproduce \(X_{p}(c)\). This convolutive relationship is a physical constraint, which can be leveraged to reduce the number of unknowns. Specifically, we formulate (1) as
\[Y_{1}(t,f)=\sum\nolimits_{c=1}^{C}X_{1}(c,t,f)+\varepsilon_{1}(t,f),\]
\[Y_{p}(t,f)=\sum\nolimits_{c=1}^{C}\mathbf{g}_{p}(c,f)^{\text{H}}\,\widetilde {\mathbf{X}}_{1}(c,t,f)+\varepsilon_{p}(t,f),\,\text{for}\,p\in\{2,\ldots,P\}, \tag{2}\]
where \(\widetilde{\mathbf{X}}_{1}(c,t,f)=[X_{1}(c,t-A,f),\ldots,X_{1}(c,t,f),\ldots,X_{1}(c,t+B,f)]^{\mathsf{T}}\in\mathbb{C}^{A+1+B}\) stacks a window of \(E=A+1+B\) T-F units, \(\mathbf{g}_{p}(c,f)\in\mathbb{C}^{E}\) is the _relative room impulse response_ (relative RIR) relating \(X_{1}(c)\) to \(X_{p}(c)\), and \((\cdot)^{\text{H}}\) computes the Hermitian transpose. Note that \(\mathbf{g}_{p}(c,f)\) is very short (i.e., \(E\) is small) if microphone \(1\) and \(p\) are placed close to each other, which is the case for compact arrays.
An implication of this constraint is that the number of unknowns is reduced from \(T\times F\times P\times C\) to \(T\times F\times C+F\times(P-1)\times E\times C\)1, which can be smaller than the number of equations (i.e., \(T\times F\times P\)) when \(P>C\) (i.e., over-determined conditions) and when \(T\) is sufficiently large (i.e., the input mixture is reasonably long). In other words, this formulation suggests that (1) there exists a solution for separation, which is most consistent with the above linear system; and (2) in over-determined cases, it is possible to estimate the speaker images in an unsupervised way.
Footnote 1: \(T\times F\times C\) is because there is one unknown for each \(X_{1}(c,t,f)\), and \(F\times(P-1)\times E\times C\) is because \(\mathbf{g}_{p}(c,f)\) is \(E\)-tap and we have one such filter for each of \(P-1\) microphone pairs for each frequency and speaker.
As \(\varepsilon\) is assumed weak, time-invariant and Gaussian, one way to find the solution is to compute an estimate that is most consistent with the linear system in (2):
\[\underset{\mathbf{g}\cdot(\cdot,\cdot),X_{1}(\cdot,\cdot,\cdot)}{\text{argmin}} \sum_{t,f}\Big{|}Y_{1}(t,f)-\sum\limits_{c=1}^{C}X_{1}(c,t,f)\Big{|}^{2}+\sum \limits_{p=2}^{P}\sum\limits_{t,f}\Big{|}Y_{p}(t,f)-\sum\limits_{c=1}^{C} \mathbf{g}_{p}(c,f)^{\text{H}}\,\widetilde{\mathbf{X}}_{1}(c,t,f)\Big{|}^{2}. \tag{3}\]
This is a blind deconvolution problem [38], which is non-convex in nature and difficult to solve if no prior knowledge is assumed about the relative RIRs or the speaker images, because both of them are unknown. In the next section, we propose a DNN-based approach, which can model speech patterns through unsupervised learning (and hence model speech priors), to tackle this problem.
## 4 Method
Fig. 1 illustrates the proposed system. The DNN takes in the mixture at all the \(P\) microphones or at the reference microphone \(1\) as input and produces an intermediate estimate \(\hat{Z}(c)\) for each speaker \(c\). FCP [17] is then performed on \(\hat{Z}(c)\) at each microphone \(p\) to compute a linear-filtering result, denoted as \(\hat{X}_{p}^{\text{FCP}}(c)\), which, we will describe, is essentially an estimate of the speaker image \(X_{p}(c)\). After that, two loss functions are computed and combined for DNN training. This section describes the DNN configuration, loss functions, FCP filtering, and an extension for monaural separation.
### DNN configurations
The intermediate estimate \(\hat{Z}(c)\) for each speaker \(c\) is obtained via complex spectral mapping [14; 39], where we stack the real and imaginary (RI) parts of the input mixture as features for the DNN to predict the RI parts of \(\hat{Z}(c)\). For the DNN architecture, we employ TF-GridNet [14], which obtains strong results on supervised speech separation benchmarks. See Appendix F for more DNN details.
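As a minimal PyTorch sketch of this input-output interface (not of the TF-GridNet architecture itself), the RI stacking could look as follows; the tensor shapes and helper names are our assumptions.

```python
# Minimal sketch of the complex-spectral-mapping interface (not TF-GridNet
# itself): stack RI parts of the mixture as input features and reassemble
# the C complex estimates Z_hat from the network output.  Shapes are assumed.
import torch

def mixture_to_features(Y):                     # Y: (B, P, T, F) complex STFT
    return torch.cat([Y.real, Y.imag], dim=1)   # (B, 2P, T, F) real-valued

def outputs_to_estimates(out):                  # out: (B, 2C, T, F) real-valued
    real, imag = out.chunk(2, dim=1)
    return torch.complex(real, imag)            # Z_hat: (B, C, T, F) complex

# With any network `net` mapping (B, 2P, T, F) -> (B, 2C, T, F):
#   Z_hat = outputs_to_estimates(net(mixture_to_features(Y)))
```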
### Mixture-consistency loss on filtered estimates
Following (3), we propose _mixture consistency_ (MC) loss, which is computed by filtering the DNN estimate \(\hat{Z}(c)\) of each speaker \(c\) to approximate the \(P\)-channel input mixture:
\[\mathcal{L}_{\text{MC}}=\alpha_{1}\sum_{t,f}\mathcal{F}(Y_{1}(t,f),\sum_{c=1} ^{C}\hat{Z}(c,t,f))+\sum_{p=2}^{P}\alpha_{p}\sum_{t,f}\mathcal{F}(Y_{p}(t,f), \sum_{c=1}^{C}\hat{\mathbf{g}}_{p}(c,f)^{\text{H}}\;\widetilde{\mathbf{Z}}(c, t,f)). \tag{4}\]
\(\widetilde{\mathbf{Z}}(c,t,f)\) stacks a window of T-F units around \(\hat{Z}(c,t,f)\), and \(\hat{\mathbf{g}}_{p}(c,f)\) is an estimated relative RIR computed based on \(\hat{Z}(c,\cdot,f)\) and the mixture \(Y_{p}(\cdot,f)\) through FCP [17]. Both of them will be described in the next sub-section. \(\alpha_{p}\in\mathbb{R}\) is a weighting term for microphone \(p\). Following [14], \(\mathcal{F}(\cdot,\cdot)\) computes an absolute loss on the estimated RI components and their magnitude:
\[\mathcal{F}\Big{(}Y_{p}(t,f),\hat{Y}_{p}(t,f)\Big{)} =\frac{1}{\sum_{t^{\prime},f^{\prime}}\lvert Y_{p}(t^{\prime},f^ {\prime})\rvert}\Big{(}\Big{|}\text{Re}(Y_{p}(t,f))-\text{Re}(\hat{Y}_{p}(t,f) )\Big{|}\] \[+\Big{|}\text{Im}(Y_{p}(t,f))-\text{Im}(\hat{Y}_{p}(t,f))\Big{|} +\Big{|}\lvert Y_{p}(t,f)\rvert-\lvert\hat{Y}_{p}(t,f)\rvert\Big{|}\Big{)}, \tag{5}\]
where \(\text{Re}(\cdot)\) and \(\text{Im}(\cdot)\) respectively extract RI components and \(\lvert\cdot\rvert\) computes magnitude. The term \(1/\sum_{t^{\prime},f^{\prime}}\lvert Y_{p}(t^{\prime},f^{\prime})\rvert\) balances the losses at different microphones and across training mixtures.
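A minimal sketch of the per-microphone loss term in (5), including the magnitude-based normalization, could look as follows; the small epsilon guarding the division is our addition.

```python
# Sketch of Eq. (5): L1 distances on real part, imaginary part and magnitude,
# normalized by the total mixture magnitude at microphone p.
import torch

def loss_F(Y_p, Y_hat_p, eps=1e-8):
    """Y_p, Y_hat_p: (T, F) complex STFTs (mixture and its reconstruction)."""
    norm = Y_p.abs().sum() + eps                 # eps is our addition
    err = ((Y_p.real - Y_hat_p.real).abs()
           + (Y_p.imag - Y_hat_p.imag).abs()
           + (Y_p.abs() - Y_hat_p.abs()).abs())
    return err.sum() / norm
```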
According to the discussion in Section 3, minimizing \(\mathcal{L}_{\text{MC}}\) would encourage separation of speakers. We give an illustration of the loss surface of \(\mathcal{L}_{\text{MC}}\) in Appendix B.
### FCP for relative RIR estimation
To compute \(\mathcal{L}_{\text{MC}}\), we need to first estimate each of the relative RIRs, \(\hat{\mathbf{g}}_{p}(c,f)\). In [17], FCP is proposed to estimate the relative RIR relating direct-path signal to reverberant image for speech dereverberation. In this study, we employ FCP to estimate the relative RIR relating \(\hat{Z}(c)\) to the speaker image captured at each microphone \(p\) (i.e., \(X_{p}(c)\)).

Figure 1: Illustration of UNSSOR (assuming \(P>C\)).
Assuming speakers are non-moving, we estimate relative RIRs by solving the following problem:
\[\hat{\mathbf{g}}_{p}(c,f)=\underset{\mathbf{g}_{p}(c,f)}{\text{ argmin}}\sum\nolimits_{t}\frac{1}{\tilde{\lambda}_{p}(c,t,f)}|Y_{p}(t,f)- \mathbf{g}_{p}(c,f)^{\mathsf{H}}\,\widetilde{\mathbf{Z}}(c,t,f)|^{2}, \tag{6}\]
where \(\mathbf{g}_{p}(c,f)\in\mathbb{C}^{I+1+J}\) is a \(K\)-tap (with \(K=I+1+J\)) time-invariant FCP filter, \(\widetilde{\mathbf{Z}}(c,t,f)=[\hat{Z}(c,t-I,f),\dots,\hat{Z}(c,t,f),\dots,\hat {Z}(c,t+J,f)]^{\mathsf{T}}\in\mathbb{C}^{K}\) stacks \(I\) past and \(J\) future T-F units with the current one. Since the actual number of filter taps (i.e., \(A\) and \(B\) defined in the text below (2)) is unknown, we set them to \(I\) and \(J\), both of which are hyper-parameters to tune. \(\tilde{\lambda}_{p}(c,t,f)\) is a weighting term balancing the importance of each T-F unit. Following [17], we define it as \(\tilde{\lambda}_{p}(c,t,f)=\xi\max(\frac{1}{P}\sum_{p^{\prime}=1}^{P}\lvert Y_ {p^{\prime}}\rvert^{2})+\lvert Y_{p}(t,f)\rvert^{2}\), where \(\xi\) (\(=10^{-4}\) in this study) is used to floor the weighting term and \(\max(\cdot)\) extracts the maximum value of a spectrogram. (6) is a weighted linear regression problem, where a closed-form solution can be readily computed:
\[\hat{\mathbf{g}}_{p}(c,f)=\Big{(}\sum\limits_{t}\frac{1}{\tilde{\lambda}_{p}( c,t,f)}\widetilde{\mathbf{Z}}(c,t,f)\widetilde{\mathbf{Z}}(c,t,f)^{\mathsf{H}} \Big{)}^{-1}\sum\limits_{t}\frac{1}{\tilde{\lambda}_{p}(c,t,f)}\widetilde{ \mathbf{Z}}(c,t,f)(Y_{p}(t,f))^{*}, \tag{7}\]
where \((\cdot)^{*}\) computes complex conjugate. We then plug \(\hat{\mathbf{g}}_{p}(c,f)\) into (4) and compute the loss.
Although in (6) we linearly filter \(\hat{Z}(c)\) to approximate \(Y_{p}\), earlier studies [17] suggest that the resulting \(\hat{\mathbf{g}}_{p}(c,f)^{\mathsf{H}}\,\widetilde{\mathbf{Z}}(c,t,f)\) would be an estimate of \(X_{p}(c,t,f)\), if \(\hat{Z}(c)\) is reasonably accurate (see Appendix C for the derivation). We name the speaker image estimated this way as _FCP-estimated image_:
\[\hat{X}_{p}^{\text{FCP}}(c,t,f)=\hat{\mathbf{g}}_{p}(c,f)^{\mathsf{H}}\, \widetilde{\mathbf{Z}}(c,t,f). \tag{8}\]
It is therefore reasonable to sum up the FCP-estimated images of all the speakers and define a loss between the summation and \(Y_{p}\) as in (4).
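For concreteness, a minimal NumPy sketch of the sub-band FCP computation in (6)-(8), for one source at one microphone, is given below; the zero-padded frame stacking at segment boundaries, the small diagonal loading, and the variable names are our assumptions. Note that the least-squares variable solved for is the conjugate of \(\hat{\mathbf{g}}_{p}(c,f)\), so that applying it reproduces \(\hat{\mathbf{g}}_{p}(c,f)^{\mathsf{H}}\,\widetilde{\mathbf{Z}}(c,t,f)\).

```python
# Sketch of sub-band FCP (Eqs. (6)-(8)) for one source at one microphone:
# a weighted least-squares fit per frequency, then linear filtering of Z_hat.
import numpy as np

def fcp_estimate(Y_p, Z_hat, lam, I=19, J=0):
    """Y_p, Z_hat: (T, F) complex STFTs; lam: (T, F) positive weights.
    Returns the FCP-estimated image X_hat_p^FCP of shape (T, F)."""
    T, F = Y_p.shape
    K = I + 1 + J
    X_fcp = np.zeros((T, F), dtype=complex)
    for f in range(F):
        # Row t of Z_tilde stacks Z_hat(t-I), ..., Z_hat(t+J) at frequency f.
        Z_tilde = np.zeros((T, K), dtype=complex)
        for k, shift in enumerate(range(-I, J + 1)):
            lo, hi = max(0, -shift), min(T, T - shift)
            Z_tilde[lo:hi, k] = Z_hat[lo + shift:hi + shift, f]
        w = 1.0 / lam[:, f]
        A = (Z_tilde.conj().T * w) @ Z_tilde           # weighted normal equations
        b = (Z_tilde.conj().T * w) @ Y_p[:, f]
        A += 1e-6 * np.trace(A).real / K * np.eye(K)   # diagonal loading (ours)
        h = np.linalg.solve(A, b)                      # h = conjugate of Eq. (7)
        X_fcp[:, f] = Z_tilde @ h                      # Eq. (8)
    return X_fcp
```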
### Time alignment issues and alternative loss functions
In (4), we do not filter the DNN estimates when computing the loss on the first (reference) microphone. We expect this to result in a \(\hat{Z}(c)\) time-aligned with the speaker image \(X_{1}(c)\) (i.e., \(\hat{Z}(c)\) is an estimate of \(X_{1}(c)\)). Since the reference microphone may not be the microphone closest to speaker \(c\), it is best to use non-causal filters when filtering \(\hat{Z}(c)\) to approximate the reverberant image \(X_{p}(c)\) at non-reference microphones that are closer to source \(c\) than the reference microphone, and instead use causal filters for non-reference microphones that are farther.2 Since estimating which non-reference microphones are closer to or farther from a source than the reference microphone is not an easy task and doing this would complicate our system, we can just choose to use non-causal filters for all the non-reference microphones. This could, however, limit the DNN's capability at separating the speakers, because the relative RIRs for some non-reference microphones (farther from source \(c\) than the reference microphone) are causal, and it may not be a good idea to assume non-causal filters.
Footnote 2: Note that the relative RIR relating a signal to its delayed version is causal and the relative RIR relating a signal to its advanced version is non-causal [40].
To address this issue, we make a simple modification to the loss function in (4):
\[\mathcal{L}_{\text{MC}}=\sum\limits_{p=1}^{P}\alpha_{p}\mathcal{L}_{\text{MC},p }=\sum\limits_{p=1}^{P}\alpha_{p}\sum\limits_{t,f}\mathcal{F}\Big{(}Y_{p}(t,f),\sum\limits_{c=1}^{C}\hat{\mathbf{g}}_{p}(c,f)^{\mathsf{H}}\,\widetilde{ \mathbf{Z}}(c,t,f)\Big{)}, \tag{9}\]
where the difference is that we also filter the DNN estimates when computing the loss on the reference microphone, and we constrain \(\hat{\mathbf{g}}_{p}(c,f)\) to be causal and \(\widetilde{\mathbf{Z}}(c,t,f)\) to stack only current and past frames. This way, the resulting \(\hat{Z}(c)\) would not be time-aligned with the reverberant image captured at the reference microphone (i.e., \(X_{1}(c)\)) or any other non-reference microphones. Because of the causal filtering, \(\hat{Z}(c)\) would be more like an estimate of the reverberant image captured by a _virtual microphone_ that is closer to speaker \(c\) than all the \(P\) microphones. It would contain less reverberation of speaker \(c\) than any of the speaker images captured by the \(P\) microphones due to the causal filtering.
To produce an estimate that is time-aligned with the reverberant image at a microphone (e.g., \(X_{p}(c)\)), we use the FCP-estimated image computed in (8) (i.e., \(\hat{X}_{p}^{\text{FCP}}(c)\)) as the output.
### Addressing frequency permutation problem
In (4) and (9), FCP is performed in each frequency independently from the others. Even though the speakers are separated at each frequency, the separation results of the same speaker at different frequencies may however not be grouped into the same output spectrogram (see an example in Appendix D). This is known as the _frequency permutation problem_[29], which has been studied for decades in frequency-domain blind source separation algorithms such as frequency-domain independent component analysis [25; 26; 27; 28; 29] and spatial clustering [33; 34; 35; 36]. Popular solutions for frequency alignment are designed by leveraging cross-frequency correlation of spectral patterns [35; 41] and direction-of-arrival estimation [42]. However, these solutions are often empirical and have a complicated design. They can be used to post-process DNN estimates for frequency alignment, but it is not easy to integrate them with UNSSOR for joint training. This section proposes a loss term, with which the trained DNN can learn to produce target estimates without frequency permutation.
To deal with frequency permutation, IVA [30; 31; 32] assumes that, at each frame, the de-mixed outputs at all the frequencies follow a complex Gaussian distribution with a shared variance term across frequencies: \(\mathbf{w}(c,f)^{\text{H}}\mathbf{Y}(t,f)\sim\mathcal{N}(0,D(t,c))\), where \(\mathbf{w}(c,f)\in\mathbb{C}^{P}\) is the de-mixing weight vector (in a time-invariant de-mixing matrix) for speaker \(c\) at frequency \(f\), and \(D(t,c)\in\mathbb{R}\) is the shared variance term, assumed time-variant. When maximum likelihood estimation is performed to estimate the de-mixing matrix, the variance term shared across all the frequencies is found very effective at solving the frequency permutation problem [29; 31; 32; 32].
Motivated by IVA, we design the following loss term, named _intra-source magnitude scattering_ (ISMS), to alleviate the frequency permutation problem in DNN outputs:
\[\mathcal{L}_{\text{ISMS}}=\sum_{p=1}^{P}\alpha_{p}\mathcal{L}_{\text{ISMS},p}= \sum_{p=1}^{P}\alpha_{p}\frac{\sum_{t}\frac{1}{C}\sum_{c=1}^{C}\text{var} \Big{(}\text{log}(|\hat{X}_{p}^{\text{FCP}}(c,t,\cdot)|)\Big{)}}{\sum_{t} \text{var}\Big{(}\text{log}(|Y_{p}(t,\cdot)|)\Big{)}}, \tag{10}\]
where \(\hat{X}_{p}^{\text{FCP}}\) is computed via (8), \(\hat{X}_{p}^{\text{FCP}}(c,t,\cdot)\in\mathbb{C}^{F}\), and \(\text{var}(\cdot)\) computes the variance of the values in a vector. At each frame, we essentially want the magnitudes of the estimated spectrogram of each speaker (i.e., \(\hat{X}_{p}^{\text{FCP}}(c,t,\cdot)\)) to have a small intra-source variance. The rationale is that, when frequency permutation happens, \(\hat{X}_{p}^{\text{FCP}}(c,t,\cdot)\) would contain multiple sources, and the resulting variance would be larger than that computed when \(\hat{X}_{p}^{\text{FCP}}(c,t,\cdot)\) contains only one source. \(\mathcal{L}_{\text{ISMS}}\) echoes IVA's idea of assuming a shared variance term across all the frequencies. If the ratio in (10) becomes smaller, it indicates that the magnitudes of \(\hat{X}_{p}^{\text{FCP}}(c,t,\cdot)\) are more centered around their mean. This is similar to optimizing the likelihood of \(\hat{X}_{p}^{\text{FCP}}(c,t,\cdot)\) under a Gaussian distribution with a variance term shared across all the frequencies. In (10), a logarithmic compression is applied, since log-compressed magnitudes better follow a Gaussian distribution than raw magnitudes [43].
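As an illustration, the ISMS term for a single microphone \(p\) could be computed as in the following sketch; the flooring constant inside the logarithm is our addition.

```python
# Sketch of Eq. (10) for one microphone p: ratio of the average per-frame
# log-magnitude variance of the C FCP-estimated images to that of the mixture.
import torch

def isms_loss_p(X_fcp_p, Y_p, eps=1e-8):
    """X_fcp_p: (C, T, F) complex FCP-estimated images; Y_p: (T, F) mixture."""
    num = torch.log(X_fcp_p.abs() + eps).var(dim=-1)   # variance over F -> (C, T)
    den = torch.log(Y_p.abs() + eps).var(dim=-1)       # variance over F -> (T,)
    return num.mean(dim=0).sum() / den.sum()
```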
We combine \(\mathcal{L}_{\text{ISMS}}\) with \(\mathcal{L}_{\text{MC}}\) in (9) for DNN training, using a weighting term \(\gamma\in\mathbb{R}\):
\[\mathcal{L}_{\text{MC+ISMS}}=\mathcal{L}_{\text{MC}}+\gamma\times\mathcal{L }_{\text{ISMS}}. \tag{11}\]
### Training UNSSOR for monaural unsupervised separation
UNSSOR can be trained for monaural unsupervised separation by only feeding the mixture at the reference microphone to the DNN but still computing the loss on multiple microphones. Fig. 1 illustrates the idea. At run time, the trained system performs monaural under-determined separation, and multi-microphone over-determined mixtures are only required for DNN training. The loss computed at multiple microphones could guide the DNN to exploit monaural spectro-temporal patterns for separation, even in an unsupervised setup.
## 5 Experimental setup
We validate the proposed algorithms on two-speaker separation in reverberant conditions based on the six-channel SMS-WSJ dataset [44] (see Appendix A for its details). This section describes the baseline systems and evaluation metrics. See Appendix F for miscellaneous system and DNN setup.
### Baselines
The baselines include conventional unsupervised separation algorithms, an improved version of RAS, and supervised learning based models.
We include spatial clustering [34; 35; 36] for comparison. We use a public implementation [45], which leverages complex angular-central Gaussian mixture models [36] for sub-band spatial clustering and exploits inter-frequency correlation of cluster posteriors [34; 41] for frequency alignment. The number of sources is set to three, one of which is used for garbage collection, following [32]. After obtaining the estimates, we discard the one with the lowest energy. The STFT window size is tuned to \(128\) ms and hop size to \(16\) ms.
We include IVA [29; 32] for comparison. We use the public implementations provided by the _torchiva_ toolkit [46]. We use the default spherical Laplacian model to model source distribution. In over-determined cases, the number of sources is set to three and we discard the estimate with the lowest energy, similarly to the setup in the spatial clustering baseline. The STFT window size is tuned to \(256\) ms and hop size to \(32\) ms.
We propose a novel variant of the RAS algorithm [24] for comparison. Appendix E discusses the differences between UNSSOR and RAS. Since RAS cannot achieve unsupervised separation, we improve it by computing loss on multi-microphone mixtures, and name the new algorithm as improved RAS (iRAS). We employ the time-domain Wiener filtering (WF) technique in [24] to filter re-synthesized time-domain estimates \(\hat{z}(c)=\text{iSTFT}(\hat{Z}(c))\), where \(\hat{Z}(c)\) is produced by TF-GridNet. The loss is defined as:
\[\mathcal{L}_{\text{iRAS}}=\sum_{p=1}^{P}\alpha_{p}\mathcal{L}_{\text{iRAS},p} =\sum_{p=1}^{P}\alpha_{p}\frac{1}{\|y_{p}\|_{1}}\Big{\|}y_{p}-\sum_{c=1}^{C} \hat{h}_{p}(c)*\hat{z}(c)\Big{\|}_{1}, \tag{12}\]
with \(*\) denoting linear convolution, \(y_{p}\) the time-domain mixture at microphone \(p\), and \(\hat{h}_{p}(c)\) a time-domain Wiener filter computed by solving the following problem:
\[\hat{h}_{p}(c)=\underset{h_{p}(c)}{\text{argmin}}\,\,\Big{\|}y_{p}-h_{p}(c)* \hat{z}(c)\Big{\|}_{2}^{2}, \tag{13}\]
which has a closed-form solution. The separation result is computed as \(\hat{x}_{p}^{\text{WF}}(c)=\hat{h}_{p}(c)*\hat{z}(c)\). Following [24], we use \(512\) filter taps, and filter the future \(100\), the current, and the past \(411\) samples (i.e., non-causal filtering). We can also filter the current and the past \(511\) samples (i.e., causal filtering), and experiment with a filter length (in time) the same as that of the FCP filters (see Appendix G).
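A minimal NumPy sketch of the per-source time-domain Wiener filtering in (13) is given below; a practical implementation would typically use Toeplitz or FFT-based solvers rather than a dense least-squares fit, and the boundary handling of the shifted signals is our assumption.

```python
# Sketch of Eqs. (12)-(13): fit a (past + 1 + future)-tap filter h by least
# squares so that h * z_hat approximates the mixture y_p, then filter z_hat.
import numpy as np

def wiener_filter_estimate(y_p, z_hat, past=411, future=100):
    """y_p, z_hat: 1-D time-domain signals of equal length T."""
    T = len(z_hat)
    K = past + 1 + future
    Z = np.zeros((T, K))
    for k, s in enumerate(range(-future, past + 1)):   # column k holds z_hat(t - s)
        lo, hi = max(0, s), min(T, T + s)
        Z[lo:hi, k] = z_hat[lo - s:hi - s]
    h, *_ = np.linalg.lstsq(Z, y_p, rcond=None)        # filter h_p(c) of Eq. (13)
    return Z @ h                                       # x_hat_p^WF(c) = h * z_hat
```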
We report the result of using supervised learning, where PIT [5] is used to address the permutation problem. This result can be viewed as a performance upper bound of unsupervised separation.
We use the same DNN and training configurations as those in UNSSOR for a fair comparison.
### Evaluation metrics
We designate the first microphone as the reference microphone, and use the time-domain signal corresponding to \(X_{1}(c)\) of each speaker \(c\) for metric computation. The evaluation metrics include signal-to-distortion ratio (SDR) [47], scale-invariant SDR (SI-SDR) [48], perceptual evaluation of speech quality (PESQ) [49], and extended short-time objective intelligibility (eSTOI) [50].
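For reference, SI-SDR [48] between a reference signal and an estimate can be computed as in the following sketch; the mean removal and the small epsilon are our choices.

```python
# Sketch of SI-SDR [48]: project the estimate onto the reference to get the
# scaled target, then compare target energy against the residual energy.
import numpy as np

def si_sdr(reference, estimate, eps=1e-8):
    s = reference - reference.mean()
    s_hat = estimate - estimate.mean()
    alpha = np.dot(s_hat, s) / (np.dot(s, s) + eps)   # optimal scaling of s
    target = alpha * s
    residual = s_hat - target
    return 10 * np.log10((target @ target + eps) / (residual @ residual + eps))
```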
## 6 Evaluation results
### Effectiveness of UNSSOR at promoting separation
Tables 1 and 2 respectively report the results of using six- and three-microphone input and loss. After hyper-parameter tuning, by default we use the loss in (9) for DNN training, set \(I=19\) and \(J=0\)
(defined below (6)) for FCP (i.e., causal FCP filtering with \(20\) taps), and set \(\alpha_{p}=1\) (meaning no weighting is applied for different microphones). For the 3-microphone case, we use the mixtures at the first, third, and fifth microphones for training and testing.
In both tables, from row 1a we notice that UNSSOR produces reasonable separation of speakers, improving the SDR from \(0.1\) to, for example, \(12.5\) dB in Table 1, but its output suffers from the frequency permutation problem (see Appendix D for an example). In row 1c, we use oracle target speech to obtain oracle frequency alignment and observe much better results over 1a. This shows the effectiveness of \(\mathcal{L}_{\text{MC}}\) at promoting separation of speakers and the severity of the frequency permutation problem. In row 1b, we use a frequency alignment algorithm (same as that used in the spatial clustering baseline) [34, 41] to post-process the separation results of 1a. This algorithm leads to impressive frequency alignment (see 1b vs. 1c), but it is empirical and has a complicated design.
### Effectiveness of including intra-source magnitude scattering loss
We train DNNs using \(\mathcal{L}_{\text{MC+ISMS}}\) defined in (11). In each case (i.e., six- and three-microphone), we separately tune the weighting term \(\gamma\) in (11) based on the validation set. In both tables, comparing rows 2a-2c with 1a-1c, we observe that including \(\mathcal{L}_{\text{ISMS}}\) is very effective at dealing with the frequency permutation problem, yielding almost the same performance as using oracle frequency alignment.
### Results of training UNSSOR for monaural unsupervised separation
Table 3 and 4 use the mixture only at the reference microphone \(1\) as the network input, while computing the loss respectively on three and six microphones. We tune \(J\) to \(1\) (i.e., non-causal FCP filter), considering that, for a specific target speaker, the reference microphone may not be the
\begin{table}
\begin{tabular}{l l r r r r r r r} \hline \hline & & & & \multicolumn{3}{c}{Val. set} & \multicolumn{3}{c}{Test set} \\ \cline{4-9} Row & Systems & \(I\) & \(J\) & Loss & SDR (dB) & SDR (dB) & SI-SDR (dB) & PESQ & eSTOI \\ \hline
0a & Mixture & - & - & - & 0.1 & 0.1 & 0.0 & 1.87 & 0.603 \\ \hline
1a & UNSSOR & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC}}\) & \(12.5\) & \(11.9\) & \(10.2\) & \(2.61\) & 0.735 \\
1b & UNSSOR + Corr. based freq. align. & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC}}\) & \(16.1\) & \(15.7\) & \(14.7\) & \(3.47\) & 0.884 \\
1c & UNSSOR + Oracle freq. align. & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC}}\) & \(16.2\) & \(15.8\) & \(14.9\) & \(3.48\) & 0.889 \\ \hline
2a & UNSSOR & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC+ISMS}}\) & \(16.0\) & \(15.6\) & \(14.6\) & \(3.44\) & 0.885 \\
2b & UNSSOR + Corr. based freq. align. & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC+ISMS}}\) & \(16.0\) & \(15.6\) & \(14.7\) & \(3.44\) & 0.885 \\
2c & UNSSOR + Oracle freq. align. & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC+ISMS}}\) & \(16.0\) & \(15.6\) & \(14.7\) & \(3.44\) & 0.886 \\ \hline
3a & Spatial clustering + Corr. based freq. align. [32] & - & - & - & 8.8 & 8.6 & 7.4 & 2.44 & 0.726 \\
3b & IVa [32] & - & - & - & 10.3 & 10.6 & 8.9 & 2.58 & 0.764 \\
3c & iRAS w/ causal 512-tap filters & - & - & \(\mathcal{L}_{\text{BAS}}\) & \(7.8\) & \(7.6\) & \(5.7\) & \(2.14\) & 0.642 \\
3d & iRAS w/ non-causal 512-tap filters & - & - & \(\mathcal{L}_{\text{BAS}}\) & \(8.0\) & \(7.8\) & \(5.7\) & \(2.13\) & 0.637 \\ \hline
4a & PIT (supervised) [14] & - & - & - & 19.9 & 19.4 & 18.9 & 4.08 & 0.949 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Averaged results of 2-speaker separation on SMS-WSJ (6-channel input and loss).
\begin{table}
\begin{tabular}{l l r r r r r r r} \hline \hline & & & & \multicolumn{3}{c}{Val. set} & \multicolumn{3}{c}{Test set} \\ \cline{4-9} Row & Systems & \(I\) & \(J\) & Loss & SDR (dB) & SDR (dB) & SI-SDR (dB) & PESQ & eSTOI \\ \hline
0a & Mixture & - & - & - & 0.1 & 0.1 & 0.0 & 1.87 & 0.603 \\ \hline
1a & UNSSOR & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC}}\) & \(9.9\) & \(9.4\) & \(7.4\) & \(2.12\) & 0.672 \\
1b & UNSSOR + Corr. based freq. align. & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC}}\) & \(15.3\) & \(15.0\) & \(13.9\) & \(3.18\) & 0.867 \\
1c & UNSSOR + Oracle freq. align. & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC}}\) & \(15.5\) & \(15.2\) & \(14.1\) & \(3.19\) & 0.871 \\ \hline
2a & UNSSOR & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC+ISMS}}\) & \(15.7\) & \(15.4\) & \(14.4\) & \(3.20\) & 0.874 \\
2b & UNSSOR + Corr. based freq. align. & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC+ISMS}}\) & \(15.7\) & \(15.4\) & \(14.4\) & \(3.20\) & 0.875 \\
2c & UNSSOR + Oracle freq. align. & \(19\) & \(0\) & \(\mathcal{L}_{\text{MC+ISMS}}\) & \(15.8\) & \(15.4\) & \(14.5\) & \(3.20\) & 0.876 \\ \hline
3a & Spatial clustering + Corr. based freq. align. [32] & - & - & - & 9.6 & 9.5 & 8.5 & 2.52 & 0.759 \\
3b & IVa [32] & - & - & - & 11.6 & 12.0 & 10.7 & 2.67 & 0.802 \\
3c & iRAS w/ causal 512-tap filters & - & - & \(\mathcal{L}_{\text{BAS}}\) & \(5.1\) & \(4.8\) & \(2.7\) & \(1.88\) & 0.588 \\
3d & iRAS w/ non-causal 512-tap filters & - & - & \(\mathcal{L}_{\text{BAS}}\) & \(4.6\) & \(4.5\) & \(2.2\) & \(1.87\) & 0.579 \\ \hline
4a & PIT (supervised) [14] & - & - & - & 17.4 & 16.8 & 16.3 & 3.91 & 0.924 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Averaged results of 2-speaker separation on SMS-WSJ (3-channel input and loss).
microphone closest to that speaker.3 We still set the microphone weight \(\alpha_{p}\) to \(1.0\) for non-reference microphones (i.e., when \(p\neq 1\)), but tune \(\alpha_{1}\) to a smaller value based on the validation set. Without using a smaller \(\alpha_{1}\), we found that the DNN easily overfits to microphone \(1\), as we use the mixture at microphone \(1\) as the only input and compute the MC loss also on the mixture at microphone \(1\). The DNN can just optimize \(\mathcal{L}_{\text{MC},p}\) to zero at microphone \(1\) and not optimize that at other microphones.
Footnote 3: We do not need many future taps, considering that the hop size is \(8\) ms in our system and the microphone array in SMS-WSJ is a compact array with a diameter of \(20\) cm. In air, sound would travel \(340\times 0.008=2.72\) meters in \(8\) ms if its speed is \(340\) meters per second. This distance is far larger than the array aperture size.
From row 1a of both tables, strong performance is observed in this under-determined setup, indicating that the multi-microphone loss can inform the DNN what desired target sound objects are and the DNN can learn to model spectral patterns in speech for unsupervised separation.
### Comparison with other methods
In Tables 1-4, we compare the performance of UNSSOR with spatial clustering, IVA, iRAS, and supervised PIT-based models. In Appendix G, we compare UNSSOR and iRAS when they use the same filter length (in time). UNSSOR shows better performance than previous unsupervised separation methods that can be applied or trained directly on mixtures. It is worse than supervised PIT, but the performance is reasonably strong. For example, in row 2a of Table 2, UNSSOR obtains \(15.4\) dB SDR on the test set, which is close to the \(16.8\) dB result obtained by supervised PIT in 4a.
## 7 Conclusion
We have proposed UNSSOR for unsupervised neural speech separation. We show that it is possible to train unsupervised models directly on mixtures, if the mixtures are over-determined. We have proposed mixture-consistency loss functions, which leverage multi-microphone mixtures as constraints, to promote separation of speakers. We find that minimizing ISMS can alleviate the frequency permutation problem. Although UNSSOR requires over-determined training mixtures, it can be trained to perform under-determined unsupervised separation. Future research will combine UNSSOR with semi-supervised learning, evaluate it on real data recorded in noisy-reverberant conditions, and address the limitation of our current system (see Appendix H).
In closing, we emphasize that the key scientific contribution of this paper is that the over-determined property afforded by having more microphones than speakers can narrow down the solutions to the underlying sources, and this property can be leveraged to design a supervision to train DNNs to model speech patterns via unsupervised learning and realize unsupervised separation. This meta-idea, we believe, would motivate the design of many algorithms in future research on neural source separation.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{Val. set} & \multicolumn{3}{c}{Test set} \\ \cline{4-10} Row & Systems & \(I\) & \(J\) & Loss & SDR (dB) & SDR (dB) & SI-SDR (dB) & PESQ & eSTOI \\ \hline
0a & Mixture & - & - & - & \(0.1\) & \(0.1\) & \(0.0\) & \(1.87\) & \(0.603\) \\ \hline
1a & UNSSOR & \(19\) & \(1\) & \(\mathcal{L}_{\text{MC-ISMS}}\) & \(12.5\) & \(12.0\) & \(11.4\) & \(3.18\) & \(0.822\) \\ \hline
2a & iRAS w/ causal 512-tap filters & - & - & \(\mathcal{L}_{\text{BAS}}\) & \(-0.1\) & \(-0.3\) & \(-3.0\) & \(1.62\) & \(0.453\) \\
2b & iRAS w/ non-causal 512-tap filters & - & - & \(\mathcal{L}_{\text{BAS}}\) & \(11.0\) & \(10.7\) & \(9.9\) & \(2.81\) & \(0.783\) \\ \hline
3a & Monaural PIT (supervised) [14] & - & - & - & \(16.2\) & \(15.7\) & \(15.3\) & \(3.79\) & \(0.907\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Averaged results of 2-speaker separation on SMS-WSJ (1-channel input and 3-channel loss).
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & & & & & \multicolumn{3}{c}{Val. set} & \multicolumn{3}{c}{Test set} \\ \cline{4-10} Row & Systems & \(I\) & \(J\) & Loss & SDR (dB) & SDR (dB) & SI-SDR (dB) & PESQ & eSTOI \\ \hline
0a & Mixture & - & - & - & \(0.1\) & \(0.1\) & \(0.0\) & \(1.87\) & \(0.603\) \\ \hline
1a & UNSSOR & \(19\) & \(1\) & \(\mathcal{L}_{\text{MC-ISMS}}\) & \(13.0\) & \(12.5\) & \(11.9\) & \(3.27\) & \(0.832\) \\ \hline
2a & iRAS w/ causal 512-tap filters & - & - & \(\mathcal{L}_{\text{BAS}}\) & \(7.5\) & \(7.2\) & \(5.6\) & \(2.03\) & \(0.641\) \\
2b & iRAS w/ non-causal 512-tap filters & - & - & \(\mathcal{L}_{\text{BAS}}\) & \(10.7\) & \(10.5\) & \(9.7\) & \(2.80\) & \(0.778\) \\ \hline
3a & Monaural PIT (supervised) [14] & - & - & - & \(16.2\) & \(15.7\) & \(15.3\) & \(3.79\) & \(0.907\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Averaged results of 2-speaker separation on SMS-WSJ (1-channel input and 6-channel loss).
## Acknowledgments and Disclosure of Funding
We would like to thank Dr. Robin Scheibler at LINE Corporation for constructive discussions on IVA. This research is part of the Delta research computing project, which is supported by the National Science Foundation (award OCI \(2005572\)) and the State of Illinois. Delta is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the RTX 8000 GPUs used in this research.
|
2309.00051 | The Nature of the IMBH Candidate CXO~J133815.6+043255: High-Frequency
Radio Emission | The ultra-luminous X-ray source CXO~J133815.6+043255 is a strong candidate
for a bona-fide intermediate mass black hole, residing in the outskirts of
NGC~5252. We present 22~GHz radio observations of this source obtained
serendipitously in an ongoing high-frequency imaging survey of radio-quiet
Active Galactic Nuclei (AGN), and use this new data point to construct the
broad-band radio spectral energy distribution (SED). We find that the SED
exhibits a spectral slope of $\alpha=-0.66\pm0.02$, consistent with a steep
spectrum from optically-thin synchrotron emission from an unresolved jet. We
also find that the $L_R / L_X$ ratio is approximately $10^{-3}$, inconsistent
with radio-quiet AGN and many ULXs but consistent with low-luminosity AGN
(LLAGN) and radio-loud quasars. Together, these observations support the
conclusion that CXO~J133815.6+043255 is an intermediate-mass black hole
producing a low-mass analog of radio jets seen in classical quasars. | Krista Lynne Smith, Macon Magno, Ashutosh Tripathi | 2023-08-31T18:00:04Z | http://arxiv.org/abs/2309.00051v1 | # The Nature of the IMBH Candidate CXO J133815.6+043255: High-Frequency Radio Emission
###### Abstract
The ultra-luminous X-ray source CXO J133815.6+043255 is a strong candidate for a bona-fide intermediate mass black hole, residing in the outskirts of NGC 5252. We present 22 GHz radio observations of this source obtained serendipitously in an ongoing high-frequency imaging survey of radio-quiet Active Galactic Nuclei (AGN), and use this new data point to construct the broad-band radio spectral energy distribution (SED). We find that the SED exhibits a spectral slope of \(\alpha=-0.66\pm 0.02\), consistent with a steep spectrum from optically-thin synchrotron emission from an unresolved jet. We also find that the \(L_{R}/L_{X}\) ratio is approximately \(10^{-3}\), inconsistent with radio-quiet AGN and many ULXs but consistent with low-luminosity AGN (LLAGN) and radio-loud quasars. Together, these observations support the conclusion that CXO J133815.6+043255 is an intermediate-mass black hole producing a low-mass analog of radio jets seen in classical quasars.
Black holes (162) -- Intermediate-mass black holes (816) -- Active galactic nuclei (16)
## 1 Introduction
Intermediate mass black holes (IMBHs) fall in the mass gap between stellar remnants (\(M_{\rm BH}\sim 10M_{\odot}\)) and their supermassive counterparts (\(M_{\rm BH}\sim 10^{6}-10^{10}M_{\odot}\)). Evidence for black holes in the lower portion of this intermediate range is now commonplace in gravitational wave detections from the Laser Interferometer Gravitational Wave Observatory (LIGO), which routinely observes mergers of \(\sim 10M_{\odot}\) black holes that presumably produce a remnant with \(M_{\rm BH}\sim 10^{2}M_{\odot}\)(e.g., Abbott et al., 2020). Electromagnetic observations have long suggested the presence of black holes in this mass range, although never unambiguously. Colbert & Mushotzky (1999) first interpreted bright X-ray sources in spiral galaxies as possible black holes with masses of \(10^{2}-10^{4}\ M_{\odot}\). In the past two decades,observational focus on ultra-luminous X-ray sources (ULXs) has generated a number of IMBH candidates. X-ray spectroscopy of ULXs has been found to be consistent with the cooler accretion disks of smaller black holes, although sometimes requiring the assumption of unusual accretion states (Miller et al., 2003; Bachetti et al., 2013; Palit & Mondal, 2023). Measurements of the radio and X-ray luminosities have allowed mass estimates via the black hole fundamental plane (Merloni et al., 2003) that suggest intermediate values for some objects (Mezcua & Lobanov, 2011; Cseh et al., 2012). The object HLX-1, located in the galaxy ESO 243-49, has an X-ray luminosity implying a strong lower bound of a few hundred solar masses that is supported by numerous electromagnetic follow-up studies (Farrell et al., 2010). X-ray timing has also delivered several candidates, from observations of quasi-periodic oscillations in several ULXs that indicate intermediate masses if typical frequency-mass scaling relations are followed (e.g., M 82 X-1, NGC 5408 X-1, Pasham et al., 2014; Strohmayer et al., 2007). While these objects remain viable IMBH candidates, it has become clear that the majority of ULXs are most likely to be particular accretion states of X-ray binaries, especially neutron stars accreting at many times the Eddington rate, as described in recent reviews (Fabrika et al., 2021; King et al., 2023).
One especially interesting IMBH candidate is CXO J133815.6+043255 (hereafter CXO J1338+04), an ultra-luminous X-ray source (ULX) in the outskirts of the lenticular Seyfert galaxy NGC 5252. This object has been carefully studied by a handful of detailed investigations over the past decade. It was discovered by Kim et al. (2015) in a search of archival _Chandra_ images, and confirmed to have AGN-like optical ionization line ratios by the same study. In follow up investigations, Kim et al. (2017) found evidence that the ionized
gas in the vicinity was kinematically bound to the object and has very low metallicity, leading the authors to posit that the ULX was once the central black hole in a dwarf galaxy that is now in the late stages of a minor merger with NGC 5252. Mezcua et al. (2018) provided the strongest evidence of an IMBH in the source so far: a VLBI image showing a resolved, 2-component radio source at the location of the ULX, with an SED analysis suggesting that one source is the radio core, and the other a small jet lobe. Using the black hole fundamental plane (Merloni et al., 2003), they constrain the black hole mass to \(10^{3.5}M_{\odot}<M_{\rm BH}<10^{6.3}M_{\odot}\). Finally, Kim et al. (2020) conducted near-infrared (NIR) imaging of the source with the _Herschel_ space telescope to study the stellar properties of the putative remnant dwarf host, finding a stellar mass of \(M_{*}=10^{7.9}M_{\odot}\) for the remnant galaxy, and used scaling relations to estimate the IMBH mass at \(10^{5}M_{\odot}\).
In this work, we present data obtained serendipitously for CXO J1338+04 in our 22 GHz radio survey of nearby radio-quiet ultra-hard X-ray selected AGN from the _Swift_-BAT survey conducted over the past several years with the Jansky Very Large Array (JVLA) (Smith et al., 2016, 2020).
The main target in our radio observation was NGC 5252, but our field of view easily encompasses CXO J1338+04, which is robustly detected (\(>5\sigma\)). We combine our new, high-frequency radio flux density with archival radio data, and analyze the source's X-ray to radio luminosity ratio in the context of other accreting sources to place further constraints on its nature.
In Section 2, we discuss the JVLA observation and data reduction. In Section 3, we present the broadband radio spectral energy distribution and the \(L_{R}/L_{X}\) results. Section 4 provides some discussion of the results, and a conclusive summary can be found in Section 5.
Throughout, we assume cosmological parameters \(H_{0}=69.6\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.286\), and \(\Omega_{\Lambda}=0.714\)(Bennett et al., 2014).
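For readers reproducing distance-dependent quantities under this cosmology, the adopted parameters map directly onto an `astropy` cosmology object as in the sketch below; the example redshift is an illustrative placeholder, not a measurement quoted in this work, and assumes a recent `astropy` installation.

```python
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this work (Bennett et al. 2014): flat, so Omega_Lambda = 0.714 follows
cosmo = FlatLambdaCDM(H0=69.6, Om0=0.286)

z_example = 0.023                              # placeholder redshift of a nearby Seyfert
d_l = cosmo.luminosity_distance(z_example)     # Quantity in Mpc
print(f"Luminosity distance at z = {z_example}: {d_l:.1f}")
```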
## 2 Observation and Data Reduction
The object CXO J1338+04 was observed in the same integration as its host galaxy NGC 5252, as part of our JVLA 22 GHz imaging survey of radio-quiet AGN host galaxies from the _Swift_-BAT ultra-hard X-ray survey (Baumgartner et al., 2013). The observations were taken in the K-band while the array was in C-configuration, so the resulting images have a beam size of \(1\arcsec\). We have presented the results of the first two phases of this survey in three papers (Smith et al., 2016, 2020, 2020); it is the radio segment of the larger BAT AGN Spectroscopic Survey (BASS; Koss et al., 2017, 2022), a multi-wavelength follow-up effort for the full BAT AGN sample.
Our observations of NGC 5252 occurred on March 2, 2020 as part of an execution block including two other science targets. The block began with X- and K-band attenuation scans and an X-band reference pointing scan on 3C 286. The science observation of NGC 5252 had an on-source integration time of 8 minutes and 50 seconds, and was bracketed on either side by gain calibration scans of J1347+1217 and preceded by an X-band pointing calibration scan of this same target. The raw data were passed through the standard JVLA reduction pipeline at the National Radio Astronomy Observatory (NRAO). We then processed them using the Common Astronomy Software Applications package (v. 4.5, McMullin et al., 2007), and inspected the data for radio-frequency interference (RFI) or bad phase calibration interactively with CASA's plotms. In the case of NGC 5252, neither effect was found. Finally, each science target was split from the main measurement set and averaged over all 64 channels within each spectral window, and the data were cleaned to a 0.03 mJy threshold with Briggs weighting.
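As an illustration of this reduction path, the following CASA-style calls sketch the channel-averaging and imaging steps described above; the file names, image size, and cell size are placeholders, and only the 0.03 mJy cleaning threshold and Briggs weighting follow the text (the actual reduction used CASA v4.5 rather than the modular `casatasks` interface assumed here).

```python
# Minimal sketch of the post-pipeline steps (placeholder names and imaging parameters)
from casatasks import split, tclean

split(vis='execution_block.ms', outputvis='ngc5252_k.ms',
      field='NGC5252', width=64, datacolumn='corrected')   # average the 64 channels per spw

tclean(vis='ngc5252_k.ms', imagename='ngc5252_22GHz',
       imsize=2048, cell='0.2arcsec', specmode='mfs',
       deconvolver='hogbom', weighting='briggs', robust=0.5,
       niter=10000, threshold='0.03mJy')                    # clean to 0.03 mJy
```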
The image obtained by this observation is shown in Figure 1 with an inset of the VLBI image of Mezcua et al. (2018) and alongside the \(g\)-band image of NGC 5252 from the Pan-STARRS survey (Chambers et al., 2016). The ULX is approximately 10 kpc from the galaxy center. The radio source is unresolved in our image with \(\theta_{\rm maj}=1.07\arcsec\) and \(\theta_{\rm min}=0.82\arcsec\); the resolution of our observations is much larger than the VLBI image from Mezcua et al. (2018) in which the pc-scale jet was resolved.
## 3 Results
### Radio Spectral Energy Distribution
We measure an unresolved flux density of \(0.53\pm 0.02\) mJy for CXO J1338+04. (The Seyfert nucleus in NGC 5252 was detected at \(6.39\pm 0.04\) mJy; this and the core flux densities for our full sample will be published in the upcoming final paper in our survey series, Magno et al. in preparation.)
Several observations at lower radio frequencies exist in the literature. These are helpfully tabulated by Yang et al. (2017) in their Table 2. We note that they noticed no significant variability in the radio flux density despite these archival measurements occurring over several decades. Our beam size is slightly smaller than the majority of the VLA beam sizes for observations taken at 1.49 GHz (\(\sim 1.5\arcsec\)) and slightly larger than VLA beam sizes for observations taken at 8.4 GHz (\(\sim 0.3\arcsec\)). We construct an SED with these archival observations in
Figure 2. The flux densities of 8.4 GHz observations, taken annually from 1990 through 1993 (Wilson and Tsvetanov, 1994; Kukula et al., 1995; Nagar et al., 1999), are all consistent with one another and at similar resolution (\(\sim 0.3^{\prime\prime}\), taken in A and A/B hybrid configurations of the VLA), so we average these measurements in the plot and fitting.
CXO J1338+04 is also robustly detected in the VLA Sky Survey (VLASS; Lacy et al., 2020) at 3 GHz and 2.5\({}^{\prime\prime}\) resolution. We used an Epoch 1.2 VLASS image centered on NGC 5252. This image was generated using the Canadian Initiative for Radio Astronomy Data Analysis (CIRADA) cutout service1. Flux density was measured using the Cube Analysis and Rendering Tool for Astronomy (CARTA; Comrie et al., 2021) interface: we create a 2.5\({}^{\prime\prime}\)\(\times\)2.0\({}^{\prime\prime}\) region with a position angle of 18.45 degrees centered on the ULX, fully containing its emission. CARTA calculates the flux density by multiplying the mean intensity in this region by the number of beam areas subtended (where the beam area is defined as the volume of the elliptical Gaussian defined by the synthesized beam, divided by the maximum of that function). We obtain a VLASS 3 GHz flux density of \(1.53\pm 0.16\) mJy; however, the currently available VLASS images have a systematic error of 15% according to its documentation (0.22 on our measured flux). We use this value in the plot and the spectral slope fit.
Footnote 1: cutouts.cirada.ca
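The region-to-flux conversion performed by CARTA reduces to simple arithmetic with the Gaussian-beam area; the sketch below reproduces it with illustrative pixel and beam values rather than the exact VLASS image header.

```python
import numpy as np

def region_flux_density(mean_intensity_jy_per_beam, n_pixels, pixel_arcsec,
                        bmaj_arcsec, bmin_arcsec):
    """Flux density = mean intensity x number of beam areas subtended by the region.

    The beam area is the integral of the elliptical Gaussian beam divided by its
    peak value, i.e. pi * bmaj * bmin / (4 ln 2).
    """
    beam_area = np.pi * bmaj_arcsec * bmin_arcsec / (4.0 * np.log(2.0))
    region_area = n_pixels * pixel_arcsec**2
    return mean_intensity_jy_per_beam * region_area / beam_area

# Illustrative numbers only (a ~2.5" x 2.0" region on a VLASS-like 3 GHz image)
print(region_flux_density(0.5e-3, n_pixels=80, pixel_arcsec=0.6,
                          bmaj_arcsec=2.5, bmin_arcsec=2.0))
```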
There have also been a number of VLBI investigations of the source. Yang et al. (2017) observed it with the EVN at 1.66 GHz and find it unresolved with a flux density of \(1.8\pm 0.1\) mJy. The VLBA observations of Mezcua et al. (2018) resolved the radio source into two components along a roughly east-west axis, with a combined total flux density of \(0.66\pm 0.09\) mJy. The eastern component is detected at both of their observing frequencies (4.4 GHz and 7.6 GHz), with a very steep spectral index between the two sources of \(\alpha=-2.0\pm 0.1\), where \(S_{\nu}\propto\nu^{\alpha}\). Due to this well-constrained, steep spectral index, Mezcua et al. (2018) believe the eastern component to be a jet lobe, associated with the radio core in the western component. The western component is detected at \(0.12\pm 0.03\) mJy at 7.6 GHz, and is not detected at 4.4 GHz. Based on the 5\(\sigma\) upper limit of the flux density, they place an upper limit on the spectral index of this component of \(\alpha\sim-0.6\); this flatter index is most consistent with the western source harboring the true radio core. The Yang et al. (2017) observation likely failed to resolve the two components because of the unfortunate coincidence that its beam is highly elongated along the apparent jet axis. Because of the dramatically higher resolution of these VLBI observations, we do not include them in the SED fitting, although they are relevant to the upcoming \(L_{R}/L_{X}\) discussion in Section 3.2.
Our 22 GHz data point extends the lower-resolution SED significantly in frequency space. Using least-squares minimization, we calculate an overall spectral index of \(\alpha=-0.66\pm 0.02\) between 1.4 and 22 GHz, a typical steep spectral index consistent with the majority of emission at these resolutions coming from an unresolved jet source.
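A least-squares power-law fit of this kind reduces to a straight-line fit in log-log space; the short sketch below illustrates the procedure with made-up flux densities standing in for the tabulated measurements.

```python
import numpy as np

# Placeholder SED points (frequency in GHz, flux density in mJy); the real fit uses
# the archival values plus the 22 GHz measurement reported above.
nu = np.array([1.4, 3.0, 8.4, 22.0])
s_nu = np.array([3.0, 1.5, 0.8, 0.5])

# S_nu ~ nu^alpha  =>  log S_nu = alpha * log nu + const
alpha, const = np.polyfit(np.log10(nu), np.log10(s_nu), deg=1)
print(f"spectral index alpha = {alpha:.2f}")
```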
It is clear from the VLBI-scale observations of Mezcua et al. (2018) that a jet-like radio source is present at the position of the ULX within our beam. However, since the spatial resolution of our 22 GHz observation and many of the archival observations shown in Figure 2 are not on VLBI scales, it is possible that other sources such as star-forming regions are contributing to the emission. Star-forming HII regions tend to have \(\alpha\sim-0.1\) (Condon, 1992) due to significant contributions from bremsstrahlung radiation. Isolated supernova remnants tend to have \(\alpha<-0.6\) (e.g., Kapinska et al., 2017), although the relatively flat thermal component of nearby HII regions typically leads to a flatter composite spectrum of the star-forming region as a whole. X-ray binaries often have flat or inverted spectra (e.g., Migliari et al., 2003). It is therefore unlikely, but not impossible, that a spectral index of \(\alpha\sim-0.6\) is consistent with star formation; however, Kim et al. (2015) found AGN-like optical emission line ratios from long-slit spectroscopic observations of CXO J1338+04 with \(\sim 1\arcsec\) resolution, inconsistent with ratios expected from HII regions. With all of this accounted for, it is unlikely that star formation-related sources are causing the steep radio spectral index reported here.

Figure 1: (a) Close-up view of our K-band radio image of ULX CXO J1338+04. Contours occur at 70, 50, 30, and 15% of the peak flux density. Inset is a reproduction of the VLBI image from Mezcua et al. (2018); the very small black box in the center, from which the inset is enlarged, has the dimensions of the VLBI image. (b) _Left:_ Full K-band radio image of NGC 5252, including the ULX in the top right. The beam is shown on bottom right. _Right:_ The PanSTARRS g-band image of NGC 5252 with radio contours overlaid in red. The scalebar shows the 10.3 kpc distance between the Seyfert nucleus and the ULX.
### The \(L_{R}/L_{X}\) Ratio
Laor & Behar (2008) found that the relationship between the radio and X-ray luminosities of radio-quiet Seyferts and quasars is the same as that exhibited by cool, coronally active stars: \(L_{R}/L_{X}\sim 10^{-5}\). In that work, the authors compare radio-loud and radio-quiet optically-selected Palomar-Green quasar subsamples (PG; Schmidt & Green, 1983) studied in detail by Boroson & Green (1992) and four ULXs: NGC 5408 X-1 (Kaaret et al., 2003), M82 X-1 (Kaaret et al., 2006), NGC 7424 ULX-2 (Soria et al., 2006), and Holmberg II (Dewangan et al., 2004). As described in detail in Laor & Behar (2008), the radio fluxes of the PG sample were measured at 5 GHz by the VLA (Kellermann et al., 1989, 1994) and the X-ray fluxes come from ROSAT observations by Brandt et al. (2000) and Laor & Brandt (2002). The bolometric X-ray luminosity is derived over the range 0.2-20 keV using the relation \(L_{X}=C\nu L_{\nu}({\rm 1keV})\), with \(C=6.25\); for the detailed derivation, see Section 2.1 of Laor & Behar (2008). The radio and X-ray luminosities of the four ULXs come from diverse sources as detailed in Section 2.3 of Laor & Behar (2008) and the references above, but are typically derived at 5 GHz and 2-10 keV. The radio-loud and radio-quiet quasars occupy distinctly different populations in \(L_{R}/L_{X}\) space, with the four ULXs consistent with the radio-quiet population.
The Laor & Behar (2008) comparison also includes a small sample of low-luminosity Seyferts; instead of that sample of 12, which required many corrections of the X-ray and radio fluxes, we include two larger samples: the LLAGN sample from Plotkin et al. (2012), and the sample of 100 nearby (\(z<0.05\)) Seyferts at 22 GHz from an earlier phase of the same survey in which we detect CXO J1338+04 here, as well as the radio and X-ray luminosities of a further \(\sim 150\) objects that represent the final phase of the survey (and are also radio-quiet Seyferts). The survey will be published fully in an upcoming work (Magno et al., in preparation). The large majority (96%) of the 22 GHz survey sources are radio-quiet, and are consistent with LLAGN along the fundamental plane of black hole activity (Smith et al., 2020).
The \(0.3-8\) keV X-ray luminosity of CXO J1338+04 is \(L_{X}=1.5\times 10^{40}\) erg s\({}^{-1}\)(Kim et al., 2015). When compared to the 4.4 GHz radio luminosity derived from the flux density of the total jet + core morphology from Mezcua et al. (2018), this results in a ratio of \(L_{R}/L_{X}=1.3\times 10^{-3}\). In Figure 3, we plot this value along with the objects given in Laor & Behar (2008), the LLAGN of Plotkin et al. (2012), and the Smith et al. (2020) high-frequency survey of radio-quiet Seyferts.
Our own observed flux results in a luminosity of \(L_{\rm 22GHz}=5.2\times 10^{37}\) erg s\({}^{-1}\), which when compared to the X-ray luminosity yields \(L_{R}/L_{X}=4.3\times 10^{-3}\).
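For concreteness, the conversion from a measured flux density to a monochromatic radio luminosity, and the resulting \(L_{R}/L_{X}\) ratio, can be written as below; the adopted distance is an assumed placeholder for NGC 5252, so the printed numbers are illustrative rather than the values quoted above.

```python
import numpy as np

MPC_CM = 3.086e24          # cm per Mpc
d_l_mpc = 100.0            # assumed luminosity distance (placeholder, not a measurement)
d_l_cm = d_l_mpc * MPC_CM

nu_hz = 22e9               # observing frequency
s_nu_cgs = 0.53e-26        # 0.53 mJy in erg s^-1 cm^-2 Hz^-1

l_radio = 4.0 * np.pi * d_l_cm**2 * nu_hz * s_nu_cgs   # nu * L_nu in erg/s
l_x = 1.5e40                                            # 0.3-8 keV luminosity (Kim et al. 2015)
print(f"L_R = {l_radio:.2e} erg/s,  L_R/L_X = {l_radio / l_x:.1e}")
```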
We note that the samples compared in this plot were taken from a diverse array of literature sources, and were not all obtained at the same radio or X-ray frequencies or energies. Canonically, the threshold between radio-loud and radio-quiet AGN was proposed by Terashima & Wilson (2003) as \(\log R_{X}\equiv\log(\nu L_{\nu,\rm 5GHz}/L_{X,2-10keV})\sim-4.5\). Laor & Behar (2008) also use the radio luminosity at 5 GHz, but in X-rays use the integrated \(0.2-20\) keV luminosity obtained with the conversion factor \(C_{\nu}=6.25\) (see preceding discussion in this section). Therefore, the X-ray luminosities of objects in Figure 3 may shift left or right by a factor of \(\sim 5\), or 0.5 dex. The effect of using different bands on the \(L_{R}/L_{X}\) ratio can be visually estimated by the spread between our 22 GHz data point and the corresponding 4.4 GHz data point from Mezcua et al. (2018); although their data point is taken with VLBI at significantly smaller spatial scales, it remains above ours by about 0.3 dex. All of the Smith et al. (2020) points are taken at 22 GHz, and remain mostly consistent in \(L_{R}/L_{X}\) with the radio-quiet quasars at 5 GHz from Laor & Behar (2008) (despite being at lower X-ray luminosities, as expected for Seyferts as compared to luminous quasars). We have used the _Swift-XRT_ 2-10 keV X-ray luminosities (rather than the ultra-hard _Swift-BAT_ luminosities) when computing the \(L_{X}\) of the Smith et al. (2020) sample, for maximum consistency with the other samples. It is therefore unlikely that the distinct populations on the plot are due to the observations being taken at different bands or energies; and in any case, CXO J1338+04 retains its relative position to the other populations at both radio frequencies and resolutions presented.

Figure 2: Spectral energy distribution of CXO J1338+04, including data summarized by Yang et al. (2017), and our data point at 22 GHz (log \(\frac{\nu}{\rm GHz}=1.34\)). Circles represent the relative sizes of the beam for each observation; the smallest is the \(0.36\arcsec\) 1.6 GHz MERLIN observations by Thean et al. (2001), and \(\sim 0.3\arcsec\) resolutions of the averaged 8.4 GHz observations by Wilson & Tsvetanov (1994), Kukula et al. (1995), and Nagar et al. (1999). The largest is the 1.4 GHz flux observation from the FIRST survey, at \(\sim 6\arcsec\) (Becker et al., 1995).
## 4 Discussion
When combined with archival observations, our high-frequency data point extends the radio SED significantly. The best-fitting spectral index of \(\alpha=-0.66\pm 0.02\) is consistent with the canonical value for steep spectrum radio emission from a synchrotron jet (e.g., Krolik, 1999). Because the resolved VLBI observations of Mezcua et al. (2018) indicated a possible core and jet-lobe morphology at much smaller scales, it is reasonable to assume that the larger-scale unresolved radio emission probed by the archival data and our new observation shown in Figure 2 is due to this same jet.
Radio-loud and radio-quiet samples show a clear dichotomy in their \(L_{R}/L_{X}\) ratios. The ratio between the radio and X-ray luminosity of CXO J1338+04 is consistent with it being a low-mass analog of a radio-loud AGN or quasar, as shown in Figure 3. The same conclusion was reached by Mezcua et al. (2018) based on the 4.4 GHz flux of only the western component of their resolved VLBA image, which they believe is the location of the core. We have added a point to Figure 3 that shows the ratio for the entire integrated 4.4 GHz flux from their observation, as well as for our unresolved 22 GHz observation. Both points are far above the radio-quiet quasars from Laor & Behar (2008) and from the 95% radio-quiet BAT AGN sample from Smith et al. (2020). In fact, the \(L_{R}/L_{X}\) ratio is most consistent with radio-loud quasars and the LLAGN sample from Plotkin et al. (2012), which was chosen to be analogous to the "low-hard" accretion state of X-ray binaries. The origin of radio emission in radio-loud objects is reasonably well-established as a classical jet, and the LLAGN population is considered most likely to produce jets in accordance with the disk-jet coupling model, due to its position on the same fundamental plane as low-hard X-ray binaries and the fact that most LLAGN are radio-loud (Merloni et al., 2003; Falcke et al., 2004; Terashima & Wilson, 2003). This observation further supports the interpretation of the radio emission from CXO J1338+04 as a jet launched by an accreting intermediate-mass black hole. In this case, it joins the recently discovered jet candidate found in the dwarf elliptical galaxy SDSS J090613.77+561015.2 (Yang et al., 2023), which is likely to have a black hole mass of \(3.6\times 10^{5}\,M_{\odot}\) (Baldassare et al., 2016).

Figure 3: Ratio of the radio and X-ray luminosities compared to the X-ray luminosity for the Laor & Behar (2008) (LB08) quasar and ULX samples, the BAT AGN Seyfert sample from Smith et al. (2020), the LLAGN sample from Plotkin et al. (2012), and the ULX being studied in this work, CXO J1338+04. The radio luminosities for the LB08 sample are at 5 GHz for the quasars and a variety of frequencies near 5 GHz for the ULXs. The BAT AGN luminosities are taken from the 1′′ core at 22 GHz. We therefore show the values for CXO J1338+04 measured at 22 GHz by our survey, and at 4.4 GHz by Mezcua et al. (2018).
It is also apparent from Figure 3 that CXO J1338+04 is much more radio-loud than the other ULXs in the Laor & Behar (2008) sample. These objects include NGC 5408 X-1, which is likely to be a stellar-mass object accreting at super-Eddington rates and driving a wind nebula (Pinto et al., 2016; Luangtip et al., 2021); Holmberg II X-1, which is unlikely to be \(>100\,M_{\odot}\) and is accreting at slightly sub-Eddington rates (Goad et al., 2006; Ambrosi et al., 2022); M82 X-1, whose nature is uncertain but may be a few \(\times 100\,M_{\odot}\) black hole (Pasham et al., 2014; Mondal et al., 2022); and NGC 7424 ULX-2, which is located in a young OB association but does exhibit a steep radio spectrum like our source (Soria et al., 2006). In short, most of these objects are either stellar-mass objects, or at least are well below the mass range established for CXO J1338+04 by previous investigations: \(10^{3.5}M_{\odot}<M_{\rm BH}<10^{6.3}M_{\odot}\) from Mezcua et al. (2018) and \(10^{5}M_{\odot}\) by Kim et al. (2020). In fact, the majority of ULXs may be stellar-mass objects accreting at or above the Eddington rate (e.g., Berghea et al., 2008), so if CXO J1338+04 is a bona-fide intermediate-mass black hole producing a real analog of a radio-loud quasar jet, we might expect its \(L_{R}/L_{X}\) ratio to exceed that of other ULXs, as we observe here.
## 5 Conclusions
We have presented a new high-frequency 22 GHz radio observation of the IMBH candidate CXO J133815.6+043255, an ultra-luminous X-ray source in the outskirts of the Seyfert galaxy NGC 5252. We find the following:
* The 22 GHz flux density is \(S_{\nu}=0.53\pm 0.02\) mJy. When combined with archival observations, this results in a broadband SED well-fit by a power law with a spectral index of \(\alpha=-0.66\pm 0.02\) between 1.4 and 22 GHz, consistent with steep radio spectra from synchrotron jet emission.
* The \(L_{R}/L_{X}\) ratio definitively occupies the regime of LLAGN and radio-loud quasars, and is not consistent with radio-quiet Seyferts or with ULXs associated with stellar mass objects.
We conclude that the 22 GHz observations support the conclusion that CXO J133815.6+043255 is an intermediate-mass black hole producing a radio jet, as suggested by the VLBI observations of Mezcua et al. (2018). That is, CXO J133815.6+043255 is likely to be a true low-mass analog of radio-loud quasars.
KLS and MM gratefully acknowledge discussions fostered by the _VLA Sky Survey in the Multiwavelength Spotlight_ conference in Socorro, NM in September 2022. AT is supported by NASA grant number 80NSSC22K0741. This research has made use of the CIRADA cutout service at URL cutouts.cirada.ca, operated by the Canadian Initiative for Radio Astronomy Data Analysis (CIRADA). CIRADA is funded by a grant from the Canada Foundation for Innovation 2017 Innovation Fund (Project 35999), as well as by the Provinces of Ontario, British Columbia, Alberta, Manitoba and Quebec, in collaboration with the National Research Council of Canada, the US National Radio Astronomy Observatory and Australia's Commonwealth Scientific and Industrial Research Organisation.
|
2309.16627 | Class Activation Map-based Weakly supervised Hemorrhage Segmentation
using Resnet-LSTM in Non-Contrast Computed Tomography images | In clinical settings, intracranial hemorrhages (ICH) are routinely diagnosed
using non-contrast CT (NCCT) for severity assessment. Accurate automated
segmentation of ICH lesions is the initial and essential step, immensely useful
for such assessment. However, compared to other structural imaging modalities
such as MRI, in NCCT images ICH appears with very low contrast and poor SNR.
Over recent years, deep learning (DL)-based methods have shown great potential,
however, training them requires a huge amount of manually annotated
lesion-level labels, with sufficient diversity to capture the characteristics
of ICH. In this work, we propose a novel weakly supervised DL method for ICH
segmentation on NCCT scans, using image-level binary classification labels,
which are less time-consuming and labor-efficient when compared to the manual
labeling of individual ICH lesions. Our method initially determines the
approximate location of ICH using class activation maps from a classification
network, which is trained to learn dependencies across contiguous slices. We
further refine the ICH segmentation using pseudo-ICH masks obtained in an
unsupervised manner. The method is flexible and uses a computationally light
architecture during testing. On evaluating our method on the validation data of
the MICCAI 2022 INSTANCE challenge, our method achieves a Dice value of 0.55,
comparable with those of existing weakly supervised method (Dice value of
0.47), despite training on a much smaller training data. | Shreyas H Ramananda, Vaanathi Sundaresan | 2023-09-28T17:32:19Z | http://arxiv.org/abs/2309.16627v1 | Class Activation Map-based Weakly supervised Hemorrhage Segmentation using Resnet-LSTM in Non-Contrast Computed Tomography images
###### Abstract
In clinical settings, intracranial hemorrhages (ICH) are routinely diagnosed using non-contrast CT (NCCT) for severity assessment. Accurate automated segmentation of ICH lesions is the initial and essential step, immensely useful for such assessment. However, compared to other structural imaging modalities such as MRI, in NCCT images ICH appears with very low contrast and poor SNR. Over recent years, deep learning (DL)-based methods have shown great potential, however, training them requires a huge amount of manually annotated lesion-level labels, with sufficient diversity to capture the characteristics of ICH. In this work, we propose a novel weakly supervised DL method for ICH segmentation on NCCT scans, using image-level binary classification labels, which are less time-consuming and labor-efficient when compared to the manual labeling of individual ICH lesions. Our method initially determines the approximate location of ICH using class activation maps from a classification network, which is trained to learn dependencies across contiguous slices. We further refine the ICH segmentation using pseudo-ICH masks obtained in an unsupervised manner. The method is flexible and uses a computationally light architecture during testing. On evaluating our method on the validation data of the MICCAI 2022 INSTANCE challenge, our method achieves a Dice value of 0.55, comparable with those of existing weakly supervised method (Dice value of 0.47), despite training on a much smaller training data.
Keywords:Intracranial hemorrhages weakly supervised LSTM class activation maps
## 1 Introduction
Acute Intracranial hemorrhage (ICH) is a life-threatening disease, requiring emergency medical attention. Accurate detection and segmentation of ICH could efficiently assist clinicians in assessing their severity. Even though MRI scans
show better contrast of ICH, these lesions are routinely diagnosed using non-contrast CT (NCCT) imaging in clinical practice. However, since ICH lesions have poor contrast on CT images with a low signal-to-noise ratio, manual labeling of ICH on CT images is highly time-consuming and labor-intensive. The recent advent of deep learning (DL) models has shown improvement in the detection performance of segmentation tasks on CT images [1]. However, training DL methods requires large manually labeled datasets, and are affected by variations across hemorrhage sub-types and demographic characteristics (e.g., age).
To date, most of the methods for hemorrhage segmentation on CT images are fully supervised. The existing methods for hemorrhage segmentation (at lesion-level) and classification (image-level) have used 3D CNN [2][3] or 2D CNN models [4], including encoder-decoder architectures [5][6], mainly using U-Net [5][6], and multi-tasking architectures [7] for segmentation. Moreover, to mitigate the inconsistencies in detection across slices and to reduce classification errors, long short-term memory (LSTM) modules [8] have been used. A few weakly-supervised methods have been proposed for hemorrhage segmentation on NCCT images to overcome the need for large amounts of labelled data. These methods include the use of shifted window (Swin) transformers [9] and attention maps [10] for getting bounding boxes rather than precise boundaries. Still, transformers might not be suitable for low-data regimes (where weakly supervised methods would be highly useful), since they require a large amount of training data. They also do not explicitly leverage the spatial information, limiting their ability to capture fine-grained details [11], [12]. Hence, there is a need for robust weakly supervised methods that would provide accurate detection of hemorrhage on CT images from image-level classification labels. It is also essential for such a method to use spatial contextual information, for efficient quantification and characterization of ICH in real-time clinical applications.
In this work, we aim to accurately localize ICH on NCCT images using a weakly supervised method trained using image-level labels. Our method integrates contextual information across slices in a classification model and uses class activation maps [13] to provide a weak prior on salient hemorrhage locations. The pseudo-labels obtained in an unsupervised manner from the class activation maps provide sufficient diversity in their characteristics, thus improving the robustness of the segmentation method. We evaluate our method on a publicly available CT dataset consisting of various sub-types of ICH.

Figure 1: **Challenges in CT image segmentation.** (a) Comparison of CT and MRI (b) ICH on RGB-converted CT images
## 2 Materials & Methods
Our main aim is to obtain accurate segmentation for hemorrhages using image-level labels. Towards that aim, our method consists of two steps: (1) obtaining the spatial localization of lesions using the class activation maps (CAM) from initial classification; (2) weakly- supervised refinement of the location and boundary of ICH lesions. The steps in the proposed method are illustrated in Fig. 2.
**Getting spatial CAM from image-level labels using ResNet-LSTM:** To obtain the location-based information using image-level labels, we train the classification model and estimate the spatially salient regions in the brain that contribute to the image-level decision. We choose ResNet-101 [14], applied to the input 2D slices, since it has been successfully used for classification tasks in medical imaging [15]. The input slices are converted from grayscale to RGB colorspace (as shown in Fig. 1) to be provided as input to the 2D ResNet. The main reason for using a 2D ResNet is to provide negative samples for training, since all subjects had hemorrhages (and hence the lack of negative samples in 3D volumes). Since the ResNet model is trained on the slices individually, to ensure the continuity across slices, we integrate an LSTM (after convergence of ResNet) and continue training again until convergence. The LSTM establishes short-term dependencies across slices, ensuring continuity. Hence, to obtain the 3D CAM (with continuity across slices), we integrate the LSTM module with the ResNet by removing the fully connected (FC) layers. After training, we remove the LSTM module, since the weights of the ResNet backbone are updated to learn the short-term dependencies across slices. We later add the FC layers and obtain a contextually meaningful 3D CAM, which provides localized feature saliency, highlighting the potential hemorrhage regions. We use the binary cross-entropy (BCE) loss function for true target value \(y\) and predicted value \(\hat{y}\) to train the classification model as follows:
\[\text{BCE}(y,\hat{y})=-\left(y\log(\hat{y})+(1-y)\log(1-\hat{y})\right) \tag{1}\]
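To make the architecture concrete, the PyTorch sketch below shows one way to attach an LSTM to a ResNet-101 backbone whose average-pooling and FC layers have been removed, so that per-slice features are linked across contiguous slices before classification. The hidden size, input resolution, and number of slices are illustrative choices (not the tuned values), and a recent torchvision is assumed for the `weights=None` argument.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class ResNetLSTM(nn.Module):
    """Sketch: slice-wise ResNet-101 features linked by an LSTM across slices."""

    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = resnet101(weights=None)
        # drop the average-pooling and FC layers, keep the convolutional trunk
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 1)   # binary ICH presence per slice

    def forward(self, x):                  # x: (B, T, 3, H, W) contiguous RGB slices
        b, t = x.shape[:2]
        maps = self.features(x.flatten(0, 1))             # (B*T, 2048, h, w)
        feats = self.pool(maps).flatten(1).view(b, t, -1)  # (B, T, 2048)
        seq, _ = self.lstm(feats)                          # short-term slice dependencies
        logits = self.classifier(seq).squeeze(-1)          # (B, T)
        return logits, maps.view(b, t, *maps.shape[1:])    # keep maps for CAM computation

model = ResNetLSTM()
dummy = torch.randn(1, 8, 3, 224, 224)    # one volume of 8 slices (illustrative size)
logits, feature_maps = model(dummy)
```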
**Obtaining pseudolabel masks for segmentation:** Since the CAM provides a weak prior on the hemorrhage location, we threshold the CAM at \(\text{Th}_{CAM}\) to obtain the region containing hemorrhage. The region within the thresholded CAM is clustered using an unsupervised K-Means algorithm. To select the hemorrhage cluster from the others, we use differential prediction as follows: (i) from the K-means clustering output, we subtract the region corresponding to each cluster from the original image, (ii) we feed the subtracted image to the trained ResNet, (iii) among all the clusters, we take the cluster corresponding to the lowest value of logits as the ICH cluster (since subtracting the ICH cluster would lead to loss of information and hence is expected to provide the lowest logits value). We consider this cluster mask as a pseudo-lesion mask for weakly supervised hemorrhage segmentation in the next step.
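The thresholding, clustering, and differential-prediction selection can be condensed into a short routine; the sketch below uses scikit-learn K-means on a single slice, with tensor shapes and the classifier interface assumed for illustration (the real pipeline applies the trained, LSTM-tuned ResNet).

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

def pseudo_ich_mask(slice_rgb, cam, classifier, th_cam=0.7, k=4):
    """Return a boolean pseudo-ICH mask for one slice.

    slice_rgb  : torch tensor (3, H, W), RGB-converted CT slice
    cam        : numpy array (H, W), class activation map scaled to [0, 1]
    classifier : trained slice classifier returning one ICH logit
    """
    roi = cam > th_cam                                    # weak prior on ICH location
    if roi.sum() < k:
        return np.zeros_like(roi)

    # K-means on the intensities of the voxels inside the thresholded CAM region
    vals = slice_rgb[0].numpy()[roi].reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vals)

    # Differential prediction: removing the true ICH cluster should lower the logit most
    best_cluster, lowest_logit = 0, np.inf
    for c in range(k):
        cluster = np.zeros_like(roi)
        cluster[roi] = labels == c
        masked = slice_rgb.clone()
        masked[:, torch.from_numpy(cluster)] = 0          # subtract this cluster's region
        with torch.no_grad():
            logit = classifier(masked.unsqueeze(0)).item()
        if logit < lowest_logit:
            lowest_logit, best_cluster = logit, c

    pseudo = np.zeros_like(roi)
    pseudo[roi] = labels == best_cluster
    return pseudo
```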
**CAM-guided Weakly supervised segmentation using pseudo-ICH labels:** The main aim of this step is to refine the hemorrhage localization and improve its segmentation. Hence, we perform contrast enhancement and remove the skull areas to avoid spurious bright regions that could lead to erroneous detection. For the skull removal, we smooth the image using bilateral filtering [16] and threshold the image at the \(95^{th}\) percentile of intensity values. We discard the skull and anything outside the convex hull of the skull (example shown in Fig. 2). We train the 3D U-Net [17] model in a supervised manner by using the pseudo-label masks obtained in the above step. We use a combination of BCE and Dice loss functions to train the U-Net model, with target mask \(y\) and predicted mask \(\hat{y}\):
\[L_{comb}(y,\hat{y})=(1-\frac{2\sum(y\odot\hat{y})+\epsilon}{\sum y+\sum\hat{y}+ \epsilon})+(-(y\log(\hat{y})+(1-y)\log(1-\hat{y}))) \tag{2}\]
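A direct PyTorch transcription of Eq. (2) is given below; it assumes the network output has already been passed through a sigmoid so that \(\hat{y}\) lies in \([0,1]\).

```python
import torch
import torch.nn.functional as F

def combined_loss(y_pred, y_true, eps=1e-6):
    """Dice + BCE loss of Eq. (2); y_pred are sigmoid probabilities, y_true in {0, 1} (float)."""
    intersection = (y_true * y_pred).sum()
    dice_term = 1.0 - (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)
    bce_term = F.binary_cross_entropy(y_pred, y_true)
    return dice_term + bce_term
```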
Figure 2: **Weakly supervised ICH segmentation from image-level labels. Top:** Training Phase, consisting of 2 steps: obtaining CAM from image-level labels (blue) and CAM-guided pseudo-ICH labels for training 3D U-Net for segmentation (orange). **Bottom:** Testing phase, where the trained 3D U-Net is used to segment ICH on whole NCCT volumes.
**Testing phase:** During the testing phase, we perform skull extraction and histogram equalization and apply the trained 3D U-Net to get the final ICH segmentation mask on 3D NCCT volumes.
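The skull-extraction step used at both training and test time can be sketched as follows; the filter parameters here are illustrative defaults rather than tuned values, and only the 95th-percentile threshold and the convex-hull criterion follow the description above.

```python
import numpy as np
import cv2
from skimage.morphology import convex_hull_image

def strip_skull(ct_slice):
    """Remove the skull and everything outside its convex hull from one CT slice."""
    smoothed = cv2.bilateralFilter(ct_slice.astype(np.float32), d=9,
                                   sigmaColor=75, sigmaSpace=75)
    skull = smoothed >= np.percentile(smoothed, 95)       # bright skull voxels
    brain_region = convex_hull_image(skull) & ~skull      # keep only interior tissue
    return np.where(brain_region, ct_slice, 0)
```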
### Implementation details
For training ResNet-101 and ResNet-LSTM, we use the SGD optimizer with momentum=0.9, with a batch size of 32 and a learning rate of 0.0001 (0.001 for ResNet-LSTM) for 100 epochs with a patience value of 25 epochs for early stopping (converged at \(\sim\)45 epochs). We use a threshold value of 0.7 on the CAM, and for K-Means clustering, the value of K is 4. We train the 3D U-Net using the Adam optimizer [18] with a batch size of 4 and a learning rate of 0.001, with the same number of epochs and patience value as mentioned above. The above hyperparameters are chosen empirically.
## 3 Experiments
### Dataset Details
We use 100 3D CT scans released as training data for the MICCAI 2022 ICH segmentation challenge ([https://instance.grand-challenge.org/](https://instance.grand-challenge.org/), INSTANCE) [19] on NCCT images, including 5 different types of hemorrhages: intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), subarachnoid hemorrhage (SAH), subdural hemorrhage (SDH), and epidural hemorrhage (EDH). Manual lesion labels are available for the scans. However, since we aim to develop a weakly supervised method that could provide segmentation results with image-level labels, we extract image-level classification labels from the manual lesion labels (present/positive sample if any sub-type of the lesion is labeled in a slice, absent/negative otherwise). The volumes are anisotropic, with dimensions ranging from 512 \(\times\) 512 \(\times\) 20 to 512 \(\times\) 512 \(\times\) 70, and a voxel resolution of 0.42 mm \(\times\) 0.42 mm \(\times\) 5 mm.
**Experiments:** We submitted our method to the INSTANCE challenge dataset to be evaluated in-house by the organizers on the unseen validation data. We also compare with a fully supervised baseline on the same dataset, and with another weakly supervised method [9] on a different CT dataset. Additionally, on the INSTANCE training data that is publicly available, we perform 3-fold cross-validation with a training-validation-test split of 70-10-20 subjects (2240 and 320 slices for ResNet-LSTM training and validation, respectively). We also perform an ablation study to investigate the effect of various components: (1) using the CAM from ResNet alone as pseudo-ICH labels for 3D U-Net training (ResNet + U-Net), (2) using the CAM from ResNet-LSTM as pseudo-ICH labels for 3D U-Net (ResNet-LSTM + U-Net), (3) using ResNet CAM and K-means clustering to get pseudo-ICH labels for 3D U-Net training (ResNet + K-means + U-Net), and (4) using ResNet-LSTM CAM and K-means clustering to get pseudo-ICH labels for 3D U-Net training (ResNet-LSTM + K-means + U-Net). We also determine the statistical significance of the results using two-tailed, paired t-tests.
**Performance evaluation metrics:** We use (1) the Dice overlap measure (Dice), (2) the relative volume difference (RVD): \(\frac{|V_{\text{pred}}-V_{\text{gt}}|}{V_{\text{gt}}}\), where \(V_{\text{pred}}\) and \(V_{\text{gt}}\) are the predicted output and ground truth volumes, respectively, and (3) the voxel-wise true positive rate (TPR). Also, Hausdorff distance (HD) and surface Dice (SD) values were additionally determined by the organizers on submitting our method to the INSTANCE challenge validation data.
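For reference, these voxel-level metrics can be computed from binary prediction and ground-truth masks as in the short helper below (a straightforward implementation of the definitions above, not code from the original pipeline).

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-8):
    """Dice, relative volume difference (RVD), and voxel-wise TPR for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dice = 2.0 * tp / (pred.sum() + gt.sum() + eps)
    rvd = abs(pred.sum() - gt.sum()) / (gt.sum() + eps)
    tpr = tp / (gt.sum() + eps)
    return dice, rvd, tpr
```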
## 4 Results and discussion
**Validation results and comparison with existing work:** Evaluation results of our method on the unseen INSTANCE validation data by the challenge organizers are reported in Table 1, along with the comparison with existing methods. On comparing our method with another weakly supervised method using a Swin transformer [9] for binary classification, our Dice value (0.55) on the INSTANCE validation data compares favourably with the Dice value of 0.47 obtained by [9] on the PhysioNet dataset [20], despite using only 2240 training slices (from 70 subjects) for training (20,000 samples used by [9]). This could be due to the fact that we also take 3D contextual information into consideration using the LSTM. A fully supervised U-Net [21], used as a baseline in the INSTANCE challenge [19], provided a Dice value of 0.64. Comparing this with our results, our method shows the potential to provide comparable results given larger and more diverse training data.
**Cross-validation on publicly available INSTANCE data:** The results of 3-fold cross-validation are reported in Table 2 and the visual results are shown in Fig. 3. On performing the cross-validation, our method achieves a mean Dice of 0.47 and a mean RVD of 1.05 at the threshold of 0.5. As shown in Fig. 3 (top panel), the CAM provides a good estimate of the ICH location, while the boundaries are further improved by the 3D U-Net segmentation model.
| Methods | Dataset | DICE | RVD | HD | SD |
| --- | --- | --- | --- | --- | --- |
| **Our method** | INSTANCE validation (unseen) | \(0.55\pm 0.3\) | \(0.79\pm 1.22\) | \(42.75\pm 32.54\) | \(0.28\pm 0.17\) |
| Fully supervised U-Net (baseline) | INSTANCE validation (unseen) | \(0.64\pm 0.27\) | \(0.46\pm 0.2\) | \(277.63\pm 163\) | \(0.51\pm 1.14\) |
| Weakly supervised [9] | PhysioNet | \(0.47\pm 0.26\) | – | – | – |

Table 1: Performance on unseen validation data from the challenge and comparison with existing literature.
**Ablation study results:** From Table 2 and the bottom panel of Fig. 3, out of all the settings, ResNet + U-Net has the worst performance with a mean Dice of 0.14. This is also evident from the spatially inaccurate CAM that we obtain from the ResNet model (c). Integrating LSTM for fine-tuning the ResNet classification model provides significant improvement in the spatial localization accuracy of CAM (in terms of size, focus, and location as shown in d) and hence the segmentation output (e) since it takes into consideration the context across slices. Also, from the paired t-test results (Table 2), the performance of our method significantly improves with the integration of LSTM within the ResNet model. This is in line with the result of the INSTANCE challenge, where most of the top-performing methods consider both 2D and 3D information.

Figure 3: **Evaluation on INSTANCE data and ablation study. Top panel:** Results shown for NCCT 2D slices (b), along with CAM maps from ResNet-LSTM classification model (c), K-means clusters from CAM (d) and segmentation output (e). **Bottom panel:** CAM maps from ResNet alone (b) shown along with CAM from ResNet-LSTM (c); K-means clustering output (d) and segmentation output (e) are shown corresponding to CAM map from (c). In both top and bottom panels, manual segmentations are shown in (f).
Out of all methods, the ResNet-LSTM + K-means + U-Net model performs the best, with a mean Dice value of 0.57, indicating the benefit of using rough clustered pseudo-labels to leverage the difference between ICH and background regions (also evident from the improvement in segmentation results in Fig. 3e). The main sources of false positives are the high-intensity regions near skull areas and confounding structures similar to lesions. However, our contrast enhancement during preprocessing (shown in Fig. 2) improves the contrast of CT images, thus efficiently aiding the models in learning discriminative features for ICH with respect to the background.
Currently, our method provides lower performance while segmenting thinner lesions closer to the skull (e.g., SAH and EDH), thus affecting our overall performance. One of the future directions of our work would be to improve the segmentation accuracy and make the method generalizable across various subtypes of ICH.
## 5 Conclusion
Our proposed weakly supervised method is trained for ICH segmentation using image-level labels. Our method is highly flexible and computationally light during prediction. Our method takes into consideration both 2D and 3D features contributing to the classification decision, by obtaining CAM from ResNet fine-tuned using LSTM. Our method achieves a Dice value of 0.55 on the validation data of MICCAI 2022 INSTANCE challenge and performs on par with the existing weakly supervised method for ICH segmentation.
## Acknowledgements
The authors acknowledge the Start-up grant from Indian Institute of Science (IISc), India.
| Metric | DICE | RVD | TPR |
| --- | --- | --- | --- |
| Cross-validation | \(0.47\pm 0.3\) | \(1.05\pm 1.22\) | 0.48 |
| **Ablation study** | | | |
| ResNet + U-Net | \(0.14\pm 0.17\) | \(3.15\pm 3.69\) | 0.11 |
| ResNet-LSTM + U-Net | \(0.31\pm 0.23^{**}\) | \(1.54\pm 1.37^{*}\) | \(0.32^{*}\) |
| ResNet + K-means + U-Net | \(0.18\pm 0.2\) | \(0.82\pm 0.54\) | 0.24 |
| ResNet-LSTM + K-means + U-Net | \(0.57\pm 0.26^{**}\) | \(0.37\pm 0.32^{**}\) | \(0.48^{*}\) |

Table 2: Cross-validation performance and ablation study on the INSTANCE data (*/** indicate values significantly above those in the previous row using a paired t-test; *p-value\(<\)0.05, **p-value\(<\)0.001) |
2309.00045 | The energy distribution of the first supernovae | The nature of the first Pop III stars is still a mystery and the energy
distribution of the first supernovae is completely unexplored. For the first
time we account simultaneously for the unknown initial mass function (IMF),
stellar mixing, and energy distribution function (EDF) of Pop III stars in the
context of a cosmological model for the formation of a MW-analogue. Our
data-calibrated semi-analytic model is based on a N-body simulation and follows
the formation and evolution of both Pop III and Pop II/I stars in their proper
timescales. We discover degeneracies between the adopted Pop III unknowns, in
the predicted metallicity and carbonicity distribution functions and the
fraction of C-enhanced stars. Nonetheless, we are able to provide the first
available constraints on the EDF, $dN/dE_\star \propto E_{\star}^{-\alpha_e}$
with $1\leq \alpha_e \leq2.5$. In addition, the characteristic mass of the Pop
III IMF should be $m_{\rm ch}<100\:{\rm M_\odot}$, assuming a mass range
consistent with hydrodynamical simulations (0.1-1000$\:{\rm M_\odot}$).
Independent of the assumed Pop III properties, we find that all [C/Fe]>+0.7
stars (with [Fe/H]<-2.8) have been enriched by Pop III supernovae at a $>20\%$
level, and all [C/Fe]>+2 stars at a $>95\%$ level. All very metal-poor stars
with $\rm [C/Fe]<0$ are predicted to be predominantly enriched by Pop III
hypernovae and/or pair instability supernovae. To better constrain the
primordial EDF, it is absolutely crucial to have a complete and accurate
determination of the metallicity distribution function, and the properties of
C-enhanced metal-poor stars (frequency and [C/Fe]) in the Galactic halo. | I. Koutsouridou, S. Salvadori, Á. Skúladóttir, M. Rossi, I. Vanni, G. Pagnini | 2023-08-31T18:00:01Z | http://arxiv.org/abs/2309.00045v1 | # The energy distribution of the first supernovae
###### Abstract
The nature of the first Pop III stars is still a mystery and the energy distribution of the first supernovae is completely unexplored. For the first time we account simultaneously for the unknown initial mass function (IMF), stellar mixing, and energy distribution function (EDF) of Pop III stars in the context of a cosmological model for the formation of a MW-analogue. Our data-calibrated semi-analytic model is based on a N-body simulation and follows the formation and evolution of both Pop III and Pop II/I stars in their proper timescales. We discover degeneracies between the adopted Pop III unknowns, in the predicted metallicity and carbonicity distribution functions and the fraction of C-enhanced stars. Nonetheless, we are able to provide the first available constraints on the EDF, \(dN/dE_{\star}\propto E_{\star}^{-\alpha_{\rm e}}\) with \(1\leq\alpha_{\rm e}\leq 2.5\). In addition, the characteristic mass of the Pop III IMF should be \(m_{\rm ch}<100\,\rm M_{\odot}\), assuming a mass range consistent with hydrodynamical simulations (0.1-1000\(\,\rm M_{\odot}\)). Independent of the assumed Pop III properties, we find that all \([\rm C/Fe]>+0.7\) stars (with \([\rm Fe/H]<-2.8\)) have been enriched by Pop III supernovae at a \(>20\%\) level, and all \([\rm C/Fe]>+2\) stars at a \(>95\%\) level. All very metal-poor stars with \([\rm C/Fe]<0\) are predicted to be predominantly enriched by Pop III hypernovae and/or pair instability supernovae. To better constrain the primordial EDF, it is absolutely crucial to have a complete and accurate determination of the metallicity distribution function, and the properties of C-enhanced metal-poor stars (frequency and [C/Fe]) in the Galactic halo.
keywords: stars: Population III - Galaxy: formation, halo, abundances - cosmology: first stars - galaxies: high redshift
## 1 Introduction
The formation of the first stars marked a fundamental shift in the history of our universe; from a simple, homogeneous and isotropic state to the structured complexity we observe today. Their light brought an end to the so-called dark ages and initiated a period of reionization and heating of the intergalactic medium (IGM). Formed out of purely metal-free primordial gas, the first stars are commonly referred to as Population III (Pop III) stars to distinguish them from the subsequent generations of Pop II (or metal-poor; \(Z\lesssim 0.1\,Z_{\odot}\)) stars and Pop I (or metal-rich) stars. The Pop III stars were the sources of the first metals (i.e., elements heavier than lithium) and dust grains, which they released in the surrounding medium via supernova (SN) explosions and stellar winds.
Although critical for our understanding of the early Universe, the nature and characteristics of the first stars remain elusive. As of today, no metal-free star has been observed and theoretical models have thus far failed to reach a consensus (e.g., Bromm 2013 and Klessen 2019 for recent reviews). Initially, first stars were thought to be very massive (\(\sim 100-1000\)\(\,\rm M_{\odot}\)), owing to the lack of metals and dust grains, which are more efficient coolants than molecular hydrogen1 and thus facilitate the fragmentation of gas clouds (Omukai & Nishi 1998; Omukai & Palla 2001; Abel et al. 2002; Bromm et al. 2002; O'Shea & Norman 2007). Later, more detailed calculations showed that Pop III stars can have lower masses, of the order of some tens of solar. After the initial spherically distributed gas infall, material falling with non-negligible angular momentum starts building up a rotationally supported disc around the central proto-stellar core. Radiative feedback from the nascent proto-star plays a key role in regulating this accretion process, being able to clear out the accretion disk when the star reaches a mass of \(\approx 30-40\,\rm M_{\odot}\) (McKee & Tan 2008; Hosokawa et al. 2011). Cosmological simulations that include radiative feedback confirm this picture and show that the mass spectrum of Pop III stars is broader (\(\sim 10-1000\,\rm M_{\odot}\)) than previously thought (Hirano et al. 2014; Hirano et al. 2015). Furthermore, detailed 3D simulations investigating the formation of the first proto-stars, show that the accretion disc can be highly susceptible to fragmentation, leading to the formation of sub-solar fragments (see, e.g., Machida et al. 2008; Clark et al. 2011; Greif et al. 2011; Dopcke et al. 2013; Stacy et al. 2016; Wollenberg et al. 2020). However, it is still not settled whether most of these fragments migrate inwards and merge together, or are expelled from the system to survive as low mass stars (Hosokawa et al. 2016; Hirano & Bromm 2017). Ultimately, although the mass range of Pop III stars is still largely
unknown, it is probably biased towards massive stars, and most likely extremely broad, from \(\approx 1000\,\mathrm{M}_{\odot}\) down to \(<1\,\mathrm{M}_{\odot}\).
At the higher mass end, the evolution of Pop III stars is fairly well understood. In the absence of rotation, stars with initial masses \(\gtrsim 260\,\mathrm{M}_{\odot}\) collapse directly into black holes swallowing all their heavy element production (Fryer et al., 2001). Between \(\sim 140\) and \(\sim 260\,\mathrm{M}_{\odot}\), stars explode as pair instability supernovae (PISNe), with explosion energies and ejecta depending on their initial mass (Heger & Woosley, 2002; Takahashi et al., 2018). The mechanism involves electron-positron pair production, which lowers the internal radiation pressure supporting the star against gravitational collapse. This pressure drop leads to a rapid contraction, which in turn ignites a runaway thermonuclear explosion that blows apart the star completely, leaving no stellar remnant behind (e.g., Barkat et al., 1967; Bond et al., 1984). At lower masses, but higher than \(\sim 100\,\mathrm{M}_{\odot}\), pair-instability still takes place but the net reduction in pressure is not sufficient to lead to the complete disruption of the star. Instead, after a series of pulsations the star produces a large iron core that likely collapses to a black hole, sweeping most heavy elements inside (Heger & Woosley, 2002).
Contrary to PISNe, the explosion of stars with initial masses \(m_{\star}\sim 10-100\,\mathrm{M}_{\odot}\) leaves behind a stellar remnant and thus the final ejecta can vary strongly according to the amount of mixing and fallback2, which depend on many factors, including the explosion energy and rotation (e.g., Nomoto et al., 2006; Heger & Woosley, 2010; Nomoto et al., 2013). Therefore, even if the mass distribution of Pop III stars were to be established theoretically, it is not yet settled whether these additional parameters are mass-dependent or follow separate distributions.
Footnote 2: Fallback refers to the material that collapses to form a neutron star when \(m_{\star}\leq 20-30\,\mathrm{M}_{\odot}\), or a black hole when \(m_{\star}\gtrsim 20-30\,\mathrm{M}_{\odot}\)(Colgate & White, 1966; Heger & Woosley, 2010; Chan et al., 2018).
Currently there is no 'standard model' that describes the properties of Pop III stars. However, one can infer indirect constraints from stellar archaeology, i.e., the study of the oldest, most metal-poor stars in the Milky Way and its dwarf satellites (e.g., Frebel & Norris, 2015). These long-lived stars preserve in their atmospheres the chemical abundance patterns of their birth gas clouds, which were polluted by their ancestors: the first stars. One of the most interesting populations among them is the carbon-enhanced metal-poor (CEMP) stars, which are characterized by high relative carbon abundances [C/Fe]\(\geq+0.7\) (e.g., Aoki et al., 2007). The CEMP stars are commonly divided into two major sub-classes (Aoki et al., 2002; Beers & Christlieb, 2005): 1) those showing an excess of heavy elements produced by slow neutron-capture processes (CEMP-s); and 2) those showing no such enhancement (CEMP-no stars). The two sub-classes are linked to different formation scenarios. CEMP-s stars represent \(\gtrsim 50\%\) of all CEMP stars but are extremely rare at [Fe/H]\(<-3\) (Norris et al., 2013; Yoon et al., 2016). Their abundance pattern and the fact that \(\gtrsim 80\%\) of them are members of binary systems (Starkenburg et al., 2014; Hansen et al., 2016) are consistent with them being C-enhanced by mass transfer from an asymptotic giant branch (AGB) companion star (e.g., Aoki et al., 2007; Abate et al., 2015). On the contrary, CEMP-no stars are dominant among the most iron-poor stars: at least 12 out of the 14 known stars with \(\mathrm{[Fe/H]}<-4.5\) are CEMP-no stars (see Section 3 and Vanni et al. submitted). The CEMP-no stars are not preferentially found in binary systems (Starkenburg et al., 2014; Hansen et al., 2016) or, even when they are, show no trace of mass transfer by a companion (Aguado et al., 2022), and so their chemical abundances are representative of their birth environment.
CEMP-no stars have been observed in large numbers in the Galactic halo (e.g., Yong et al., 2013; Carollo et al., 2012; Norris et al., 2013; Placco et al., 2014; Yoon et al., 2016; Bonifacio et al., 2015) and in ultra-faint dwarf (UFD) satellites (e.g., Norris et al., 2010; Lai et al., 2011; Gilmore et al., 2013; Frebel et al., 2014; Ji et al., 2016; Spite et al., 2018), but also in more massive and luminous dwarf spheroidal (dSph) galaxies (Skúladóttir, Á. et al., 2015; 2023 submitted; Susmitha et al., 2017; Chiti et al., 2018; Yoon et al., 2020) and the Galactic bulge (Howes et al., 2016; Arentsen et al., 2021). Their proposed zero-metallicity progenitors include: (i) massive "spinstars" with internal mixing and mass loss (Meynet et al., 2006; Maeder et al., 2015; Liu et al., 2021), which are now strongly disfavoured by the measured high values of \({}^{12}\mathrm{C}/^{13}\mathrm{C}\) in several CEMP-no stars (Aguado et al., 2022, 2023); (ii) low-energy faint (\(E_{51}=E_{\star}/10^{51}\mathrm{erg}<1\)) or normal (\(E_{51}\sim 1\)) core-collapse supernovae (ccSNe) with mixing and fallback that release small amounts of iron and large amounts of carbon and other light elements (e.g. Umeda & Nomoto, 2003; Iwamoto et al., 2005; Cooke & Madau, 2014; Marassi et al., 2014; Tominaga et al., 2014; Salvadori et al., 2015, Vanni et al. in prep).
However, higher energy progenitors have also been found consistent with metal-poor stars. Ishigaki et al. (2014) found that the abundance pattern of the most iron-deficient star observed (SMSS J031300.36-670839.3; Keller et al., 2014) is well reproduced both by an \(E_{51}=1\) ccSN and by an \(E_{51}\geq 10\) Pop III hypernova. Ishigaki et al. (2018) suggested that more than half of a sample of \(\sim 200\) extremely metal-poor (EMP; [Fe/H]\(<-3\)) literature stars are best fitted by a \(m_{\star}=20\,\mathrm{M}_{\odot}\) (\(E_{51}=10\)) hypernova model. Placco et al. (2015), however, who analyzed a subset of these stars, found systematically lower-energy progenitors for them (see Table 2 in Ishigaki et al., 2018). Ezzeddine et al. (2019) argued that the CEMP-no star HE 1327-2326 shows an imprint of an \(E_{51}=5\) asymmetric hypernova explosion. More recently, Skúladóttir et al. (2021) and Placco et al. (2021) discovered two ultra metal-poor stars (\(\mathrm{[Fe/H]}<-4\)) in the Sculptor dSph galaxy and in the Galactic halo, respectively, with very low [C/Fe] and abundance patterns indicating that they descend from \(E_{51}=10\) Pop III hypernovae (see also Skúladóttir et al., 2023).
Besides identifying individual Pop III progenitors, several studies have employed galaxy formation models of MW-analogues to investigate the properties of Pop III stars in a statistical manner, including their spatial distribution (e.g. White & Springel, 2000; Brook et al., 2007; Tumlinson, 2010; Salvadori et al., 2010; Ishiyama et al., 2016; Starkenburg et al., 2017; Hartwig et al., 2022) and the impact of their initial mass function (IMF) on the abundances of metal-poor stars (e.g. Salvadori et al., 2007; Komiya et al., 2010; de Bennassuti et al., 2017; Hartwig et al., 2018; Sarmento et al., 2019; Tarumi et al., 2020). However, none of these studies has examined the full parameter space of Pop III stars - IMF, mixing, explosion energy - and none has considered the existence of high \(E_{51}\) primordial SNe.
In this work, we aim to fill this gap and explore how varying the energy distribution of the first SNe affects the properties of the stars surviving until \(z=0\) in the Galactic halo. To this end, we develop a new semi-analytic model (SAM), named NEFERTITI (_NEar-FiEld cosmology: Re-Tracing Invisible TImes_), that traces the formation and evolution of individual Pop III and Pop II/I stars, accounting for all the unknowns related to primordial star formation. NEFERTITI can run on any merger-tree or dark matter (DM) simulation to shed light on the earliest phases of star formation. Here, we combine it with a DM simulation of a MW analogue. This allows us to follow in detail the early chemical evolution of the first star-forming halos and link them with the observed properties of Galactic halo stars in order to constrain the nature of Pop III stars and of the first SNe.
## 2 The model
This Section introduces our newly developed SAM NEFERTITI, which in this work is coupled with a cosmological N-body simulation for the formation of a MW analogue. NEFERTITI builds upon our previous works with the SAM GAMETE (Salvadori et al., 2010; Graziani et al., 2015, 2017; Pacucci et al., 2017; Pagnini et al., 2023), but includes significant advancements, allowing us to account for: (i) finite stellar lifetimes of both Pop III and Pop II/I stars, i.e., relaxing the instantaneous recycling approximation; (ii) the incomplete sampling of the IMF for both Pop III and Pop II/I stars; (iii) an unknown energy distribution function for Pop III stars exploding as SNe.
In the following, we describe briefly how the N-body simulation traces the hierarchical growth of dark-matter (DM) haloes (Section 2.1) and, in detail, how the evolution of their baryonic content is followed by the SAM (Section 2.2). Finally, in Section 2.3 we present the calibration of the free parameters of our model.
### The \(N\)-body simulation
We use a cold dark matter N-body simulation of a MW analogue (Scannapieco et al., 2006; Salvadori et al., 2010) that has been carried out with the GCD+ code (Kawata & Gibson, 2003) using a multi-resolution technique (Kawata & Gibson, 2003). The highest resolution region has a radius of four times the virial radius3 of the system, \(R_{\rm vir}=239\,{\rm kpc}\) at \(z=0\), and a softening length of \(540\,{\rm pc}\). The system comprises \(\sim 10^{6}\,{\rm DM}\) particles of mass \(\sim 7.8\times 10^{5}\,{\rm M}_{\odot}\), i.e., a virial mass \(M_{\rm vir}=7.8\times 10^{11}\,{\rm M}_{\odot}\), consistent with observational estimates for the MW (\(M_{\rm vir,MW}\approx 6-25\times 10^{11}\,{\rm M}_{\odot}\); see Wang et al., 2015 and references therein). A low-resolution simulation including gas physics and star formation has been used to confirm that the initial conditions will lead to the formation of a disc galaxy. The positions and velocities of all DM particles are stored at each snapshot of the simulation, and a friends-of-friends algorithm, with a linking parameter \(b=0.15\) and a threshold number of particles of \(50\), is used to identify the virialized DM haloes. The timestep between the snapshots is \(\Delta t_{z}\approx 22\,{\rm Myr}\) at \(8<z<17\) and \(\approx 110\,{\rm Myr}\) at \(z<8\).
Footnote 3: For comparison, the virial radius of the MW is estimated at \(R_{\rm vir,MW}=200-290\,{\rm kpc}\)(e.g., Dehnen et al., 2006; Posti & Helmi, 2019).
### The modelling of baryons
The SAM follows the flow of baryons from the intergalactic medium (IGM) into the DM haloes, the formation of stars and stellar evolution within each galaxy, and the return of mass and metals into the interstellar medium (ISM) and the IGM through stellar feedback. In order to resolve the evolution of the most massive stars, we adopt a shorter sub-timestep of \(\delta t_{s}=1\,{\rm Myr}\) for the SAM. At the end of each timestep \(\Delta t_{z}\) of the N-body simulation, the stellar and gas mass within each halo is equally distributed among all its DM particles. The baryons then follow the course of their respective DM particles in the next integration step of the N-body simulation. This way we can extract the spatial distribution of all stellar populations throughout the Galaxy's assembly history.
At \(z<3\), when the DM halo of our central galaxy (i.e., the MW) has grown significantly, the assumption that the newly formed stars are equally distributed among its DM particles is no longer valid. Indeed, we know that a disc should form, leading to a more centrally confined star formation. However, this approximation is suitable for investigating the ancient very metal-poor stellar halo populations we are interested in, which form at \(z>5\) in smaller progenitor halos. Indeed, we have confirmed that ancient stars (born before \(z\sim 5\)) inhabiting our mock MW's halo at \(z=0\) have been mostly acquired via mergers, i.e., they form ex-situ.
#### 2.2.1 Gas accretion
According to the traditional view of galaxy formation, all infalling gas is initially shock heated to the virial temperature, \(T_{\rm vir}\), of the DM halo and forms a quasi-static atmosphere, which then cools radiatively from the inside out and falls onto the central galaxy (hot mode accretion; Rees & Ostriker, 1977; Silk, 1977; White & Rees, 1978). More recent simulations have revealed a new paradigm in which part of the gas enters the halo along cold, dense filaments and accretes directly onto the galaxy without being shock-heated (cold-mode accretion; Keres et al., 2005; Dekel & Birnboim, 2006; Cattaneo et al., 2020). The latter is the dominant accretion mode for virial masses \(M_{\rm vir}<10^{11}\,{\rm M}_{\odot}\), and is therefore more relevant for our study that focuses on very metal-poor stars (VMP; \({\rm[Fe/H]}\leq-2\)), which form at \(z>5\) in our simulation, when the maximum halo mass is \(M_{\rm vir}\sim 5\times 10^{10}\,{\rm M}_{\odot}\).
In the lowest-mass halos, baryonic infall may be substantially reduced due to photodissociating and photoionizing radiation, since gas cannot cool and accrete onto halos with virial temperature, \(T_{\rm vir}\), lower than that of the IGM (e.g., Blanchard et al., 1992). To account for this effect, we assume that there is no gas accretion onto halos with \(T_{\rm vir}<T_{\rm SF}\), where \(T_{\rm SF}=2\times 10^{3}\,{\rm K}\) at \(z>z_{\rm rei}=6\), and \(T_{\rm SF}=2\times 10^{4}\,{\rm K}\) at \(z\leq z_{\rm rei}\) when the Milky Way environment is fully ionized (Salvadori et al., 2014).
For haloes with \(T_{\rm vir}>T_{\rm SF}\), we assume that gas from the intergalactic medium is continuously added into their cold filaments at a rate that is proportional to their dark matter growth:
\[\dot{M}_{\rm fil,accr}=f_{\rm b}\dot{M}_{\rm vir}, \tag{1}\]
where \(f_{\rm b}=\Omega_{\rm b}/\Omega_{m}\) is the universal baryon fraction.
The gas within the filaments is subsequently assumed to stream onto the central galaxy on a free-fall timescale:
\[t_{\rm ff}=\Big{(}\frac{3\pi}{32G\rho}\Big{)}^{1/2}, \tag{2}\]
where G is the gravitational constant and \(\rho=\rho_{200}(z)\) is the total (dark+baryonic) mass density of the halo at redshift \(z\). Hence, the gas accretion rate onto a galaxy and the variation of its filaments' gas mass are given, respectively, by
\[\dot{M}_{\rm gas,accr}=\frac{M_{\rm fil}}{t_{\rm ff}} \tag{3}\]
and
\[\dot{M}_{\rm fil}=\dot{M}_{\rm fil,accr}-\dot{M}_{\rm gas,accr}. \tag{4}\]
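To make the bookkeeping of Eqs. 1-4 concrete, the sketch below shows one possible discretised accretion step in Python. It is a minimal illustration, not the NEFERTITI implementation: the function names, the unit choices (M\({}_{\odot}\), kpc, Myr) and the adopted value of \(f_{\rm b}\) are assumptions made for the example.

```python
import numpy as np

F_B = 0.16    # assumed universal baryon fraction, f_b = Omega_b / Omega_m
G = 4.50e-12  # gravitational constant in kpc^3 Msun^-1 Myr^-2 (approximate)

def free_fall_time(rho_200):
    """Free-fall time (Eq. 2) for a mean halo density rho_200 in Msun / kpc^3."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho_200))

def accretion_step(m_fil, dm_vir, rho_200, t_vir, t_sf, dt):
    """Advance the filament reservoir over one timestep dt (Eqs. 1-4).

    Returns the updated filament mass and the gas mass accreted onto the galaxy;
    dm_vir is the dark-matter mass gained by the halo during dt.
    """
    if t_vir < t_sf:                       # photoheating: no infall onto small haloes
        return m_fil, 0.0
    m_fil += F_B * dm_vir                  # Eq. 1: IGM -> filaments
    t_ff = free_fall_time(rho_200)
    m_acc = min(m_fil, m_fil / t_ff * dt)  # Eq. 3: filaments -> galaxy
    return m_fil - m_acc, m_acc            # Eq. 4
```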
#### 2.2.2 Star formation
Star formation (SF) occurs in a single burst at each sub-timestep \(\delta t_{s}\), at a rate given by the cold gas mass, \(M_{\rm gas}\), within the galaxy, the free-fall time (Eq. 2), and the SF efficiency \(\epsilon_{\rm SF}\), which is a free parameter of our model:
\[{\rm SFR}=\epsilon_{\rm SF}\frac{M_{\rm gas}}{t_{\rm ff}}. \tag{5}\]
Following Salvadori et al. (2015), we assume that the SF efficiency in minihaloes with \(T_{\rm vir}\leq 2\times 10^{4}\,{\rm K}\) is reduced to \(2\epsilon_{\rm SF}[1+(2\times 10^{4}\,{\rm K}/T_{\rm vir})^{3}]^{-1}\), to account for the ineffective cooling by molecular hydrogen (Salvadori & Ferrara, 2012).
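For illustration, Eq. 5 together with the minihalo suppression factor can be written as a short function; the value \(\epsilon_{\rm SF}=0.8\) quoted in Section 2.3 is used here, but the function name and interface are only an assumed sketch.

```python
EPS_SF = 0.8   # star-formation efficiency adopted in Section 2.3

def star_formation_rate(m_gas, t_ff, t_vir):
    """SFR = eps_SF * M_gas / t_ff (Eq. 5), with eps_SF reduced in minihaloes."""
    eps = EPS_SF
    if t_vir <= 2.0e4:   # minihaloes: inefficient H2 cooling (Salvadori & Ferrara 2012)
        eps = 2.0 * EPS_SF / (1.0 + (2.0e4 / t_vir) ** 3)
    return eps * m_gas / t_ff
```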
At each timestep \(\delta t_{s}\), and for each halo, we compute the stellar mass formed, \(M_{\star}={\rm SFR}\cdot\delta t_{s}\), and form a Simple Stellar Population (SSP) only if \(M_{\star}\) is greater than or equal to the maximum stellar mass \(m_{\star}^{\rm max}\) allowed by the IMF4. This way we ensure that stars throughout the whole mass range of the assumed IMF can be represented (see Section 2.2.3). Each SSP is characterized by its formation time \(t_{\rm form}\), the number and initial masses of its stars and their elemental abundances (which are equal to the ones of the gas in their host halo at \(t_{\rm form}\)).
Footnote 4: Throughout the text, the index (\(\star\)) is used to refer to the total stellar mass of a galaxy, while the index (\(\bullet\)) is used to refer to the mass of individual stars.
Our star formation model is calibrated to act on the total gas content within a galaxy (see Section 2.3); we do not differentiate here between the different phases (e.g., cold, warm, molecular, atomic) of the ISM. In addition, our model ignores some physical mechanisms, such as merger/instability induced starbursts, mass quenching (including AGN and halo quenching5; Peng et al., 2010) and ram pressure (Gunn & Gott, 1972), that can influence the star formation rate of our central galaxy and that of its accreted satellite galaxies. Ram pressure can both trigger bursts of SF by compressing the gas within a satellite galaxy and quench its SF by stripping it (e.g., Kapferer et al., 2009; Bekki, 2014; Koutsouridou & Cattaneo, 2019). These effects might be important for the evolution of Local Group satellites at low \(z\) (Simpson et al., 2018; Hausammann et al., 2019; but see also Salvadori et al. 2015 for a different view). However, at \(z>5\), the main progenitor (or major branch) of the MW with \(M_{\rm vir}<10^{11}\,{\rm M}_{\odot}\) (and \(M_{\star}\lesssim 10^{9}\,{\rm M}_{\odot}\)) is unlikely to hold a sufficiently massive hot atmosphere to exert high ram pressure (see, e.g., Gelli et al., 2020). The same is true for the AGN and halo quenching mechanisms, which come about at \(M_{\rm vir}\sim 10^{12}\,{\rm M}_{\odot}\), or \(M_{\star}>10^{10}\,{\rm M}_{\odot}\) (e.g., Cattaneo et al., 2020; Bluck et al., 2020; Koutsouridou & Cattaneo, 2022). Mergers and disc instabilities can drive inward gas flows that provoke bursts of SF in the central galactic regions (Barnes & Hernquist, 1991; Teyssier et al., 2010; Zolotov et al., 2015; Bournaud, 2016). Although the latter can be important for the evolution of high-redshift gas-rich galaxies, they can be reasonably neglected for a study focused on the stellar halo such as ours.
Footnote 5: AGN quenching refers to feedback from an active galactic nucleus (AGN) that can eject cold gas from within a galaxy and/or prevent the hot gas surrounding it from cooling. Halo quenching refers to the disruption of cold filamentary flows by a massive hot atmosphere.
#### 2.2.3 The Initial Mass Function of Pop III and Pop II/I stars
Following the critical metallicity scenario (e.g., Omukai, 2000; Bromm et al., 2001; Schneider et al., 2002), we form Pop III stars when the total metallicity, \(Z_{\rm gas}\), of the gas within a progenitor halo is below a critical value \(Z_{\rm crit}=10^{-4.5}\,Z_{\odot}\) (de Bennassuti et al., 2017), and Pop II/I stars whenever \(Z_{\rm gas}\geq Z_{\rm crit}\).
We adopt a Larson (1998) IMF for both Pop III and Pop II/I stars:
\[\phi(m_{\star})=\frac{dN}{dm_{\star}}\propto m_{\star}^{-2.35}{\rm exp}\left( -\frac{m_{\rm ch}}{m_{\star}}\right), \tag{6}\]
but with very different characteristic mass, \(m_{\rm ch}\), and \(m_{\star}\) range.
For Pop II stars we assume \(m_{\star}=[0.1,100]\,{\rm M}_{\odot}\) and \(m_{\rm ch}=0.35\,{\rm M}_{\odot}\), which is consistent with observations of present-day forming stars (Krumholz, 2015).
For Pop III stars we consider \(m_{\star}=[0.1,1000]\,{\rm M}_{\odot}\) and an IMF biased towards more massive stars, i.e., with \(m_{\rm ch}\geq 1M_{\odot}\). This mass range and characteristic mass are indeed consistent with constraints on the Pop III IMF obtained from ultra faint dwarf galaxies (Rossi et al., 2021) and in line with the results of cosmological hydrodynamical simulations for the formation of Pop III stars (Susa et al., 2014; Hirano et al., 2014; Hirano et al., 2015, see also the Introduction). In Pagnini et al. (2023), we showed that a \(m_{\rm ch}=10\,{\rm M}_{\odot}\) is in good agreement with the observed [C/Fe] range within the bulge and can explain the dearth of CEMP-no stars with [C/Fe]\(>+1\) in this environment. Therefore, we adopt \(m_{\rm ch}=10\,{\rm M}_{\odot}\) as our starting point and explore the effect of different characteristic masses and a different maximum mass, \(m_{\star}^{\rm max}\), for Pop III stars in Section 4.1.
In some cases, and most commonly in poorly star-forming low-mass halos, the SF bursts are not strong enough to fully populate the theoretical IMF. This has important consequences for the type of stars formed, stellar feedback, chemical evolution and total stellar mass (Kroupa & Weidner, 2003; Weidner & Kroupa, 2006; Weidner et al., 2013; de Bennassuti et al., 2017; Applebaum et al., 2020; Rossi et al., 2021). To account for that, we implement in NEFERTITI a Monte Carlo procedure that generates a random sequence of stars, according to the assumed IMF, with total mass equal to the stellar mass formed in each SF burst (see Rossi et al., 2021 for details). In other words, we account for the incomplete sampling of the IMF of both Pop III and Pop II/I stars in poorly star-forming halos.
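The following is a minimal sketch of such a stochastic sampling: stars are drawn one at a time from the Larson IMF of Eq. 6 until the mass formed in the burst is used up. The rejection-sampling scheme and the helper names are ours and are only meant to illustrate the idea, not to reproduce the actual routine of Rossi et al. (2021).

```python
import numpy as np

rng = np.random.default_rng(42)

def larson_imf(m, m_ch):
    """Unnormalised Larson (1998) IMF, phi(m) ~ m^-2.35 * exp(-m_ch / m) (Eq. 6)."""
    return m ** -2.35 * np.exp(-m_ch / m)

def draw_star(m_min, m_max, m_ch):
    """Draw one stellar mass by rejection sampling, uniform in log m."""
    grid = np.logspace(np.log10(m_min), np.log10(m_max), 256)
    envelope = (grid * larson_imf(grid, m_ch)).max()  # m*phi(m): density per unit log m
    while True:
        m = 10.0 ** rng.uniform(np.log10(m_min), np.log10(m_max))
        if rng.uniform(0.0, envelope) < m * larson_imf(m, m_ch):
            return m

def sample_ssp(m_burst, m_min=0.1, m_max=1000.0, m_ch=10.0):
    """Populate an SSP with individual stars until their total mass reaches m_burst."""
    masses = []
    while sum(masses) < m_burst:
        masses.append(draw_star(m_min, m_max, m_ch))
    return np.array(masses)   # e.g. sample_ssp(1e4) for a 10^4 Msun Pop III burst
```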
#### 2.2.4 The energy distribution of Pop III SNe
Currently, there is no theoretical constraint on the explosion energies of Pop III SNe with \(m_{\star}=10-100\,{\rm M}_{\odot}\), and while observations suggest that they could have spanned almost 2 orders of magnitude, their distribution remains completely unknown (see Introduction). Here, we account for the first time for the energy distribution of such primordial SNe in the context of a cosmological galaxy formation model. To this end, we assume a mass-independent energy distribution function (EDF) of the form:
\[\frac{{\rm d}N}{{\rm d}E_{\star}}\propto E_{\star}^{-\alpha_{e}}, \tag{7}\]
where \(E_{\star}\) is the explosion energy and \(\alpha_{e}\) is a free parameter of the model. Based on this underlying distribution, we assign randomly an energy level to each Pop III SN with \(m_{\star}=10-100\,{\rm M}_{\odot}\). The top panel of Fig. 1 shows the cumulative probability as a function of \(E_{\star}\) for the different \(\alpha_{e}\) values considered hereafter. The bottom panel shows the corresponding probability for a Pop III star to explode as a faint SN (\(E_{51}=[0.3,0.6]\)), a core-collapse SN (ccSN; \(E_{51}=[0.9,1.2,1.5]\)), a high energy SN (\(E_{51}=[1.8,2.4,3]\)) or a hypernova (\(E_{51}=[5,10]\)), for \(\alpha_{e}=0.5\), 1 and 2.
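In practice, this assignment can be done by weighting the tabulated Heger & Woosley (2010) energy levels with the power law of Eq. 7, as in the sketch below; exactly how the continuous EDF is discretised onto the ten levels is our assumption for this example.

```python
import numpy as np

# tabulated explosion energies (in units of 10^51 erg) of the Heger & Woosley (2010) grid
E51_LEVELS = np.array([0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.4, 3.0, 5.0, 10.0])

def sample_energies(n_sn, alpha_e, rng=None):
    """Assign explosion energies to n_sn Pop III SNe following dN/dE ~ E^-alpha_e (Eq. 7)."""
    rng = rng or np.random.default_rng()
    weights = E51_LEVELS ** -alpha_e      # power law evaluated at the discrete levels
    weights /= weights.sum()
    return rng.choice(E51_LEVELS, size=n_sn, p=weights)

# e.g. the hypernova fraction (E51 >= 5) for alpha_e = 1:
# np.mean(sample_energies(100000, 1.0) >= 5.0)
```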
#### 2.2.5 Relaxing the Instantaneous Recycling Approximation
In all previous works where the predecessor of our SAM (the GAMETE SAM) was coupled with time consuming N-body simulations (Salvadori et al., 2010; Graziani et al., 2015; Pacucci et al., 2017; Graziani et al., 2017; Pagnini et al., 2023), the chemical evolution was computed assuming the _instantaneous recycling approximation_ (IRA), i.e., that all stars that do not survive until \(z=0\), die and return gas and metals into the ISM _instantaneously_. This approximation overestimates the ISM enrichment rate (steepening the metal-poor tail of the MDF) and describes poorly the abundance of elements produced on long timescales, such as Fe and C. In addition, it blurs
out the chemical signatures of different stellar types, such as primordial faint SNe and PISNe, that in reality explode at different times, by mixing their ejecta instantaneously.
In NEFERTITI, we abandon the IRA and, instead, follow the evolution of each individual star, depending on its initial mass and metallicity. At each timestep and for each halo, we compute the rate, \(\dot{R}\), at which gas is restored into the ISM through stellar winds or SN explosions from:
\[\dot{R}=\int_{m_{\rm turn}(t)}^{m_{\bullet}^{\rm max}}(m_{\bullet}-w_{m}(m_{\bullet}))N(\tilde{t}_{\rm form},m_{\star}){\rm d}m_{\bullet}, \tag{8}\]
where \(N(\tilde{t}_{\rm form},m_{\star})\) is the number of stars with mass \(m_{\bullet}\) that were formed at time \(\tilde{t}_{\rm form}=t-\tau_{\star}\), \(w_{m}\) and \(\tau_{\star}\) are the remnant mass and lifetime of a star with initial mass \(m_{\star}\), and \(m_{\rm turn}(t)\) is the turnoff mass, i.e., the mass corresponding to \(\tau_{\star}=t\).
Similarly, we define the total ejection rate, \(\dot{Y}_{i}\), of an element \(i\) that is returned to the ISM without being re-processed (first term in the square brackets) and newly produced (second term):
\[\dot{Y}_{i}=\int_{m_{\rm turn}(t)}^{m_{\bullet}^{\rm max}}\big{[}(m_{\bullet}-w_{m}(m_{\bullet})-m_{i}(m_{\bullet},Z_{\star}))Z_{i}(\tilde{t}_{\rm form})+m_{i}(m_{\bullet},Z_{\star})\big{]}N(\tilde{t}_{\rm form},m_{\star}){\rm d}m_{\star}, \tag{9}\]
where \(m_{i}(m_{\star},Z_{\star})\) is the mass of element \(i\) that is synthesized by a star with initial mass \(m_{\bullet}\) and metallicity \(Z_{\star}\), and \(Z_{i}(\tilde{t}_{\rm form})\) is the mass fraction of the element \(i\) at the time of formation of each star.
We adopt the stellar lifetimes of Raiteri et al. (1996) for Pop II/I stars and the stellar lifetimes of Schaerer (2002) for Pop III stars.
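In a discretised form, Eq. 8 amounts to summing, over all stored stellar populations, the mass returned by stars whose lifetimes expire during the current sub-timestep. The sketch below assumes hypothetical lookup functions `lifetime(m)` and `remnant_mass(m, Z)` standing in for the adopted lifetime and yield tables; it is not the actual NEFERTITI routine.

```python
def gas_return_rate(ssps, t_now, dt, lifetime, remnant_mass):
    """Discrete analogue of Eq. 8: mass restored to the ISM during (t_now, t_now + dt].

    Each SSP carries its formation time t_form, metallicity Z and the array of
    initial masses of its member stars; lifetime(m) and remnant_mass(m, Z) are
    lookups into the adopted stellar lifetime and yield tables (Section 2.2.6).
    """
    returned = 0.0
    for ssp in ssps:
        age_lo = t_now - ssp.t_form
        age_hi = age_lo + dt
        for m in ssp.masses:
            if age_lo < lifetime(m) <= age_hi:   # the star dies in this sub-timestep
                returned += m - remnant_mass(m, ssp.Z)
    return returned / dt                         # restoration rate, R-dot
```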
#### 2.2.6 Stellar yields and mixing
The metal yields and remnant masses of Pop III stars, entering equations 8 and 9, are adopted from Heger & Woosley (2002) for PISNe (\(140\leq m_{\bullet}/{\rm M}_{\odot}\leq 260\)) and from Heger & Woosley (2010) for less massive Pop III SNe (\(10\leq m_{\star}/{\rm M}_{\odot}\leq 100\)). The latter are given for 10 different explosion energies in the range \(0.3-10\times 10^{51}\) erg (see Section 2.2.4). Heger & Woosley (2010) use a 1D code that cannot capture the inherently multidimensional mixing between stellar layers. They, therefore, implement mixing artificially by moving a running boxcar through the star typically four times (Welsh et al., 2021). For each stellar mass and explosion energy, there are 14 different values for the _mixing parameter_ (in the range \(f_{\rm mix}=0-0.2512\)), which is defined as the width of the boxcar in units of the helium core mass6. The mixing parameter is unknown but it can largely affect the abundance of various chemical elements produced by Pop III SNe, such as carbon (e.g., Vanni et al. in prep). In the absence of theoretical yields for the intermediate pulsational PISNe, i.e., stars with \(m_{\star}=100-140\) M\({}_{\odot}\), we assume that they collapse into black holes, returning no mass into the ISM.
Footnote 6: Rotation and mass loss at all stages of stellar evolution are ignored in the Heger & Woosley (2010) models.
For Pop II/I stars we adopt the yields of Limongi & Chieffi (2018; set R without rotation velocity) for massive stars evolving as core-collapse SNe (ccSNe) and the van den Hoek & Groenewegen (1997) yields for low and intermediate mass (\(m_{\star}<8{\rm M}_{\odot}\)) Asymptotic Giant Branch (AGB) stars.
#### 2.2.7 Mechanical feedback from SNe
Supernovae drive winds that, if sufficiently energetic, can escape the gravitational potential well of their host halo and expel gas and metals into the surrounding medium. Different kinds of SNe are characterised by different explosion energies. For Pop III PISNe, we adopt the mass-energy relation of Heger & Woosley (2002), while for metal-free stars with \(m_{\star}=10\)-100 M\({}_{\odot}\) we consider all energy levels (\(E_{51}=0.3-10\)) provided by Heger & Woosley (2010), independently of the stellar mass as explained in Section 2.2.4. For Pop II/I ccSNe, we assume an average explosion energy of \(10^{51}\) erg. Therefore, at each timestep, the total power output from SNe in a halo is \(\sum\limits_{i}\dot{N}_{\rm SN}^{i}\cdot E_{\rm SN}^{i}\), where \(\dot{N}_{\rm SN}^{i}\) is the explosion rate of SNe with energy \(E_{\rm SN}^{i}\). If a fraction \(\epsilon_{\rm wind}\) of this power is converted into kinetic form, the gas outflow rate from the halo, \(\dot{M}_{\rm gas,ej}\), will satisfy:
\[\frac{1}{2}\dot{M}_{\rm gas,ej}u_{\rm esc}^{2}=\epsilon_{\rm wind}\sum \limits_{i}\dot{N}_{\rm SN}^{i}E_{\rm SN}^{i}, \tag{10}\]
where the wind efficiency, \(\epsilon_{\rm wind}\), is the second free parameter of our model and \(u_{\rm esc}=\sqrt{\frac{GM_{\rm vir}}{r_{\rm vir}}}=f(M_{\rm vir},z)\)(Barkana & Loeb, 2001) is the escape speed of the halo.
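A direct transcription of Eq. 10 is shown below; the cgs constants and the function interface are choices made for this illustration, and the escape speed is computed as in the text, \(u_{\rm esc}^{2}=GM_{\rm vir}/r_{\rm vir}\).

```python
import numpy as np

G_CGS  = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
MSUN_G = 1.989e33     # solar mass [g]
KPC_CM = 3.086e21     # kiloparsec [cm]

def outflow_rate(sn_rates, sn_energies_erg, m_vir_msun, r_vir_kpc, eps_wind=0.002):
    """Gas ejection rate from Eq. 10, in Msun per unit time of the supplied SN rates.

    sn_rates[i] is the explosion rate of SNe of type i and sn_energies_erg[i]
    their explosion energy in erg; eps_wind is the wind efficiency.
    """
    u_esc_sq = G_CGS * m_vir_msun * MSUN_G / (r_vir_kpc * KPC_CM)   # (cm/s)^2
    power = eps_wind * np.sum(np.asarray(sn_rates) * np.asarray(sn_energies_erg))
    return 2.0 * power / u_esc_sq / MSUN_G
```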
#### 2.2.8 Following the gas evolution
At each sub-timestep \(\delta t_{\rm s}\) of the SAM, we compute the evolution of the total gas mass, \(M_{\rm gas}\), and the mass of element \(i\), \(M_{Z_{i}}\), in the ISM of each galaxy from the equations:
\[\dot{M}_{\rm gas}=\dot{M}_{\rm gas,accr}-{\rm SFR}+\dot{R}-\dot{M}_{\rm gas,ej} \tag{11}\]
and
\[\dot{M}_{Z_{i}}=Z_{i}^{\rm IGM}\dot{M}_{\rm gas,accr}-Z_{i}{\rm SFR}+\dot{Y}_{ i}-Z_{i}\dot{M}_{\rm gas,ej}, \tag{12}\]
Figure 1: Top: cumulative probability for a Pop III SN with \(m_{\bullet}=10-100\) M\({}_{\odot}\) to have a given explosion energy, \(E_{51}\), for different values of the energy distribution function (Eq. 7) exponent, \(\alpha_{e}\). Bottom: the corresponding probability of such star to explode as a faint SN, a ccSN, a high energy SN or a hypernova, for \(\alpha_{e}=0.5\) (red), 1 (orange), and 2 (cyan).
respectively, where \(Z_{i}=M_{Z_{i}}/M_{\rm gas}\) is the mass fraction of element \(i\) in the ISM and \(Z_{i}^{\rm IGM}\) is its mass fraction in the IGM. The latter is updated after each sub-timestep by summing the contributions of all haloes \(h\):
\[\dot{M}_{Z_{i}}^{\rm IGM}=\sum_{h}\Bigl{(}-Z_{i}^{\rm IGM}\dot{M}_{\rm gas,accr}^{h}+Z_{i}^{h}\dot{M}_{\rm gas,ej}^{h}\Bigr{)}. \tag{13}\]
Equations 12 and 13 imply that the ejecta of dying stars are instantaneously and homogeneously mixed within both the ISM of each halo and the IGM, which are thus characterized by a unique chemical composition at each timestep. This perfect mixing approximation has significant consequences for the chemical evolution of our system, which are discussed in Sections 4 and 6.
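Putting the pieces together, one sub-timestep of the chemical evolution (Eqs. 11 and 12) can be sketched as below. The `halo` container and its attributes are hypothetical names used only for this example; the perfect-mixing assumption discussed above is built in, since a single set of ISM abundances is used for the whole halo.

```python
def update_gas_and_metals(halo, dt, m_accr, sfr, m_ret, m_ej, y_i, z_igm):
    """Advance M_gas and M_Zi of one halo over a sub-timestep dt (Eqs. 11-12).

    m_accr, m_ret, m_ej: gas masses accreted, returned by stars, and ejected during dt;
    y_i[i]: mass of element i returned/produced by dying stars during dt (Eq. 9);
    z_igm[i]: mass fraction of element i in the IGM. Perfect mixing is assumed.
    """
    z_ism = {i: (halo.m_z[i] / halo.m_gas if halo.m_gas > 0 else 0.0) for i in halo.m_z}
    halo.m_gas += m_accr - sfr * dt + m_ret - m_ej
    for i in halo.m_z:
        halo.m_z[i] += z_igm[i] * m_accr - z_ism[i] * sfr * dt + y_i[i] - z_ism[i] * m_ej
```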
### Model calibration
In Fig. 2, we compare the observed global properties of the MW (grey shaded areas) with the results of our model at \(z=0\), as a function of the SF efficiency \(\epsilon_{\rm SF}\) (left), and the wind efficiency \(\epsilon_{\rm wind}\) (right panels). One can see that the final stellar mass, \(M_{*}\), depends strongly on \(\epsilon_{\rm wind}\) but only weakly on \(\epsilon_{\rm SF}\), while the opposite is true for the gas-to-stellar mass ratio, \(M_{\rm gas}/M_{*}\). This can also be inferred by solving analytically the equations that govern the evolution of the stellar and gas mass in a galaxy (see equations 4 and 6 in Koutsouridou & Cattaneo, 2019). Therefore, these two observables are sufficient to constrain the two free parameters of our model.
We adopt \(\epsilon_{\rm SF}=0.8\) and \(\epsilon_{\rm wind}=0.002\), for which we obtain \(M_{*}\approx 4.2\times 10^{10}\)M\({}_{\odot}\), and \(M_{\rm gas}/M_{*}\approx 0.13\) at \(z=0\). Bland-Hawthorn & Gerhard (2016) report \(M_{*}=(5\pm 1)\times 10^{10}\) M\({}_{\odot}\) (gray areas in the top row of Fig. 2), and \(M_{\rm vir}=(1.3\pm 0.3)\times 10^{12}\) M\({}_{\odot}\) for the Milky Way, by combining estimates from dynamical model fitting to stellar surveys and to the Galactic rotation curve (see references therein). Using the same approach, McMillan (2017) and Cautun et al. (2020) find similar values: \(M_{*}=(5.43\pm 0.57)\times 10^{10}\) M\({}_{\odot}\), \(M_{\rm vir}=(1.3\pm 0.3)\times 10^{12}\)M\({}_{\odot}\); and \(M_{*}=5.04^{+0.43}_{-0.52}\times 10^{10}\)M\({}_{\odot}\), \(M_{\rm vir}=1.08^{+0.20}_{-0.14}\times 10^{12}\) M\({}_{\odot}\), respectively7. We choose a value for \(\epsilon_{\rm wind}\) that gives a stellar mass at the lower limit of the observational range, since the virial mass of our system is lower than most observational estimates (see Section 2.1). By substituting \(\epsilon_{\rm wind}=0.002\) in Eq. 10 we get a mass loading factor8 \(\eta\equiv\dot{M}_{\rm gas,ej}/{\rm SFR}\approx 0.14\) for our MW-analogue at \(z=0\), in agreement with recent observational estimates (\(\eta_{\rm MW}=0.1\pm 0.06\), Fox et al., 2019).
Footnote 7: Studies estimating the MW stellar mass from direct integration of starlight usually find higher values, \(M_{*}\sim 6\times 10^{10}\) M\({}_{\odot}\)(Bland-Hawthorn & Gerhard, 2016), but these studies do not simultaneously constrain its virial mass.
Footnote 8: Note that \(u_{\rm esc}^{2}\approx 2.8\times 10^{47}\) erg/M\({}_{\odot}\) for \(M_{\rm vir}=7.8\times 10^{11}\) M\({}_{\odot}\) at \(z=0\) (Eq. 27 in Barkana & Loeb, 2001) and that with the assumed Pop II/I IMF we form one SN every 100 M\({}_{\odot}\).
Our adopted value for \(\epsilon_{\rm SF}\) results in a gas-to-stellar mass ratio within the typically reported range of \(\sim 0.1-0.15\) (Ferriere, 2001; Stahler & Palla, 2004). In addition, at \(z=0\), \(\epsilon_{\rm SF}\) corresponds to a star formation timescale \(\tau_{\rm SF}\equiv M_{\rm gas}/{\rm SFR}\sim 1.9\) Gyr, and a SFR\(\sim 2.7\) M\({}_{\odot}/{\rm yr}\) for our MW-analogue, in agreement with observational estimates (\(\tau_{\rm SF,MW}=2\) Gyr, Bigiel et al., 2008; SFR\({}_{\rm MW}=1-3\) M\({}_{\odot}/{\rm yr}\), Chomiuk & Povich, 2011; Bland-Hawthorn & Gerhard, 2016). At \(z=0\), the mean metallicity of the ISM, \(Z_{\rm ISM}=1.07\)\(Z_{\odot}\), in our MW-analogue and of the IGM, \(Z_{\rm IGM}\sim 0.17\)\(Z_{\odot}\), are in accordance with observations in the Galactic disc and in high-velocity clouds currently accreting onto the disc (\(\sim 0.1-0.3\)\(Z_{\odot}\); Ganguly et al., 2005; Tripp et al., 2003; Danforth & Shull, 2008). For all the above reasons, we are confident that our model with the selected values for \(\epsilon_{\rm SF}\) and \(\epsilon_{\rm wind}\) is a good representation of the evolution of a MW-like galaxy and its immediate environment.
## 3 Stellar data for model comparison
This Section describes in detail the available observations of Galactic halo stars that we use to compare with key results of our model - namely the predicted metallicity distribution function, the fraction of CEMP stars, the carbonicity distribution function and the distribution of stars in the [C/Fe]-[Fe/H] diagram.
* _The Metallicity Distribution Function (MDF):_ for Galactic halo stars with \(-4<{\rm[Fe/H]}<-2\) we adopt the MDF proposed by Bonifacio et al. (2021), which is the largest and most complete (i.e., biased-corrected) MDF for model comparison. It represents the average of three independently derived MDFs, the one by Naidu et al. (2020; H3 Survey), the uncorrected one by Schorck et al. (2009; Hamburg/ESO Survey) and the one determined by Bonifacio et al. (2021) themselves, and corrected for selection biases, from Sloan Digital Sky Survey spectra. The standard deviation in each metallicity bin (shown with the black errorbars in Fig. 3) is provided by Bonifacio et al. (2021) as an error estimate on the MDF. The three MDFs used for computing the average are essentially identical above \({\rm[Fe/H]}\sim-3\), explaining the small errors in this metallicity range. However, there are other published Galactic halo MDFs, that appear steeper (e.g., the corrected MDF by Schorck et al., 2009, and the one of Carollo et al., 2010) or shallower (e.g., Youakim et al., 2020) in this [Fe/H] range. At lower [Fe/H], the Naidu et al. (2020) MDF, which is based on high-resolution data, is undefined. The Schorck et al. (2009) MDF and the one determined by Bonifacio et al. (2021) extend down to \({\rm[Fe/H]}\sim-4\), but are based on low resolution (\(R\sim 2000\)) data. Due to the fact that metallicities of \({\rm[Fe/H]}<-3\) can only be accurately and precisely determined through high-resolution spectra, and due to the low number statistics
Figure 2: Total stellar mass (top row) and gas-to-stellar mass ratio (bottom row) of our MW-analogue at \(z=0\), as a function of the two free parameters of our model: the star formation efficiency (for \(\epsilon_{\rm wind}=0.002\); left) and the wind efficiency (for \(\epsilon_{\rm SF}=0.8\); right). Gray shaded areas represent the observed global properties of the MW (see text for details).
at low [Fe/H], the average MDF shows much larger errors at \(-4\leq\mathrm{[Fe/H]}\leq-3\).
* _The lowest-Fe tail of the MDF:_ we compute the halo MDF at \(\mathrm{[Fe/H]}<-4\) using data from the SAGA9 database (2023, April 10 version), which assembles the abundances of all \(\mathrm{[Fe/H]}\leq-2.5\) stars derived from high and medium-resolution follow up observations (Suda et al., 2008, 2011; Yamada et al., 2013). As researchers usually favour detailed studies of the most extreme stars, follow-up observations are biased towards low metallicities. For this reason, we only consider the SAGA MDF at \(\mathrm{[Fe/H]}<-4\) (including 43 stars), where we can assume that follow-up is near-complete. Still, it is likely that not all known stars with estimated \(-4.5\leq\mathrm{[Fe/H]}\leq-4\) have been followed-up at high resolution. If those are confirmed in the future, the number of stars at \(\mathrm{[Fe/H]}<-4.5\) with respect to the number of stars at \(-4.5\leq\mathrm{[Fe/H]}\leq-4\) will decline. We represent this possibility qualitatively with down-pointing arrows (shown, for example, in Fig. 4).
Footnote 9: [http://sagadatabase.jp/](http://sagadatabase.jp/)
* _The fraction of CEMP-no stars_: we compute the fraction of CEMP-no stars, \[F_{\mathrm{CEMP}}(\mathrm{[Fe/H]})=\frac{N_{\mathrm{CEMP}}(\mathrm{[Fe/H]})}{N_{\star}(\mathrm{[Fe/H]})},\] (14) where \(N_{\mathrm{CEMP}}\) is the number of CEMP-no stars, and \(N_{\star}\) the total at a given [Fe/H] bin (a short code sketch of this calculation is given at the end of this Section), using the high/medium-resolution sample of Placco et al. (2014) and the high-resolution sample of Yong et al. (2013). Placco et al. (2014) collected a large sample of VMP literature stars, excluded those with \(\mathrm{[Ba/Fe]}>+0.6\) and \(\mathrm{[Ba/Sr]}>0\), which were likely enriched by an AGB companion (CEMP-s stars), and corrected the carbon abundances of the remaining sample to account for evolutionary effects. Their estimated fractions of CEMP-no (\(\mathrm{[C/Fe]}\geq+0.7\)) stars are shown in Fig. 5. Yong et al. (2013) performed a homogeneous chemical abundance analysis of 190 literature and program stars, of which 172 have \(\mathrm{[Fe/H]}<-2\). We completed their catalogue at low metallicities by adding the more recently discovered EMP stars shown in Fig. 7 (diamond points) and listed below. We computed \(N_{\mathrm{CEMP}}\) using the Yong et al. (2013) classification of CEMP-no stars (that is based on the Aoki et al. 2007 criterion10) and excluding stars with \(\mathrm{[Ba/Fe]}>+0.6\). The latter criterion was adopted to be consistent with Placco et al. (2014). In both observational surveys the CEMP-no fraction decreases with increasing [Fe/H]. However, at \(-4<\mathrm{[Fe/H]}<-3\), the \(F_{\mathrm{CEMP}}\) of Yong et al. (2013) are significantly lower than those of Placco et al. (2014; see Fig. 5) highlighting the uncertainties in the observational estimates of \(F_{\mathrm{CEMP}}\) (see also Section 7). Footnote 10: Both the Aoki et al. (2007) criterion for CEMP stars (\(\mathrm{[C/Fe]}\geq+0.70\), for \(\mathrm{log}(L/L_{\odot})\leq 2.3\) and \(\mathrm{[C/Fe]}\geq+3.0-\mathrm{log}(L/L_{\odot})\), for \(\mathrm{log}(L/L_{\odot})>2.3\)) and the corrections of Placco et al. (2014), account for the depletion of the surface carbon abundance of stars as they ascend the red-giant branch.
* _The Carbonicity Distribution Function (CDF):_ we compare our predictions for the [C/Fe] ("carbonicity", Carollo et al., 2012; Lee et al., 2017) distribution function of VMP inner halo stars, to observations from the SAGA database. We do so only at \(\mathrm{[C/Fe]}>+2\) where we expect the observational sample to have higher completeness. We compute the SAGA CDF by including all confirmed CEMP-no stars, CEMP stars with upper limits for barium enhancement at \(\mathrm{[Ba/Fe]}>+0.6\) as well as CEMP stars with no measurement of barium abundances (35 stars in total with \(\mathrm{[C/Fe]}>+2\) and \(\mathrm{[Fe/H]}\leq-2\), including upper limits for [C/Fe]).
* _The [C/Fe] vs [Fe/H] abundances_: in the left panel of Fig. 7 we compare our predicted distribution of halo stars in the [C/Fe]-[Fe/H] diagram with the CEMP-no and C-normal stars in the catalogues of Yong et al. (2013; X points) and Placco et al. (2014; circles) as well as with individual stars (diamond points) by Christlieb et al. (2002); Norris et al. (2007); Caffau et al. (2011); Keller et al. (2014); Hansen et al. (2014); Frebel et al. (2015); Li et al. (2015); Bonifacio et al. (2015); Caffau et al. (2016); Bonifacio et al. (2018); Francois et al. (2018); Aguado et al. (2018); Starkenburg et al. (2018); Aguado et al. (2019); Ezzeddine et al. (2019) and Nordlander et al. (2019)11.
Footnote 11: The Norris et al. (2007) star and the Aguado et al. (2019)/Ezzeddine et al. (2019) star are already included in the sample of Yong et al. (2013) and were, therefore, not added to it before computing \(F_{\mathrm{CEMP}}\).
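As referenced above, the CEMP fraction of Eq. 14 reduces to a simple per-bin count; the short sketch below (with illustrative function and variable names) shows how it can be computed from a catalogue of [Fe/H] and [C/Fe] values.

```python
import numpy as np

def cemp_fraction(feh, cfe, bin_edges, cfe_limit=0.7):
    """Differential CEMP fraction (Eq. 14): N([C/Fe] >= +0.7) / N_tot per [Fe/H] bin."""
    feh, cfe = np.asarray(feh), np.asarray(cfe)
    fractions = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (feh >= lo) & (feh < hi)
        fractions.append(np.nan if in_bin.sum() == 0 else (cfe[in_bin] >= cfe_limit).mean())
    return np.array(fractions)
```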
## 4 Results
Using our model (Section 2), we investigate how the MDF, the CDF and the fraction of CEMP-no stars in the Galactic halo depend on the unknown energy distribution function (EDF) of the first Pop III supernovae. In Section 4.1 we examine the degeneracy between the EDF and the IMF of Pop III stars and in Section 4.2 we determine the key observables to constrain them. Our findings are impacted by the stochastic sampling of both the masses and the SN explosion energies of Pop III stars (Section 2.2.3). Therefore, for each model presented in this article, we have averaged over 100 realizations as we find that this number is sufficient for our results to converge. We focus on the surviving VMP (\(\mathrm{[Fe/H]}\leq-2\)) stars that lie in the inner Galactic halo, i.e. at galactocentric radii \(7\mathrm{\ kpc}\leq R_{\mathrm{gal}}\leq 20\mathrm{\ kpc}\), at \(z=0\). Due to the instantaneous mixing of the IGM (which reaches \(\mathrm{[Fe/H]}\approx-2\) by \(z=5\)), we find that all VMP stars are formed before \(z\sim 5\), regardless of the assumed properties (EDF, IMF and mixing) of Pop III stars. Since we do not consider binary mass transfer, all CEMP (and C-normal) stars in our models reflect the abundances of their birth clouds. We adopt the solar abundances from Asplund et al. (2009).
### Pop III stars: Energy Distribution Function vs IMF
This Section explores how the properties of the surviving inner halo stars in our model change when we vary the EDF of the first SNe. In what follows, we assume a flat distribution for the unknown Pop III mixing parameter and assign randomly one of the 14 mixing levels provided by Heger & Woosley (2010) to each Pop III star with \(10\leq m_{\star}/\mathrm{M}_{\odot}\leq 100\). We consider four Larson-type IMFs (Eq. 6): (i) with characteristic mass \(m_{\mathrm{ch}}=1\,\mathrm{M}_{\odot}\) and maximum mass for Pop III stars \(m_{\star}^{\mathrm{max}}=1000\,\mathrm{M}_{\odot}\), (ii) with \(m_{\mathrm{ch}}=10\,M_{\odot}\) and \(m_{\star}^{\mathrm{max}}=1000\,\mathrm{M}_{\odot}\), (iii) with \(m_{\mathrm{ch}}=100\,M_{\odot}\) and \(m_{\star}^{\mathrm{max}}=1000\,\mathrm{M}_{\odot}\) and (iv) with \(m_{\mathrm{ch}}=10\,M_{\odot}\) and \(m_{\star}^{\mathrm{max}}=100\,\mathrm{M}_{\odot}\).
Fig. 3 shows the predicted MDFs for the four IMFs assuming different values for the EDF (Eq. 7) exponent as denoted by the color (see Fig. 1). We find that at \(\mathrm{[Fe/H]}>-3\), the halo MDF is essentially independent of the assumed Pop III IMF and EDF and is in almost perfect agreement with the one derived by Bonifacio et al. (2021). It is worth noting that the MDF is a genuine prediction of our model since no free parameter was tuned to reproduce it (Section 2.3). At \(\mathrm{[Fe/H]}<-3\), the MDFs resulting from different models start to differentiate and a clear trend emerges: the MDF steepens as we
Figure 4: Same as Fig. 3 but normalized to \(\rm[Fe/H]=-4.25\) and compared to the ultra metal poor MDF from the SAGA database. The downwards pointing arrows indicate the possibility that the SAGA MDF might be biased towards the lowest metallicities (see Section 3).
Figure 3: Mean metallicity distribution functions of VMP inner halo stars, normalized at \(\rm[Fe/H]=-2.05\), in comparison to the observations by Bonifacio et al. (2021; points with errorbars). Four Larson-type (Eq. 6) IMFs for Pop III stars are considered: the first three with mass range \(m_{\bullet}=0.1-1000\) M\({}_{\odot}\) and characteristic mass \(m_{\rm ch}=1\) M\({}_{\odot}\) (top-left), \(m_{\rm ch}=10\) M\({}_{\odot}\) (top-right) and \(m_{\rm ch}=100\) M\({}_{\odot}\) (bottom-left) and the fourth with \(m_{\bullet}=0.1-100\) M\({}_{\odot}\) and \(m_{\rm ch}=10\) M\({}_{\odot}\) (bottom-right panel). For each IMF, results are shown for different values of the Pop III EDF exponent (see Eq. 7), \(\alpha_{e}\), as denoted by the color. All mixing values of the Heger & Woosley 2010 yields are assumed to be equally probable.
Figure 5: Differential CEMP fractions of very metal poor inner halo stars, for the four IMFs and different \(\alpha_{e}\) values considered (see Section 4.1 or caption of Fig. 3). Datapoints show the observational estimates of Yong et al. (2013a; points with errorbars) and Placco et al. (2014; stars).
Figure 6: Cumulative [C/Fe] distribution functions of very metal poor, inner halo stars, normalized at [C/Fe] = +2, in comparison with observations from the SAGA database.
move both to a higher characteristic mass \(m_{\rm ch}\) (or higher \(m_{\bullet}^{\rm max}\)) at fixed EDF or to a lower \(\alpha_{e}\) at fixed IMF, i.e., as more Pop III stars with high masses and explosion energies are formed. This is to be expected, since high energy SNe, hypernovae and massive PISNe yield more iron than faint and ccSNe and, thus, accelerate the chemical enrichment in their host halos resulting in a steeper MDF. Nevertheless, all MDFs are in agreement with the Bonifacio et al. (2021) one within the (large) errorbars.
In addition, all model MDFs are in agreement with the SAGA MDF at \(\rm[Fe/H]<-4\) within errors (Fig. 4). Yet, the ones predicted by the \([m_{\rm ch},m_{\bullet}^{\rm max}]=[1,1000]\) M\({}_{\odot}\) and the \([m_{\rm ch},m_{\bullet}^{\rm max}]=[10,100]\) M\({}_{\odot}\) models, which result in fewer (or zero) PISNe being formed, lie above the observational datapoints in most metallicity bins. Instead, the ones with \(m_{\rm ch}=10\) M\({}_{\odot}\) and \(m_{\rm ch}=100\) M\({}_{\odot}\) (and \(m_{\bullet}^{\rm max}=1000\) M\({}_{\odot}\)) lie on top of and below the observations, respectively. As explained in Section 3, it is probable that the number of stars at \(\rm[Fe/H]<-4.5\) with respect to the number of stars at \(\rm[Fe/H]\approx-4\) will decline in the future as more stars in the range \(-4.5\leq\rm[Fe/H]\leq-4\) will be followed up with high resolution observations. Therefore, we could say that the latter two models are preferable here, or else that PISNe are required to steepen the ultra metal poor tail of the MDF. However, this could be a premature conclusion, given that only 14 stars with \(\rm[Fe/H]<-4.5\) have been observed to date.
In Fig. 5, we compare our predicted CEMP fractions to the observations of Yong et al. (2013) and Placco et al. (2014; see Section 3). All halo stars with \(\rm[Fe/H]<-5\) are predicted to be carbon enhanced, \(\rm[C/Fe]>+0.7\). At higher metallicities and for a given Pop III IMF, we find that EDFs skewed towards high explosion energies (i.e. with smaller \(\alpha_{e}\), see Fig. 1) result in lower CEMP fractions; naturally, since more energetic Pop III SNe yield less [C/Fe] at fixed mass and mixing level (e.g. see Vanni et al. in prep).
The dependence of the yielded [C/Fe] on the Pop III stellar mass is not straightforward. At \(10\leq m_{\bullet}/{\rm M}_{\odot}\leq 100\) and at a given explosion energy, the ejected [C/Fe] appears to increase with mass, especially at low mixing levels, but the relation is not monotonic and tends to reverse at the highest explosion energy \(E_{\star}=10\times 10^{51}\) erg (Heger & Woosley, 2010). The opposite is true for PISNe; the yielded [C/Fe] decreases dramatically with stellar mass from \(\sim 10^{13}\) at \(m_{\bullet}=140\) M\({}_{\odot}\) to \(\leq 10^{-1}\) at \(m_{\bullet}=260\) M\({}_{\odot}\) (Heger & Woosley, 2002). In addition, only the lowest explosion energies (\(E_{51}\leq 1.5\)) and the lowest mixing levels produce [C/Fe] that exceed those of the least massive PISNe (\(m_{\bullet}\la 170\) M\({}_{\odot}\)). Yet, all non-PISNe yield higher [C/Fe] than PISNe with \(m_{\bullet}\ga 195\)\(M_{\odot}\).
Nevertheless, Fig. 5 reveals a clear trend. As we increase the characteristic mass from \(m_{\rm ch}=1\)\(M_{\odot}\), to \(m_{\rm ch}=10\)\(M_{\odot}\), and \(m_{\rm ch}=100\)\(M_{\odot}\) (resulting in \(M_{\rm PISN}/M_{\rm PopIII}\approx 0.04,0.11\), and 0.22, respectively) the predicted CEMP fraction for a given EDF decreases. For \(m_{\rm ch}=1\) M\({}_{\odot}\) we can reproduce the observed \(F_{\rm CEMP}\) for \(\alpha_{e}>1\) while for \(m_{\rm ch}=10\) M\({}_{\odot}\) we need a higher \(\alpha_{e}\ga 1.5\). That means that as the number of PISNe increases, the number of hypernovae should drop (from \(<22\%\) for \(m_{\rm ch}=1\) M\({}_{\odot}\) to \(<8\%\) for \(m_{\rm ch}=10\) M\({}_{\odot}\)). For \(m_{\rm ch}=100\) M\({}_{\odot}\) even the EDF with \(\alpha_{e}=4\) (giving \(\sim 99\%\) faint SNe) cannot produce enough CEMP stars to meet the observations. This is because when \(m_{\rm ch}=100\) M\({}_{\odot}\), PISNe dominate the ISM enrichment thus washing out the high [C/Fe] yielded by faint SNe (Pagnini et al., 2023). Indeed, in the case where \(m_{\bullet}^{\rm max}=100\) M\({}_{\odot}\), i.e., when no PISNe are allowed to form, our model fits the observations for \(\alpha_{e}\sim 1.5-2\).
Finally, in Fig. 6, we compare the cumulative CDFs, for stars with [Fe/H]\(\la-2\), predicted by our models with observations from the SAGA database. We find that the CDFs become steeper as the \(m_{\rm ch}\) of the Pop III IMF increases, or else as more PISNe form. Models with \(m_{\rm ch}=100\)M\({}_{\odot}\) (bottom left panel) significantly underpredict the number of stars with [C/Fe]\(>\)+4, yet all other models are in agreement with the observations within errors. At fixed IMF, we see that when the model CDFs are normalized to [C/Fe]=+2, they show no clear dependence on the assumed EDF. That is not true, however, when we consider the CDFs extending down to lower [C/Fe]. There, our now-familiar trend is evident; the higher the energy of Pop III SNe (or else the lower the \(\alpha_{e}\)), the lower the yielded [C/Fe] and, therefore, the steeper the resulting CDF (see Appendix A).
### Metal contribution from Pop III stars
In the previous Section, it became clear that there exist degeneracies between the EDF and the IMF of Pop III stars. Furthermore, given the currently large errors in the observational data, it is hard to single out a preferred model. Therefore, it is useful to examine the average behaviour of all our "successful" models, see Table 1, i.e., those that are in better agreement with the observed MDF, CDF and \(F_{\rm CEMP}\).
Fig. 7 (left) shows the present-day distribution of VMP inner halo stars in the [C/Fe]-[Fe/H] diagram, averaged among all models of Table 1. Notice that the slope of the [C/Fe]-[Fe/H] relation and the region showing the highest probability (i.e., that of C-normal stars) do not strongly depend on the choice of model. However, the exact value of \(P_{i}\) in each [C/Fe]-[Fe/H] bin is model-dependent. For example, models with a higher number of low energy SNe (higher \(\alpha_{e}\)) and/or a lower characteristic mass of the IMF, produce more stars, i.e., a higher probability, \(P_{i}\), at \(\rm[Fe/H]\la-4\) (see Fig. 3). Therefore, the average probability shown here is only a rough estimate, as \(P_{i}\) varies across models and there is no reason to assume that each of the models in Table 1 should have equal weight in the calculation of the average.
The main bulk of the observed C-normal population is in very good agreement with our model predictions, and coincides with the region predicted to have the highest density of stars. Furthermore, the sparser CEMP-no stars are also well represented by our models. Similar to the observations, our models show a sharply decreasing [C/Fe] with increasing [Fe/H]. However, our [C/Fe]-[Fe/H] relation appears shifted towards lower [Fe/H] compared to the observed one. As a result, the CEMP stars with the highest carbonicities in each [Fe/H] bin are not reproduced by our models. This is a problem faced by several other works (e.g., Cooke & Madau, 2014; Komiya et al., 2020; Jeon et al., 2021). In Section 7, we discuss possible solutions to this discrepancy coming both from the modelling and from the observational side.
The right panel of Fig. 7, depicts the _minimum_ metal fraction contributed by Pop III stars, as a function of metallicity and carbon
Table 1: Pop III stellar parameters for the models that successfully reproduce the observed MDF (Bonifacio et al., 2021), the CDF and the CEMP fractions (Yong et al., 2013; Placco et al., 2014) of very metal-poor stars in the inner halo. In all models below, all values of stellar mixing given by Heger & Woosley (2010) are assumed to be equally probable.

| | Model 1 | Model 2 | Model 3 |
| --- | --- | --- | --- |
| IMF: \(m_{\rm ch}\) | 1 M\({}_{\odot}\) | 10 M\({}_{\odot}\) | 10 M\({}_{\odot}\) |
| IMF: \(m_{\bullet}^{\rm max}\) | 1000 M\({}_{\odot}\) | 1000 M\({}_{\odot}\) | 100 M\({}_{\odot}\) |
| EDF: \(\alpha_{e}\) | 1.0-2.0 | 1.5-2.5 | 1.5-2.0 |
enhancement. In particular, the colors denote the minimum \(f_{Z}^{\rm Pop\ III}\equiv m_{Z}^{\rm Pop\ III}/m_{Z}^{\rm tot}\) of all stars belonging to each [C/Fe]-[Fe/H] bin in our models (Table 1), where \(m_{Z}^{\rm tot}=m_{Z}^{\rm Pop\ III}+m_{Z}^{\rm Pop\ II}\) is the total mass of metals in a star and \(m_{Z}^{\rm Pop\ III}\) and \(m_{Z}^{\rm Pop\ II}\) the masses of metals that it has inherited from Pop III and Pop II progenitors, respectively. We find that all C-enhanced stars at \(\rm[Fe/H]<-3\) are at least \(\sim 20\%\) enriched by Pop III progenitors.12 As we go towards higher [C/Fe], this value increases rapidly, to \(>50\%\) for stars with \(\rm[C/Fe]>+1\), and to \(>80\%\) for stars with \(\rm[C/Fe]>+1.5\) (at the same \(\rm[Fe/H]<-3\)). Moreover, all stars with \(\rm[C/Fe]\gtrsim+2\) and/or \(\rm[Fe/H]\lesssim-4.7\) are _pure_ Pop III descendants; their abundance patterns are less than 5% contaminated by Pop II stars.
Footnote 12: Note that our model does not include binary transfer, i.e., CEMP-s stars.
In addition to the Pop III enriched CEMP stars, there exists a group of C-enhanced stars that have been entirely enriched by Pop II stars, at \(\rm[Fe/H]>-2.8\) and \(\rm[C/Fe]<+2\) (dark blue area in Fig. 7). Our adopted Pop II SN yields (Limongi & Chieffi, 2018) have a maximum \(\rm[C/Fe]=+0.69\) at \(\rm[Fe/H]\leq-2\), and are, therefore, not able to beget VMP C-enhanced stars. Instead, we find that these (Pop II enriched) CEMP stars are descendants of Pop II AGB stars and can form in minihalos only after the following conditions are met: (i) SN explosions expel all, or nearly all, gas from the halo, leaving none, or only a small fraction, of the iron they produced; (ii) subsequent accretion of pristine/metal-poor gas from the IGM leads to low [Fe/H] in the ISM; and (iii) previously formed AGB stars release carbon, enriching the ISM to high [C/Fe] (see also Rossi et al., 2023).
The stars at \(0<\rm[C/Fe]<+0.7\) are predominantly enriched by Pop II progenitors. They correspond to the highest density region of Fig. 7 (left), which implies that most of the observed C-normal stars are not Pop III star descendants. Yet, at low \(\rm[C/Fe]<0\), the Pop III metal contribution starts dominating again. This is a natural consequence of the fact that Pop II stars yield a minimum \(\rm[C/Fe]\approx 0.07\) at \(\rm[Fe/H]\leq-2\)(Limongi & Chieffi, 2018), while energetic Pop III SNe can reach down to \(\rm[C/Fe]\approx-1.3\)(Heger & Woosley, 2002, 2010; see Section 6.1).
Fig. 8 shows the _average_ contribution by different Pop III progenitors for stars in each [C/Fe]-[Fe/H] bin, for all models in Table 1 with: \(m_{\rm\star}^{\rm max}=1000\) M\({}_{\odot}\) (left); and \(m_{\rm\star}^{\rm max}=100\) M\({}_{\odot}\), i.e., when no PISNe are allowed to form (right). We find that at fixed [C/Fe], the surviving CEMP stars with the lowest [Fe/H] have been enriched mostly by faint SNe. Instead, PISNe enrichment dominates the pollution of the most [Fe/H]-rich CEMP stars. Notice that yields from PISNe with \(m_{\rm\star}=(140-150)\) M\({}_{\odot}\) can reach significantly higher [C/Fe] than many non-PISNe (e.g. Heger & Woosley, 2010; Nomoto et al., 2013). In the absence of PISNe, however, all CEMP stars (except the Pop II AGB-descendants at \(\rm[Fe/H]>-2.8\)) are on average \(>30\%\) enriched by faint SNe. Compared to faint SNe and PISNe, the overall contribution of ccSNe, high energy SNe and hypernovae to the surviving stars is not as prominent. This is due to the fact that the EDFs of our preferred models (Table 1) are skewed towards low explosion energies -- high \(\alpha_{e}\) exponent -- and hence produce much fewer SNe of high energies. Nonetheless, there appears a region in the diagram (at \(\rm[C/Fe]<0\) and \(\rm[Fe/H]\lesssim-2.5\)) where stars are predominantly enriched by primordial hypernovae. For comparison,
Figure 7: Left: Distribution of very metal-poor, inner-halo stars on the [Fe/H]–[C/Fe] diagram. The color denotes the probability \(P_{i}=N_{*}^{I}/N_{*}^{\rm tot}\) of stars to belong in each bin \(i\) (averaged among all models of Table 1), where \(N_{*}^{I}\) is the number of stars in the bin and \(N_{*}^{\rm tot}\) is the total number of \(\rm[Fe/H]\leq-2\) stars. Right: Same as left panel only here the color denotes the _minimum_ metal fraction inherited by Pop III ancestors for all the stars in each [Fe/H]–[C/Fe] bin. Datapoints in both panels, show the C-normal and CEMP-no stars from the samples of Yong et al. (2013; \(\times\) points), Placco et al. (2014; points), and various authors (diamonds; see Section 3).
Figure 8: Distribution of very metal-poor, inner-halo stars in the [C/Fe]–[Fe/H] diagram, with the color denoting the mean fraction of metals inherited from Pop III faint SNe, ccSNe, high energy SNe, hypernovae, and PISNe, based on the models of Table 1, with Pop III mass range \(m_{\bullet}=0.1-1000\,{\rm M}_{\odot}\) (left); and \(m_{\bullet}=0.1-100\,{\rm M}_{\odot}\), i.e., when no PISNe are allowed to form (right). Datapoints show the observations as in Fig. 7.
in Appendix A we show the results of a model in which all types of Pop III SNe are equally probable.
The average contribution of any SN type, at a given [C/Fe]-[Fe/H] bin, varies depending on the assumed EDF. However, the qualitative trends described above do not: we find that long-lived descendants of each kind of Pop III SNe always occupy the same regions on the [C/Fe]-[Fe/H] diagram. In particular, certain [C/Fe]-[Fe/H] combinations can _only_ be produced by an enrichment of a specific type of Pop III SN. Hypernovae descendants predominantly populate a well-defined region at \(\rm[C/Fe]<0\) and \(\rm[Fe/H]\leq-2.5\), while the region at \(\rm[C/Fe]\leq-0.5\) is also populated by PISNe descendants. Thus, without hypernovae or PISNe these areas are not represented in our models (compare, e.g., the bottom panels in Fig. 8).
## 5 The impact of stellar mixing
The convective mixing between stellar layers can influence quite strongly the chemical signature of Pop III SNe. When mixing precedes fallback, heavier nuclei that would not have been ejected otherwise can escape into the ISM. In the previous Section, we adopted a uniform distribution for the mixing parameter, \(f_{\rm mix}\), of the Heger & Woosley (2010) yields (see Section 2.2.6). Here, we explore how varying \(f_{\rm mix}\) impacts our model's predictions.
The left and right columns of Fig. 9 show our results as a function of \(f_{\rm mix}\), assuming an EDF given by Eq. 7 with \(\alpha_{e}=1\) and \(\alpha_{e}=2\), respectively. Both cases assume a Pop III IMF of the form of Larson (1998; Eq. 6) with \(m_{\rm ch}=10\)\(\rm M_{\odot}\) and \(m_{\bullet}^{\rm max}=1000\)\(M_{\odot}\), i.e., the second IMF considered in Section 4.1.
Similar to the case of varying the Pop III EDF and IMF (Section 4.1), the predicted MDFs at [Fe/H] > -3 remain unaffected by the assumed stellar mixing and in agreement with the one by Bonifacio et al. (2021; 1st row of Fig. 9). At \(\rm[Fe/H]<-3\), the MDFs resulting from different models start to differentiate and this difference becomes pronounced at [Fe/H]\(\leq-4\); for a given EDF, lower mixing levels produce flatter MDFs. The comparison with the SAGA MDF is crucial to potentially discard models (2nd row of Fig. 9). Models with \(f_{\rm mix}\leq 0.0631\) produce MDFs that lie above the observational values in several metallicity bins. Still we cannot completely discard them since they are in agreement with the observations within errors. On the other hand, higher mixing levels (\(f_{\rm mix}\geq 0.1\)) result in more heavy elements, such as iron, being released into the ISM. The chemical enrichment in the first minihalos proceeds faster giving rise to steeper MDFs that underpredict the number of hyper metal poor stars by more than 1-2 dex.
The third row of Fig. 9 shows the fraction of CEMP-no stars, \(F_{\rm CEMP}\) (Eq. 14), in the inner halo, as predicted by the different models. We find that mixing levels \(f_{\rm mix}\leq 0.1\) yield almost identical \(F_{\rm CEMP}\) in both models. Those lie well below the observations in the \(\alpha_{e}=1\) model, but are in agreement with the observations in the \(\alpha_{e}=2\) model, due to the lower number of high energy SNe and hypernovae there (7% for \(\alpha_{e}=2\) compared to 43% for \(\alpha_{e}=1\)). The two highest mixing levels, \(f_{\rm mix}=0.1585\) and 0.2512, underestimate the CEMP fractions in both models and do not produce any stars at \(\rm[Fe/H]<-6.5\). We also notice that the \(F_{\rm CEMP}\) dependence on mixing is weaker for the \(\alpha_{e}=1\) model, which generates more energetic SNe. Indeed, the higher the explosion energy, or equivalently the smaller the fallback, the weaker the effect of mixing on stellar ejecta13 (see Fig. 10).
Footnote 13: In the limit where the fallback is zero, the stellar yields are independent of mixing.
The \(F_{\rm CEMP}\)-[Fe/H] relation can convey only limited information, since CEMP is a binary classification - a star either has \(\rm[C/Fe]>+0.7\) or not. The CDF, instead, can be more informative. The bottom row of Fig. 9 shows the cumulative CDF for all inner halo stars with \(\rm[Fe/H]<-2\) and \(\rm[C/Fe]>+2\) in each model. Here, the effect of mixing is pronounced in both the \(\alpha_{e}=1\) and the \(\alpha_{e}=2\) model. In both cases, models with \(f_{\rm mix}\geq 0.0631\) predict too few stars with high carbonicities (\(\rm[C/Fe]\geq+3\)). Instead, the CDFs for lower \(f_{\rm mix}\) are in good agreement with the observations for \(\alpha_{e}=1\) and even better for \(\alpha_{e}=2\).
Regardless of the degeneracy between the stellar mixing and the explosion energy in the yielded abundance ratios -increasing either the mixing or the explosion energy yields lower [C/Fe]- we find that when the fraction of energetic SNe is significant (e.g., when \(\alpha_{e}=1\)) even the lowest mixing level is inconsistent with the observations. Similarly, the highest mixing levels fail to reproduce the observations even when the EDF is dominated by faint SNe. More specifically, inspection of the second and fourth row of Fig. 9 reveals that a typical \(f_{\rm mix}\leq 0.0631\) is favoured by our model. This result is not at odds with our adoption of a uniform mixing distribution in Sections 4.1 and 4.2, since only 3 out of the 14 mixing levels of (Heger & Woosley, 2010) are above \(f_{\rm mix}=0.0631\). We should note that Heger & Woosley (2010) reached similar conclusions. In particular, they found that regardless of the assumed Pop III IMF and explosion energy, an \(f_{\rm mix}\leq 0.0631\) provides the best fit to the abundance patterns of the C-enhanced stars HE1327-2326 (Frebel et al., 2005) and HE0107-5240 (Christlieb et al., 2002).
It would be valuable to generalise this result to other stellar evolutionary models. However, this endeavor is no easy feat, first, due to the fact that the mixing prescription in Heger & Woosley (2010) is artificial and not described using physical principles, and second, due to the many differences in the physical and numerical assumptions employed by different groups. Nevertheless, one can get a crude idea by comparing the yielded abundance ratios obtained in different studies to their adopted parameter values.
Fig. 10 compares the [C/Fe] yields provided by Heger & Woosley (2010) for Pop III SNe as a function of mixing and explosion energy, to those computed for Pop III SNe by Iwamoto et al. (2005), Tominaga et al. (2007), Marassi et al. (2014) and Limongi & Chieffi (2012). In the first three works, mixing is a free parameter like in the Heger & Woosley (2010) models but is parametrized in terms of minimum and maximum mass coordinates within which the mass of each element is uniformly mixed. Those two, together with the "mass cut" parameter, i.e., the mass coordinate below which all material falls back onto the central remnant, are calibrated14 to reproduce the abundance pattern of the stars: (i) HE1327-2326 and HE0107-5240 in the case of Iwamoto et al. (2005); (ii) the CEMP star SMSSJ031300 observed by Keller et al. (2014) in the case of Marassi et al. (2014); and (iii) the average abundance pattern of four C-normal EMP stars of Cayrel et al. (2004) in Tominaga et al. (2007). Naturally, a more extended mixing region at fixed mass cut, or a lower mass cut at fixed mixing, would result in lower yielded [C/Fe]. In Limongi & Chieffi (2012), stellar mixing is not artificial but it is coupled to nuclear burning and only the mass cut (or equivalently the explosion energy) is calibrated to reproduce the average abundance pattern of the C-normal stellar sample of Cayrel et al. (2004).
Footnote 14: One can find the adopted parameter values in the respective papers.
Since the free parameters in the above models have been fitted to reproduce specific stars, they give a quasi-constant [C/Fe], regardless of the assumed stellar mass; the Iwamoto et al. (2005) and Marassi
Figure 9: Predicted metallicity distribution function of inner halo stars (\(7\,{\rm kpc}<R_{gal}<20\,{\rm kpc}\)), normalized at \({\rm[Fe/H]}=-2.05\) (top row), and \({\rm[Fe/H]}=-4.25\) (second row), in comparison with observations from Bonifacio et al. (2021) and the SAGA database (see Section 3 for details). The third row shows the predicted fraction of CEMP stars (\({\rm[C/Fe]}\geq+0.7\)) in each [Fe/H] bin, in comparison to the observations by Yong et al. (2013a; points with errorbars) and Placco et al. (2014; stars). The bottom row shows the cumulative [C/Fe] distribution function (CDF) of VMP inner halo stars, normalized at \({\rm[C/Fe]}=+2\), in comparison with observations from the SAGA database. Colors in all panels denote the mixing parameter adopted for \(m_{\bullet}=10-100\,{\rm M}_{\odot}\) Pop III stars in each run, as indicated by the colorbar. Lines and shaded areas represent the mean and standard deviation of 100 runs. The left and right panels show the model results assuming an EDF for Pop III SNe (Eq. 7) with exponent \(\alpha_{\rm e}=1\) and \(\alpha_{\rm e}=2\), respectively. In both cases higher mixing for Pop III stars results in steeper MDFs and CDFs and lower CEMP fractions.
et al. (2014) yields that have been calibrated to reproduce CEMP stars, lie above our typical \(f_{\rm mix}\)=0.0631 level in all energy bins, while the ones of Tominaga et al. (2007) and Limongi & Chieffi (2012) that have been calibrated to reproduce C-normal stars are consistent with the highest mixing levels and/or the highest explosion energies of Heger & Woosley (2010).
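Operationally, the "mix and fallback" parametrization described above amounts to averaging each element over a prescribed mass interval and then ejecting only the shells above the mass cut. The following Python sketch is purely schematic (the mass grid, the toy composition profile and the parameter names are illustrative and do not reproduce any of the cited stellar evolution codes):

```python
import numpy as np

def ejected_mass(mass_coord, x_el, m_min, m_max, m_cut):
    """Uniformly mix an element between the mass coordinates m_min and m_max,
    then eject only the shells above the mass cut m_cut; returns the ejected
    mass of the element (same units as the mass grid)."""
    dm = np.gradient(mass_coord)                     # shell masses
    x = x_el.copy()
    in_mix = (mass_coord >= m_min) & (mass_coord <= m_max)
    if in_mix.any():
        x[in_mix] = np.sum(x_el[in_mix] * dm[in_mix]) / np.sum(dm[in_mix])
    ejected = mass_coord >= m_cut
    return np.sum(x[ejected] * dm[ejected])

# toy pre-SN profile: carbon in an outer shell, iron concentrated in the core
m = np.linspace(1.5, 20.0, 2000)                     # mass coordinate [Msun]
x_c = np.where((m > 4.0) & (m < 7.0), 0.2, 0.0)
x_fe = np.where(m < 2.5, 0.1, 0.0)
# a deeper mass cut (less fallback) ejects proportionally more iron, lowering [C/Fe]
for m_cut in (2.0, 3.0):
    print(m_cut, ejected_mass(m, x_c, 1.5, 6.0, m_cut),
          ejected_mass(m, x_fe, 1.5, 6.0, m_cut))
```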
## 6 Discussion
We have developed a new SAM of galaxy formation, named NEFERTITI (_NEar-FiEld cosmology: Re-Tracing Invisible TImes_), and combined it with a cosmological N-body simulation of a MW analogue, to shed light on the properties of the first Pop III SNe, and in particular their energy distribution function, EDF, parameterized in our model with \(\alpha_{e}\) (Eq. 7 and Fig. 1). NEFERTITI follows the formation and evolution of individual Pop III and Pop II stars, and considers, for the first time in a SAM for the MW formation, the contribution of Pop III SNe with different masses, stellar mixing and explosion energies. Subsequently, we have investigated how varying these Pop III stellar parameters affects the properties of the surviving VMP stars in the galactic halo.
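A power-law EDF of this kind is also straightforward to sample, e.g. by inverse-transform sampling; the sketch below is illustrative only (in particular, the truncation range, expressed in units of \(10^{51}\) erg and roughly spanning the faint-SN to hypernova energies shown in Fig. 10, is an assumption made here for the example and is not the calibrated range of our model).

```python
import numpy as np

def sample_edf(alpha_e, n, e_min=0.3, e_max=10.0, rng=None):
    """Draw n explosion energies (units of 10^51 erg) from dN/dE ~ E^(-alpha_e),
    truncated to [e_min, e_max], via inverse-transform sampling."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    if np.isclose(alpha_e, 1.0):
        return e_min * (e_max / e_min) ** u          # logarithmic special case
    p = 1.0 - alpha_e
    return (e_min**p + u * (e_max**p - e_min**p)) ** (1.0 / p)

# steeper EDFs put more weight on low-energy (faint) explosions
for a in (1.0, 2.0):
    e = sample_edf(a, 100_000, rng=np.random.default_rng(0))
    print(a, np.percentile(e, [10, 50, 90]))
```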
### Comparison with previous results and model limitations
For a given Pop III IMF, we find that a higher contribution of low energy SNe (higher \(\alpha_{e}\)) results in a greater CEMP-no fraction, and a flatter MDF and CDF for stars with \(\rm[Fe/H]<-2\). In particular, for a Larson (1998) IMF with characteristic mass \(m_{\rm ch}=1\) M\({}_{\odot}\) and range \(m_{\star}=[0.1,1000]\) M\({}_{\odot}\), we reproduce the observed CEMP fractions by Yong et al. (2013a) and Placco et al. (2021) if 40%-80% of Pop III stars with \(10\leq m_{\star}/\)M\({}_{\odot}\leq 100\) explode as faint SNe and only 20%-2% as hypernovae (and the rest with intermediate energies; Fig. 1). When we increase \(m_{\rm ch}\) to 10 M\({}_{\odot}\), the data are best fitted with 63%-90% of faint SNe, while for \(m_{\rm ch}=100\) M\({}_{\odot}\) we always underpredict the fraction of CEMP stars, due to the high number of primordial PISNe (Fig. 5). The effects of the Pop III properties on the halo MDF are only prominent at \(\rm[Fe/H]\leq-3\), yet all models are in agreement with the current observations within errors (Figs 3 and 4). However, this comparison is somewhat hampered by limitations of the data (see the next subsection).
These results are not easily compared to previous works due to the different assumptions adopted (e.g., the Pop III IMF), and the fact that, until now, none have considered simultaneously the contribution of Pop III SNe of all different energies and Pop II stars. However, we report that Hartwig et al. (2018) find the best match to the Placco et al. (2014) CEMP fractions by assuming that 40% of Pop III stars explode as faint SNe and the rest as normal ccSNe, but without including high energy SNe, hypernovae and PISNe with \(m_{\star}>150\) M\({}_{\odot}\), all of which would lower \(F_{\rm CEMP}\). Contrarily, the simple minihalo model for Pop III star enrichment by Cooke & Madau (2014) manages to reproduce the observed \(F_{\rm CEMP}\) even when 100% of Pop III SNe explode with high energy, when using a flat IMF in the range 10-100 M\({}_{\odot}\) (hence without including PISNe). This overestimation of the CEMP fraction relative to our model is to be expected, since Cooke & Madau (2014) do not consider the contribution from Pop II stars, which, according to our simulation, dominates the chemical enrichment at \(\rm[C/Fe]<+0.7\) (see also Vanni et al. 2023).
One of our key findings is that all VMP stars with subsolar [C/Fe] are predominantly imprinted by Pop III hypernovae and/or PISNe, regardless of our model assumptions (Fig. 8). This result stems from the fact that our adopted Pop II metal yields (Limongi & Chieffi 2018) reach a minimum \(\rm[C/Fe]=0.07\) at \(\rm[Fe/H]\leq-2\), whereas Pop III hypernovae and PISNe can reach [C/Fe] as low as \(\approx-0.9\) and \(\approx-1.3\), respectively (Heger & Woosley 2002, 2010). The uncertainties associated with Pop II yields are crucial in this regard. Ritter et al. (2018) similarly estimate a yielded \(\rm[C/Fe]>0\) for massive stars with \(\rm[Fe/H]<-2\), and Kobayashi et al. (2006) and Nomoto et al. (2013) find a yielded \(\rm[C/Fe]<-0.1\) only at \(\rm[Fe/H]<-3.5\) (see Fig. 5 of Liang et al. 2023). Yet, Woosley & Weaver (1995) suggest Pop II [C/Fe] yields reaching down to \(\approx-0.5\). Therefore, only if we adopted the Woosley & Weaver (1995) yields would we anticipate a higher contribution of Pop II stars at subsolar [C/Fe]. It should be noted that our model does not include SN type Ia that yield \(\rm[C/Fe]<-1\) (Thielemann et al. 1986; Iwamoto et al. 1999; Seitenzahl et al. 2013). However, we expect their contribution at such low metallicities to be minimal (e.g. Salvadori et al. 2015). Type Ia SNe have a typical delay time of \(0.1-1\) Gyr (see Chen et al. 2021 and references therein), while the majority of our SF minihalos reach \(\rm[Fe/H]>-2\) within 0.1 Gyr of their formation. In any case, SNe type Ia descendants can be distinguished from those of Pop III hypernovae and PISNe by comparing their complete abundance patterns (see, e.g., Fig. 8 in Nomoto et al. 2013 and Fig. 12 in Salvadori et al. 2019).
Besides the success of our model in reproducing the MDF, the CDF and the fraction of CEMP stars at [Fe/H]\(\leq-2\), we find that our predicted \(\rm[C/Fe]-[Fe/H]\) relation lies at the lower side of the observations (Fig. 7). This discrepancy has been reported by several previous works, even though the [C/Fe]-[Fe/H] stellar distribution depends on the particularities of each assumed model.15 Cooke & Madau (2014), who investigate early chemical enrichment by Pop III stars in isolated minihalos, find that they cannot reproduce the highest [C/Fe] observed in CEMP-no stars. Using a model calibrated on the UFD Bootes I, Rossi et al. (2023) find that true Pop III descendants have16 A\(\rm(C)<6\), similar to us: the upper envelope of our [C/Fe]-[Fe/H] relation corresponds to A\(\rm(C)\sim 6-6.5\) at \(\rm[Fe/H]\leq-4.5\). Yet, they predict the formation of Pop II AGB descendants with high C-abundances (A(C)\(\sim\)7-7.7) even at \(\rm[Fe/H]<-4\). As explained in Section 4.2, such CEMP stars only form after one or more SN explosions blow out \(\sim\)all gas from within a halo. This process removes the iron rich signature of Pop II SNe, allowing previously formed AGB stars to enrich the newly accreted, nearly pristine gas to high [C/Fe]. We find that this condition is satisfied in our model only at \(z<8\) when the IGM has already been enriched to \(\rm[Fe/H]>-3\) (Fig. 7)17. However, we must note that our DM simulation does not resolve halos with \(M_{\rm vir}<10^{7}\) M\({}_{\odot}\). Hydrodynamic simulations, instead, suggest that the minimum mass of the first SF minihalos can range between \(10^{5.5}-10^{7.5}\) M\({}_{\odot}\) depending on the relative velocity between baryons and DM (see Schauer et al. 2019 and references therein). Including lighter minihalos in our simulation could allow the formation of AGB-descendant CEMP stars at lower metallicities. In such case, one must make sure that their abundances in s-process elements, inherited by their AGB progenitors, are in accordance with observations.
Footnote 15: For example one can easily infer from equations 5 and 12 that at fixed [C/Fe], [Fe/H]\(\propto\epsilon_{\rm SF}\).
Footnote 16: A\(\rm(C)\equiv\log(N_{C}/N_{\rm H})+12\)
Footnote 17: The mass loading factor \(\eta=\dot{M}_{\rm gas,cl}/\rm SFR\propto 1/\mu_{\rm esc}^{2}\) (Eq. 10) is a decreasing function of redshift (Barkana & Loeb 2001), therefore a complete blow-out of the gas at fixed \(M_{\rm vir}\) occurs more easily at low \(z\).
Another factor that can affect our results is the assumption of the instantaneous mixing approximation.18 In reality, SNe ejecta may
only mix with a fraction of the available cold gas (see Salvadori et al., 2019 and Magg et al., 2020 for an estimate of the dilution factor). The resulting abundances in the SF clouds are higher, due to less dilution, and may even differ from the gross yields of the SNe (towards higher [C/Fe]; Ritter et al., 2015). However, the minimum dilution mass cannot be arbitrarily small but is limited by the mass enclosed within the final size of the SN remnant (Magg et al., 2020). Models that take into account realistic prescriptions for the diffusion of SNe ejecta still find it challenging to reproduce CEMP stars with \(\rm A(C)\sim 7-7.5\) (Chiaki et al., 2020; Jeon et al., 2021; Vanni et al. in prep). Komiya et al. (2020) find that only an extremely inefficient mixing of SN yields can reproduce the highest [C/Fe] CEMP-no stars, but this results in an inconsistent metallicity distribution function. They conclude that binary mass transfer from AGB stars is necessary to explain the [C/Fe] abundances of \(\left[{\rm Fe/H}\right]<-4\) stars. Sarmento et al. (2019) do manage to reach A(C) \(\gtrsim 7.5\) using a Pop III IMF with \(m_{\rm ch}=60-120\,{\rm M}_{\odot}\) and \(m_{\star}=\left[20-120\right]{\rm M}_{\odot}\) (the range yielding the highest [C/Fe]), but do not report on their predicted MDF and CEMP fraction. We find that adopting this IMF in our model results in a too flat MDF at \(\left[{\rm Fe/H}\right]<-4\), inconsistent with the SAGA observations.
Finally we must note that our results could also depend on the adopted merger tree. Chen et al. (2023), for example, find that the predicted MDFs in different MW-like analogues can differ by \(\sim 1\) dex at \(\left[{\rm Fe/H}\right]=-4\). We plan to explore the level of this dependence in a future work.
### Key observables and their intrinsic uncertainties
We have shown that we can constrain the properties of primordial SNe by comparing our model predictions to observations of VMP stars in the Galactic halo. In particular, we find that both the mixing and the explosion energies as well as the IMF of Pop III stars have a strong impact on the present day CEMP-no fraction, the MDF and the CDF.
Stellar mixing strongly affects the halo MDF at \(\left[{\rm Fe/H}\right]<-4\), and the CDF at \(\left[{\rm C/Fe}\right]>+4\) (Fig. 9), where the sample of high-resolution follow-up observations can be deemed unbiased. Instead, the effect
Figure 10: Comparison of the Heger & Woosley (2010) [C/Fe] yields for Pop III faint SNe (\(E_{51}=0.3\,-\,0.6\); top-left), ccSNe (\(E_{51}=0.9\,-\,1.5\); top-right), high energy SNe (\(E_{51}=1.8\,-\,3.0\); bottom-left) and hypernovae (\(E_{51}=5\,-\,10\); bottom-right panel) with the [C/Fe] yield of zero-metallicity stars by Iwamoto et al. (2005; stars), Tominaga et al. (2007; triangles), Marassi et al. (2014; circles) and Limongi & Chieffi (2012; X symbols). Colors denote the mixing level given by Heger & Woosley (2010), as indicated by the colorbar. For each mixing level, the top (bottom) colorbar solid line corresponds to the lowest (highest) explosion energy shown in each panel, with the area between the two lines shaded. Filled black datapoints show [C/Fe] yields of Pop III stars with explosion energies within (or close to) the range shown in each panel, while empty datapoints show SNe yields of different energies.
of adopting different Pop III IMFs and EDFs appears more prominent when we consider a broader metallicity range, i.e. a MDF extending from the lowest [Fe/H] up to \(\rm[Fe/H]>-3\) (Figs 3 and 4), and a CDF extending from the highest carbonicities down to \(\rm[C/Fe]<+1\) (Figs 6 and A1). Unfortunately, the currently available observations at these abundance ranges are incomplete, and follow-up is biased towards the lowest metallicities and highest carbonicities.
In addition, all the aforementioned observables suffer from large observational errors. Arentsen et al. (2022) report that there are significant systematic differences in the carbon abundances among various surveys of Galactic halo stars that can translate into more than 50% differences in the estimated CEMP fractions (see their Fig. 1). Similar uncertainties apply to the determination of [Fe/H] and the MDF (see e.g., Fig. 12 of Youakim et al. 2020). These systematics can arise from different resolution and pipeline approaches, different assumptions in the employed synthetic grids, and/or comparison of stars in different evolutionary phases.
An additional source of large systematic errors comes from the simplifying assumption of one-dimensional (1D), local thermodynamic equilibrium (LTE) hydrostatic model atmospheres used in standard spectroscopic abundance analyses. Accounting for 3D non-LTE effects has been found to lower [C/Fe] estimates by as much as \(\sim\)1 dex while raising [Fe/H] by \(<\)0.15 dex (Collet et al. 2006; Amarsi et al. 2019a; Norris & Yong 2019). Naturally, this has a dramatic effect on the fraction of CEMP stars; Norris & Yong (2019) found that after applying 3D non-LTE corrections, the Yong et al. (2013a) CEMP-no fraction at \(-4.5\leq\rm[Fe/H]\leq-3\) is reduced by \(\sim 60\%\) while the number of CEMP-no stars in the Yoon et al. (2016) sample decreases by \(\sim 73\%\). Correcting for 3D non-LTE effects will also move the observed [C/Fe]-[Fe/H] stellar distribution downwards and, perhaps, resolve the discrepancy with our predictions (Fig. 7).
Finally, several CEMP stars with \(\rm A(C)>6.5\) (and all of them at \(\rm[Fe/H]<-4.5\)) that are not reproduced by our model have either no Ba measurements or have only upper limits for barium enhancement at \(\rm[Ba/Fe]\sim 0.6\). If a high Ba enhancement is confirmed for those stars in the future, then their high carbonicities could be explained by enrichment from a Pop II AGB progenitor (Rossi et al. 2023) or mass transfer from a Pop III/II AGB companion (Komiya et al. 2020).
## 7 Conclusions and Future Outlook
For the first time, we explore the energy distribution function, EDF, of the first SNe in the context of a cosmological galaxy formation model of a MW analogue. Our model follows the formation and evolution of individual Pop III stars, whose evolution is uniquely determined by their initial mass, stellar mixing and explosion energy. Their contribution to the chemical enrichment of their host minihalos is imprinted in the present-day properties of very metal-poor galactic halo stars, such as their MDF, CDF and CEMP fractions. We draw the following main conclusions:
1. _Pop III Energy Distribution Function._ The fraction of CEMP stars, \(F_{\rm CEMP}\), is highly sensitive to the primordial EDF, especially at \(\rm[Fe/H]\leq-3\). Assuming an EDF of the form \(dN/dE\propto E^{-\alpha_{e}}\), we find that we can reproduce the observed CEMP fractions for \(\alpha_{e}\sim 1-2.5\), depending on the adopted IMF for Pop III stars (Fig. 5 and Table 1). This value corresponds to a \(\sim 40-90\%\) probability for Pop III stars with \(m_{\bullet}=10-100\rm M_{\odot}\) to explode as faint SNe, and a \(20-0.5\%\) probability for them to explode as hypernovae (intermediate energy SNe have intermediate probabilities; Fig. 1). The effect of the Pop III EDF (and of their IMF) on the halo MDF is only prominent at \(\rm[Fe/H]\lesssim-3\) but there the observational uncertainties are so large that they render any comparison inconclusive (Figs 3 and 4).
2. _Pop III Initial Mass Function._ A top-heavy primordial IMF (with characteristic mass \(m_{\rm ch}=100\rm~{}M_{\odot}\) in the range 0.1-1000 \(\rm M_{\odot}\)) is disfavoured, as it underestimates the CEMP fraction and results in a too steep CDF, even if all Pop III stars with \(m_{\bullet}=10-100\rm~{}M_{\odot}\) explode as faint SNe (Figs 5 and 6).
3. _Pop III stellar mixing._ At a given EDF and IMF, lower mixing for Pop III stars results in a flatter MDF, higher CEMP fractions and a CDF skewed towards higher [C/Fe]. We find that typically very low mixing (\(f_{\rm mix}\lesssim 0.0631\) as provided by Heger & Woosley 2010) is required to reproduce the observations (Figs 9 and 10).
4. _Pop II descendants._ The great majority of very metal-poor stars lie at \(0<\rm[C/Fe]<+0.7\), i.e. they are C-normal. We predict that these stars have been predominantly polluted by normal Pop II SNe, in agreement with recent studies investigating the abundance patterns of C-normal stars and their small star-to-star scatter (Vanni et al. 2023). In addition, we find a population of CEMP stars at \(\rm[Fe/H]\gtrsim-2.8\), which were born from gas enriched by Pop II AGB stars.
5. _Pop III descendants._ Regardless of the assumed model, all CEMP stars at \(\rm[Fe/H]\lesssim-2.8\) have been enriched to \(>20\%\) by Pop III progenitors. This value increases to \(>95\%\) at \(\rm[C/Fe]\gtrsim+2\) (Fig. 7). At fixed [C/Fe], CEMP stars with the lowest metallicities are faint SNe descendants, while as we move to higher [Fe/H] the contribution of higher energy Pop III SNe prevails. According to our results, very metal-poor stars with \(\rm[C/Fe]\lesssim 0\) are predominantly imprinted by primordial hypernovae (at \(\rm[Fe/H]\lesssim-2.5\)) and PISNe (at \(\rm[Fe/H]\gtrsim-2.5\); Fig. 8).
We have demonstrated that the Pop III EDF can be equally important to their IMF in shaping the abundances of EMP halo stars. We find that only EDFs that are weighted towards low explosion energies combined with bottom heavy IMFs (even if they extend to \(1000\rm~{}M_{\odot}\)) can reproduce simultaneously the MDF, the CDF and the fraction of CEMP stars in the Galactic halo. However, this comparison alone does not allow a tighter constraint on the Pop III IMF, mixing and EDF due to the degeneracies between them and, most importantly, to the large uncertainties associated with the observed relations.
We have shown that, regardless of the assumed model, the descendants of each type of primordial SNe always appear at specific regions in the [C/Fe]-[Fe/H] diagram. However, their prevalence there varies depending on the IMF and EDF of Pop III stars (Figs 8 and B2). In a follow-up study, we will quantify this variation and compare with the observed fractions of confirmed Pop III-enriched stars. In addition, we plan to follow additional chemical elements, or even full abundance patterns (e.g., Vanni et al. in prep). This way, we will gain further insight into the properties of primordial SNe and potentially break the aforementioned degeneracies.
Additional constraints from forthcoming large spectroscopic surveys, such as WEAVE (Dalton 2016) and 4MOST (Christlieb et al. 2019), as well as surveys dedicated to identifying Pop III descendants (e.g., Aguado et al. 2023) and complementary studies of high-z gaseous absorption systems imprinted by the first stellar generations (e.g. Saccardi et al. 2023), will greatly boost our efforts to unveil the nature of the first SNe.
## Acknowledgements
The authors acknowledge support from the ERC Starting Grant NEFERTITI H2020/808240.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.08419 | Explicit solutions and linear inviscid damping in the Euler-Boussinesq
equation near a stratified Couette flow in the periodic strip | This short note provides explicit solutions to the linearized Boussinesq
equations around the stably stratified Couette flow posed on
$\mathbb{T}\times\mathbb{R}$. We consider the long-time behavior of such
solutions and prove inviscid damping of the perturbed density and velocity
field for any positive Richardson number, with optimal rates. The explicit
solution is obtained through the limiting absorption principle whereas the
inviscid damping is proved using oscillatory integral methods. | Michele Coti Zelati, Marc Nualart | 2023-09-15T14:22:24Z | http://arxiv.org/abs/2309.08419v2 | Explicit solutions and linear inviscid damping in the Euler-Boussinesq equation near a stratified Couette flow in the periodic strip
###### Abstract.
This short note provides explicit solutions to the linearized Boussinesq equations around the stably stratified Couette flow posed on \(\mathbb{T}\times\mathbb{R}\). We consider the long-time behavior of such solutions and prove inviscid damping of the perturbed density and velocity field for any positive Richardson number, with optimal rates. The explicit solution is obtained through the limiting absorption principle whereas the inviscid damping is proved using oscillatory integral methods.
Key words and phrases:Inviscid damping, stationary-phase method, Boussinesq approximation 2020 Mathematics Subject Classification: 35Q31, 76B70, 76E05
###### Contents
* 1 Introduction
* 2 Proof of Theorem 1
* 3 Heuristics for the explicit solutions
* 4 Proof of Theorem 2
* A The Whittaker functions
* 5 Acknowledgments
## 1. Introduction
The Euler equations under the Boussinesq approximation
\[(\partial_{t}+\tilde{\mathbf{u}}\cdot\nabla)\tilde{\omega} =-\mathfrak{g}\,\partial_{x}\tilde{\rho}, \tag{1.1}\] \[(\partial_{t}+\tilde{\mathbf{u}}\cdot\nabla)\tilde{\rho} =0,\]
models the evolution of an incompressible, non-homogeneous ideal fluid whose velocity field is \(\tilde{\mathbf{u}}=\nabla^{\perp}\Delta^{-1}\tilde{\omega}\), with associated vorticity \(\tilde{\omega}=\nabla^{\perp}\cdot\tilde{\mathbf{u}}\) and where the density of the fluid is given by \(\tilde{\rho}\). Here, \(\mathfrak{g}\) is the gravity constant.
The physical domain in which we consider the Euler-Boussinesq system (1.1) is the periodic strip \(\mathbb{T}\times\mathbb{R}\), where
\[\bar{\mathbf{u}}=(y,0),\quad\bar{\rho}(y)=1-\vartheta y,\quad\partial_{y}p=- \mathfrak{g}\bar{\rho}(y), \tag{1.2}\]
constitutes a steady solution for the equations of motion and represents a stably stratified Couette flow whose density slope is \(\vartheta>0\). Our interest lies in describing the linearized long-time dynamics of solutions to (1.1) that are near the stationary configuration (1.2). As such, we consider the perturbed velocity \(\tilde{\mathbf{u}}=\bar{\mathbf{u}}+\mathbf{u}\) and
density profile \(\tilde{\rho}=\bar{\rho}+\vartheta\rho\), and define the corresponding vorticity perturbation \(\omega=\nabla^{\perp}\cdot\mathbf{u}\). The linearized Euler-Boussinesq system (1.1) nearby the stably stratified Couette flow (1.2) then takes the form
\[\begin{cases}\partial_{t}\omega+y\partial_{x}\omega=-\beta^{2}\partial_{x}\rho \\ \partial_{t}\rho+y\partial_{x}\rho=\partial_{x}\psi,\\ \Delta\psi=\omega,\end{cases} \tag{1.3}\]
with \(\psi\) being the stream-function of the velocity field \(\mathbf{u}\) and \(\beta=\sqrt{\vartheta\mathfrak{g}}>0\). In the periodic setting \(x\in\mathbb{T}\), it is advantageous to write
\[\omega(t,x,y)=\sum_{m\in\mathbb{Z}}\omega_{m}(t,y)\mathrm{e}^{imx},\quad\rho(t,x,y)=\sum_{m\in\mathbb{Z}}\rho_{m}(t,y)\mathrm{e}^{imx},\quad\psi(t,x,y)=\sum _{m\in\mathbb{Z}}\psi_{m}(t,y)\mathrm{e}^{imx}. \tag{1.4}\]
Thus, (1.3) now reads
\[(\partial_{t}+imy)\omega_{m} =-im\beta^{2}\rho_{m} \tag{1.5}\] \[(\partial_{t}+imy)\rho_{m} =im\psi_{m}, \tag{1.6}\]
where further
\[\begin{cases}\Delta_{m}\psi_{m}=\omega_{m},\\ \lim_{|y|\to\infty}\psi_{m}=0,\end{cases}\qquad\Delta_{m}:=\partial_{y}^{2}-m^ {2}, \tag{1.7}\]
for all \(m\in\mathbb{Z}\). Our first result shows that (1.5)-(1.6), and thus (1.3) through (1.4) can be solved explicitly in the physical space \(y\in\mathbb{R}\) as truncated convolutions of oscillating Whittaker functions against suitable combinations of the initial data. The Whittaker functions \(W_{0,\gamma}\) with \(\gamma\in\mathbb{C}\) satisfy (see [13, 9])
\[\partial_{\zeta}^{2}W_{0,\gamma}+\left(-\frac{1}{4}+\frac{1/4-\gamma^{2}}{ \zeta^{2}}\right)W_{0,\gamma}=0,\qquad W_{0,\gamma}(\zeta)\sim\mathrm{e}^{- \frac{1}{2}\zeta}\quad\text{ as }\quad\zeta\to\infty, \tag{1.8}\]
and constitute the main ingredient in the construction of the explicit solutions.
**Theorem 1**.: _Let \(\beta>0\). Given initial conditions \((\omega^{0},\rho^{0})\) such that_
\[\int_{\mathbb{T}}\omega^{0}(x,y)\mathrm{d}x=\int_{\mathbb{T}}\rho^{0}(x,y) \mathrm{d}x=0, \tag{1.9}\]
_the unique solution to (1.3) is given through (1.4) and (1.7) by \(\psi_{0}=\rho_{0}\equiv 0\),_
\[\begin{split}\psi_{m}(t,y)=\frac{\mathrm{e}^{-imyt}}{2|m|\pi}\cos (\gamma\pi)\left(\int_{0}^{\infty}\mathrm{e}^{im\eta t}W(\eta)\int_{0}^{ \infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\\ \left.-\int_{0}^{\infty}\mathrm{e}^{-im\eta t}W(\eta)\int_{0}^{ \infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right)\end{split} \tag{1.10}\]
_and_
\[\begin{split}\rho_{m}(t,y)=\frac{\mathrm{e}^{-imyt}}{2|m|\pi} \cos(\gamma\pi)\left(\int_{0}^{\infty}\mathrm{e}^{im\eta t}\frac{W(\eta)}{ \eta}\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta \right.\\ \left.+\int_{0}^{\infty}\mathrm{e}^{-im\eta t}\frac{W(\eta)}{ \eta}\int_{0}^{\infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta \right),\end{split} \tag{1.11}\]
_for all \(m\neq 0\). Here, for \(\gamma:=\sqrt{1/4-\beta^{2}}\) we denote \(W(\cdot)=W_{0,\gamma}(2|m|\cdot)\), where \(W_{0,\gamma}\) satisfies (1.8). Further,_
\[G_{m}(\eta,\xi,y)=\Delta_{m}\left(\rho_{m}^{0}(\xi+y-\eta)-\frac{1}{\beta^{2} }\xi\omega_{m}^{0}(\xi+y-\eta)\right).\]
The zero modes in \(x\in\mathbb{T}\) of initial data evolving according to (1.3) are constants of motion, so that (1.9) does not actually constitute a restriction on the initial data. The expressions (1.10) and (1.11) give rise to real-valued solutions \(\psi\), \(\omega\) and \(\rho\) via (1.4) due to the fact that \(\psi_{m}=\overline{\psi_{-m}}\) and \(\rho_{m}\) = \(\overline{\rho_{-m}}\), which are
straightforward consequences of \(\omega^{0}\) and \(\rho^{0}\) being real-valued. In particular, we shall assume without loss of generality throughout the article that \(m\geq 1\).
Our second result consists in the derivation of sharp decay estimates, which quantify the phenomenon of inviscid damping.
**Theorem 2**.: _Let \(\beta>0\) and assume that the initial data \((\omega^{0},\rho^{0})\) to (1.3) satisfies (1.9). Let \(\mathbf{u}=(u^{x},u^{y})=\nabla^{\perp}\psi=(-\partial_{y}\psi,\partial_{x}\psi)\) be the corresponding velocity field. We have the following estimates._
* _If_ \(\beta^{2}\neq 1/4\)_, let_ \(\mu=\mathrm{Re}\sqrt{1/4-\beta^{2}}\) _and_ \(\nu=\mathrm{Im}\sqrt{1/4-\beta^{2}}\)_. Then,_ \[\|u^{x}(t)\|_{L^{2}} \lesssim\frac{1}{t^{\frac{1}{2}-\mu}}\left(\|\rho^{0}\|_{L^{2}_{x }H^{3}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{3}_{y}}\right),\] (1.12) \[\|u^{y}(t)\|_{L^{2}} \lesssim\frac{1}{t^{\frac{3}{2}-\mu}}\left(\|\rho^{0}\|_{L^{2}_{ x}H^{4}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{4}_{y}}\right),\] (1.13) \[\|\rho(t)\|_{L^{2}} \lesssim\frac{1}{t^{\frac{1}{2}-\mu}}\left(\|\rho^{0}\|_{L^{2}_{ x}H^{3}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{3}_{y}}\right),\] (1.14) _for every_ \(t\geq 1\)_._
* _If_ \(\beta^{2}=1/4\)_, then_ \[\|u^{x}(t)\|_{L^{2}} \lesssim\frac{1+\log(t)}{t^{\frac{1}{2}}}\left(\|\rho^{0}\|_{L^{ 2}_{x}H^{3}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{3}_{y}}\right),\] (1.15) \[\|u^{y}(t)\|_{L^{2}} \lesssim\frac{1+\log(t)}{t^{\frac{3}{2}}}\left(\|\rho^{0}\|_{L^{ 2}_{x}H^{4}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{4}_{y}}\right),\] (1.16) \[\|\rho(t)\|_{L^{2}} \lesssim\frac{1+\log(t)}{t^{\frac{1}{2}}}\left(\|\rho^{0}\|_{L^{ 2}_{x}H^{3}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{3}_{y}}\right),\] (1.17) _for every_ \(t\geq 1\)_._
The inviscid damping estimates (1.12)-(1.17) describe the long-time dynamics of solutions to (1.3) and show the linear asymptotic stability of the stratified Couette configuration (1.2) for the Euler-Boussinesq system (1.1). The decay is produced by two phenomena. Firstly, there is _mixing_ due to the background Couette flow and secondly there is _stratification_ due to the background density. The effect of mixing has been thoroughly studied in the homogeneous Euler equations both at the linear [11, 12, 7, 15] and non-linear level [1, 6, 8].
Estimates analogous to those of Theorem 2 have already been obtained in [14] using an explicit formula for solutions on the Fourier side (inspired by an early work of Hartman [5] in 1975), and in [2] via an energy method. Our approach is rather based on a stationary-phase type argument, exploiting the explicit solutions of Theorem 1 in physical space and obtaining decay rates related to the regularity (and, more precisely, to the asymptotic expansion) of the Whittaker functions about the origin. While these formulae do not produce a new result in the periodic strip \(\mathbb{T}\times\mathbb{R}\), our method allows us to treat the physically relevant case of the periodic channel \(\mathbb{T}\times[0,1]\), see [3], and it is therefore more robust in this sense. In [3] explicit solutions are not available; however, one can similarly write solutions to (1.3) through oscillatory integrals now involving a limiting absorption principle in which the regularity of the limiting functions (and thus the gained time-decay via stationary-phase arguments) is related to that of the Whittaker functions.
### Notation and assumptions
Throughout the article, we assume that \(\beta>0\) and \(m\geq 1\). To quantify the regularity of the initial data, for \(j\geq 0\) we introduce
\[Q_{j,m}=\|\rho^{0}_{m}\|_{H^{2+j}_{y}(\mathbb{R})}+\|\omega^{0}_{m}\|_{H^{2+j }_{y}(\mathbb{R})}.\]
As usual, we say that \(A\lesssim B\) when there exists \(C>0\) such that \(A\leq CB\).
### Plan of the article
In Section 2 we prove Theorem 1 and in Section 3 we provide a heuristic explanation for the form of the solutions (1.10), (1.11). Section 4 is devoted to the proof of Theorem 2. In Appendix A we provide the main asymptotic expansions for the Whittaker functions that are used to establish Theorem 2.
## 2. Proof of Theorem 1
The proof consists in showing that \(\psi_{m}\), \(\rho_{m}\) and \(\omega_{m}\) given by (1.10), (1.11) and (1.7), respectively, satisfy the linearized Euler-Boussinesq equations (1.5)-(1.6). According to (1.10), (1.11) and (1.7), we write
\[\psi_{m}(t,y)=\mathrm{e}^{-imyt}\Psi_{m}(t,y),\qquad\rho_{m}(t,y)=\mathrm{e}^{- imty}\mathrm{P}_{m}(t,y),\qquad\omega_{m}(t,y)=\mathrm{e}^{-imyt}\Omega_{m}(t,y), \tag{2.1}\]
where
\[\Omega_{m}:=-m^{2}t^{2}\Psi_{m}-2imt\partial_{y}\Psi_{m}+\Delta_{m}\Psi_{m}. \tag{2.2}\]
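For completeness, (2.2) simply records how \(\Delta_{m}\) acts on the modulated stream-function in (2.1): since \(\omega_{m}=\Delta_{m}\psi_{m}\) by (1.7),

\[\omega_{m}=\Delta_{m}\big{(}\mathrm{e}^{-imyt}\Psi_{m}\big{)}=\mathrm{e}^{-imyt}\left(\Delta_{m}\Psi_{m}-2imt\,\partial_{y}\Psi_{m}-m^{2}t^{2}\Psi_{m}\right)=\mathrm{e}^{-imyt}\Omega_{m}.\]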
Now, with this formulation we must check that \(\Omega_{m},\Psi_{m}\) satisfy
\[\partial_{t}\Omega_{m} =-im\beta^{2}\mathrm{P}_{m} \tag{2.3}\] \[\partial_{t}\mathrm{P}_{m} =im\Psi_{m}. \tag{2.4}\]
Clearly, (2.4) follows directly from (1.10), (1.11) and (2.1). To show (2.3), we first notice that
\[\partial_{t}\Omega_{m}=-2m^{2}t\Psi_{m}-m^{2}t^{2}\partial_{t}\Psi_{m}-2im \partial_{y}\Psi_{m}-2imt\partial_{t}\partial_{y}\Psi_{m}+\Delta_{m}\partial_ {t}\Psi_{m},\]
where, from (1.10) and (2.1), we have that
\[-2m^{2}t\Psi_{m} =-2m^{2}t\frac{\cos(\gamma\pi)}{2m\pi}\left(\int_{0}^{\infty} \frac{1}{imt}\partial_{\eta}\left(\mathrm{e}^{im\eta t}\right)W(\eta)\int_{0} ^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+\int_{0} ^{\infty}\frac{1}{imt}\partial_{\eta}\left(\mathrm{e}^{-im\eta t}\right)W(\eta )\int_{0}^{\infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right)\] \[=-2mi\frac{\cos(\gamma\pi)}{2m\pi}\left(\int_{0}^{\infty} \mathrm{e}^{im\eta t}\partial_{\eta}\left(W(\eta)\int_{0}^{\infty}W(\xi)G_{m} (\eta,\xi,y)\mathrm{d}\xi\right)\mathrm{d}\eta\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\left.+\int_{0}^{\infty} \mathrm{e}^{-im\eta t}\partial_{\eta}\left(W(\eta)\int_{0}^{\infty}W(\xi)G_{m} (-\eta,-\xi,y)\mathrm{d}\xi\right)\mathrm{d}\eta\right),\]
while
\[-m^{2}t^{2}\partial_{t}\Psi_{m}=-m^{2}t^{2}\frac{\cos(\gamma\pi) }{2m\pi}\left(\int_{0}^{\infty}\mathrm{e}^{im\eta t}im\eta W(\eta)\int_{0}^{ \infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\\ \left.+\int_{0}^{\infty}\mathrm{e}^{-im\eta t}im\eta W(\eta)\int_{ 0}^{\infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right)\\ =im\frac{\cos(\gamma\pi)}{2m\pi}\left(\int_{0}^{\infty}\mathrm{e} ^{im\eta t}\partial_{\eta}^{2}\left(\eta W(\eta)\int_{0}^{\infty}W(\xi)G_{m} (\eta,\xi,y)\mathrm{d}\xi\right)\mathrm{d}\eta\right.\\ \left.+\int_{0}^{\infty}\mathrm{e}^{-im\eta t}\partial_{\eta}^{2} \left(\eta W(\eta)\int_{0}^{\infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi \right)\mathrm{d}\eta\right)\]
and
\[-2im\partial_{y}\Psi_{m}=-2im\frac{\cos(\gamma\pi)}{2m\pi}\left( \int_{0}^{\infty}\mathrm{e}^{im\eta t}W(\eta)\int_{0}^{\infty}W(\xi)\partial_ {y}G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\\ \left.-\int_{0}^{\infty}\mathrm{e}^{-im\eta t}W(\eta)\int_{0}^{ \infty}W(\xi)\partial_{y}G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right),\]
with also
\[-2imt\partial_{t}\partial_{y}\Psi_{m} =-2imt\frac{\cos(\gamma\pi)}{2m\pi}\left(\int_{0}^{\infty}\mathrm{e} ^{im\eta t}im\eta W(\eta)\int_{0}^{\infty}W(\xi)\partial_{y}G_{m}(\eta,\xi,y) \mathrm{d}\xi\mathrm{d}\eta\right.\] \[\left.+\int_{0}^{\infty}\mathrm{e}^{-im\eta t}im\eta W(\eta)\int _{0}^{\infty}W(\xi)\partial_{y}G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right)\] \[=2im\frac{\cos(\gamma\pi)}{2m\pi}\left(\int_{0}^{\infty}\mathrm{e }^{im\eta t}\partial_{\eta}\left(\eta W(\eta)\int_{0}^{\infty}W(\xi)\partial_{ y}G_{m}(\eta,\xi,y)\mathrm{d}\xi\right)\mathrm{d}\eta\right.\] \[\left.-\int_{0}^{\infty}\mathrm{e}^{-im\eta t}\partial_{\eta} \left(\eta W(\eta)\int_{0}^{\infty}W(\xi)\partial_{y}G_{m}(-\eta,-\xi,y) \mathrm{d}\xi\right)\mathrm{d}\eta\right)\]
and finally
\[\Delta_{m}\partial_{t}\Psi_{m} =im\frac{\cos(\gamma\pi)}{2m\pi}\left(\int_{0}^{\infty}\mathrm{e }^{im\eta t}\eta W(\eta)\int_{0}^{\infty}W(\xi)\Delta_{m}G_{m}(\eta,\xi,y) \mathrm{d}\xi\mathrm{d}\eta\right.\] \[\left.+\int_{0}^{\infty}\mathrm{e}^{-im\eta t}\eta W(\eta)\int_{ 0}^{\infty}W(\xi)\Delta_{m}G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta \right).\]
Therefore, it is straightforward to see that
\[\partial_{t}\Omega_{m}=im\frac{\cos(\gamma\pi)}{2m\pi}\left(\int_{0}^{\infty} \mathrm{e}^{im\eta t}\Omega_{m}^{(+)}(\eta)\mathrm{d}\eta+\int_{0}^{\infty} \mathrm{e}^{im\eta t}\Omega_{m}^{(-)}(\eta)\mathrm{d}\eta\right),\]
where
\[\Omega_{m}^{(+)}(\eta) =-2\partial_{\eta}\left(W(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\right)+\partial_{\eta}^{2}\left(\eta W(\eta)\int_{0}^{ \infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\right)\] \[\quad-2W(\eta)\int_{0}^{\infty}W(\xi)\partial_{y}G_{m}(\eta,\xi,y )\mathrm{d}\xi+2\partial_{\eta}\left(\eta W(\eta)\int_{0}^{\infty}W(\xi) \partial_{y}G_{m}(\eta,\xi,y)\mathrm{d}\xi\right)\] \[\quad+\eta W(\eta)\int_{0}^{\infty}W(\xi)\Delta_{m}G_{m}(\eta,\xi,y)\mathrm{d}\xi\]
and similarly
\[\Omega_{m}^{(-)}(\eta) =-2\partial_{\eta}\left(W(\eta)\int_{0}^{\infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\right)+\partial_{\eta}^{2}\left(\eta W(\eta)\int_{0}^{ \infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\right)\] \[\quad+2W(\eta)\int_{0}^{\infty}W(\xi)\partial_{y}G_{m}(-\eta,-\xi,y)\mathrm{d}\xi-2\partial_{\eta}\left(\eta W(\eta)\int_{0}^{\infty}W(\xi) \partial_{y}G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\right)\] \[\quad+\eta W(\eta)\int_{0}^{\infty}W(\xi)\Delta_{m}G_{m}(-\eta,- \xi,y)\mathrm{d}\xi.\]
Now, note that
\[\Omega_{m}^{(+)}(\eta) =\eta\partial_{\eta}^{2}\left(W(\eta)\int_{0}^{\infty}W(\xi)G_{m }(\eta,\xi,y)\mathrm{d}\xi\right)+2\eta\partial_{\eta}\left(W(\eta)\int_{0}^{ \infty}W(\xi)\partial_{y}G_{m}(\eta,\xi,y)\mathrm{d}\xi\right)\] \[\quad+\eta W(\eta)\int_{0}^{\infty}W(\xi)\Delta_{m}G_{m}(\eta, \xi,y)\mathrm{d}\xi\] \[=\eta W^{\prime\prime}(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi, y)\mathrm{d}\xi+2\eta W^{\prime}(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}( \eta,\xi,y)\mathrm{d}\xi\] \[\quad+\eta W(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}^{2}G_{m }(\eta,\xi,y)\mathrm{d}\xi+2\eta W^{\prime}(\eta)\int_{0}^{\infty}W(\xi) \partial_{y}G_{m}(\eta,\xi,y)\mathrm{d}\xi\] \[\quad+2\eta W(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}\partial_{ y}G_{m}(\eta,\xi,y)\mathrm{d}\xi+\eta W(\eta)\int_{0}^{\infty}W(\xi)\Delta_{m}G_{m}( \eta,\xi,y)\mathrm{d}\xi\]
and further observe that \(\left(\partial_{y}+\partial_{\eta}\right)G_{m}(\eta,\xi,y)=0\), from which we deduce that
\[\left(\partial_{\eta}^{2}+2\partial_{\eta}\partial_{y}+\partial_{y}^{2}\right)G_{m}(\eta,\xi,y)=(\partial_{\eta}+\partial_{y})^{2}G_{m}(\eta,\xi,y)=0\]
and
\[\Omega_{m}^{(+)}(\eta)=\eta\Delta_{m}W(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta, \xi,y)\mathrm{d}\xi\]
Now, using (1.8), we see that
\[\Delta_{m}W(\zeta)+\beta^{2}\frac{W(\zeta)}{\zeta^{2}}=0,\]
and thus we can write
\[\Omega_{m}^{(+)}(\eta)=-\beta^{2}\frac{W(\eta)}{\eta}\int_{0}^{\infty}W(\xi)G _{m}(\eta,\xi,y)\mathrm{d}\xi\]
and similarly for \(\Omega_{m}^{(-)}(\eta)\). We finish by assembling and recognising \(\mathrm{P}_{m}\),
\[\partial_{t}\Omega_{m} =im\frac{\cos(\gamma\pi)}{2m\pi}\left(-\int_{0}^{\infty}\mathrm{e }^{im\eta t}\beta^{2}\frac{W(\eta)}{\eta}\int_{0}^{\infty}W(\xi)G_{m}(\eta, \xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\] \[\qquad\qquad\qquad\qquad\qquad\left.-\int_{0}^{\infty}\mathrm{e }^{-im\eta t}\beta^{2}\frac{W(\eta)}{\eta}\int_{0}^{\infty}W(\xi)G_{m}(-\eta, -\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right)\] \[=-im\beta^{2}\mathrm{P}_{m}\]
With this, the proof is concluded.
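As a side remark, the key identity \(\Delta_{m}W(\zeta)+\beta^{2}W(\zeta)/\zeta^{2}=0\) used above is also easy to confirm numerically. The following Python sketch is only a sanity check (it relies on the Whittaker-W routine of the mpmath library and on numerical differentiation; the chosen values of \(m\), \(\beta\) and \(\zeta\) are arbitrary and purely illustrative):

```python
import mpmath as mp

mp.mp.dps = 30                      # extra precision for the numerical derivative

def whittaker_residual(zeta, m=1, beta=0.3):
    """Residual of W'' - m^2 W + beta^2 W / zeta^2 for W(zeta) = W_{0,gamma}(2 m zeta),
    with gamma^2 = 1/4 - beta^2; it should vanish up to numerical error."""
    gamma = mp.sqrt(mp.mpf(1) / 4 - beta**2)
    w = lambda z: mp.whitw(0, gamma, 2 * m * z)
    return mp.diff(w, zeta, 2) - m**2 * w(zeta) + beta**2 * w(zeta) / zeta**2

print(whittaker_residual(mp.mpf("0.7")))   # ~ 0 up to numerical error
```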
## 3. Heuristics for the explicit solutions
The presence of the Whittaker functions in (1.10) and (1.11) is key in the proof of Theorem 1, since they relate \(\Omega_{m}^{(\pm)}\) to \(\mathrm{P}_{m}\) due to (1.8). In fact, this is essentially the main reason why (1.10) and (1.11) provide solutions to (1.3). However, the proof of Theorem 1 does not explain why these Whittaker functions arise in (1.10) and (1.11). This is precisely the purpose of this section, which sets the framework for obtaining (1.10) and (1.11) via the method of the limiting absorption principle.
### Generalized stream-functions and densities
Writing (1.5)-(1.6) in the compact stream-function formulation
\[\partial_{t}\begin{pmatrix}\psi_{m}\\ \rho_{m}\end{pmatrix}+imL_{m}\begin{pmatrix}\psi_{m}\\ \rho_{m}\end{pmatrix}=0, \tag{3.1}\]
we directly obtain its solution as
\[\begin{pmatrix}\psi_{m}\\ \rho_{m}\end{pmatrix}=\mathrm{e}^{-imL_{m}t}\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}, \tag{3.2}\]
where \(L_{m}\) is the linear operator defined by
\[L_{m}=\begin{pmatrix}\Delta_{m}^{-1}(y\Delta_{m})&\beta^{2}\Delta_{m}^{-1}\\ -1&y\end{pmatrix}. \tag{3.3}\]
Using Dunford's formula [4, 10], we have that
\[\begin{pmatrix}\psi_{m}(t,y)\\ \rho_{m}(t,y)\end{pmatrix}=\frac{1}{2\pi i}\int_{\partial\mathrm{D}}\mathrm{e }^{-imct}(c-L_{m})^{-1}\begin{pmatrix}\psi_{m}^{0}(y)\\ \rho_{m}^{0}(y)\end{pmatrix}\,\mathrm{d}c, \tag{3.4}\]
where \(\mathrm{D}\) is any domain containing the spectrum \(\sigma(L_{m})\). On the periodic strip, the spectrum \(\sigma(L_{m})\) is continuous and consists of the real line \(\mathbb{R}\). Hence, we can reduce the contour of integration to
\[\begin{pmatrix}\psi_{m}(t,y)\\ \rho_{m}(t,y)\end{pmatrix}=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\int_{- \infty}^{+\infty}\mathrm{e}^{-imy_{0}t}\left[(-y_{0}-i\varepsilon+L_{m})^{-1}-( -y_{0}+i\varepsilon+L_{m})^{-1}\right]\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\,\mathrm{d}y_{0}. \tag{3.5}\]
For \(\varepsilon>0\), we denote
\[\begin{pmatrix}\psi_{m,\varepsilon}^{\pm}(y,y_{0})\\ \rho_{m,\varepsilon}^{\pm}(y,y_{0})\end{pmatrix}:=(-y_{0}\pm i\varepsilon+L_{m} )^{-1}\begin{pmatrix}\psi_{m}^{0}(y)\\ \rho_{m}^{0}(y)\end{pmatrix}\]
and obtain the coupled system of equations for the generalized stream-functions \(\psi^{\pm}_{m,\varepsilon}\) and generalized densities \(\rho^{\pm}_{m,\varepsilon}\)
\[\omega^{0}_{m}(y) =(y-y_{0}\pm i\varepsilon)\Delta_{m}\psi^{\pm}_{m,\varepsilon}(y,y _{0})+\beta^{2}\rho^{\pm}_{m,\varepsilon}(y,y_{0}),\] \[\rho^{0}_{m}(y) =(y-y_{0}\pm i\varepsilon)\rho^{\pm}_{m,\varepsilon}(y,y_{0})- \psi^{\pm}_{m,\varepsilon}(y,y_{0}).\]
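In more detail, the first equation follows by applying \(\Delta_{m}\) to the first component of the resolvent identity \((-y_{0}\pm i\varepsilon+L_{m})(\psi^{\pm}_{m,\varepsilon},\rho^{\pm}_{m,\varepsilon})^{T}=(\psi^{0}_{m},\rho^{0}_{m})^{T}\) and using \(\Delta_{m}\psi^{0}_{m}=\omega^{0}_{m}\),

\[\Delta_{m}\Big{(}(-y_{0}\pm i\varepsilon)\psi^{\pm}_{m,\varepsilon}+\Delta_{m}^{-1}\big{(}y\Delta_{m}\psi^{\pm}_{m,\varepsilon}\big{)}+\beta^{2}\Delta_{m}^{-1}\rho^{\pm}_{m,\varepsilon}\Big{)}=(y-y_{0}\pm i\varepsilon)\Delta_{m}\psi^{\pm}_{m,\varepsilon}+\beta^{2}\rho^{\pm}_{m,\varepsilon}=\omega^{0}_{m},\]

while the second component yields the second equation directly.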
We first solve for the generalized densities
\[\rho^{\pm}_{m,\varepsilon}(y,y_{0})=\frac{1}{y-y_{0}\pm i\varepsilon}\left( \rho^{0}_{m}(y)+\psi^{\pm}_{m,\varepsilon}(y,y_{0})\right) \tag{3.6}\]
and from there we obtain the following inhomogeneous _Taylor-Goldstein equation_ for the generalized stream-functions \(\psi^{\pm}_{m,\varepsilon}\),
\[\Delta_{m}\psi^{\pm}_{m,\varepsilon}+\beta^{2}\frac{\psi^{\pm}_{m, \varepsilon}}{(y-y_{0}\pm i\varepsilon)^{2}}=\frac{\omega^{0}_{m}}{y-y_{0} \pm i\varepsilon}-\beta^{2}\frac{\rho^{0}_{m}}{(y-y_{0}\pm i\varepsilon)^{2}},\] (TG)
along with the vanishing of \(\psi^{\pm}_{m,\varepsilon}\) at infinity.
### Explicit solutions for the generalized stream-functions and densities
The Taylor-Goldstein equation (TG) admits a fairly explicit Green's function given by
\[\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)=-\frac{1}{2m}\begin{cases}W(y_{0} -z\mp i\varepsilon)W(y-y_{0}\pm i\varepsilon),&y\geq z,\\ W(z-y_{0}\pm i\varepsilon)W(y_{0}-y\mp i\varepsilon),&y\leq z,\end{cases}\]
where we recall \(W(\zeta)=W_{0,\gamma}(2m\zeta)\) for \(\gamma^{2}=\frac{1}{4}-\beta^{2}\) and it is such that
\[\partial_{\zeta}^{2}W+4m^{2}\left(-\frac{1}{4}+\frac{1/4-(1/4-\beta^{2})}{4m^{2}\zeta^{2}}\right)W=0,\quad W(\zeta)\sim\mathrm{e}^{-m\zeta},\;\text{as}\;\zeta\to\infty, \tag{3.7}\]
since the Whittaker function \(W_{0,\gamma}\) satisfies (1.8). To obtain suitable formulas for the generalized stream-functions and densities, define
\[H^{\pm}_{m,\varepsilon}(z,y_{0}):=\Delta_{m}\rho^{0}_{m}(z)-\frac{1}{\beta^{2 }}\Delta_{m}\big{(}(z-y_{0}\pm i\varepsilon)\omega^{0}_{m}(z)\big{)}.\]
and assume that the initial data vanish at infinity. Then, the solution \(\psi^{\pm}_{m,\varepsilon}(y,y_{0})\) to (TG) is
\[\psi^{\pm}_{m,\varepsilon}(y,y_{0})=\frac{1}{\beta^{2}}(y-y_{0}\pm i \varepsilon)\omega^{0}_{m}(y)-\rho^{0}_{m}(y)+\int_{-\infty}^{+\infty} \mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)H^{\pm}_{m,\varepsilon}(z,y_{0}) \mathrm{d}z. \tag{3.8}\]
and the generalized density is given by
\[\rho^{\pm}_{m,\varepsilon}(y,y_{0})=\frac{1}{\beta^{2}}\omega^{0}_{m}(y)+\frac {1}{y-y_{0}\pm i\varepsilon}\int_{-\infty}^{+\infty}\mathcal{G}^{\pm}_{m, \varepsilon}(y,y_{0},z)H^{\pm}_{m,\varepsilon}(z,y_{0})\mathrm{d}z. \tag{3.9}\]
### The limiting absorption principle
With (3.8) and (3.9) at hand, one may carry out the limiting absorption principle explicitly, that is, precisely compute (3.5). For instance, to obtain \(\psi_{m}(t,y)\) one may compute
\[\lim_{\varepsilon\to 0}\int_{\mathbb{R}}\mathrm{e}^{-imy_{0}t}\int_{\mathbb{R}} \left(\mathcal{G}^{-}_{m,\varepsilon}(y,y_{0},z)-\mathcal{G}^{+}_{m, \varepsilon}(y,y_{0},z)\right)H^{-}_{m,\varepsilon}(z,y_{0})\mathrm{d}z \mathrm{d}y_{0}\]
and note that with the change of variables \(\xi=z-y_{0}\) and \(\eta=y-y_{0}\),
\[\int_{\mathbb{R}}\mathcal{G}_{m}^{\pm}(y,z)H_{m,\varepsilon}^{-}(z, y_{0})\mathrm{d}z =-\frac{1}{2m}W(y-y_{0}\pm i\varepsilon)\int_{-\infty}^{y}W(y_{0}-z \mp i\varepsilon)H_{m,\varepsilon}^{-}(z,y_{0})\mathrm{d}z\] \[\quad-\frac{1}{2m}W(y_{0}-y\mp i\varepsilon)\int_{y}^{\infty}W(z- y_{0}\pm i\varepsilon)H_{m,\varepsilon}^{-}(z,y_{0})\mathrm{d}z\] \[=-\frac{1}{2m}W(\eta\pm i\varepsilon))\int_{-\infty}^{\eta}W(- \xi\mp i\varepsilon))H_{m,\varepsilon}^{-}(\xi+y-\eta,y-\eta)\mathrm{d}\xi\] \[\quad-\frac{1}{2m}W(-\eta\mp i\varepsilon)\int_{\eta}^{\infty}W( \xi\pm i\varepsilon)H_{m,\varepsilon}^{-}(\xi+y-\eta,y-\eta)\mathrm{d}\xi.\]
Setting \(G_{m,\varepsilon}^{\pm}(\eta,\xi,y):=H_{m,\varepsilon}^{\pm}(\xi+y-\eta,y-\eta)\) and \(G_{m}:=\lim_{\varepsilon\to 0}G_{m,\varepsilon}^{\pm}=G_{m,0}^{\pm}\), we find that
\[\int_{\mathbb{R}}\big{(}\mathcal{G}_{m}^{-}(y,z)-\mathcal{G}_{m} ^{+}(y,z)\big{)} H_{m,\varepsilon}^{-}(z,y_{0})\mathrm{d}z\] \[=\frac{1}{2m}\Big{(}W(\eta+i\varepsilon)-W(\eta-i\varepsilon) \Big{)}\int_{-\infty}^{\eta}W(-\xi-i\varepsilon)G_{m,\varepsilon}^{-}(\eta, \xi,y)\mathrm{d}\xi\] \[\quad+\frac{1}{2m}W(\eta-i\varepsilon)\int_{-\infty}^{\eta} \Big{(}W(-\xi-i\varepsilon)-W(-\xi+i\varepsilon)\Big{)}G_{m,\varepsilon}^{-}( \eta,\xi,y)\mathrm{d}\xi\] \[\quad+\frac{1}{2m}\Big{(}W(-\eta-i\varepsilon)-W(-\eta+i \varepsilon)\Big{)}\int_{\eta}^{\infty}W(\xi+i\varepsilon)G_{m,\varepsilon}^{-} (\eta,\xi,y)\mathrm{d}\xi\] \[\quad+\frac{1}{2m}W(-\eta+i\varepsilon)\int_{\eta}^{\infty} \Big{(}W(\xi+i\varepsilon)-W(\xi-i\varepsilon)\Big{)}G_{m,\varepsilon}^{-}( \eta,\xi,y)\mathrm{d}\xi.\]
Taking the limit as \(\varepsilon\) vanishes is not trivial. Indeed, \(W\) has a branch cut on the negative real axis, see Appendix A, and is thus not continuous there. For this reason, we need the analytic continuation of \(W\), recorded in the following lemma, whose proof is postponed to Appendix A.
**Lemma 3.1** (Analytic continuation).: _Let \(\eta\geq 0\) and \(0<\varepsilon<1\). Then,_
\[\lim_{\varepsilon\to 0}W(-\eta+i\varepsilon)-W(-\eta-i\varepsilon)=2i\cos( \gamma\pi)W(\eta).\]
The whole limiting procedure can be carried out rigorously and produces the explicit formulas exhibited in Theorem 1. However, for the sake of brevity, we opted for showing the validity of the explicit formulas by checking they satisfy the linearized system of equations. When the equations (1.3) are posed in \(\mathbb{T}\times[0,1]\), the limiting procedure becomes much more complicated. Nevertheless, it is still possible to obtain asymptotic expansions on the resulting stream-function and density near the critical layer that capture the same nature of the explicit formulas (1.10) and (1.11) for the spatial setting \(\mathbb{T}\times\mathbb{R}\), we refer the reader to [3].
## 4. Proof of Theorem 2
In this section we obtain the point-wise decay rates in time for the stream function \(\psi_{m}(t,y)\) and the density \(\rho_{m}(t,y)\). These will be obtained as a direct consequence of the following lemma, which concerns the time decay of general oscillatory integrals. Before stating it, we introduce the following spaces of functions.
**Definition 4.1**.: For \(\delta_{0}>0\) we define
\[X:=\Big{\{}f:[0,\infty)\times\mathbb{R}\to\mathbb{R}\text{ such that }\|f\|_{X}:=\|f\|_{L^{\infty}_{\eta}\big{(}0,\delta_{0};L^{2}_{y}(\mathbb{R})\big{)}}<\infty\Big{\}}\]
and also
\[Y:=\Big{\{}f:[0,\infty)\times\mathbb{R}\to\mathbb{R}\text{ such that }\|f\|_{Y}:=\|f\|_{L^{1}_{\eta}\big{(}\delta_{0},\infty;L^{2}_{y}(\mathbb{R})\big{)}}<\infty\Big{\}}\,.\]
**Lemma 4.2**.: _Let \(0<\alpha<1\), \(\delta_{0}=\frac{1}{2m}\) and \(t\geq 1\). Let \(F=F(\eta,y):(0,\infty)\times\mathbb{R}\to\mathbb{R}\) be such that \(\|F(\eta,\cdot)\|_{L^{2}_{y}(\mathbb{R})}\) vanishes as \(\eta\to+\infty\) and \(\partial_{\eta}F\in Y\). We have the following._
1. _Assume_ \(F\) _admits the decomposition_ \(F(\eta,y)=\eta^{-\alpha}E_{1}(\eta,y)\)_, for some_ \(E_{1}\in X\) _and_ \(\partial_{\eta}F(\eta,y)=\eta^{-\alpha-1}E_{2}(\eta,y)\) _for some_ \(E_{2}\in X\)_. Then,_ \[\left\|\int_{0}^{\infty}\mathrm{e}^{im\eta t}F(\eta,y)\mathrm{d}\eta\right\|_{ L_{y}^{2}(\mathbb{R})}\lesssim\frac{1}{(mt)^{1-\alpha}}\left(\|E_{1}\|_{X}+\|E_{2}\|_{X} \right)+\frac{1}{mt}\|\partial_{\eta}F\|_{Y}.\]
2. _Assume_ \(F\) _admits the decomposition_ \(F(\eta,y)=\eta^{-\alpha}\left(E_{1,1}(\eta,y)+\log(\eta)E_{1,2}(\eta,y)\right)\) _and_ \(\partial_{\eta}F(\eta,y)=\eta^{-\alpha-1}(E_{2,1}(\eta,y)+\log(\eta)E_{2,2}( \eta,y))\) _for some_ \(E_{i,j}\in X\)_, with_ \(i,j\in\{1,2\}\)_. Then,_ \[\left\|\int_{0}^{\infty}\mathrm{e}^{im\eta t}F(\eta,y)\mathrm{d}\eta\right\|_ {L_{y}^{2}(\mathbb{R})}\lesssim\frac{1+\log(mt)}{(mt)^{1-\alpha}}\sum_{i,j\in \{1,2\}}\|E_{i,j}\|_{X}+\frac{1}{mt}\|\partial_{\eta}F\|_{Y}.\]
Proof.: Let \(\delta\in(0,\delta_{0})\) and set
\[\mathcal{I}(y)=\int_{0}^{\infty}\mathrm{e}^{im\eta t}F(\eta,y)\mathrm{d}\eta= \left(\int_{0}^{\delta}+\int_{\delta}^{\infty}\right)\mathrm{e}^{im\eta t}F( \eta,y)\mathrm{d}\eta=\mathcal{I}_{1}(y)+\mathcal{I}_{2}(y).\]
\(\bullet\)**Proof of (i).** We begin by estimating \(\mathcal{I}_{1}(y)\). Since we integrate in \((0,\delta)\) and \(\delta\in(0,\delta_{0})\), we can write \(F(\eta,y)=\eta^{-\alpha}E_{1}(\eta,y)\) and directly estimate using Minkowski's inequality
\[\left\|\mathcal{I}_{1}\right\|_{L_{y}^{2}(\mathbb{R})}\leq\|E_{1}\|_{X}\int_{ 0}^{\delta}\eta^{-\alpha}\mathrm{d}\eta=\frac{\|E_{1}\|_{X}}{1-\alpha}\delta^ {1-\alpha}.\]
On the other hand, since \(F\) vanishes at infinity, integrating by parts we can write
\[\mathcal{I}_{2} =\frac{1}{imt}\int_{\delta}^{\infty}\partial_{\eta}\left(\mathrm{ e}^{im\eta t}\right)F(\eta,y)\mathrm{d}\eta\] \[=-\frac{1}{imt}\mathrm{e}^{im\delta t}F(\delta,y)-\frac{1}{imt} \int_{\delta}^{\delta_{0}}\mathrm{e}^{im\eta t}\partial_{\eta}F(\eta,y) \mathrm{d}\eta-\frac{1}{imt}\int_{\delta_{0}}^{\infty}\mathrm{e}^{im\eta t} \partial_{\eta}F(\eta,y)\mathrm{d}\eta.\]
and we estimate
\[\left\|\mathcal{I}_{2}\right\|_{L_{y}^{2}(\mathbb{R})}\lesssim\frac{1}{mt} \delta^{-\alpha}\left(\|E_{1}\|_{X}+\|E_{2}\|_{X}\right)+\frac{1}{mt}\|\partial _{\eta}F\|_{Y}.\]
Therefore, we conclude that
\[\|\mathcal{I}\|_{L_{y}^{2}(\mathbb{R})}\lesssim\left(\delta^{1-\alpha}+\frac{ 1}{mt}\delta^{-\alpha}\right)\left(\|E_{1}\|_{X}+\|E_{2}\|_{X}\right)+\frac{1 }{mt}\|\partial_{\eta}F\|_{Y}.\]
For \(\delta=\frac{1}{4mt}<\delta_{0}\) we obtain the desired decay estimate
\[\left\|\int_{0}^{\infty}\mathrm{e}^{im\eta t}F(\eta,y)\mathrm{d}\eta\right\|_ {L_{y}^{2}(\mathbb{R})}\lesssim\frac{1}{(mt)^{1-\alpha}}\left(\|E_{1}\|_{X}+\| E_{2}\|_{X}\right)+\frac{1}{mt}\|\partial_{\eta}F\|_{Y}.\]
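Indeed, with this choice of \(\delta\) the two competing terms are balanced, since

\[\delta^{1-\alpha}=4^{\alpha-1}(mt)^{\alpha-1}\qquad\text{and}\qquad\frac{1}{mt}\delta^{-\alpha}=4^{\alpha}(mt)^{\alpha-1},\]

so that both are of size \((mt)^{-(1-\alpha)}\) up to absolute constants.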
\(\bullet\)**Proof of (ii).** For \(\mathcal{I}_{1}\), since we have the expansion \(F(\eta,y)=\eta^{-\alpha}(E_{1,1}(\eta,y)+\log(\eta)E_{1,2}(\eta,y))\) for \(\eta\in(0,\delta_{0})\) and \(\delta<1\), we have
\[\|\mathcal{I}_{1}\|_{L_{y}^{2}(\mathbb{R})} \lesssim\int_{0}^{\delta}\eta^{-\alpha}\left(1+|\log(\eta)|\right) \left(\|E_{1,1}\|_{X}+\|E_{1,2}\|_{X}\right)\mathrm{d}\eta\] \[\lesssim\delta^{1-\alpha}\left(1+\big{|}\log\left(\delta\right) \big{|}\right)\left(\|E_{1,1}\|_{X}+\|E_{1,2}\|_{X}\right).\]
As for \(\mathcal{I}_{2}\), integrating by parts, since \(F\) vanishes at infinity and using the asymptotic expansion \(\partial_{\eta}F(\eta,y)=\eta^{-\alpha-1}(E_{2,1}(\eta,y)+\log(\eta)E_{2,2}( \eta,y))\), one can estimate
\[\|\mathcal{I}_{2}\|_{L_{y}^{2}(\mathbb{R})} \leq\frac{1}{mt}\left(\|F(\delta,\cdot)\|_{L_{y}^{2}(\mathbb{R})} +\int_{\delta}^{\delta_{0}}\|\partial_{\eta}F(\eta,\cdot)\|_{L_{y}^{2}( \mathbb{R})}\mathrm{d}\eta+\int_{\delta_{0}}^{\infty}\|\partial_{\eta}F(\eta, \cdot)\|_{L_{y}^{2}(\mathbb{R})}\mathrm{d}\eta\right)\] \[\lesssim\frac{1}{mt}\delta^{-\alpha}\left(1+\big{|}\log\left( \delta\right)\big{|}\right)\sum_{i,j\in\{1,2\}}\|E_{i,j}\|_{X}+\frac{1}{mt}\| \partial_{\eta}F\|_{Y}.\]
Choosing once again \(\delta=\frac{1}{4mt}\) yields the estimate
\[\left\|\int_{0}^{\infty}\mathrm{e}^{im\eta t}F(\eta,y)\mathrm{d}\eta\right\|_{L_{ y}^{2}(\mathbb{R})}\lesssim\frac{1}{(mt)^{1-\alpha}}(1+\log(mt))\sum_{i,j\in\{1,2\}} \|E_{i,j}\|_{X}+\frac{1}{mt}\|\partial_{\eta}F\|_{Y},\]
which concludes the proof.
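Before applying the lemma, it is instructive to see the rate of part (i) numerically. The Python sketch below uses the model amplitude \(F(\eta)=\eta^{-\alpha}\mathrm{e}^{-\eta}\), chosen only because it has the prescribed singularity at \(\eta=0\) and decays at infinity (it is not one of the amplitudes appearing later, and we take \(m=1\)); the rescaled modulus of the oscillatory integral levels off, confirming the \(t^{-(1-\alpha)}\) decay.

```python
import numpy as np

def oscillatory_integral(t, alpha, eta_max=60.0, n=1_000_000):
    """Crude evaluation of |int_0^inf e^{i t eta} eta^{-alpha} e^{-eta} d(eta)|.

    The substitution eta = u^{1/(1-alpha)} absorbs the singular factor,
    eta^{-alpha} d(eta) = p du, so a plain trapezoidal rule in u suffices here."""
    p = 1.0 / (1.0 - alpha)
    u = np.linspace(0.0, eta_max ** (1.0 - alpha), n)
    eta = u ** p
    f = p * np.exp((1j * t - 1.0) * eta)
    return abs(np.sum((f[1:] + f[:-1]) * np.diff(u)) / 2.0)

alpha = 0.75                          # plays the role of 1/2 + mu
for t in (10, 40, 160, 640):
    val = oscillatory_integral(t, alpha)
    # the rescaled value settles near Gamma(1 - alpha) ~ 3.63, i.e. decay ~ t^{-(1-alpha)}
    print(t, val, val * t ** (1.0 - alpha))
```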
We now obtain the decay estimates for the stream-function \(\psi_{m}\).
**Proposition 4.3**.: _The following holds for all \(t\geq 1\)._
* _If_ \(\beta^{2}\neq 1/4\)_, then_ \[\|\psi_{m}(t)\|_{L_{y}^{2}(\mathbb{R})}\lesssim m^{-3}t^{-\frac{3}{2}+\mu}Q_{2, m}.\]
* _If_ \(\beta^{2}=1/4\)_, then_ \[\|\psi_{m}(t)\|_{L_{y}^{2}(\mathbb{R})}\lesssim m^{-3}t^{-\frac{3}{2}}(1+\log (mt))Q_{2,m}.\]
Proof.: We have from Theorem 1 that
\[\psi_{m}(t,y)=\frac{\mathrm{e}^{-imyt}}{2m\pi}\cos(\gamma\pi) \left(\int_{0}^{\infty}\mathrm{e}^{im\eta t}W(\eta)\int_{0}^{\infty}W(\xi)G_{m} (\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\] \[\left.-\int_{0}^{\infty}\mathrm{e}^{-im\eta t}W(\eta)\int_{0}^{ \infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right).\]
We show the decay estimates for
\[\mathcal{T}^{+}(y):=\int_{0}^{\infty}\mathrm{e}^{im\eta t}W(\eta)\int_{0}^{ \infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta \tag{4.1}\]
since one can directly replicate the arguments to obtain the same estimates for
\[\mathcal{T}^{-}(y):=\int_{0}^{\infty}\mathrm{e}^{-im\eta t}W(\eta)\int_{0}^{ \infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta.\]
The time decay is achieved appealing to Lemma 4.2. Integrating (4.1) by parts in \(\eta\) provides
\[\mathcal{T}^{+}(y) =\int_{0}^{\infty}\frac{1}{imt}\partial_{\eta}(\mathrm{e}^{im \eta t})W(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\] \[=-\frac{1}{imt}\int_{0}^{\infty}\mathrm{e}^{im\eta t}\left(W^{ \prime}(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi+W(\eta) \int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}(\eta,\xi,y)\mathrm{d}\xi\right) \mathrm{d}\eta.\]
and further define
\[F(\eta,y):=W^{\prime}(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d} \xi+W(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}(\eta,\xi,y)\mathrm{d}\xi. \tag{4.2}\]
Clearly,
\[\partial_{\eta}F(\eta,y) =W^{\prime\prime}(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y) \mathrm{d}\xi+2W^{\prime}(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}( \eta,\xi,y)\mathrm{d}\xi\] \[\quad+W(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}^{2}G_{m}(\eta, \xi,y)\mathrm{d}\xi.\]
We begin by checking that \(\partial_{\eta}F\in Y\). For this, (3.7) yields
\[\int_{\delta_{0}}^{\infty}|W^{\prime\prime}(\eta)|\mathrm{d}\eta=4m^{2}\int_{ \delta_{0}}^{\infty}\left|-\frac{1}{4}+\beta^{2}\frac{1}{(2m\eta)^{2}}\right| |W_{0,\gamma}(2m\eta)|\,\mathrm{d}\eta\lesssim m\|W_{0,\gamma}\|_{L^{1}}.\]
Similarly, we easily estimate
\[\int_{\delta_{0}}^{\infty}|W^{\prime}(\eta)|\mathrm{d}\eta\leq\|W^{\prime}\|_{L^{1}}=\|W^{\prime}_{0,\gamma}\|_{L^{1}}\]
and
\[\int_{\delta_{0}}^{\infty}|W(\eta)|{\rm d}\eta\leq\frac{1}{2m}\|W_{0,\gamma}\|_{L^{1}(0,\infty)}.\]
Moreover we have that,
\[\left\|\int_{0}^{\infty}W(\xi)\partial_{\eta}^{j}G_{m}(\eta,\xi,y){\rm d}\xi\right\|_{L^{2}_{y}(\mathbb{R})}\leq\|W\|_{L^{1}}Q_{j,m}\leq\frac{1}{m}\|W_{0,\gamma}\|_{L^{1}}Q_{j,m}, \tag{4.3}\]
for all \(j\geq 0\). With this, we infer that \(\partial_{\eta}F\in Y\) and
\[\|\partial_{\eta}F\|_{Y}\lesssim Q_{2,m}.\]
Next, we check the asymptotic expansions of \(F(\eta,y)\) and \(\partial_{\eta}F(\eta,y)\) for \(\eta\in[0,\delta_{0}]\). For this, we will distinguish the two cases.
\(\bullet\)**Case \(\beta^{2}\neq 1/4\).** We can use Lemma A.1 to write
\[W(\eta)=\eta^{\frac{1}{2}-\mu}\mathcal{E}_{m,2}(\eta),\qquad W^{\prime}(\eta) =\eta^{-\frac{1}{2}-\mu}\mathcal{E}_{m,1}(\eta),\qquad W^{\prime\prime}(\eta) =\eta^{-\frac{3}{2}-\mu}\mathcal{E}_{m,2}(\eta),\]
which yields
\[F(\eta,y) =\eta^{-\frac{1}{2}-\mu}\mathcal{E}_{m,1}(\eta)\int_{0}^{\infty}W (\xi)G_{m}(\eta,\xi,y){\rm d}\xi+\eta^{\frac{1}{2}+\mu}\mathcal{E}_{m,0}(\eta )\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}(\eta,\xi,y){\rm d}\xi\] \[=\eta^{-\frac{1}{2}-\mu}E_{1}(\eta,y),\]
with \(\|E_{1}\|_{X}\lesssim m^{-\frac{1}{2}-\mu}Q_{1,m}\) and
\[\partial_{\eta}F(\eta,y) =\eta^{-\frac{3}{2}-\mu}\mathcal{E}_{m,2}(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y){\rm d}\xi+2\eta^{-\frac{1}{2}-\mu}\mathcal{E}_{m,1}(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}(\eta,\xi,y){\rm d}\xi\] \[\qquad\qquad+\eta^{\frac{1}{2}-\mu}\mathcal{E}_{m,0}(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}^{2}G_{m}(\eta,\xi,y){\rm d}\xi\] \[=\eta^{-\frac{3}{2}-\mu}E_{2}(\eta,y),\]
with \(\|E_{2}\|_{X}\lesssim m^{-\frac{1}{2}-\mu}Q_{2,m}\). With this, for \(\alpha=\frac{1}{2}+\mu\), we show that \(F(\eta,y)\) defined above satisfies the conditions of Lemma 4.2 and we conclude that
\[\left\|\int_{0}^{\infty}{\rm e}^{im\eta t}F(\eta,y){\rm d}\eta\right\|_{L^{2} _{y}(\mathbb{R})}\lesssim\frac{1}{mt^{1-\alpha}}Q_{2,m},\]
which yields the claimed bound for \(\|\psi_{m}(t)\|_{L^{2}_{y}(\mathbb{R})}\).
\(\bullet\)**Case \(\beta^{2}=1/4\).** We shall now use the asymptotic expansions of Lemma A.2 to check the validity of the hypotheses required to apply Lemma 4.2. In this direction, for \(\eta\in(0,\delta_{0})\) we can write
\[F(\eta,y) =\eta^{-\frac{1}{2}}\left[\mathcal{E}_{m,1,1}(\eta)+\log(\eta) \mathcal{E}_{m,1,2}(\eta)\right]\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y){\rm d }\xi\] \[\quad+\eta^{\frac{1}{2}}\left[\mathcal{E}_{m,0,1}(\eta)+\log(\eta )\mathcal{E}_{m,0,2}(\eta)\right]\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}( \eta,\xi,y){\rm d}\xi\] \[=\eta^{-\frac{1}{2}}(E_{1,1}(\eta,y)+\log(\eta)E_{1,2}(\eta,y)),\]
with the uniform bounds
\[\|E_{1,1}\|_{X}\lesssim m^{-\frac{1}{2}}\left(1+\log\left(m\right)\right)Q_{1,m},\quad\|E_{1,2}\|_{X}\lesssim m^{-\frac{1}{2}}Q_{1,m}.\]
Similarly, for \(\partial_{\eta}F(\eta,y)\) we can write
\[\partial_{\eta}F(\eta,y) =\eta^{-\frac{3}{2}}\left[\mathcal{E}_{m,2,1}(\eta)+\log(\eta)\mathcal{E}_{m,2,2}(\eta)\right]\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\] \[\quad+2\eta^{-\frac{1}{2}}\left[\mathcal{E}_{m,1,1}(\eta)+\log(\eta)\mathcal{E}_{m,1,2}(\eta)\right]\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}(\eta,\xi,y)\mathrm{d}\xi\] \[\quad+\eta^{\frac{1}{2}}\left[\mathcal{E}_{m,0,1}(\eta)+\log(\eta)\mathcal{E}_{m,0,2}(\eta)\right]\int_{0}^{\infty}W(\xi)\partial_{\eta}^{2}G_{m}(\eta,\xi,y)\mathrm{d}\xi\] \[=\eta^{-\frac{3}{2}}\big{(}E_{2,1}(\eta,y)+\log(\eta)E_{2,2}(\eta,y)\big{)},\]
with the bounds
\[\|E_{2,1}\|_{X}\lesssim m^{-\frac{1}{2}}\left(1+\log\left(m\right)\right)Q_{2,m},\quad\|E_{2,2}\|_{X}\lesssim m^{-\frac{1}{2}}Q_{2,m}.\]
Hence, we apply Lemma 4.2 for \(\alpha=\frac{1}{2}\) and \(\delta=\frac{1}{4mt}\) to obtain
\[\left\|\int_{0}^{\infty}\mathrm{e}^{im\eta t}F(\eta,y)\mathrm{d} \eta\right\|_{L^{2}_{y}(\mathbb{R})} \lesssim\frac{1}{(mt)^{\frac{1}{2}}}(1+\log\left(mt\right))\sum_{ i,j\in\{1,2\}}\|E_{i,j}\|_{X}+\frac{1}{mt}\|\partial_{\eta}F\|_{Y}\] \[\lesssim\frac{1}{mt^{\frac{1}{2}}}(1+\log\left(mt\right))Q_{2,m}.\]
From here, the stated bound for \(\|\psi_{m}(t)\|_{L^{2}_{y}(\mathbb{R})}\) follows easily.
From the explicit expression of \(\psi_{m}(t,y)\), replicating the proof of Proposition 4.3 yields the following result.
**Corollary 4.4**.: _The following holds for all \(t\geq 1\)._
* _If_ \(\beta^{2}\neq 1/4\)_, then_ \[\|\partial_{y}\psi_{m}(t)\|_{L^{2}_{y}(\mathbb{R})}\lesssim m^{-2}t^{-\frac{1} {2}+\mu}Q_{1,m}.\]
* _If_ \(\beta^{2}=1/4\)_, then_ \[\|\partial_{y}\psi_{m}(t)\|_{L^{2}_{y}(\mathbb{R})}\lesssim m^{-2}t^{-\frac{1}{2}}\left(1+\log\left(mt\right)\right)Q_{1,m}.\]
Proof.: Note that
\[\partial_{y}\psi_{m}(t,y)=-imt\psi_{m}(t,y)+\frac{\mathrm{e}^{- imyt}}{2m\pi}\cos(\gamma\pi)\left(\int_{0}^{\infty}\mathrm{e}^{im\eta t}W(\eta) \int_{0}^{\infty}W(\xi)\partial_{y}G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\] \[\left.-\int_{0}^{\infty}\mathrm{e}^{-im\eta t}W(\eta)\int_{0}^{ \infty}W(\xi)\partial_{y}G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right).\]
In particular, we observe that
\[-imt \frac{\mathrm{e}^{-imyt}}{2m\pi}\cos(\gamma\pi)\int_{0}^{\infty}\mathrm{e}^{im\eta t}W(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\] \[=-\frac{\mathrm{e}^{-imyt}}{2m\pi}\cos(\gamma\pi)\int_{0}^{\infty}\partial_{\eta}\left(\mathrm{e}^{im\eta t}\right)W(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\] \[=\frac{\mathrm{e}^{-imyt}}{2m\pi}\cos(\gamma\pi)\int_{0}^{\infty}\mathrm{e}^{im\eta t}W^{\prime}(\eta)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\] \[\quad+\frac{\mathrm{e}^{-imyt}}{2m\pi}\cos(\gamma\pi)\int_{0}^{\infty}\mathrm{e}^{im\eta t}W(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta.\]
Under the observation that \((\partial_{\eta}+\partial_{y})\mathcal{G}_{m}(\eta,\xi,y)=0\), we conclude that
\[\partial_{y}\psi_{m}(t,y)=\frac{\mathrm{e}^{-imyt}}{2m\pi}\cos( \gamma\pi)\left(\int_{0}^{\infty}\mathrm{e}^{im\eta t}W^{\prime}(\eta)\int_{0}^ {\infty}W(\xi)\partial_{y}G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\\ \left.-\int_{0}^{\infty}\mathrm{e}^{-im\eta t}W^{\prime}(\eta) \int_{0}^{\infty}W(\xi)\partial_{y}G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d} \eta\right).\]
The corollary then follows by applying Lemma 4.2; we omit the details.
We now obtain the decay in time of the perturbed density.
**Proposition 4.5**.: _The following holds for all \(t\geq 1\)._
* _If_ \(\beta^{2}\neq 1/4\)_, then_ \[\|\rho_{m}(t)\|_{L_{y}^{2}(\mathbb{R})}\lesssim m^{-2}t^{-\frac{1}{2}+\mu}Q_{1,m}.\]
* _If_ \(\beta^{2}=1/4\)_, then_ \[\|\rho_{m}(t)\|_{L^{2}_{y}(\mathbb{R})}\lesssim m^{-2}t^{-\frac{1}{2}}\left(1 +\log\left(mt\right)\right)Q_{1,m}.\]
Proof.: From Theorem 1,
\[\rho_{m}(t,y)=\frac{\mathrm{e}^{-imyt}}{2m\pi}\cos(\gamma\pi) \left(\int_{0}^{\infty}\mathrm{e}^{im\eta t}\frac{W(\eta)}{\eta}\int_{0}^{ \infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right.\\ \left.+\int_{0}^{\infty}\mathrm{e}^{-im\eta t}\frac{W(\eta)}{\eta }\int_{0}^{\infty}W(\xi)G_{m}(-\eta,-\xi,y)\mathrm{d}\xi\mathrm{d}\eta\right).\]
As before, we only show the decay estimate for
\[\mathcal{T}:=\int_{0}^{\infty}\mathrm{e}^{im\eta t}\frac{W(\eta)}{\eta}\int_{ 0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\mathrm{d}\eta.\]
Denoting
\[F(\eta,y)=\frac{W(\eta)}{\eta}\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi,\]
we shall apply Lemma 4.2. We compute
\[\partial_{\eta}F(\eta,y)=\left(\frac{W^{\prime}(\eta)}{\eta}-\frac{W(\eta)}{ \eta^{2}}\right)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi+\frac{W (\eta)}{\eta}\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}(\eta,\xi,y)\mathrm{d}\xi\]
and we observe the following bounds:
\[\int_{\delta_{0}}^{\infty}\left|\frac{W^{\prime}(\eta)}{\eta} \right|\mathrm{d}\eta \leq\delta_{0}{}^{-1}\|W^{\prime}\|_{L^{1}(\delta_{0},\infty)}= \delta_{0}{}^{-1}\|W^{\prime}_{0,\gamma}\|_{L^{1}(1,\infty)}\] \[\int_{\delta_{0}}^{\infty}\left|\frac{W(\eta)}{\eta^{2}}\right| \mathrm{d}\eta \leq\delta_{0}{}^{-2}\|W\|_{L^{1}(\delta_{0},\infty)}=\delta_{0}{ }^{-1}\|W_{0,\gamma}\|_{L^{1}(1,\infty)}\] \[\int_{\delta_{0}}^{\infty}\left|\frac{W(\eta)}{\eta}\right| \mathrm{d}\eta \leq\delta_{0}{}^{-1}\|W\|_{L^{1}(\delta_{0},\infty)}=\|W_{0, \gamma}\|_{L^{1}(1,\infty)}\]
since \(\delta_{0}{}^{-1}=2m\). Together with (4.3) we deduce that \(\partial_{\eta}F\in Y\) and we can estimate
\[\|\partial_{\eta}F\|_{Y}\lesssim Q_{1,m}.\]
We next treat each \(\beta^{2}\) case separately to obtain the correct asymptotic expansions.
\(\bullet\)**Case \(\beta^{2}\neq 1/4\).** Following the asymptotic expansions of Lemma A.1, we can write
\[F(\eta,y)=\eta^{-\frac{1}{2}-\mu}\mathcal{E}_{m,0}(\eta)\int_{0}^{\infty}W(\xi )G_{m}(\eta,\xi,y)\mathrm{d}\xi=\eta^{-\frac{1}{2}-\mu}E_{1}(\eta,y),\]
with \(\|E_{1}\|_{X}\lesssim m^{-\frac{1}{2}-\mu}Q_{0,m}\). Similarly, we have that
\[\partial_{\eta}F(\eta,y) =\left(\eta^{-\frac{3}{2}-\mu}\mathcal{E}_{m,1}(\eta)-\eta^{-\frac{3}{2}-\mu}\mathcal{E}_{m,0}(\eta)\right)\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y)\mathrm{d}\xi\] \[\qquad+\eta^{-\frac{1}{2}-\mu}\mathcal{E}_{m,0}(\eta)\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{m}(\eta,\xi,y)\mathrm{d}\xi\] \[=\eta^{-\frac{3}{2}-\mu}E_{2}(\eta,y),\]
where \(\|E_{2}\|_{X}\lesssim m^{-\frac{1}{2}-\mu}Q_{1,m}\). Hence, taking \(\alpha=\frac{1}{2}+\mu\), we apply Lemma 4.2 directly and obtain the decay estimate
\[\left\|\int_{0}^{\infty}\mathrm{e}^{im\eta t}F(\eta,y)\mathrm{d}\eta\right\| _{L_{y}^{2}(\mathbb{R})}\lesssim m^{-1}t^{-\frac{1}{2}+\mu}Q_{1,m},\]
from which the proof follows.
\(\bullet\)**Case \(\beta^{2}=1/4\).** Thanks to the asymptotic expansions of Lemma A.2, we have
\[F(\eta,y) =\eta^{-\frac{1}{2}}\big{(}\mathcal{E}_{m,0,1}(\eta)+\log(\eta) \mathcal{E}_{m,0,2}(\eta)\big{)}\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y) \mathrm{d}\xi\] \[=\eta^{-\frac{1}{2}}\big{(}E_{1,1}(\eta,y)+\log(\eta)E_{1,2}(\eta,y)\big{)},\]
with the bounds
\[\|E_{1,1}\|_{X}\lesssim m^{-\frac{1}{2}}\left(1+\log\left(m\right)\right)Q_{0, m},\quad\|E_{1,2}\|_{X}\lesssim m^{-\frac{1}{2}}Q_{0,m}.\]
As for \(\partial_{\eta}F\), we have that
\[\partial_{\eta}F(\eta,y) =\eta^{-\frac{3}{2}}\big{(}\mathcal{E}_{m,1,1}(\eta)+\log(\eta) \mathcal{E}_{m,1,2}(\eta)\big{)}\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y) \mathrm{d}\xi\] \[\qquad-\eta^{-\frac{3}{2}}\big{(}\mathcal{E}_{m,0,1}(\eta)+\log( \eta)\mathcal{E}_{m,0,2}(\eta)\big{)}\int_{0}^{\infty}W(\xi)G_{m}(\eta,\xi,y) \mathrm{d}\xi\] \[\qquad+\eta^{-\frac{1}{2}}\big{(}\mathcal{E}_{m,0,1}(\eta)+\log( \eta)\mathcal{E}_{m,0,2}(\eta)\big{)}\int_{0}^{\infty}W(\xi)\partial_{\eta}G_{ m}(\eta,\xi,y)\mathrm{d}\xi\] \[=\eta^{-\frac{3}{2}}\big{(}E_{2,1}(\eta,y)+\log(\eta)E_{2,2}(\eta,y)\big{)},\]
where we can bound
\[\|E_{2,1}\|_{X}\lesssim m^{-\frac{1}{2}}\left(1+\log\left(m\right)\right)Q_{1,m},\quad\|E_{2,2}\|_{X}\lesssim m^{-\frac{1}{2}}Q_{1,m}.\]
Now, for \(\alpha=1/2\), we have that
\[\left\|\int_{0}^{\infty}\mathrm{e}^{im\eta t}F(\eta,y)\mathrm{d}\eta\right\| _{L_{y}^{2}(\mathbb{R})}\lesssim m^{-1}t^{-\frac{1}{2}}\left(1+\log\left(mt \right)\right)Q_{1,m}\]
due to Lemma 4.2. With this, the proof is complete.
## Appendix A The Whittaker functions
Here we give a description of the Whittaker function \(W_{0,\gamma}\) and its asymptotic expansions. For \(\mu=\mathrm{Re}\left(\sqrt{1/4-\beta^{2}}\right)\) and \(\nu=\mathrm{Im}\left(\sqrt{1/4-\beta^{2}}\right)\) we set \(\gamma=\mu+i\nu\). For \(\gamma\neq 0\) and \(\zeta\in\mathbb{C}\), the Whittaker function \(W_{0,\gamma}(\zeta)\) is given by
\[W_{0,\gamma}(\zeta)=\frac{\Gamma(-2\gamma)}{\Gamma(\frac{1}{2}-\gamma)}M_{0, \gamma}(\zeta)+\frac{\Gamma(2\gamma)}{\Gamma(\frac{1}{2}+\gamma)}M_{0,-\gamma}(\zeta)\] (A.1)
Here, \(\Gamma(\cdot)\) stands for the Gamma function and the Whittaker functions \(M_{0,\gamma}\) and \(M_{0,-\gamma}\) are given by
\[M_{0,\pm\gamma}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{\frac{1}{2}\pm\gamma}M \left(\tfrac{1}{2}\pm\gamma,1\pm 2\gamma,\zeta\right),\quad M(a,b,\zeta)=\sum_{s=0}^{ \infty}\frac{(a)_{s}}{(b)_{s}s!}\zeta^{s},\]
where \((a)_{s}=a(a+1)(a+2)\ldots(a+s-1)\). See [3, 9] for more details.
The asymptotic estimates for \(W_{0,\gamma}\) are deduced from the asymptotic estimates for \(M_{0,\pm\gamma}\), recorded in Lemma A.3 from [3], due to the relation (A.1).
**Lemma A.1**.: _Let \(\zeta\in\mathbb{C}\). Let \(B_{R}\subset\mathbb{C}\) denote the closed ball of radius \(R>0\) centered at the origin. Then,_
\[W_{0,\gamma}(\zeta)=\zeta^{\frac{1}{2}-\gamma}\mathcal{E}_{0,\gamma}(\zeta), \quad W^{\prime}_{0,\gamma}(\zeta)=\zeta^{-\frac{1}{2}-\gamma}\mathcal{E}_{1, \gamma}(\zeta),\]
_where \(\mathcal{E}_{j,\gamma}\in L^{\infty}(B_{R})\) and \(\|\mathcal{E}_{j,\gamma}\|_{L^{\infty}(B_{R})}\lesssim_{\gamma,R}1\), for \(j=0,1\)._
For \(\beta^{2}=1/4\), we have \(\gamma=0\) and (A.1) is no longer valid. Then, \(W_{0,0}\) is given by
\[W_{0,0}(\zeta)=\sqrt{\frac{\zeta}{\pi}}K_{0}\left(\frac{\zeta}{2}\right),\] (A.2)
where \(K_{0}\) is the modified Bessel function of the second kind of order 0. See [9] for more details on \(K_{0}\). We next state the asymptotic expansions for \(W_{0,0}\), which follow from (A.2) and are shown in [3].
**Lemma A.2** ([3], Lemma A.4).: _Let \(\beta^{2}=1/4\) and \(\zeta\in\mathbb{C}\). Let \(B_{R}\subset\mathbb{C}\) denote the closed ball of radius \(R>0\) centered at the origin. Then,_
\[W_{0,0}(\zeta)=\zeta^{\frac{1}{2}}\big{(}\mathcal{E}_{0,1}(\zeta)-\log(\zeta) \mathcal{E}_{0,2}(\zeta)\big{)},\quad W^{\prime}_{0,0}(\zeta)=\zeta^{-\frac{1 }{2}}\big{(}\mathcal{E}_{1,1}(\zeta)-\log(\zeta)\mathcal{E}_{1,2}(\zeta)\big{)},\]
_where \(\mathcal{E}_{j,k}(\zeta)\) are entire functions in \(\mathbb{C}\) and \(\|\mathcal{E}_{j,k}\|_{L^{\infty}(B_{R})}\lesssim 1\), for \(j=0,1\) and \(k=1,2\)._
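As a quick numerical illustration of the relation (A.2) (purely supplementary to the analysis above), one can evaluate both sides with mpmath; the sample points and tolerance below are arbitrary choices.

```python
# Numerical check of (A.2): W_{0,0}(zeta) = sqrt(zeta/pi) * K_0(zeta/2).
from mpmath import mp, whitw, besselk, sqrt, pi

mp.dps = 30  # working precision in decimal digits
for zeta in [mp.mpf("0.1"), mp.mpf("0.7"), mp.mpf("2.5")]:
    lhs = whitw(0, 0, zeta)                       # Whittaker function W_{0,0}
    rhs = sqrt(zeta / pi) * besselk(0, zeta / 2)  # modified Bessel function K_0
    assert abs(lhs - rhs) < mp.mpf("1e-25")
```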
We finish our discussion with the proof of Lemma 3.1 when \(\beta^{2}\neq 1/4\).
Proof of Lemma 3.1.: From (A.1) we write
\[W(\zeta)=A(\gamma)M_{0,\gamma}(2m\zeta)+B(\gamma)M_{0,-\gamma}(2m\zeta).\]
The analytic continuation property of \(M_{0,\gamma}(\zeta)\), see [9], states that
\[M_{0,\gamma}(\zeta\mathrm{e}^{\pm\pi i})=\pm i\mathrm{e}^{\pm\gamma\pi i}M_{0,\gamma}(\zeta).\]
Therefore, we can write
\[M_{0,\gamma}(2m(-\eta+i\varepsilon))=M_{0,\gamma}(2m(\eta-i\varepsilon) \mathrm{e}^{i\pi})=i\mathrm{e}^{\gamma\pi i}M_{0,\gamma}(2m(\eta-i\varepsilon))\]
and
\[M_{0,\gamma}(2m(-\eta-i\varepsilon))=M_{0,\gamma}(2m(\eta+i\varepsilon) \mathrm{e}^{-i\pi})=-i\mathrm{e}^{-\gamma\pi i}M_{0,\gamma}(2m(\eta+i \varepsilon)).\]
Similarly, we have
\[M_{0,-\gamma}(2m(-\eta+i\varepsilon))=i\mathrm{e}^{-\gamma\pi i}M_{0,-\gamma} (2m(\eta-i\varepsilon))\]
and
\[M_{0,-\gamma}(2m(-\eta-i\varepsilon))=-i\mathrm{e}^{\gamma\pi i}M_{0,-\gamma} (2m(\eta+i\varepsilon)).\]
Now, we have that
\[W(-\eta+i\varepsilon)=i\mathrm{e}^{\gamma\pi i}A(\gamma)M_{0,\gamma}(2m(\eta- i\varepsilon))+i\mathrm{e}^{-\gamma\pi i}B(\gamma)M_{0,-\gamma}(2m(\eta-i \varepsilon))\]
and
\[W(-\eta-i\varepsilon)=-i\mathrm{e}^{-\gamma\pi i}A(\gamma)M_{0,\gamma}(2m(\eta +i\varepsilon))-i\mathrm{e}^{\gamma\pi i}B(\gamma)M_{0,-\gamma}(2m(\eta+i \varepsilon)).\]
Since both \(M_{0,\gamma}(\zeta)\) and \(M_{0,-\gamma}(\zeta)\) are continuous functions in the complex subset \(\left\{\zeta\in\mathbb{C}:\mathrm{Re}(\zeta)\geq 0\right\},\) it is easily seen that
\[\lim_{\varepsilon\to 0}\big{(}W(-\zeta+i\varepsilon)-W(-\zeta-i\varepsilon)\big{)} =i\left(\mathrm{e}^{\gamma\pi i}+\mathrm{e}^{-\gamma\pi i}\right)\left(A(\gamma)M_{0,\gamma}(2m\zeta)+B(\gamma)M_{0,-\gamma}(2m\zeta)\right)\] \[=2i\cos(\gamma\pi)W(\zeta).\]
## Acknowledgments
The research of MCZ was partially supported by the Royal Society URF\R1\191492 and EPSRC Horizon Europe Guarantee EP/X020886/1.
|
2309.04223 | HITA: An Architecture for System-level Testing of Healthcare IoT
Applications | System-level testing of healthcare Internet of Things (IoT) applications
requires creating a test infrastructure with integrated medical devices and
third-party applications. A significant challenge in creating such test
infrastructure is that healthcare IoT applications evolve continuously with the
addition of new medical devices from different vendors and new services offered
by different third-party organizations following different architectures.
Moreover, creating test infrastructure with a large number of different types
of medical devices is time-consuming, financially expensive, and practically
infeasible. Oslo City's healthcare department faced these challenges while
working with various healthcare IoT applications. To address these challenges,
this paper presents a real-world test infrastructure software architecture
(HITA) designed for healthcare IoT applications. We evaluated HITA's digital
twin (DT) generation component implemented using model-based and machine
learning (ML) approaches in terms of DT fidelity, scalability, and time cost of
generating DTs. Results show that the fidelity of DTs created using model-based
and ML approaches reach 94% and 95%, respectively. Results from operating 100
DTs concurrently show that the DT generation component is scalable and ML-based
DTs have a higher time cost. | Hassan Sartaj, Shaukat Ali, Tao Yue, Julie Marie Gjøby | 2023-09-08T09:14:50Z | http://arxiv.org/abs/2309.04223v3 | # HITA: An Architecture for System-level Testing of Healthcare IoT Applications
###### Abstract
System-level testing of healthcare Internet of Things (IoT) applications requires creating a test infrastructure with integrated medical devices and third-party applications. A significant challenge in creating such test infrastructure is that healthcare IoT applications evolve continuously with the addition of new medical devices from different vendors and new services offered by different third-party organizations following different architectures. Moreover, creating test infrastructure with a large number of different types of medical devices is time-consuming, financially expensive, and practically infeasible. Oslo City's healthcare department faced these challenges while working with various healthcare IoT applications. This paper presents a real-world software architecture (HITA) to create a test infrastructure for healthcare IoT applications. We discuss the quality requirements achieved by HITA and the status of work products developing as a part of HITA. We also present our experiences and lessons learned from the architectural work related to HITA.
Keywords: Healthcare Internet of Things (IoT), Software Architecture, System Testing.
## 1 Introduction
Healthcare Internet of Things (IoT) applications follow a cloud-based architecture to create an interconnected network with various medical devices and third-party applications [11]. The ultimate goal is to create a central access point for medical professionals, patients, hospitals, pharmacies, and caretakers for delivering efficient healthcare services. Failure to provide timely healthcare services may lead to financial and human life loss. Automated and rigorous system-level testing of healthcare IoT applications is essential to ensure their dependability.
This work is conducted with Oslo City's healthcare department [2] under the national welfare technology program [1]. Oslo City's healthcare department is working with various industries to develop healthcare IoT applications that provide patients with high-quality services. One of the primary objectives
is to create a test infrastructure for the system-level testing of healthcare IoT applications. Such test infrastructure requires integrating physical medical devices (e.g., medicine dispensers) and third-party applications (e.g., pharmacies) with a healthcare IoT application. A major testing challenge is that healthcare IoT applications evolve continuously with the addition of new medical devices, new/updated medical services, and new third-party applications. Integrating several different types of medical devices from various vendors is time-consuming, costly, and not a practical solution. Each third-party application has a limit on the maximum number of allowed requests for a particular time interval. Testing healthcare IoT applications within the limitations of third-party applications is challenging.
Several architectures have been proposed in the literature for developing healthcare IoT applications [21]. A few works also utilize architectures for various software testing activities, e.g., integration testing [20]. Our work focuses on designing an architecture to create a test infrastructure that enables automated system-level testing of healthcare IoT applications.
This paper presents a real-world software architecture (HITA) to create test infrastructure for healthcare IoT applications. HITA is designed considering the quality requirements of Oslo City such as evolvability, extensibility, heterogeneity, scalability, maintainability, availability, security, privacy, robustness, portability, and cost-effective solution. We discuss the status of work products developed as a part of HITA. We also describe our experiences and lessons learned from applying HITA in a real-world industrial context.
The remaining paper is organized as follows. Related works are discussed in Section 2, HITA is presented in Section 3, lessons learned are described in Section 4, and the paper's conclusion is given in Section 5.
## 2 Related Works
Several works are available related to architectures for healthcare IoT targeting various aspects such as; analysis of IoT architectures for healthcare applications [21], design patterns for healthcare IoT [18], architecture for intelligent IoT-based healthcare systems [10], architectural design decisions for developing digital twins of IoT systems [16], a tool for modeling IoT architectures [25], health monitoring architecture for IoT systems [6], an architecture for IoT-based remote patient monitoring [3], a requirements-based healthcare IoT architecture [15], distributed IoT architecture [4], health data sharing architecture [22], and an architecture for blockchain-driven healthcare IoT systems [26]. Our work focuses on an architecture to develop a test infrastructure for testing healthcare IoT applications.
Some works are also available concerning architecture-based testing including: analysis of architecture's function in software testing [7], architecture-based test criteria [14], regression testing with software architectures [19], architecture-driven integration testing [20], an architecture for analyzing fault tolerance [8], and reliability assessment using architecture models [9]. Compared with the
works mentioned above, our work presents an architecture for creating a test infrastructure to enable the system-level testing of healthcare IoT applications.
## 3 HITA: An Architecture for Test Infrastructure
Figure 1 shows a real-world software architecture (HITA) to create test infrastructure for healthcare IoT applications. HITA is designed based on two commonly used architectural patterns, i.e., _collaborative_ and _centralized_ for healthcare IoT [21]. HITA follows IoT reference architecture [13] which is composed of an _Application Layer_ including healthcare IoT core and testing process, _IoT Integration Middleware_ with the digital twins (DTs) and test stubs (TS) components, _Gateways_, and _Device_ comprising physical medical devices.
**Healthcare IoT Core.** The system under test (SUT) is a healthcare IoT application core that consists of several web and mobile clients for different users such as patients, medical professionals, caregivers, and health authorities. The primary communication channel for mobile clients (including iPad/Tablets) is the 4G/5G network due to its availability and access in remote areas. WiFi is
used as an alternative communication channel in rare cases. An important component of healthcare IoT applications is Application Programming Interfaces (APIs) developed according to Representational State Transfer (REST) architecture [12]. These REST APIs allow communication among various clients, third-party applications, and medical devices. The data interchange format used for this purpose is JavaScript Object Notation (JSON). To execute the tests on SUT, several different types of medical devices and third-party applications need to be integrated. HITA utilizes the DTs component for medical devices and TS components for third-party applications to handle integration challenges. API Gateways are used for secure communication with DTs and TS components using Hypertext Transfer Protocol (HTTP) and secure API keys.
**Medical Devices - DTs Component.** For integration with medical devices, HITA utilizes the concept of digital twins to create a virtual representation of physical devices. Each medical device from a different vendor is connected to a server with several APIs for integration (as shown in the right-bottom of Figure 1). The architecture for creating physical medical devices DTs consists of one _DT Server_ with APIs (e.g., _APIs DT-D\({}_{1}\)_) specific to a certain type of DTs representing a particular device (e.g., _DTs-D\({}_{1}\)_). APIs need to be developed following REST architecture [12] to allow easy integration with SUT. DTs of a medical device are generated using a model-driven engineering (MDE) approach in which structural aspects are modeled as a metamodel and behavioral aspects are specified using executable state machines. In the case of multiple versions of a medical device, a separate DT is required to be generated which is easy with the MDE approach. JSON data interchange format is used between the _DT Server_ and DTs. Schema Registry can be used for compatibility with medical devices supporting different JSON schema. The DT component also consists of _Data Persistence_ to preserve the state of DTs among various requests. The APIs of DTs are used to integrate DTs with SUT and physical medical devices. During testing, the DTs act as middle-ware between SUT and physical medical devices. DTs handle all communication traffic from SUT and communicate (via HTTP) with their physical twins when necessary.
**Third-party Applications - TS Component.** To handle the challenges of integrating third-party applications for testing purposes, HITA's TS component plays an important role. Each third-party application has dedicated servers with APIs for integration. The architecture for test stubs creation consists of one _TS Server_ with APIs (e.g., _APIs TS\({}_{1}\)_) simulating the behavior of various applications (e.g., _App 1_). The APIs for each test stub are required to be developed according to REST architecture [12] for easy integration with SUT. Test stubs play a key role in replicating the functionality of third-party applications. For APIs requiring data (e.g., health data), the architecture includes an artificial data store with multiple databases corresponding to APIs representing different applications. The data manipulation is performed using query language compliant with the database type.
**Testing Process.** The testing process starts with the test generation step using techniques for generating test data, test sequence, and test oracle. Before
testing SUT, it is important to ensure that DTs and TS components are accurately representing all behaviors. This can be done through pilot experiments evaluating the similarity in behaviors. The similarity in outputs should ideally be close to 100% because it can affect testing results. The generated tests in the form of test scripts are executed on the SUT. Test execution requires API keys for communicating with SUT according to test scripts. The results of test execution are evaluated to analyze errors, faults, and failures. Moreover, test optimization during test generation is required in the case of testing in a rapid-release environment and within a short time frame.
**HITA Operational Context.** A tester initiates the testing process for testing a particular aspect of SUT such as REST APIs testing or graphical user interface (GUI) testing. This requires SUT integrated and operated with DTs and TS components. Tests are executed on SUT through HTTP using JSON format. The SUT processes the request and communicates with medical devices DTs or third-party applications TS depending on the test case. Finally, SUT generates a JSON response containing test execution results and sends it to the test execution module. This process continues for a specified testing budget.
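To make the request/response flow above concrete, a minimal sketch of one test-execution step over HTTP with JSON could look as follows; the base URL, endpoint, header, and payload fields are illustrative assumptions rather than the actual SUT API.

```python
# Illustrative sketch of one test-execution step against the SUT's REST API.
# The URL, API key, endpoint, and JSON fields are hypothetical placeholders.
import requests

SUT_BASE_URL = "https://sut.example.org/api"   # assumed base URL of the SUT
API_KEY = "test-api-key"                       # assumed key issued for testing

def execute_test_step(payload: dict) -> dict:
    """Send one JSON test request to the SUT and return its JSON response."""
    response = requests.post(
        f"{SUT_BASE_URL}/alerts",              # hypothetical endpoint
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = execute_test_step({"patientId": "P-001", "type": "medication-missed"})
    print(result)
```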
### Quality Attributes
**Scalability.** An important concern when testing a healthcare IoT application with a growing number of medical devices is _scalability_ of development efforts. HITA provides a component for digital twins that are used in place of physical medical devices during the testing process. Any number of digital twins corresponding to a particular medical device can be easily created and operated by utilizing model-driven practices. Digital twins eliminate the need for integrating several medical devices physically and the risk of damaging physical devices. The use of digital twins is also _cost-effective_ which is another key consideration for creating test infrastructure.
**Maintainability.** Model-driven generation of digital twins allows for achieving _maintainability_ quality. Furthermore, HITA utilizes one server for digital twin and one server for TS components that can operate locally or on the cloud depending upon industrial preferences. Using one server each for both components requires less _maintainability_ effort as compared to using individual servers for different applications.
**Extensibility.** The modular structure of HITA components allows for achieving _extensibility_. For the addition of a new medical device, digital twins and their APIs need to be created using the model-driven approach. The APIs of digital twins are used for communication with SUT and the physical device. In the case of adding new healthcare services or features from a third-party application, a test stub is required to be created consisting of APIs for communication with SUT. The artificial dataset is created for testing if the new application is data-intensive.
**Evolvability.** HITA implicitly achieves _evolvability_ quality attribute due to its overall modular structure and components with scalable, maintainable, and
extensible characteristics. This leads to achieving overall _evolvability_ requirements of SUT and integrated devices/applications during testing.
**Heterogeneity.** Creating test infrastructure for healthcare IoT applications involves integration with heterogeneous systems such as different medical devices and various types of third-party applications. In HITA, _heterogeneity_ of different types of medical devices is handled using model-driven creation of digital twins and other APIs. For the _heterogeneity_ of third-party applications, HITA creates test stubs in the form of APIs. The APIs of digital twins and test stubs allow easy integration of diverse systems with SUT.
**Security & Privacy.** The use of real health records (e.g., patients' health data) during the testing process may lead to security breaches and data privacy issues. To handle security concerns, HITA imposes authentication and authorization mechanisms on all components. For data privacy, the TS and DTs components consist of _Artificial Data Store_ and _Data Persistence_ respectively, which contain synthetic data instead of real patients' health data.
**Availability.** A critical problem solved by HITA is the _availability_ of medical devices and third-party applications during testing. Running extensive tests with the goal of rigorous testing may lead to unavailable services. HITA uses third-party applications' test stubs in the form of APIs running on a server (_TS Server_). Similarly, digital twins have _DT Server_ to handle requests during test execution. These servers are dedicated for testing purposes and hosted locally or on the cloud according to the up-time required for the testing process.
**Robustness & Portability.** The goal of testing a healthcare IoT application is to identify errors, faults, and failures with the assumption that integrated applications are robust. HITA instructs the development of APIs for DTs and TS components following REST architecture, which provides a reliable mechanism for integration and communication among various applications [12]. Moreover, as a result of test execution, DTs and TS components generate responses with failure and success information that enables identifying errors/faults in SUT. _Portability_ is an additional feature of HITA. The architecture followed by TS and DTs components can work on local machines when testing in offline mode and remotely in different cloud environments.
### Work Products
As a part of the test infrastructure, the DTs component is currently being developed according to HITA. Specifically, digital twins for medicine dispensers are developed using a model-driven approach (available online [23]). The approach utilizes metamodeling and executable state machines to create and operate digital twins of medicine dispensers. The _DT Server_ is developed using the Flask framework that can work locally and can be easily deployed on any cloud. The APIs following the REST architecture are generated using the Flask-compatible Flask-RESTful framework. The REST APIs use JSON format for data interchanges with SUT and physical medicine dispensers. Moreover, the technologies used for developing the DTs component make them easily portable from one platform to another without requiring major changes.
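For illustration, a Flask-RESTful endpoint of the kind described above might look like the sketch below; the resource path, fields, and in-memory persistence are simplifying assumptions and do not reproduce the project's actual DT APIs.

```python
# Minimal Flask-RESTful sketch of a medicine-dispenser DT endpoint (illustrative only).
from flask import Flask, request
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

# Stand-in for the Data Persistence component: DT state kept in memory.
DT_STATE = {}

class DispenserDT(Resource):
    def get(self, dt_id):
        """Return the current state of one digital twin as JSON."""
        if dt_id not in DT_STATE:
            return {"error": f"unknown DT {dt_id}"}, 404
        return DT_STATE[dt_id], 200

    def put(self, dt_id):
        """Update the DT state from a JSON request (e.g., a new medication plan)."""
        DT_STATE[dt_id] = request.get_json(force=True)
        return DT_STATE[dt_id], 200

api.add_resource(DispenserDT, "/dts/dispenser/<string:dt_id>")

if __name__ == "__main__":
    app.run(port=5000)
```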
For the system-level testing of SUT, we utilize two methods, (i) testing of the application's backend REST APIs, and (ii) testing based on the GUI of web and mobile clients. For REST APIs testing of SUT, initially, we explored automated testing tools EvoMaster [5] and RESTest [17] and identified several API failures. For GUI-based testing, we use Selenium Webdriver4 with the web client of SUT.
Footnote 4: [https://www.selenium.dev/documentation/webdriver/](https://www.selenium.dev/documentation/webdriver/)
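As a small illustration of the GUI-based route with Selenium WebDriver, a test script could be structured as below; the URL, element locators, and the oracle are hypothetical placeholders, not the web client's real identifiers.

```python
# Illustrative Selenium WebDriver sketch for GUI-based testing of the SUT's web client.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://sut.example.org/login")                      # assumed login page
    driver.find_element(By.ID, "username").send_keys("test-nurse")   # hypothetical locator
    driver.find_element(By.ID, "password").send_keys("test-password")
    driver.find_element(By.ID, "login-button").click()
    # Simple oracle: the dashboard title should appear after a successful login.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```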
The work on creating the TS component of HITA is a future step. Nevertheless, several tools available for creating test stubs are under consideration such as JSON Server5, and Mocki.io6. An initial consideration is using JSON Server which is an open-source tool for creating test stubs in the form of REST APIs.
Footnote 5: [https://www.npmjs.com/package/json-server](https://www.npmjs.com/package/json-server)
Footnote 6: [https://mocki.io/](https://mocki.io/)
## 4 Experiences and Lessons Learned
In the following, we outline our experiences and lessons learned while developing HITA work products and analyzing them through experiments.
**DTs Role in Test Infrastructure.** System-level testing of healthcare IoT applications requires different medical devices in the loop. Each type of medical device from a different vendor is linked to a web server that has certain constraints, such as a maximum number of allowed requests. The test generation and execution process involves sending several requests to medical devices through a healthcare IoT application. This can lead to service blockage or damage to a medical device. Further, testing with hundreds of such devices is costly and not a practical option. Based on such experiences from Oslo City, we propose the idea of using DTs in place of physical medical devices to enable testing with multiple digital representations of physical devices. Thus, DTs have an important role in this regard. DTs with dedicated _DT Server_ and APIs eliminate the risk of service blockage or device damage. Virtually representing physical devices, DTs are a scalable and cost-effective solution. Our experiments with 100 DTs in different batches (i.e., 10, 20, 30,..., 100) indicated scalability, heterogeneity, and cost-effectiveness of the DTs component.
**Modeling for DTs.** The model-driven approach for automated generation of DTs requires creating a metamodel as an abstract structural representation of similar types of medical devices, e.g., medicine dispensers, and modeling behavioral aspects of medical devices using executable state machines. Several modeling tools (e.g., IBM RSA and Papyrus) are available for this purpose. Test engineers need to have a fundamental level of familiarity with any of the modeling tools. Models developed in this way involve a one-time effort and can be reused for testing multiple evolution phases of SUT. In the case of adding new medical devices, only metamodel and executable state machines need to be fine-tuned.
**Fidelity Evaluation of DTs.** While utilizing DTs of physical devices, an important consideration is the fidelity of DTs corresponding to physical twins. For this purpose, we empirically evaluated the fidelity of medicine dispenser DTs (up
to 100) in terms of their functional similarities with a physical medicine dispenser device. The results highlighted the functionality of DTs was almost similar to medicine dispensers. Moreover, fidelity evaluation in terms of internal behaviors is challenging due to limited access to internal operations of physical medicine dispensers. A future direction is the evaluation of DTs' fidelity considering the internal behavior of medicine dispensers.
**Testing with Third-party Applications.** An initial version of HITA was evaluated with third-party applications in the loop and without a TS component, using testing tools such as EvoMaster and RESTest. With third-party applications in the loop, we observed that it is difficult to identify the primary source of a failure or fault, i.e., whether a failure occurred due to a fault in SUT or in an integrated application [24]. Moreover, we observed that services provided by third-party applications often become unavailable during the testing process. This led to a significant bottleneck in the rigorous testing of healthcare IoT applications. Thus, creating test stubs for third-party applications is a viable solution.
**Domain-specific Testing Strategies.** Our experiments with EvoMaster and RESTest highlighted the need for domain-specific testing strategies for healthcare IoT applications [24]. We found that automated generation of realistic test data is a challenging and open research problem. For example, automatically generating a valid medication plan for a patient is not a simple task. Generating a valid medication plan requires information regarding the start date, dose intake, number of days to take medicines, number of doses, and the total number of medicines allowed in a roll of a medicine dispenser. This involves understanding domain properties related to medications and the context of a medicine dispenser. There is still a need for domain-specific testing strategies.
**Intelligent Test Generation Technique.** Healthcare IoT applications commonly have a two-way communication mechanism with different medical devices and third-party applications. Several scenarios require an integrated medical device or third-party application to initiate the first step of the process. Automatically generating test cases for such scenarios is challenging. For instance, the steps to assign an alert (received from a patient) to concerned personnel include: (i) the patient's medical device generates an alert, (ii) the alert is received as an unassigned alert, (iii) identify an appropriate person (doctor, nurse, caretaker, etc.) to assign the alert, and (iv) assign the alert with notification to health authorities. An alert should be generated beforehand to test the alert-assigning scenario. This requires an intelligent technique for automated test case generation since HITA is designed for creating test infrastructure.
**Test Optimization.** Testing an industrial healthcare IoT application in a production and a rapid-release environment requires certain conditions such as a designated time budget for test generation and execution. Executing a maximum number of test cases with the aim of rigorous testing for each release is desirable but not feasible, even using test stubs and digital twins. An approach for generating and executing optimized test cases is necessary to ensure the dependability of healthcare IoT applications within a given time frame.
## 5 Conclusion
In this paper, we presented real-world architectural work in collaboration with Oslo City's healthcare department. We introduced HITA - a software architecture for creating test infrastructure to facilitate automated system-level testing of healthcare IoT applications. HITA is designed considering quality attributes of paramount importance. We also described the status of work products developed or currently being developed as a part of HITA-based test infrastructure. Finally, we presented experience and lessons learned based on experiments conducted with work products of HITA that are valuable for industry practitioners working in a similar domain. Our lessons learned are generalizable to various IoT-based systems such as activity/fitness trackers, smart homes, and smart security systems.
**Acknowledgements.** This work is a part of the WTT4Oslo project (No. 309175) funded by the Research Council of Norway. All the experiments reported in this paper are conducted in a laboratory setting of Simula Research Laboratory; therefore, they do not by any means reflect the quality of services Oslo City provides to its citizens.
|
2310.00126 | Simulations for Meta-analysis of Magnitude Measures | Meta-analysis aims to combine effect measures from several studies. For
continuous outcomes, the most popular effect measures use simple or
standardized differences in sample means. However, a number of applications
focus on the absolute values of these effect measures (i.e., unsigned magnitude
effects). We provide statistical methods for meta-analysis of magnitude effects
based on standardized mean differences. We propose a suitable statistical model
for random-effects meta-analysis of absolute standardized mean differences
(ASMD), investigate a number of statistical methods for point and interval
estimation, and provide practical recommendations for choosing among them. | Elena Kulinskaya, David C. Hoaglin | 2023-09-29T20:32:18Z | http://arxiv.org/abs/2310.00126v1 | # Simulations for Meta-analysis of Magnitude Measures
###### Abstract
Meta-analysis aims to combine effect measures from several studies. For continuous outcomes, the most popular effect measures use simple or standardized differences in sample means. However, a number of applications focus on the absolute values of these effect measures (i.e., unsigned magnitude effects). We provide statistical methods for meta-analysis of magnitude effects based on standardized mean differences. We propose a suitable statistical model for random-effects meta-analysis of absolute standardized mean differences (ASMD), investigate a number of statistical methods for point and interval estimation, and provide practical recommendations for choosing among them.
## 1 Introduction
Meta-analysis aims to combine effect measures from several studies. For continuous outcomes, the most popular effect measures use simple or standardized differences in sample means. However, a number of applications focus on the corresponding magnitudes, without regard to their direction.
Meta-analyses of magnitude effects are quite common in ecology and evolutionary biology, in situations where the direction of the effect is less important. As a rationale, Garamszegi (2006) argued that "the mean of the absolute values of the effect sizes may show that weak or strong effects are at work in general without considering directional roles" or "the researcher may want to compare unsigned effect sizes between different groups of traits, such as between plumage and song traits." Clements et al. (2022) studied the impacts of ocean acidification on fish behavior and used the absolute value "due
to the inherent difficulty in assigning a functional direction to a change in behavior, as many behavioral changes can be characterized by both positive and negative functional trade-offs". Felix et al. (2023) studied physical and chemical leaf traits that could affect herbivory but "expected the direction of the effect to be highly context-dependent (i.e., different neighbours may cause either an increase or a decrease in the same leaf trait)". Other examples include Bailey et al. (2009) (absolute effects of plant genetic factors across levels of organization), Champagne et al. (2016) (influence of the neighboring plant on the focal plant herbivory level), and Costantini (2018) (sexual differentiation in resistance to oxidative stress across vertebrates).
Morrissey (2016) discussed the rationale for magnitude effects in evolutionary biology and proposed some statistical methods for meta-analysis of absolute mean values. We discuss his work in Section 2.1. However, the majority of the cited papers used the absolute standardized mean difference (ASMD), though some used the absolute values of Pearson correlation or log-response ratio. Interestingly, ASMD values are routinely used for testing the balance of individual covariates between the two groups of an observational study when assessing the quality of a propensity-scores-based model, with 0.1 as the standard cutoff (Rubin, 2001; Ali et al., 2019).
Typically, the systematic reviews include meta-analyses of both directional and unsigned effects. Worryingly, to meta-analyze their absolute values (magnitude effects), those reviews (Champagne et al. (2016); Costantini (2018); Clements et al. (2022); Felix et al. (2023)) use routine inverse-variance methods developed for directional effects, which have very different statistical properties. The likely explanation is the lack of statistical methods specifically for MA of magnitude effects. This article aims to fill this important gap. We develop statistical methods for meta-analysis of ASMD-based magnitude effects and study their performance by simulation.
## 2 Notation
We assume that each of the \(K\) studies in the meta-analysis consists of two arms, Treatment and Control, with sample sizes \(n_{iT}\) and \(n_{iC}\). The total sample size in Study \(i\) is \(n_{i}=n_{iT}+n_{iC}\). We denote the ratio of the Control sample size to the total by \(f_{i}=n_{iC}/n_{i}\). The subject-level data in each arm are assumed to be normally distributed with means \(\mu_{iT}\) and
\(\mu_{iC}\) and variances \(\sigma_{iT}^{2}\) and \(\sigma_{iC}^{2}\). (We appreciate, however, that real data are not exactly normal.) The sample means are \(\bar{x}_{ij}\), and the sample variances are \(s_{ij}^{2}\), for \(i=1,\ldots,K\) and \(j=C\) or \(T\).
## 3 Absolute mean difference
The mean difference (MD) effect measure is
\[\mu_{i}=\mu_{iT}-\mu_{iC},\text{ estimated by }y_{i}=\bar{x}_{iT}-\bar{x}_{iC},\]
with variance \(\sigma_{i}^{2}=\sigma_{iT}^{2}/n_{iT}+\sigma_{iC}^{2}/n_{iC}\), estimated by
\[v_{i}^{2}=\hat{\sigma}_{i}^{2}=s_{iT}^{2}/n_{iT}+s_{iC}^{2}/n_{iC}. \tag{3.1}\]
Sometimes the pooled sample variance is used instead of \(v_{i}^{2}\). Then, however, unequal variances in the Treatment and Control arms can adversely affect estimation (Kulinskaya et al., 2004).
The familiar common-effect model for MD assumes that \(\mu_{i}=\mu\) for all \(i\), whereas the random-effects model allows the \(\mu_{i}\) to come from a distribution with mean \(\mu\) and variance \(\tau^{2}\), usually \(N(\mu,\tau^{2})\). Point estimation of \(\mu\) often uses a weighted mean, \(\hat{\mu}=(\Sigma w_{i}y_{i})/(\Sigma w_{i})\), with \(w_{i}=1/\hat{\sigma}_{i}^{2}\) in the common-effect model and \(w_{i}=1/(\hat{\sigma}_{i}^{2}+\hat{\tau}^{2})\) in the random-effects model. Several popular methods base estimators of \(\tau^{2}\) on \(Q=\Sigma w_{i}(y_{i}-\bar{y}_{w})^{2}\), with \(\bar{y}_{w}=(\Sigma w_{i}y_{i})/(\Sigma w_{i})\) and, initially, \(w_{i}=1/\hat{\sigma}_{i}^{2}\). We return to these methods in Section 7.2.
The underlying normal distributions in the two arms result in normality of MD: \(y_{i}\sim N(\mu_{i},\sigma_{i}^{2})\). Hence, the absolute mean difference (AMD) \(|y_{i}|\) has a folded normal distribution \(FN(\mu,\sigma^{2})\)(Leone et al. (1961), (Johnson et al., 1995, p.453), Tsagris et al. (2014)). For simplicity of notation, we sometimes drop the subscript \(i\).
The first two moments of the \(FN(\mu,\sigma^{2})\) distribution are
\[\mu_{f}=\mathrm{E}(|y|)=2\sigma\phi(\mu/\sigma)+\mu\left[1-2\Phi(-\mu/\sigma) \right],\ \ \sigma_{f}^{2}=\mu^{2}+\sigma^{2}-\mu_{f}^{2}, \tag{3.2}\]
where \(\phi(\cdot)\) and \(\Phi(\cdot)\) are the density and the cdf of the standard normal distribution. Tsagris et al. (2014) give the moment-generating function and higher moments and the maximum-likelihood estimators of the parameters. When \(\mu=0\), \(FN(\mu,\sigma^{2})\) is a half-normal distribution with mean \(\sigma(2/\pi)^{1/2}\) and variance \(\sigma^{2}(1-(2/\pi))\). A difference
could be used as a centered-at-zero absolute mean effect measure, as suggested in Morrissey (2016).
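For reference, the moments in Equation (3.2) are straightforward to evaluate numerically; the short check below (with arbitrary illustrative values of \(\mu\) and \(\sigma\)) compares them with SciPy's folded normal distribution.

```python
# Moments of the folded normal FN(mu, sigma^2), Equation (3.2),
# checked against scipy.stats.foldnorm (shape c = mu/sigma, scale = sigma).
import numpy as np
from scipy.stats import norm, foldnorm

def folded_normal_moments(mu, sigma):
    """Return (mean, variance) of |X| for X ~ N(mu, sigma^2), per Equation (3.2)."""
    z = mu / sigma
    mean = 2.0 * sigma * norm.pdf(z) + mu * (1.0 - 2.0 * norm.cdf(-z))
    var = mu**2 + sigma**2 - mean**2
    return mean, var

mu, sigma = 0.8, 1.5  # illustrative values
m_f, v_f = folded_normal_moments(mu, sigma)
fn = foldnorm(c=mu / sigma, scale=sigma)
assert np.isclose(m_f, fn.mean()) and np.isclose(v_f, fn.var())
```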
From Equation (3.2), the expected \(|y_{i}|\) depends on both the standardized mean \(\delta_{i}=\mu_{i}/\sigma_{i}\) and the variance \(\sigma_{i}^{2}\), so AMD does not seem to be an appropriate effect measure for magnitude. Additionally, its variance is rather difficult to estimate. A naive estimate would be \(\hat{\sigma}_{f}^{2}=y^{2}+v^{2}-\hat{\mu}_{f}^{2}\). Substituting the MD \(y\) and its standard deviation \(v\) in the expression for \(\mu_{f}\) in Equation (3.2) results in an \(O(1/n)\) biased estimate of \(\mu_{f}\) and, therefore, of its variance. It is possible to eliminate this bias by using the second-order Taylor expansion of \(h(\mu,\sigma)=\mu_{f}\), but the corrected estimate appears to be rather complicated.
To summarize, dependence on the nuisance parameter \(\sigma_{i}^{2}\), lack of asymptotic normality, and difficulty in estimating the variance of AMD preclude use of AMD in meta-analysis. Dividing \(\mu_{f}\) in Equation (3.2) by \(\sigma\) results in a simpler expression that depends on only the standardized mean \(\delta=\mu/\sigma\) and appears to be much more convenient for further analysis, suggesting use of ASMD instead. Therefore, we abandon AMD in favor of ASMD in what follows.
## 4 Absolute standardized mean difference
The standardized mean difference effect measure is
\[\delta_{i}=\frac{\mu_{iT}-\mu_{iC}}{\sigma_{i}}.\]
The variances in the Treatment and Control arms are usually assumed to be equal. Therefore, \(\sigma_{i}\) is estimated by the square root of the pooled sample variance
\[s_{i}^{2}=\frac{(n_{iT}-1)s_{iT}^{2}+(n_{iC}-1)s_{iC}^{2}}{n_{iT}+n_{iC}-2}. \tag{4.1}\]
The plug-in estimator \(d_{i}=(\bar{x}_{iT}-\bar{x}_{iC})/s_{i}\), known as Cohen's \(d\), is biased in small samples. Hedges (1983) derived the unbiased estimator
\[g_{i}=J(m_{i})\frac{\bar{x}_{iT}-\bar{x}_{iC}}{s_{i}},\]
where \(m_{i}=n_{iT}+n_{iC}-2\), and
\[J(m)=\frac{\Gamma\left(\frac{m}{2}\right)}{\sqrt{\frac{m}{2}}\Gamma\left( \frac{m-1}{2}\right)},\]
often approximated by \(1-3/(4m-1)\). This estimator of \(\delta_{i}\), typically used in meta-analysis of SMD, is sometimes called Hedges's \(g\).
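To make the computation concrete, a small sketch computing \(d_i\), the exact correction \(J(m)\) via log-gamma functions, and Hedges's \(g\) from two-arm summary statistics (with illustrative input values) is given below.

```python
# Hedges's g from two-arm summary statistics, with the exact J(m) correction.
import numpy as np
from scipy.special import gammaln

def J(m):
    """Exact correction J(m) = Gamma(m/2) / (sqrt(m/2) * Gamma((m-1)/2))."""
    return np.exp(gammaln(m / 2.0) - gammaln((m - 1) / 2.0)) / np.sqrt(m / 2.0)

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Return (d, g): Cohen's d and its bias-corrected version, Hedges's g."""
    m = n_t + n_c - 2
    s_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / m)
    d = (mean_t - mean_c) / s_pooled
    return d, J(m) * d

print(hedges_g(10.2, 9.1, 2.0, 2.2, n_t=12, n_c=15))
print(J(25), 1 - 3 / (4 * 25 - 1))  # exact vs. approximate correction for m = 25
```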
Denote by \(\tilde{n}_{i}=n_{iC}n_{iT}/n_{i}=n_{i}f_{i}(1-f_{i})\) the effective sample size in Study \(i\). The sample SMD \(d_{i}\) (and therefore Hedges's estimate \(g_{i}\)) has a scaled noncentral \(t\)-distribution with noncentrality parameter (NCP) \(\tilde{n}_{i}^{1/2}\delta_{i}\):
\[\tilde{n}_{i}^{1/2}d_{i}\sim t_{m_{i}}(\tilde{n}_{i}^{1/2}\delta_{i}). \tag{4.2}\]
Therefore, the ASMD \(|d_{i}|\) has a _folded_ scaled noncentral \(t\)-distribution with the same noncentrality parameter:
\[\tilde{n}_{i}^{1/2}|d_{i}|\sim FNT_{m_{i}}(\tilde{n}_{i}^{1/2}\delta_{i}). \tag{4.3}\]
Alternatively, \(d_{i}^{2}\) has a scaled noncentral \(F_{1,m_{i}}(\tilde{n}_{i}\delta_{i}^{2})\) distribution.
A central folded \(t\)-distribution has \(\mu=0\), and a half-\(t\) additionally has \(\sigma=1\). The half-\(t\) was introduced by Psarakis and Panaretos [1990], who derived its moments and discussed its relations to other distributions. In particular, when \(\nu\to\infty\), the folded \(t_{\nu}\) converges to the folded normal distribution.
Gelman [2006] introduced the FNT distribution as a noninformative conditionally-conjugate prior for the standard deviation \(\tau\) of the variance component in random-effects meta-analysis. However, we have not found any publications on the moments of the FNT distribution.
## 5 Squared standardized mean difference
The square of an FNT(\(\lambda\)) random variable with \(\nu\) df has a noncentral \(F_{1,\nu}(\lambda^{2})\)-distribution, as does the square of a noncentral \(t\) random variable. As \(\nu\to\infty\), the distribution \(F_{1,\nu}(\lambda^{2})\) converges to the noncentral \(\chi_{1}^{2}(\lambda^{2})\). And when \(\lambda^{2}\to 0\), the distribution converges to the central \(F_{1,\nu}\) distribution.
The first and second moments of the noncentral \(F(\lambda^{2})\) distribution (the special case of the doubly-noncentral \(F\)-distribution \(F_{\nu_{1},\nu_{2}}(\lambda_{1},\lambda_{2})\) with \(\lambda_{1}=\lambda^{2}\) and \(\lambda_{2}=0\)) with \(\nu_{1},\;\nu_{2}>4\) are [Johnson et al., 1995, (30.3)]
\[{\rm E}(X)=\frac{\nu_{2}(\nu_{1}+\lambda^{2})}{\nu_{1}(\nu_{2}-2)},\;\;\;{ \rm Var}(X)=2\left(\frac{\nu_{2}}{\nu_{1}}\right)^{2}\frac{(\nu_{1}+\lambda^ {2})^{2}+(\nu_{1}+2\lambda^{2})(\nu_{2}-2)}{(\nu_{2}-2)^{2}(\nu_{2}-4)}. \tag{5.1}\]
From Equation (4.2),
\[d_{i}^{2}\sim\tilde{n}_{i}^{-1}F_{1,m_{i}}(\tilde{n}_{i}\delta_{i}^{2}).\]
Using \(\nu_{1}=1\) and \(\nu_{2}=m_{i}\) in Equation (5.1), the moments of \(d_{i}^{2}\) are
\[\mathrm{E}(d_{i}^{2})=\left(\frac{m_{i}}{m_{i}-2}\right)(\tilde{n}_{i}^{-1}+ \delta_{i}^{2}), \tag{5.2}\]
\[\mathrm{Var}(d_{i}^{2})=\frac{2m_{i}^{2}}{(m_{i}-2)^{2}(m_{i}-4)}\left(\frac{m _{i}-1}{\tilde{n}_{i}^{2}}+\frac{2(m_{i}-1)\delta_{i}^{2}}{\tilde{n}_{i}}+ \delta_{i}^{4}\right). \tag{5.3}\]
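As an optional numerical illustration of Equations (5.2) and (5.3), the simulation below (with arbitrary illustrative parameter values) compares the empirical mean and variance of \(d_i^{2}\) with the closed-form moments.

```python
# Monte Carlo check of the moments of d^2 given in Equations (5.2) and (5.3).
import numpy as np

rng = np.random.default_rng(2023)
n_t, n_c, delta = 20, 25, 0.5                   # illustrative values
m = n_t + n_c - 2
n_tilde = n_t * n_c / (n_t + n_c)

reps = 100_000
xt = rng.normal(delta, 1.0, size=(reps, n_t))   # Treatment arm, sigma = 1
xc = rng.normal(0.0, 1.0, size=(reps, n_c))     # Control arm
s2 = ((n_t - 1) * xt.var(axis=1, ddof=1) + (n_c - 1) * xc.var(axis=1, ddof=1)) / m
d2 = (xt.mean(axis=1) - xc.mean(axis=1)) ** 2 / s2

mean_52 = m / (m - 2) * (1 / n_tilde + delta**2)                                   # (5.2)
var_53 = (2 * m**2 / ((m - 2) ** 2 * (m - 4))
          * ((m - 1) / n_tilde**2 + 2 * (m - 1) * delta**2 / n_tilde + delta**4))  # (5.3)
print(d2.mean(), mean_52)
print(d2.var(ddof=1), var_53)
```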
From Equation (5.2), an unbiased estimate of the squared SMD \(\delta^{2}\) is
\[\widehat{\delta_{i}^{2}}=\frac{m_{i}-2}{m_{i}}d_{i}^{2}-\frac{1}{\tilde{n}_{i }}. \tag{5.4}\]
The variance of \(\widehat{\delta_{i}^{2}}\) is
\[\mathrm{Var}(\widehat{\delta_{i}^{2}})=\frac{2}{(m_{i}-4)}\left(\frac{m_{i}-1 }{\tilde{n}_{i}^{2}}+\frac{2(m_{i}-1)\delta_{i}^{2}}{\tilde{n}_{i}}+\delta_{i }^{4}\right). \tag{5.5}\]
Combining Equations (5.4) and (5.5),
\[\mathrm{E}(d_{i}^{4})=\frac{m_{i}^{2}}{(m_{i}-2)(m_{i}-4)}\left(\frac{3}{ \tilde{n}_{i}^{2}}+6\frac{\delta_{i}^{2}}{\tilde{n}_{i}}+\delta_{i}^{4}\right).\]
Hence,
\[\widehat{\delta_{i}^{4}}=\frac{(m_{i}-2)(m_{i}-4)}{m_{i}^{2}}d_{i}^{4}-\frac{ 6}{\tilde{n}_{i}}\frac{m_{i}-2}{m_{i}}d_{i}^{2}+\frac{3}{\tilde{n}_{i}^{2}}.\]
Substituting \(\widehat{\delta_{i}^{2}}\) from Equation (5.4) and the above estimate of \(\widehat{\delta_{i}^{4}}\) into Equation (5.5), we obtain an unbiased estimate of \(\mathrm{Var}(\widehat{\delta_{i}^{2}})\) :
\[\widehat{\mathrm{Var}}(\widehat{\delta_{i}^{2}})=\frac{2(m_{i}-2)}{m_{i}^{2}} d_{i}^{4}+\frac{4(m_{i}-2)}{m_{i}\tilde{n}_{i}}d_{i}^{2}-\frac{2}{\tilde{n}_{i}^{2}}. \tag{5.6}\]
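For completeness, a direct transcription of the unbiased point estimate (5.4) and its unbiased variance estimate (5.6) might look as follows.

```python
# Unbiased estimate of delta^2, Equation (5.4), and of its variance, Equation (5.6).
def delta2_unbiased(d, n_t, n_c):
    """Return (delta2_hat, var_hat) from Cohen's d and the two arm sizes."""
    n = n_t + n_c
    m = n - 2
    n_tilde = n_t * n_c / n
    delta2_hat = (m - 2) / m * d**2 - 1 / n_tilde                 # (5.4)
    var_hat = (2 * (m - 2) / m**2 * d**4
               + 4 * (m - 2) / (m * n_tilde) * d**2
               - 2 / n_tilde**2)                                  # (5.6)
    return delta2_hat, var_hat

print(delta2_unbiased(d=0.6, n_t=20, n_c=25))
```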
The related problem of estimating the noncentrality \(\lambda^{2}\) from a single observation \(F^{\prime}\) from \(F_{\nu_{1},\nu_{2}}(\lambda^{2})\) is well investigated. The UMVUE estimator is \(\hat{\lambda}^{2}=\nu_{1}\nu_{2}^{-1}(\nu_{2}-2)F^{\prime}-\nu_{1}\), which, for our setting, becomes \(\widehat{\delta_{i}^{2}}\) but is inadmissible, as is its truncated-at-zero version. See [Johnson et al., 1995, Section 30.6] for discussion of point and interval estimation of \(\lambda^{2}\).
Steiger [2004] provides an explicit algorithm for finding a \((1-\alpha)\) confidence interval for the noncentrality parameter of a noncentral \(F\) distribution \(F(\cdot;\lambda^{2})\) based on an inverted \(F\) test. We obtain a confidence interval for \(\delta_{i}^{2}\) as follows:
* Calculate \(1-p=F_{1,m_{i}}(\tilde{n}_{i}d_{i}^{2};0)\).
* If \(1-p<\alpha/2\), \(\lambda_{upper}^{2}=0\). Otherwise, solve for \(\lambda_{upper}^{2}\) in \(F_{1,m_{i}}(\tilde{n}_{i}d_{i}^{2};\lambda_{upper}^{2})=\alpha/2\).
* If \(1-p<1-\alpha/2\), \(\lambda^{2}_{lower}=0\). Otherwise, solve for \(\lambda^{2}_{lower}\) in \(F_{1,m_{i}}(\tilde{n}_{i}d_{i}^{2};\lambda^{2}_{lower})=1-\alpha/2\).
* The confidence interval for \(\delta^{2}_{i}\) is \(\tilde{n}_{i}^{-1}(\hat{\lambda}^{2}_{lower},\ \hat{\lambda}^{2}_{upper})\), and taking the square root of these estimated confidence limits yields the confidence interval for \(|\delta|\).
The above equations for the confidence limits have a unique solution because \(F_{\nu_{1},\nu_{2}}(\cdot;\lambda^{2})\) is a decreasing function of \(\lambda^{2}\). We call these confidence intervals, based on inverted \(F\) or \(\chi^{2}\) tests, \(F\)- or \(\chi^{2}\)-profile intervals.
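A minimal implementation of this inversion, using SciPy's noncentral \(F\) cdf and a root finder, could be organized as below; the starting bracket and its doubling rule for the noncentrality are ad hoc choices.

```python
# F-profile confidence interval for delta^2 by inverting the noncentral F test.
from scipy.stats import f as f_dist, ncf
from scipy.optimize import brentq

def f_profile_ci(d, n_t, n_c, alpha=0.05):
    """Return (lower, upper) confidence limits for delta^2 based on Cohen's d."""
    n = n_t + n_c
    m = n - 2
    n_tilde = n_t * n_c / n
    x = n_tilde * d**2                      # observed noncentral-F statistic
    one_minus_p = f_dist.cdf(x, 1, m)       # cdf at noncentrality zero

    def solve(target):
        # F_{1,m}(x; nc) decreases in nc, so the root is unique when it exists.
        g = lambda nc: ncf.cdf(x, 1, m, nc) - target
        lo, hi = 1e-8, 1.0
        if g(lo) <= 0:                      # root is essentially at zero
            return 0.0
        while g(hi) > 0:                    # expand until the root is bracketed
            hi *= 2.0
        return brentq(g, lo, hi)

    nc_upper = 0.0 if one_minus_p < alpha / 2 else solve(alpha / 2)
    nc_lower = 0.0 if one_minus_p < 1 - alpha / 2 else solve(1 - alpha / 2)
    return nc_lower / n_tilde, nc_upper / n_tilde

print(f_profile_ci(d=0.6, n_t=20, n_c=25))
```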
## 6 Meta-analysis of squared SMD
We assume that the \(K\) studies, with sample sizes \((n_{iC},n_{iT})\) in the Control and Treatment arms, respectively, resulted in magnitude effects \(d_{i}^{2}\) or \(\widehat{\delta_{i}^{2}},\ i=1,\ldots,K\). We formulate common-effect and random effects models (REM) for magnitude effects in sections 6.1 and 6.2, respectively. Inference for \(\delta^{2}\) under REM is discussed in sections 6.3 and 6.4.
### Common-effect model for \(\delta^{2}\)
We formulate the common-effect model (also known as the fixed-effect model) for the magnitude effect as
\[\tilde{n}_{i}d_{i}^{2}\sim F_{1,m_{i}}(\tilde{n}_{i}\delta^{2}),\ \ i=1,\ldots,K. \tag{6.1}\]
The objective is to estimate the magnitude \(\delta^{2}\).
From Equation (5.4), any weighted average of the \(\widehat{\delta_{i}^{2}}\) is an unbiased estimate of \(\delta^{2}\). The simplest choice uses weights proportional to \(\tilde{n}_{i}\). Then
\[\widehat{\delta}^{2}=(\Sigma\tilde{n}_{i})^{-1}\sum_{1}^{K}\tilde{n}_{i} \widehat{\delta_{i}^{2}}=(\Sigma\tilde{n}_{i})^{-1}\left[\sum_{1}^{K}\frac{m_ {i}-2}{m_{i}}\tilde{n}_{i}d_{i}^{2}-K\right] \tag{6.2}\]
is distributed as a shifted and scaled sum of \(F_{1,m_{i}}(\tilde{n}_{i}\delta^{2})\)-distributed r.v.'s. Also, the simpler statistic
\[d^{2}=(\Sigma\tilde{n}_{i})^{-1}\Sigma\tilde{n}_{i}d_{i}^{2}\sim(\Sigma\tilde {n}_{i})^{-1}\left[\sum_{1}^{K}F_{1,m_{i}}(\tilde{n}_{i}\delta^{2})\right]. \tag{6.3}\]
This distribution appears rather complicated, and we are not aware of any publications or implementations of it. When \(m_{i}\rightarrow\infty\), it converges to a scaled (by \((\sum\tilde{n}_{i})^{-1}\)) sum of
\(\chi^{2}_{1}(\tilde{n}_{i}\delta^{2})\) distributions, which is just a scaled noncentral \(\chi^{2}_{K}(\delta^{2}\Sigma\tilde{n}_{i})\) distribution [Johnson et al., 1995, (29.5)]:
\[d^{2}=(\Sigma\tilde{n}_{i})^{-1}\sum\tilde{n}_{i}d^{2}_{i}\underset{\{m_{i}\} \rightarrow\infty}{\sim}(\Sigma\tilde{n}_{i})^{-1}\chi^{2}_{K}(\delta^{2} \Sigma\tilde{n}_{i}). \tag{6.4}\]
The statistic \((\sum\tilde{n}_{i})d^{2}\) can be used to test for \(\delta^{2}=0\) using the percentage points of the central \(\chi^{2}_{K}\) distribution, in the case of large sample sizes, or of the central version of Equation (6.3) directly by using the parametric bootstrap. An algorithm similar to that at the end of Section 5 can be used to obtain an approximate \((1-\alpha)\)-level \(\chi^{2}\)-profile confidence interval for \(\delta^{2}\).
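A minimal sketch of this large-sample test and the \(\chi^{2}\)-profile interval, based on Equation (6.4) and therefore ignoring the finite-\(m_{i}\) form in Equation (6.3), might look as follows (Python; illustrative only, not the simulation code used in Section 7).

```
import numpy as np
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def common_effect_inference(d2, n_tilde, alpha=0.05):
    """Large-sample test of delta^2 = 0 and chi-square-profile CI from Equation (6.4)."""
    d2, n_tilde = np.asarray(d2, float), np.asarray(n_tilde, float)
    K, N = len(d2), n_tilde.sum()
    stat = float(np.sum(n_tilde * d2))       # (sum n_tilde) * d^2, approx. chi^2_K(delta^2 * N)
    p_value = chi2.sf(stat, K)               # test of delta^2 = 0
    one_minus_p = chi2.cdf(stat, K)

    def solve(target):
        g = lambda nc: ncx2.cdf(stat, K, nc) - target
        hi = max(10.0, 10 * stat)
        while g(hi) > 0:
            hi *= 2
        return brentq(g, 1e-12, hi)

    upper = solve(alpha / 2) / N if one_minus_p >= alpha / 2 else 0.0
    lower = solve(1 - alpha / 2) / N if one_minus_p >= 1 - alpha / 2 else 0.0
    return {"d2": stat / N, "p_value": p_value, "ci_delta2": (lower, upper)}
```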
### Random-effects model for \(\delta^{2}\)
We formulate the random-effects model for the magnitude effect as
\[\tilde{n}_{i}d^{2}_{i}\sim F_{1,m_{i}}(\tilde{n}_{i}\delta^{2}_{i}),\ \ \delta_{i}\sim N(\delta,\tau^{2}),\ \ i=1,\ldots,K. \tag{6.5}\]
The model for the \(\delta_{i}\) is the standard random-effects model, with parameters \(\delta\) and \(\tau^{2}\). The objective, however, is to estimate \(\delta^{2}\) instead of \(\delta\). From \(\delta_{i}/\tau\sim N(\delta/\tau,1)\) we obtain \(\delta^{2}_{i}\sim\tau^{2}\chi^{2}_{1}(\delta^{2}/\tau^{2})\).
The distribution of \(\tilde{n}_{i}d^{2}_{i}\) in Equation (6.5) is conditional on \(\delta^{2}_{i}\). Taking into account the distribution of \(\delta_{i}\), \(\tilde{n}_{i}d^{2}_{i}\) has a noncentral \(F\)-distribution mixed over its noncentrality parameter. By definition, the doubly-noncentral \(F\)-distribution \(F(p,q,\lambda_{1},\lambda_{2})\) is the distribution of the ratio of two independent noncentral chi-square random variables: \(F(p,q,\lambda_{1},\lambda_{2})=qX_{1}/pX_{2}\), where \(X_{1}\sim\chi^{2}_{p}(\lambda_{1})\) and \(X_{2}\sim\chi^{2}_{q}(\lambda_{2})\). Corollary 2 of Jones and Marchand [2021] states that if \(F|(Y_{1}=y_{1},Y_{2}=y_{2})\sim F(p,q,h_{1}y_{1},h_{2}y_{2})\) and \(Y_{1}\sim\chi^{2}_{p}(\lambda_{1})\) and \(Y_{2}\sim\chi^{2}_{q}(\lambda_{2})\) independently, then \((1+h_{2})F/(1+h_{1})\sim F(p,q,\frac{h_{1}\lambda_{1}}{1+h_{1}},\frac{h_{2} \lambda_{2}}{1+h_{2}})\).
For \(\tau^{2}>0\), we take \(h_{2}=0\), \(p=1\), \(q=m_{i}\), \(h_{1}=\tilde{n}_{i}\tau^{2}\), and \(y_{1}=\delta^{2}_{i}/\tau^{2}\), and write \(\delta^{2}_{i}/\tau^{2}\sim\chi^{2}_{1}(\delta^{2}/\tau^{2})\) (so that \(\lambda_{1}=\delta^{2}/\tau^{2}\)) to obtain
\[\tilde{n}_{i}d^{2}_{i}\sim(1+\tilde{n}_{i}\tau^{2})F_{1,m_{i}}\left(\frac{ \tilde{n}_{i}\delta^{2}}{1+\tilde{n}_{i}\tau^{2}}\right),\ \ i=1,\ldots,K. \tag{6.6}\]
When \(\tau^{2}=0\), Equation (6.6) is still valid and reduces to Equation (6.1); that is, the random-effects model becomes the common-effect model. Under the REM,
\[\text{E}(\tilde{n}_{i}d^{2}_{i})=\frac{m_{i}}{m_{i}-2}(1+\tilde{n}_{i}\tau^{2}+\tilde{n}_{i}\delta^{2})\ \text{and}\ \text{E}(\widehat{\delta}^{2}_{i})=\tau^{2}+\delta^{2}.\]
Therefore, \(\widehat{\delta}^{2}\) given by Equation (6.2) or any other weighted mean of the \(\widehat{\delta}^{2}_{i}\) with constant weights would provide an unbiased estimate of \(\tau^{2}+\delta^{2}\).
### Inference for \(\delta^{2}\) from signed values of SMD
When the initial meta-analysis used the \(\hat{\delta}_{i}\) and estimated \(\tau^{2}\) by \(\hat{\tau}^{2}\), we can obtain a point estimate of the magnitude effect \(\delta^{2}\) as \(\widehat{\hat{\delta}^{2}}=\widehat{\delta^{2}}-\hat{\tau}^{2}\) or its truncated-at-zero version.
It is convenient to consider using a level \((1-\alpha)\) confidence interval for \(\delta\), \((L,U)\), as the basis for a level \((1-\alpha)\) confidence interval for \(\delta^{2}\).
By \(I_{1-\alpha}(\delta)=(L,U)\) we denote a level-\((1-\alpha)\) CI for \(\delta\). To allow unequal division of \(\alpha\) between the two tails, we let \(\beta<\alpha\) be the part in the upper tail. Here \(L=\hat{\delta}-c_{1-\beta}v(\hat{\delta})\) and \(U=\hat{\delta}-c_{\alpha-\beta}v(\hat{\delta})\), where \(v(\hat{\delta})\) is the estimated standard deviation of \(\hat{\delta}\), and \(c_{\gamma}\) is the critical value at tail area \(\gamma\) from an appropriate symmetric distribution \(G\), such as normal or \(t\).
When both confidence limits are on the same side of zero, say \(0<L<U\) (i.e., when \(\hat{\delta}/v(\hat{\delta})>c_{1-\beta}\)), the naive CI \((L^{2},U^{2})\) provides a CI for \(\delta^{2}\) with level \((1-\gamma)\geq(1-\alpha)\) for some \(0<\gamma<\alpha\) because \((L^{2},U^{2})\) also includes the values of \(\delta\) in \(-U<\delta<-L\). This extra coverage probability is
\[\begin{array}{rl}P(-U<\delta<-L)&=P(-\hat{\delta}+c_{\alpha-\beta}v(\hat{ \delta})<\delta<-\hat{\delta}+c_{1-\beta}v(\hat{\delta}))\\ &=P(c_{\alpha-\beta}<(\hat{\delta}-\delta+2\delta)/v(\hat{\delta})<c_{1-\beta} )\\ &=G(c_{1-\beta}-2\delta/v(\hat{\delta}))-G(c_{\alpha-\beta}-2\delta/v(\hat{ \delta})).\end{array} \tag{6.7}\]
When \(\beta=\alpha/2=.025\), the probability \(P(-U<\delta<-L)\) decreases from \(.025\) when \(\delta/v(\hat{\delta})=c_{1-\beta}\) to \(4.43\)e-05 when \(\delta/v(\hat{\delta})=3c_{1-\beta}/2\) to \(2.052\)e-09 when \(\delta/v(\hat{\delta})=2c_{1-\beta}\). The case \(L<U<0\) (i.e., when \(\hat{\delta}/v(\hat{\delta})<c_{\alpha-\beta}\)) yields the same values. The extra coverage seems small enough not to require correction of the confidence level.
However, to obtain exactly level \((1-\gamma)\) coverage for \(\delta^{2}\) for an arbitrary \(\gamma\), take, for simplicity, \(\beta=\alpha/2\), substitute \(\hat{\delta}\) for \(\delta\) in Equation (6.7), and solve for \(\alpha\) in the equation \(\gamma=\alpha-\hat{P}(-U<\delta<-L)\).
Similarly, when \(L<0<U\) or, equivalently, when \(c_{\alpha-\beta}<\hat{\delta}/v(\hat{\delta})<c_{1-\beta}\), we can choose the naive confidence interval \(I_{1-\gamma}(\delta^{2})=[0,\max(L^{2},U^{2}))\) for \(\delta^{2}\). This interval provides a CI for \(\delta^{2}\) with level \((1-\gamma)\geq(1-\alpha)\). Suppose \(-L>U\). Then \(I_{1-\gamma}(\delta^{2})\) also includes values of \(\delta\) for which \(U<\delta<-L\), which were not included in the initial level-\((1-\alpha)\) CI for \(\delta\). The extra coverage probability is
\[\begin{array}{rl}P(U<\delta<-L)&=P(\hat{\delta}-c_{\alpha-\beta}v(\hat{\delta})<\delta<-\hat{\delta}+c_{1-\beta}v(\hat{\delta}))\\ &=P((\hat{\delta}-\delta)/v(\hat{\delta})<\min(c_{\alpha-\beta},c_{1-\beta}-2\delta/v(\hat{\delta})))\\ &=\min(G(c_{\alpha-\beta}),G(c_{1-\beta}-2\delta/v(\hat{\delta}))).\end{array} \tag{6.8}\]
When \(\beta=\alpha/2=.025\), the probability \(P(U<\delta<-L)\) decreases from \(.025\) when \(\delta/v(\hat{\delta})<c_{1-\alpha/2}\) to \(1.84\)e-\(04\) when \(\delta/v(\hat{\delta})=1.5c_{1-\beta}/2\) to \(1.242\)e-\(08\) when \(\delta/v(\hat{\delta})=2c_{1-\beta}\).
To obtain exactly \((1-\alpha)\)-level coverage when \(L<0<U\), we can choose a value of \(\beta\), \(0<\beta<\alpha\), so that \(-L=U\) and take the corrected interval \((0,L^{2})\) as a level \((1-\alpha)\) CI for \(\delta^{2}\). This is equivalent to finding \(\beta\) such that \(c_{1-\beta}+c_{\alpha-\beta}=2\hat{\delta}/v(\hat{\delta})\). This equation always has a solution: when \(\beta\rightarrow\alpha\), \(c_{1-\beta}+c_{\alpha-\beta}\rightarrow-\infty\), and when \(\beta\to 0\), \(c_{1-\beta}+c_{\alpha-\beta}\rightarrow\infty\).
Our simulations included the above correction to the naive confidence interval for \(L<0<U\).
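A minimal sketch of this correction, assuming normal critical values and that the original interval for \(\delta\) straddles zero (for the SSC_t interval one would use \(t_{K-1}\) quantiles instead), is given below; the function name and arguments are illustrative.

```
from scipy.stats import norm
from scipy.optimize import brentq

def corrected_interval_squared(delta_hat, sd_hat, alpha=0.05):
    """Corrected (0, L^2) interval for delta^2 when the CI for delta straddles zero."""
    t = delta_hat / sd_hat
    # choose beta so that c_{1-beta} + c_{alpha-beta} = 2*delta_hat/v(delta_hat), i.e. -L = U
    g = lambda beta: norm.ppf(1 - beta) + norm.ppf(alpha - beta) - 2 * t
    beta = brentq(g, 1e-10, alpha - 1e-10)
    L = delta_hat - norm.ppf(1 - beta) * sd_hat
    return 0.0, L ** 2
```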
### Conditional inference for \(\delta^{2}\) given \(\hat{\tau}^{2}\)
Section 6.3 suggests the point estimate \(\widehat{\widehat{\delta}^{2}}=\widehat{\delta}^{2}-\hat{\tau}^{2}\) for the magnitude effect (conditional on \(\hat{\tau}^{2}\)). Obtaining a confidence interval for \(\delta^{2}\) given \(\hat{\tau}^{2}\) is more complicated because \(\widehat{\delta}^{2}\) and \(\hat{\tau}^{2}\) are not independent. A simple way forward uses Equation (6.6) and the statistic
\[\Lambda(\tau^{2})=\sum\frac{\tilde{n}_{i}d_{i}^{2}}{1+\tilde{n}_{i}\tau^{2}} \sim\sum F_{1,m_{i}}\left(\frac{\tilde{n}_{i}\delta^{2}}{1+\tilde{n}_{i}\tau^ {2}}\right)\underset{\{m_{i}\}\rightarrow\infty}{\sim}\chi_{K}^{2}\left(\sum \frac{\tilde{n}_{i}\delta^{2}}{1+\tilde{n}_{i}\tau^{2}}\right). \tag{6.9}\]
A conditional (given \(\hat{\tau}^{2}\)) test for \(\delta^{2}=0\) would compare \(\Lambda(\hat{\tau}^{2})\) against a percentile from the \(\chi_{K}^{2}\) distribution, or a critical value obtained by bootstrapping the distribution of \(\sum F_{1,m_{i}}\). In the same vein, to obtain a conditional (given \(\hat{\tau}^{2}\)) \(\chi^{2}\)-profile confidence interval for \(\delta^{2}\), we can substitute \(\hat{\tau}^{2}\) for \(\tau^{2}\) in Equation (6.9) and solve for the confidence limits for \(\delta^{2}|\hat{\tau}^{2}\) at the \(.025\) and \(.975\) percentage points.
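For the test, a bare-bones version using the \(\chi^{2}_{K}\) approximation in Equation (6.9) (rather than bootstrapping the \(\sum F_{1,m_{i}}\) distribution) could look as follows; it takes the study-level \(d_{i}^{2}\), \(\tilde{n}_{i}\) and a previously obtained \(\hat{\tau}^{2}\).

```
import numpy as np
from scipy.stats import chi2

def conditional_test_delta2(d2, n_tilde, tau2_hat):
    """Conditional test of delta^2 = 0 given tau^2_hat, using the chi^2_K limit in Equation (6.9)."""
    d2, n_tilde = np.asarray(d2, float), np.asarray(n_tilde, float)
    lam = float(np.sum(n_tilde * d2 / (1 + n_tilde * tau2_hat)))   # Lambda(tau^2_hat)
    return lam, chi2.sf(lam, len(d2))                              # statistic and approximate p-value
```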
## 7 Simulation study
### Simulation design
A number of other studies have used simulation to examine estimators of \(\tau^{2}\) or of the overall effect for SMD. Our simulation design largely follows that of Bakbergenuly et al. (2020), which includes a detailed summary of previous simulation studies and gives our rationale for choosing the ranges of values for \(\mu\), \(\delta\), and \(\tau^{2}\) that we consider realistic for a range of applications.
All simulations used the same numbers of studies (\(K=5,\ 10,\ 20,\ 30,\ 50,\ 100\)) and, for each combination of parameters, the same vector of total sample sizes (\(n_{1},\ldots,n_{K}\)) and the same proportion of observations in the Control arm (\(f_{i}=.5\) for all \(i\)). Thus, the
sample sizes in the Treatment and Control arms were approximately equal: \(n_{iT}=\lceil n_{i}/2\rceil\) and \(n_{iC}=n_{i}-n_{iT}\), \(i=1,\ldots,K\).
We studied equal and unequal study sizes. For equal-sized studies, the sample sizes were \(n_{i}=40,\;100,\;250,\;500\). In choosing unequal study sizes, we followed a suggestion of Sanchez-Meca and Marin-Martinez [2000], who selected sets of study sizes having skewness \(1.464\), which they considered typical in behavioral and health sciences. Table 1 gives the details.
We used a total of \(10,000\) repetitions for each combination of parameters. Thus, the simulation standard error for estimated coverage of \(\tau^{2}\), \(\delta\) or \(\delta^{2}\) at the \(95\%\) confidence level is roughly \(\sqrt{.95\times.05/10,000}=.00218\).
The simulations were programmed in R version 4.0.2.
We varied four parameters: the overall true SMD (\(\delta\)), the between-studies variance (\(\tau^{2}\)), the number of studies (\(K\)), and the studies' total sample size (\(n\) and \(\bar{n}\)). Table 1 lists the values of each parameter.
We generated the true effect sizes \(\delta_{i}\) from a normal distribution: \(\delta_{i}\sim N(\delta,\tau^{2})\). We generated the values of \(d_{i}\) directly from the appropriately scaled noncentral \(t\)-distribution, \(\tilde{n}_{i}^{1/2}d_{i}\sim t_{m_{i}}(\tilde{n}_{i}^{1/2}\delta_{i})\), and obtained the values of Hedges's \(g_{i}\) and \(d_{i}^{2}\) for further meta-analysis of SMD and of ASMD, respectively.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
 Squared SMD & Equal study sizes & Unequal study sizes \\ \hline
 \(K\) (number of studies) & 5, 10, 20, 30, 50, 100 & 5, 10, 30 \\
 \(n\) or \(\bar{n}\) (average (individual) study size — total of the two arms) & 40, 100, 250, 500 & 60 (24, 32, 36, 40, 168), 100 (64, 72, 76, 80, 208), 160 (124, 132, 136, 140, 268) \\
 \(f\) (proportion of observations in the Control arm) & 1/2 & 1/2 \\
 \(\delta\) (true value of the SMD) & 0, 0.2, 0.5, 1, 2 & 0, 0.2, 0.5, 1, 2 \\
 \(\tau^{2}\) (variance of random effects) & 0(0.1)1 & 0(0.1)1 \\ \hline
\end{tabular}
\end{table}
Table 1: _Data patterns in the simulations for squared SMD. For \(K=10\) and \(K=30\), the same set of unequal study sizes was used twice or six times, respectively._
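For reference, the data-generating step described above can be sketched as follows (in Python; the original simulations were programmed in R, and the arm sizes shown are hypothetical). Hedges's correction is applied in its usual approximate form \(J(m_{i})=1-3/(4m_{i}-1)\).

```
import numpy as np
from scipy.stats import nct

rng = np.random.default_rng(1)

def generate_study_effects(n_C, n_T, delta, tau2):
    """Simulate d_i, Hedges's g_i and d_i^2 for K studies under the REM of Equation (6.5)."""
    n_C, n_T = np.asarray(n_C), np.asarray(n_T)
    m = n_C + n_T - 2
    n_tilde = n_C * n_T / (n_C + n_T)
    delta_i = rng.normal(delta, np.sqrt(tau2), size=len(m))        # true study effects
    d = nct.rvs(m, np.sqrt(n_tilde) * delta_i, size=len(m),
                random_state=rng) / np.sqrt(n_tilde)               # n^{1/2} d ~ t_m(n^{1/2} delta_i)
    g = (1 - 3 / (4 * m - 1)) * d                                  # approximate Hedges's correction
    return d, g, d**2

d, g, d2 = generate_study_effects([20] * 5, [20] * 5, delta=0.5, tau2=0.1)
```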
### Tests and estimators studied
Under the random-effects model for SMD, we used the generated values of Hedges's \(g_{i}\) to calculate the three estimators of \(\tau^{2}\) (MP, KDB, and SSC) that Bakbergenuly et al. (2020) and Bakbergenuly et al. (2022) recommended as the best available. Briefly, the Mandel and Paule (1970) (MP) estimator \(\hat{\tau}_{MP}^{2}\) is based on the first moment of the large-sample chi-square distribution \(\chi_{K-1}^{2}\) of Cochran's \(Q\). Kulinskaya et al. (2011) derived \(O(1/n)\) corrections to moments of \(Q\). The KDB estimator \(\hat{\tau}_{KDB}^{2}\) is a moment-based estimator based on this improved approximation. A generalised \(Q\) statistic discussed in DerSimonian and Kacker (2007) and further studied for SMD by Bakbergenuly et al. (2020) and Bakbergenuly et al. (2022) allows the weights \(w_{i}\) to be arbitrary positive constants. The SSC estimator \(\hat{\tau}_{SSC}^{2}\) is a moment-based estimator with effective sample size weights \(\tilde{n}\).
As a baseline, we recorded the bias of these three estimators and the bias of the three point estimators of \(\delta\) that used the MP, KDB, or SSC estimate of \(\tau^{2}\) in the weights.
Point estimators for \(\delta\) are weighted averages of the estimated SMDs \(g_{i}\). The estimators corresponding to MP and KDB (\(\hat{\delta}_{MP}\) and \(\hat{\delta}_{KDB}\)) use inverse-variance weights obtained by substituting the MP or KDB estimate of \(\tau^{2}\) into the expression for the inverse-variance weights, \(w_{i}(\tau^{2})=(v_{i}^{2}+\tau^{2})^{-1}\). The SSC point estimator of \(\delta\) uses effective sample size weights \(\tilde{n}\).
Under the common-effect model for ASMD, we studied bias of \(d^{2}\), empirical levels and power of a chi-square test for \(\delta^{2}=0\) based on \((\sum\tilde{n}_{i})d^{2}\) (Equation (6.3)), and coverage of the chi-square profile confidence interval for \(\delta^{2}\) at the 95% nominal level.
In random-effects meta-analysis of ASMD, we studied the bias of three point estimators of \(\delta^{2}\) (\(\widehat{\widehat{\delta}_{MP}^{2}}\), \(\widehat{\widehat{\delta}_{KDB}^{2}}\), and \(\widehat{\widehat{\delta}_{SSC}^{2}}\)) calculated as \(\widehat{\widehat{\delta}_{\tau^{2}}^{2}}=\widehat{\delta}^{2}-\hat{\tau}^{2}\), where \(\widehat{\delta}^{2}\) is given by Equation (6.2) and \(\hat{\tau}^{2}\) is given by the corresponding estimator of \(\tau^{2}\), and of their truncated-at-zero versions, calculated as \(\widehat{\widehat{\delta}_{tr}^{2}}=\max(\widehat{\widehat{\delta}^{2}},0)\).
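A sketch of this point estimator is given below (Python, illustrative); \(\hat{\tau}^{2}\) is whichever of the MP, KDB or SSC estimates was obtained in the signed analysis, and its computation is not shown.

```
import numpy as np

def magnitude_estimate(d2, n_tilde, m, tau2_hat, truncate=True):
    """Point estimate of delta^2: Equation (6.2) minus the estimated between-study variance."""
    d2, n_tilde, m = (np.asarray(a, float) for a in (d2, n_tilde, m))
    delta2_bar = (np.sum((m - 2) / m * n_tilde * d2) - len(d2)) / np.sum(n_tilde)  # Eq. (6.2)
    est = delta2_bar - tau2_hat
    return max(est, 0.0) if truncate else est
```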
We studied coverage of the 95% confidence intervals for \(\delta^{2}\) based on the confidence intervals for the signed \(\delta\) values, described in Section 6.3. We considered both naive and corrected versions of these CIs. We used percentage points from the normal distribution for the MP, KDB, and SSC-based intervals and \(t_{K-1}\) percentage points for a second SSC-based interval, denoted by SSC_t.
Interval estimators for \(\delta\) corresponding to MP, KDB and SSC use the respective point estimator \(\hat{\delta}\) as the midpoint, and the half-width equals the estimated standard deviation of \(\hat{\delta}\) under the random-effects model times the critical value from the normal or (for SSC_t)
from the \(t\) distribution on \(K-1\) degrees of freedom.
We also studied coverage of the three conditional 95% confidence intervals for \(\delta^{2}\), \(\Lambda_{MP}\), \(\Lambda_{KDB}\), and \(\Lambda_{SSC}\), based on the statistic \(\Lambda(\tau^{2})\) given by Equation (6.9) in combination with the estimates \(\hat{\tau}_{MP}^{2}\), \(\hat{\tau}_{KDB}^{2}\), and \(\hat{\tau}_{SSC}^{2}\).
Additionally, we studied empirical levels and power of the conditional tests of \(\delta^{2}=0\) based on the statistics \(\Lambda_{MP}\), \(\Lambda_{KDB}\), and \(\Lambda_{SSC}\) and the \(\sum F_{1,m_{i}}\) distribution or the \(\chi_{K}^{2}\) approximation to this distribution (Equation (6.9)). For comparison, we also studied empirical levels of the unconditional test based on \(\Lambda(\tau^{2})\) for known \(\tau^{2}\).
## 8 Simulation results
### Baseline estimation of \(\delta\) and \(\tau^{2}\)
In estimation of \(\delta\), the maximum average bias across all configurations was below 0.01, and the median bias was \(-0.002\) or less for all three estimators.
In estimation of \(\tau^{2}\), the maximum bias was higher, at 0.045 or less, but it decreased to 0.017 or less for \(n\geq 100\). The median bias was less than 0.0015. Bakbergenuly et al. (2020, 2022) give more details on the behavior of our chosen estimators.
### Bias of point estimators of \(\delta^{2}\), Appendix A
When \(\delta=0\), all three estimators had a small negative bias for \(\tau^{2}\leq 0.1\), but were almost unbiased for \(\tau^{2}\geq 0.2\). The truncated versions had positive bias, especially pronounced for \(K=5\), that increased with increasing \(\tau^{2}\). SSC was almost unbiased. For larger values of \(\delta\), bias varied more among the estimators when \(n=40\) and \(K=5\). However, for larger \(n\), the bias of all estimators was very small.
### Empirical levels of the conditional tests of \(\delta^{2}=0\), Appendix B
All three conditional tests of \(\delta^{2}=0\) at a 5% nominal level proved unfit for use. The levels were near zero when \(\tau^{2}=0\), but when \(K=5\), they increased to near nominal for \(\tau^{2}=0.2\) and increased to about 0.06 by \(\tau^{2}=1\). The tests based on the bootstrap \(F\) values behaved similarly, with somewhat lower levels. However, for \(K=10\), the levels increased
to about \(0.02\) and remained there, and for \(K=20\) they were near zero for all \(\tau^{2}\) values. In contrast, the unconditional test, which used known \(\tau^{2}\), produced consistent near-nominal levels. We believe that the disappointing behavior of the conditional tests arises from high correlation between the \(d_{i}^{2}\) and \(\hat{\tau}^{2}\) values. This correlation is well known for the folded normal distribution (Tsagris et al., 2014).
### Coverage of naive and corrected confidence intervals for \(\delta^{2}\) based on signed SMD values, Appendix C
Coverage did not depend much on sample sizes. Confidence intervals based on normal critical values generally had low coverage for \(K<30\), especially for small \(K\) and \(\delta=0.2\) or \(0.5\), but their coverage improved with \(K\). There was no visible difference among the MP, KDB, or SSC confidence intervals.
Naive SSC_t confidence intervals, based on \(t_{K-1}\) critical values, provided consistently good coverage for the vast majority of configurations. For \(\delta=0\) or \(\delta\geq 1\), their coverage was almost nominal for \(\tau^{2}\geq 0.2\). For \(0.5\geq\delta\geq 0.2\), coverage was above nominal when \(K\leq 10\), but for \(K\geq 20\) it decreased to nominal for \(\delta=0.5\). Even for \(K=100\), coverage was somewhat above nominal for large \(\tau^{2}\) values when \(\delta=0.2\).
For \(K\geq 50\), there was almost no difference in coverage between normal- and t-based intervals.
We also studied coverage of the corrected confidence intervals. Coverage of the corrected SSC*_t confidence intervals was above \(93.5\%\) for all configurations, but it was typically below nominal for \(\delta=0.2\) and \(0.5\), even for \(K=100\). Therefore, we do not recommend this correction.
### Coverage of conditional confidence intervals for \(\delta^{2}\), Appendix C
When \(\delta=0\), coverage of the conditional confidence intervals follows from the above results on the empirical levels of the respective conditional tests. There was not much difference among the MP, KDB, and SSC conditional confidence intervals, nor among sample sizes from \(n=40\) to \(n=1000\). For \(K=5\), coverage was near \(1\) when \(\tau^{2}=0\), and it slowly decreased to nominal for larger \(\tau^{2}\). For \(K=10\), coverage decreased from \(1\) to about \(98\%\), and for
\(K\geq 20\), coverage was near 1 for all \(\tau^{2}\) values. However, for larger values of \(\delta\), coverage was near nominal when \(\tau^{2}=0\) and then dropped dramatically for larger \(\tau^{2}\). This drop was more pronounced for \(K\leq 10\) and for larger \(\delta\). It was quite prominent when \(K=5\) and \(\delta=0.5\) but less so for \(K=30\) and \(\delta=0.5\), where it was above nominal, but the drop was present even when \(K=100\) and \(\delta=1\). Coverage then increased slowly with increasing \(\tau^{2}\), sometimes almost to nominal when \(\tau^{2}=1\). When \(\delta=2\), coverage was low for \(\tau^{2}=0\) and increased slowly with \(\tau^{2}\).
## 9 Discussion
Though common in ecology and evolutionary biology, meta-analysis of magnitude effects has received little statistical attention, and the methods used so far are not appropriate. We formulate a random-effects model for meta-analysis of ASMD and propose appropriate statistical methods for point and interval estimation in meta-analysis of ASMD.
Statistical properties of squared SMD are more straightforward than those of its absolute value. Therefore, our methodological development focuses mainly on inference for \(\delta^{2}\). However, for inference on \(|\delta|\), one only needs to take the square root of the estimated \(\delta^{2}\) and its confidence limits.
For point estimation of the squared ASMD, we corrected an estimate of \(\delta^{2}\) by subtracting the estimated between-study variance \(\hat{\tau}^{2}\) (from the signed SMD meta-analysis). Our simulations show that this works well when using a good estimator of \(\tau^{2}\) such as MP, KDB, or SSC.
For interval estimation, we considered three classes of statistical methods: naive and corrected intervals for \(\delta^{2}\) obtained from the signed SMD data and conditional methods based on the distribution of \(\delta^{2}\) given the estimated \(\tau^{2}\). We found that coverage of the conditional confidence intervals was rather erratic, and the corrected confidence intervals provided somewhat low coverage in the vicinity of zero. However, naive squaring of the SMD confidence limits, obtained with percentage points from the \(t_{K-1}\) distribution, provided reliable coverage across all configurations of the parameters in our simulations and can be recommended for use in practice.
## Acknowledgements
We are grateful to Prof Julia Koricheva who brought the meta-analysis of magnitude effects to our attention.
We would also like to thank Dr Michael Tsagris who kindly provided his simulation program for MLE estimation of parameters of the folded normal distribution used in Tsagris et al. (2014) and recommended the use of _Rfast_ R package for this purpose.
The work by E. Kulinskaya was supported by the Economic and Social Research Council [grant number ES/L011859/1].
|
2310.20590 | An Enhanced RRT based Algorithm for Dynamic Path Planning and Energy
Management of a Mobile Robot | Mobile robots often have limited battery life and need to recharge
periodically. This paper presents an RRT- based path-planning algorithm that
addresses battery power management. A path is generated continuously from the
robot's current position to its recharging station. The robot decides if a
recharge is needed based on the energy required to travel on that path and the
robot's current power. RRT* is used to generate the first path, and then
subsequent paths are made using information from previous trees. Finally, the
presented algorithm was compared with Extended Rate Random Tree (ERRT)
algorithm | Ronit Chitre, Arpita Sinha | 2023-10-31T16:26:42Z | http://arxiv.org/abs/2310.20590v1 | # An Enhanced RRT based Algorithm for Dynamic Path Planning and Energy Management of a Mobile Robot
###### Abstract
Mobile robots often have limited battery life and need to recharge periodically. This paper presents an RRT-based path-planning algorithm that addresses battery power management. A path is generated continuously from the robot's current position to its recharging station. The robot decides if a recharge is needed based on the energy required to travel on that path and the robot's current power. RRT* is used to generate the first path, and then subsequent paths are made using information from previous trees. Finally, the presented algorithm was compared with Extended Rate Random Tree (ERRT) algorithm [4].
RRT, path planning, robot energy management, autonomous systems
## I Introduction
This paper addresses a path planning problem with a battery power management system. Not all tasks are doable with one battery charge. We may need to recharge the battery several times before completing the task. An example can be a battery-powered autonomous robot harvesting a huge field. The robot must know when to travel to a charging point in such cases. A conservative approach will increase the total time to complete the job. The other extreme may lead to the robot running out of battery before it reaches the recharge point. In this paper, we develop an online algorithm that indicates the best time for the robot to return for recharging.
We use RRT* to plan the path from the current robot location to the recharge station to find the appropriate time to return to the base. However, the robot follows a path to complete its task and the return-to-base path is taken only when required. Since the return-to-base path planning needs to happen every instant, RRT* generates a new tree every time. We propose to use the trees built previously to reduce the computational time of RRT*. This concept is similar to the RRT algorithms applied to a dynamic environment. However, unlike the dynamic environment scenario where the robot moves on the RRT algorithm's path, here the robot follows a track to execute the task independently of the RRT path generated.
We assume the robot knows the energy required to execute the task and to travel back to the charging station. An application can be an onion harvester robot, as shown in Fig. 1. The robot must harvest along the rows of onions moving from one row to another. It also finds a path to the closest charging station at each instant. Since the robot knows its current battery level and can estimate the energy required for harvesting and traveling back to the charging station, it can decide when to return.
We survey the literature in the next subsection, followed by the problem formulation in Section II. An overview of RRT and RRT* is presented in Section III. The proposed algorithm is explained in Section IV, and energy management is addressed in Section IV-A. Simulation results are presented in Section V followed by the concluding remarks in Section VI.
### _Literature Survey_
RRT and RRT* algorithms are studied extensively in the literature. Several extensions are presented. These algorithms are sampling-based methods for robotic path planning which are probabilistically complete. RRT generates a path while RRT* gives an optimal path (in the limit number of nodes tend to infinity) based on some cost function. Some of the advantages of RRT and its variants include their applicability in complicated workspace space or configuration space, its capability to include robot motion constraints, and so on. An extensive survey on the extensions of RRT is available in [13] and the references therein. We present the papers relevant to the work presented in this paper.
In [1], the RRT* algorithm was used for replanning in a dynamic environment with random, unpredictable moving obstacles. When an obstacle moves to a node location that was included in the path, the path is replanned around the obstacle using the node that is immediately after the node closest to the obstacle. Authors in [2] and [9] also used a similar replanning approach to planning a path around obstacles. Additionally, the work in [2] also limited the number of nodes by removing
childless nodes when the number of nodes exceeded a limit which will reduce its complexity.

Fig. 1: Schematic of an onion field with onion harvester robot
Paper [3] proposed the Dynamic Rapidly-exploring Random Tree (DRRT). It, too, discards nodes affected by an obstacle and reforms the tree. The Extended Rate Random Tree algorithm (ERRT) mentioned in [4] proposed using waypoints, which are nodes from the tree generated in past iterations. When building the tree, either a random node, a node from the waypoint array, or the goal node is selected, based on a certain probability distribution. Some further improvements to this were made in [10], which used the waypoint cache method and the BG-RRT algorithm.
[5] developed a combination of RRT and A\({}^{*}\) in which multiple random nodes and the corresponding nearest nodes are selected, but only the node that minimizes a certain premade cost function is further examined. It also experimented with using other types of norms apart from the Euclidean norm while finding distance metrics. [12] combined the artificial potential field method and RRT by using two trees that advance towards each other with respect to an attractive or repulsive potential.
If RRT\({}^{*}\) takes too long to converge to a path, some other methods can be used to remove redundant turns and optimize the path, like the ant colony optimization algorithm that was used in [6]. [11] developed a new approach to RRT\({}^{*}\), which involves first generating a path from an initial point to a goal and then optimizing it by interconnecting directly visible points and doing intelligent sampling. The robot used in this study can not take sharp turns instantaneously and has limits on maximum and minimum turning angles. These nonholonomic constraints need to be accounted for in the path planning algorithm. Some ways to incorporate these constraints have been discussed in [7] and [8].
_Contributions -_ The problems addressed in the literature consider a dynamic environment, and different methods of replanning using RRT have been presented. We consider the case where the robot is following a pre-defined path, and RRT* is used to plan a path to the base for re-charging. Therefore, the starting point of the RRT* changes at every instant. We propose a fast algorithm called "dynamic path RRT" that can find the cost to reach the base quickly enough that the robot's performance of its assigned task is not hampered. Hence, our problem setting is different from what exists in the literature. We also compared the performance of our proposed algorithm with some of the algorithms in the literature.
## II Problem Formulation
We consider an autonomous robot executing some task in a known environment. There are one or more charging stations. The robot follows a predefined path until it requires recharging its batteries. The robot plans an online path to the charging station using RRT*. We assume the power needed to move at a constant speed is known, and the robot moves at a fixed desired speed. So, the robot knows the energy it will need to return to the charging point. The robot can also measure the energy remaining in the batteries.
To model the robot, we choose a classic car model as
\[\frac{d\mathbf{x}}{dt} =\mathbf{v} \tag{1}\] \[\frac{d\psi}{dt} =\omega=\frac{v}{L}\tan\delta \tag{2}\]
where \(\mathbf{x}\) represents position in the \(xy\) plane, \(\mathbf{v}\) represents the velocity in the \(xy\) plane, \(v=\|\mathbf{v}\|\) is its speed, \(\psi\) represents the heading and \(\omega\) represents the angular velocity of the robot, \(L\) is the length of the robot and \(\delta\) is its steering angle. We assume \(v\) is fixed, so the input to the robot is \(\omega\).
We use simple geometry to make the robot go from current to new positions. Consider Fig. 2. Here, \(\theta=2\alpha\) and \(R=\frac{d}{2\sin\alpha}\). The linear and angular velocities are related by
\[\omega=\frac{v}{R}=\frac{2v\sin\alpha}{d} \tag{3}\]
where both \(\alpha\) and \(d\) are measurable. Since there exist limits on the steering angle \(\delta\leq\delta_{\text{max}}\), \(\omega\) will be restricted to
\[\omega\leq\left|\frac{v}{L}\tan(\delta_{\text{max}})\right| \tag{4}\]
So, if (3) demands an \(\omega\) outside the above limits, the final point will not be reachable. We associate a cost with the path from the current state to the next state. The cost is equal to the energy required to travel the path. For simplicity, we assume that the energy spent is proportional to the distance covered; however, other formulations can also be used. Therefore, the cost (\(E\)) of the path is
\[E=kR\theta=2kR\alpha \tag{5}\]
where \(k\) is the gain relating distance to cost.
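A minimal sketch of this local steering and cost computation is given below (Python). The convention that \(\alpha\) is measured from the robot's heading to the chord joining the current position and the target, as well as the numerical parameter values, are assumptions for illustration rather than details taken from the implementation.

```
import numpy as np

def steer_cost(pose, target, v=1.0, L=0.5, delta_max=np.radians(40), k=1.0):
    """Arc from the current pose to a target point: turn rate (3), limit (4), energy cost (5)."""
    x, y, psi = pose
    dx, dy = target[0] - x, target[1] - y
    d = np.hypot(dx, dy)
    if d < 1e-9:
        return True, 0.0
    alpha = np.arctan2(dy, dx) - psi                      # heading-to-chord angle (assumed definition)
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))      # wrap to (-pi, pi]
    omega = 2.0 * v * np.sin(alpha) / d                   # Equation (3)
    omega_max = v / L * np.tan(delta_max)                 # Equation (4)
    reachable = abs(omega) <= omega_max
    if abs(np.sin(alpha)) < 1e-9:                         # straight segment: theta -> 0
        return reachable, k * d
    R = d / (2.0 * abs(np.sin(alpha)))
    return reachable, 2.0 * k * R * abs(alpha)            # Equation (5): E = k * R * theta
```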
## III Overview of RRT and RRT\({}^{*}\)
RRT is a well-known probabilistic algorithm used for path planning in robotics. RRT\({}^{*}\) is a modification of RRT that can generate an optimal path. We give an overview of these algorithms to relate to our proposed algorithm and for the completeness of the paper. A pseudocode for the RRT algorithm is given below. First, the initial and final nodes are initialized i.e. \(q_{i}\) and \(q_{g}\). A random node \(q_{\text{rand}}\) is selected from the workspace. Then by checking all the nodes in the tree, the node closest to the random node is found: \(q_{\text{nearest}}\). Also, the random node is moved towards the nearest node up to a certain step size. Then in the steer() function angular velocity to go from the nearest node to the random node is computed, and if it lies outside the bounds set by turning rate constraints, it is set to \(\text{sign}(\omega)\omega_{\text{max}}\). The coordinates of the new node can then be
fixed by moving from the nearest node with the constant \(\omega\) that is calculated. The minimum distance is the RRT step size and the maximum distance is the radius of the ball used for finding the neighborhood. The pseudocode for this is given in algorithm 1. If the path from the nearest node to the new node intersects an obstacle, then a different random node is picked. In this work we consider obstacles to be line segments; thus, it is easy to check if the path is intersecting the obstacle. Before adding the new node to the tree, its cost is computed by adding the cost of its parent and the energy required to go from the nearest node to the new node. The new node now becomes a child node of \(q_{\text{nearest}}\). If this new node happens to be the goal, then RRT is ended and a path is generated.

Fig. 2: Trajectory of the robot from one point to another
```
procedure RRT(\(q_{i}\), \(q_{g}\))
    \(T\leftarrow\) initialize_tree(\(q_{i}\), \(q_{g}\))
    while goal_not_reached do
        \(q_{\text{rand}}\leftarrow\) random_sample()
        \(q_{\text{nearest}}\leftarrow\) get_nearest_node(\(q_{\text{rand}}\))
        \(q_{\text{new}}\leftarrow\) steer(\(q_{\text{nearest}}\), \(q_{\text{rand}}\))
        if is_obstacle_free(\(q_{\text{nearest}}\), \(q_{\text{new}}\)) then
            cost(\(q_{\text{new}}\)) = cost(\(q_{\text{nearest}}\)) + energy(\(q_{\text{nearest}}\), \(q_{\text{new}}\))
            T \(\leftarrow\) insert_node(\(q_{\text{nearest}}\), \(q_{\text{new}}\))
        endif
        if goal_reached(T) then
            goal_not_reached = False
        endif
    endwhile
endprocedure
```
**Algorithm 1** RRT Algorithm
There is a very low probability of the goal node being selected while drawing random samples. Thus instead, the algorithm is stopped when a node becomes sufficiently close to the goal node. Another alternative [4] is to modify random_sample() such that a completely random node is drawn with a probability \(1-p_{\text{goal}}\) and the goal node is drawn with probability \(p_{\text{goal}}\)
\[P(q)=\begin{cases}1-p_{\text{goal}},&\text{if $q$ is a random node}\\ p_{\text{goal}},&\text{if $q$ is the goal node}\end{cases} \tag{6}\]
The RRT algorithm will converge on a path from the initial point to the goal. However, the path may or may not be the optimal path. RRT* modifies this code to find the optimal path with two changes. First, in the steer function, after the random node is brought within a certain step size of the nearest node, all the nodes lying in a neighborhood around the random node, of a size greater than the step size, are selected and put into an array. Thus the nearest node will always be in this neighborhood. Instead of taking the nearest node as a parent, the node with the least cost in that neighborhood is chosen as the parent. Secondly, the nodes are "rewired": that is, after a new node has been added, it is checked whether it can be connected with a neighboring node to reduce the cost of that node.
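A compact sketch of these two RRT\({}^{*}\) steps (least-cost parent selection in a neighborhood, followed by rewiring) is given below in Python. The node structure and the helpers cost_fn and collision_free are hypothetical placeholders; in this work, cost_fn would be the energy of Equation (5) and collision_free the line-segment obstacle check, and the kinematic steer step is omitted for brevity.

```
from dataclasses import dataclass
import numpy as np

@dataclass
class Node:
    pos: np.ndarray          # 2-D position
    cost: float = 0.0        # cost-to-come from the root
    parent: "Node" = None

def add_with_rewire(tree, q_new, radius, cost_fn, collision_free):
    """RRT*-style parent choice and rewiring in a neighborhood of the new node (sketch)."""
    near = [q for q in tree if np.linalg.norm(q.pos - q_new.pos) <= radius]
    parent = min(near, key=lambda q: q.cost + cost_fn(q, q_new))      # least-cost parent
    if not collision_free(parent, q_new):
        return tree
    q_new.parent, q_new.cost = parent, parent.cost + cost_fn(parent, q_new)
    tree.append(q_new)
    for q in near:                                                    # rewiring step
        c = q_new.cost + cost_fn(q_new, q)
        if c < q.cost and collision_free(q_new, q):
            q.parent, q.cost = q_new, c
    return tree
```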
## IV Dynamic Replanning
We assume that the map of the environment is available to the robot. For example, the onion-harvesting robot can map the field while harvesting since the path to the base will be only through the harvested region. A suboptimal path to the goal is generated as the robot moves, and the energy required to traverse this path is computed. This energy is compared with the current battery level of the robot. If sufficient energy is available, the robot continues to harvest. Otherwise, the plucking is stopped, and an optimal path is computed using RRT\({}^{*}\). The energy required for the new path is used to make the decision on returning to base.
We propose a new algorithm that relies on trees built in past iterations. Consider a tree already generated by the algorithm, and a path is found. We call this the old path and the tree being built in the current iteration is called new tree. Now the initial node is placed in the new position of the robot. As new nodes are added to the new tree, it checks if there are any nodes from the old path nearby. If so, the new tree is built further by replicating nodes from the old path.
```
procedure Dynamic_Path_RRT(\(q_{i}\), \(q_{g}\))
    \(T\leftarrow\) initialize_tree(\(q_{i}\), \(q_{g}\))
    \(F\leftarrow\) initialize_forest(\(T\))
    while goal_not_reached do
        \(q_{\text{rand}}\leftarrow\) random_sample()
        \(q_{\text{nearest}}\leftarrow\) get_nearest_node(\(q_{\text{rand}}\))
        \(q_{\text{new}}\leftarrow\) steer(\(q_{\text{nearest}}\), \(q_{\text{rand}}\))
        if is_obstacle_free(\(q_{\text{nearest}}\), \(q_{\text{new}}\)) then
            cost(\(q_{\text{new}}\)) = cost(\(q_{\text{nearest}}\)) + energy(\(q_{\text{nearest}}\), \(q_{\text{new}}\))
            T \(\leftarrow\) insert_node(\(q_{\text{nearest}}\), \(q_{\text{new}}\))
            p \(\leftarrow\) UniformRandom(0, 1)
            if p \(<\) p\({}_{\text{scan}}\) then
                old_nodes \(\leftarrow\) scan_forest(F, \(q_{\text{new}}\))
                check_connection(old_nodes, F, \(q_{\text{new}}\))
            endif
        endif
        if goal_reached(T) then
            goal_not_reached = False
        endif
    endwhile
endprocedure
```
**Algorithm 2** Dynamic Path RRT Algorithm
Here it is important to introduce a new data structure - forest. It is the collection of the paths generated in past iterations. Not all paths need to be remembered since this might lead to excessive memory requirements. We propose to discard the oldest path from the forest when a new path is added. The scan_forest function checks if any of the nodes from the older iterations are present in a neighbourhood around the new node.
The check connection function iterates through all the nodes gathered from scan forest function and finds the node that can be attached to the newly added node with minimum cost. It
then calls the path building function. The pseudocode for this is in algorithm 3.
```
procedure check_connection(old_nodes, F, \(q_{\text{new}}\))
    candidate_nodes[] \(\leftarrow\) initialize_array
    for \(q_{\text{old path}}\) in old_nodes do
        \(q_{\text{old path}}\leftarrow\) steer(\(q_{\text{new}}\), \(q_{\text{old path}}\))
        cost(\(q_{\text{old path}}\)) = cost(\(q_{\text{new}}\)) + energy(\(q_{\text{new}}\), \(q_{\text{old path}}\))
        candidate_nodes.add(\(q_{\text{old path}}\))
    endfor
    \(q_{\text{old path}}\) = min\({}_{\text{cost}}\) candidate_nodes
    path_building(\(q_{\text{new}}\), \(q_{\text{old path}}\))
endprocedure
```
**Algorithm 3** Check Connection and Build Path
The path_building function takes in \(q_{\text{new}}\) i.e. the latest node added to the new tree and \(q_{\text{old path}}\) that is the minimum cost node returned by 'check connection'. Path building attaches \(q_{\text{old path}}\) to the new tree and then finds its child node which is also part of the old path. It then attaches that child node to the new tree and this goes on in a recursive process. The pseudocode for this is given in algorithm 4.
```
procedure path_building(\(q_{\text{new tree}}\), \(q_{\text{old path}}\))
    new_tree \(\leftarrow\) insert_node(\(q_{\text{new tree}}\), \(q_{\text{old path}}\))
    for child_node in \(q_{\text{old path}}\).children do
        if child_node.part_of_old_path is True then
            new_child_node \(\leftarrow\) steer(\(q_{\text{old path}}\), child_node)
            path_building(\(q_{\text{old path}}\), new_child_node)
        endif
    endfor
endprocedure
```
**Algorithm 4** Path Building from Older Paths
### _Robot Energy Management_
Now that the dynamic path RRT algorithm is ready, the procedure for determining the robot's decision to go to the recharge point needs to be fixed.
```
procedure decide_return(x, F)
    latest_path \(\leftarrow\) dynamic_path_RRT(x, \(q_{g}\))
    if latest_path.cost \(>\) safety_factor \(\times\) x.power then
        latest_path \(\leftarrow\) RRT\({}^{*}\)(x, \(q_{g}\), x.power)
        if latest_path.cost \(>\) safety_factor \(\times\) x.power then
            execute(latest_path)
        endif
    endif
    x += robot_velocity
endprocedure
```
**Algorithm 5** Energy management algorithm
The pseudocode for this is given in algorithm 5. Here \(\mathbf{x}\) denotes the state of the robot i.e. its position, velocity, angle, and charge. Dynamic path RRT is first used to generate a suboptimal path. Then it is checked if the current battery energy is enough to execute that path with a pre-determined safety factor (\(<1\)). If yes, then the robot moves to its new position. If not it runs RRT* until a path with low enough cost is found or until a maximum node limit is reached. If RRT* was unsuccessful the robot returns to the charging station.
## V Simulation Results
We simulated the algorithm in Python on a computer with an Intel CORE i7 processor and Ubuntu 22.04 OS. Two different environments were considered. We compared the execution time of our algorithm with that of ERRT [4] and regular RRT. In the simulations, we assumed the forward speed of the robot to be 1 unit and a maximum steering angle of \(40^{\circ}\). The probability of scanning for old nodes was 0.7 and the probability of sampling the goal was 0.2. We used a step size of 0.05 units for RRT and a waypoint sampling probability of 0.7 for ERRT. Rewiring was done in the first iteration, when the initial tree was generated, but not later.
In the first scenario, the robot moves parallel to the \(y\)-coordinate. The snapshot of the path generated at the initial time and some intermediate time is shown in Figs. 3-4. The dynamic path RRT algorithm performed much better than ERRT and regular RRT. Please refer to Fig. 5. It is worth mentioning that in the time analysis of dynamic path RRT, the very first dot in the plot at \(y=0\) represents pure RRT*. That is why the time taken for the first iteration is much higher than all others. Planning a trajectory by using information from older iterations is far more efficient than using pure RRT in each case.
Figure 6 shows how the cost of each path varies as the position of the robot changes. This was done for different values of the scan-forest probability. It shows that, initially, the cost of RRT* is similar to the cost of the dynamic path planner. The costs of the paths generated by varying \(p_{\text{scan}}\) in the algorithm are also similar. However, as the robot moves farther away, the algorithm gives paths with higher costs. Thus, as the robot moves further away from its starting point, running an RRT* intermittently will help.

Fig. 3: First tree formed using pure RRT\({}^{*}\)
The energy management algorithm was also tested on a field with dimensions \(6\times 6\) and no obstacles. The robot goes from \((3,0)\) to \((3,3)\) and then takes a 90-degree turn and starts moving towards \((0,3)\). There are two recharging stations at \((1,5)\) and \((5,1)\) respectively. The snapshots at different instances are shown in Fig. 7.
## VI Conclusion
This paper presents an innovative approach that addresses the critical challenges of path planning and energy management in autonomous mobile robots. The proposed dynamic path RRT algorithm generates paths from the robot's current position to the recharge station efficiently by using information from old iterations. It is then checked whether the robot's current battery is sufficient to safely execute this path, and a decision is taken based on this condition. We compared the proposed algorithm with ERRT as well as regular RRT and found that the proposed strategy is significantly faster. However, the cost of the path is usually higher than that of RRT*. We also analyzed the proposed algorithm by varying the probability of building the connection with the older paths. It is observed that new paths are found faster as the probability is increased, but the path cost also increases. A user can decide on the probability based on the trade-off between time and cost.
|
2309.15448 | Bio-Inspired Strategies for Optimizing Radiation Therapy under
Uncertainties | Radiation therapy is a critical component of cancer treatment. However, the
delivery of radiation poses inherent challenges, particularly in minimizing
radiation exposure to healthy organs surrounding the tumor site. One
significant contributing factor to this challenge is the patient's respiration,
which introduces uncertainties in the precise targeting of radiation. Managing
these uncertainties during radiotherapy is essential to ensure effective tumor
treatment while minimizing the adverse effects on healthy tissues. This
research addresses the crucial objective of achieving a balanced dose
distribution during radiation therapy under conditions of respiration
uncertainty. To tackle this issue, we begin by developing a motion uncertainty
model employing probability density functions that characterize breathing
motion patterns. This model forms the foundation for our efforts to optimize
radiation dose delivery. Next, we employ three bio-inspired optimization
techniques: Cuckoo search optimization (CSO), flower pollination algorithm
(FPA), and bat search Optimization (BSO). Our research evaluates the dose
distribution in Gy on both the tumor and healthy organs by applying these
bio-inspired optimization methods to identify the most effective approach. This
research ultimately aids in refining the strategies used in radiation therapy
planning under the challenging conditions posed by respiration uncertainty.
Through the application of bio-inspired optimization techniques and a
comprehensive evaluation of dose distribution, we seek to improve the precision
and safety of radiation therapy, thereby advancing cancer treatment outcomes. | Keshav Kumar K., NVSL Narasimham | 2023-09-27T07:32:58Z | http://arxiv.org/abs/2309.15448v1 | # Bio-Inspired Strategies for Optimizing Radiation Therapy under Uncertainties
###### Abstract
Radiation therapy is a critical component of cancer treatment. However, the delivery of radiation poses inherent challenges, particularly in minimizing radiation exposure to healthy organs surrounding the tumor site. One significant contributing factor to this challenge is the patient's respiration, which introduces uncertainties in the precise targeting of radiation. Managing these uncertainties during radiotherapy is essential to ensure effective tumor treatment while minimizing the adverse effects on healthy tissues. This research addresses the crucial objective of achieving a balanced dose distribution during radiation therapy under conditions of respiration uncertainty. To tackle this issue, we begin by developing a motion uncertainty model employing probability density functions that characterize breathing motion patterns. This model forms the foundation for our efforts to optimize radiation dose delivery. Next, we employ three bio-inspired optimization techniques: Cuckoo search optimization (CSO), flower pollination algorithm (FPA), and bat search Optimization (BSO). Our research evaluates the dose distribution in Gy on both the tumor and healthy organs by applying these bio-inspired optimization methods to identify the most effective approach. This research ultimately aids in refining the strategies used in radiation therapy planning under the challenging conditions posed by respiration uncertainty. Through the application of bio-inspired optimization techniques and a comprehensive evaluation of dose distribution, we seek to improve the precision and safety of radiation therapy, thereby advancing cancer treatment outcomes.
Keywords: Radiotherapy, Respiration, Uncertainty, Cuckoo Search Optimization, Bat Search Optimization, Trade-off.
## 1 Introduction
Radiotherapy is a medical procedure that uses ionizing radiation sources such as protons, electrons, and high-energy particles to slow the growth of malignant growths [1]. It is of the highest priority to successfully align ionizing radiation beams with the 3-D shape of the tumor while protecting adjacent healthy tissue [2, 3]. Yet, this task becomes progressively more complex when addressing tumors located in the thorax and abdominal areas due to the inherent motion of the tumor during the treatment process [4]. This motion is primarily induced by quasi-periodic breathing patterns and is particularly significant for thorax tumors like those in the lungs and breast [5]. The constant motion of tumors during radiotherapy presents a significant problem because the tumor's exact position is not consistently known. Among the various sources of uncertainties, this review primarily focuses on the intrafractional respiratory motion, which results from the involuntary physiological process of respiration [6]. The organs located within the thoracic and
upper abdominal regions, including the liver, lungs, prostate, pancreas, esophagus, breast, and kidneys, undergo motion as a result of breathing [7]. This movement brings about substantial uncertainties in various aspects, including imaging, treatment planning, and the administration of radiotherapy for thoracic and abdominal conditions. Incorporating margins is a common practice to address uncertainties in tumor localization during radiotherapy. Nevertheless, the use of these margins amplifies the potential for radiotherapy-associated toxicity, as it extends the reach of radiation to normal tissues within the Planned Target Volume (PTV). Most of these side effects are attributed to uncertainties in tumor localization caused by breathing-induced motion and setup errors. Radiation oncologists must carefully balance the clinical benefits of treatment with the risks to the patient's long-term quality of life when determining radiation dosage. This trade-off, due to uncertainties in tumor localization, can also hinder the effectiveness of radiotherapy by preventing the delivery of the necessary dose escalation for effective treatment.
In this research, we take lung cancer treatment. Lung cancer continues to hold the unfortunate distinction of being the foremost cause of cancer-related fatalities, not only in the United States but also globally [8]. Its annual death toll nearly equals the combined mortality rates of prostate, breast, and colon cancer. A 2020 report focusing on lung cancer emphasizes that it is the most commonly diagnosed cancer and the primary contributor to cancer-related deaths in Canada. Globally, cancer rates are projected to double by 2050, with lung cancer being the most prominent [9]. Radiotherapy is employed in the treatment of more than half of all cancer patients [10]. Specifically, we are directing our attention to external beam radiotherapy, a method that utilizes a linear accelerator affixed to a revolving gantry to administer high-energy photon beams to the patient. Photon beams deposit energy as they traverse tissue, affecting both tumor cells and the healthy tissue in their path. To minimize damage to healthy cells, radiation is delivered from various angles, allowing each beam to deliver a small dose to healthy tissue while concentrating a high dose in the overlapping region centred on the tumor.
Our research aims to comprehend the impact of motion uncertainty on lung radiotherapy quality and establish a framework that generates solutions resistant to this uncertainty. We propose a bio-inspired optimization approach specifically tailored to address motion uncertainty and demonstrate its effectiveness. Our goal is to strike a balance between protecting healthy tissue and effectively treating the tumor, taking into account the presence of uncertainty.
## 2 Literature Survey
The literature survey on optimizing radiotherapy under uncertainties presents a rich tapestry of research aimed at improving the precision and effectiveness of this crucial medical treatment. In the research [11], the primary objective is evident: to address the complexities introduced by intrafraction motion by employing feedback control of the radiation dose administered. This innovative technique combines pre-treatment 4-D computed tomography (4DCT) imaging with intrafraction respiratory-motion surrogates to estimate the total given dosage and the predicted motion trajectory throughout treatment in real-time. The optimization of intensity-modulated radiotherapy (IMRT) plans under free-breathing conditions is a significant advancement. Notably, this study demonstrates that the proposed stochastic control approach not only reduces irradiated tissue volume compared to traditional internal target volume (ITV) treatment but also significantly cuts down treatment time without compromising dosimetric quality. It represents a promising avenue to enhance the efficiency of radiotherapy, particularly in scenarios where respiratory gating may be impractical or less efficient. In the study [12], the focus transitions to the domain of 4D multi-image-based (4DMIB) optimization, a field with the potential to bolster the resilience of scanned particle therapy in the presence of motion induced by respiration. The review underscores the pressing need for more comprehensive clinical evidence regarding the essentiality of 4DMIB optimization, particularly for conditions influenced by anatomical variations. Despite the wealth of research and technical insights in this domain, clinical investigations remain sparse, often constrained by methodological limitations such as limited patient cohorts and considerations related to motion dynamics. Nevertheless, the report acknowledges that robust 3D optimized plans appear to conform well to clinical tolerances, rendering them suitable for treating mobile targets using scanned particle therapy. The clinical urgency for the adoption of 4DMIB optimization, however, is noted to be contingent upon more substantial empirical demonstration.
In the study [13], the development of a risk-based robust approach is introduced, with a particular focus on addressing uncertainties related to tumor shrinkage during radiotherapy. The core objective of this suggested model is to reduce the variability of delivered doses, especially in worst-case scenarios, and minimize total radiation exposure to healthy tissues. The model leverages adaptive radiotherapy, a fractionation technique that considers the tumor's response to treatment over time and re-optimizes the treatment plan based on an estimate of tumor shrinkage. The clinical application of this approach is exemplified through a case study of lung cancer. The outcomes of this investigation highlight the potential benefits of the robust-adaptive model in terms of ensuring dose consistency within
the tumor target while minimizing the impact on organs at risk. Furthermore, the model demonstrates superior performance in terms of maintaining uniform tumor dose distribution and overall plan reliability, underscoring its potential as a valuable resource in clinical radiotherapy. The research [14] delves into the realm of robustness analysis as a means to provide a more consistent framework applicable across various treatment techniques and modalities. This framework aims to address the uncertainties inherent in treatment planning and delivery, offering a standardized approach for evaluating and reporting plans. By identifying critical elements and dosimetric effects of uncertainties, robustness analysis seeks to enhance the reliability of plan evaluation, particularly in multi-institutional clinical trials. This approach holds the promise of promoting more accurate and consistent reporting of treatment outcomes, ultimately benefiting patients through more reliable radiotherapy. The research [15] presents an innovative concept of motion uncertainty, utilizing a probability density function (PDF) to characterize motion caused by respiration. This concept is subsequently applied to construct a robust optimization framework for IMRT. Actual patient data is integrated into the analysis to assess the reliability of the generated solutions, using a clinical case of lung cancer as an illustrative example. The results are enlightening, showing that the robust solution effectively mitigates the under-dosing of the tumor compared to the nominal solution, particularly in worst-case scenarios. Furthermore, the robust approach showcases a significant decrease in the total dose administered to the primary organ at risk, specifically, the left lung. This observation underscores the capacity of this robust framework to enhance the optimization of radiotherapy by achieving an equilibrium between safeguarding healthy tissues and guaranteeing sufficient tumor dose delivery, a pivotal facet of radiotherapy planning.
In the paper [16], the emphasis lies on assessing the dosimetric effectiveness of robust optimization within the realm of helical IMRT for localized prostate cancer. The study involves a comparison of two distinct planning strategies: robust optimization and the conventional approach utilizing a planning target volume (PTV) margin. The evaluation considers various factors, including setup uncertainty and anatomical changes, both of which significantly impact treatment outcomes. The results suggest that robust plans exhibit potential benefits, including higher target coverage and lower organ-at-risk (OAR) doses, especially when perturbed scenarios are considered. However, the study also highlights the complexity of assessing robustness, particularly in the presence of anatomical changes. The article [17] introduces a ground-breaking concept of incorporating time-dependent uncertainty sets into robust optimization. This advancement tackles a prevalent issue in medical decision-making, particularly in situations where a patient's condition may evolve throughout the treatment process. In IMRT, for example, changes in cell oxygenation can directly impact the body's response to radiation treatment. The proposed framework offers a versatile approach to adapt to evolving uncertainties by modelling temporal changes within a cone structure, yielding current uncertainty sets at each treatment stage. The conic robust two-stage linear problems presented in this study cover a range of radiotherapy scenarios, and the clinical application of this approach is demonstrated in a prostate cancer case. The time-dependent robust approach is proven to improve tumor control over the course of treatment without introducing additional risks compared to established clinical methods. Furthermore, the research offers valuable insights into the timing of observations, maximizing the informational value for intermediate diagnostics. This innovative approach has implications not only in clinical settings but also in various applications, including maintenance scheduling.
## 3 Model Uncertainty
The objective of the motion PDF technique is to establish an accurate dose distribution by convolving it with an approximated PDF, thus addressing the problem of motion producing dose dispersion throughout radiotherapy [18]. However, this method requires prior knowledge of the expected motion pattern during treatment. If the actual motion pattern differs significantly from the assumed one, convolving a dose distribution optimized for a different PDF can result in an uneven dose distribution with under- and over-dosed regions. As a result, a strategy to reduce treatment-related PDF uncertainty is required. Our conceptual framework is built on a finite set \(X\) that represents the phases of the respiratory cycle. A motion PDF is a nonnegative real function \(f\colon X\to\mathbb{R}\) that satisfies \(\sum_{x\in X}f(x)=1\). We begin with a nominal PDF, designated as \(p\), obtained from data gathered during the planning phase. We postulate that the nominal PDF \(p\) may differ from the realized PDF \(\tilde{p}\) on a subset \(U\) of the domain \(X\), and that this deviation is likely to occur during treatment.
This deviation follows an inequality condition.
\[p(x)-\underline{p}(x)\leq\tilde{p}(x)\leq p(x)+\overline{p}(x)\qquad\forall\ x\in U \tag{1}\]

The set of admissible PDFs, i.e., those that satisfy these error bars on \(U\) and coincide with the nominal PDF elsewhere, defines the uncertainty set

\[P_{U}=\left\{\tilde{p}:X\to\mathbb{R}\ \middle|\ \tilde{p}(x)\geq 0,\ \ \sum_{x\in X}\tilde{p}(x)=1,\ \ p(x)-\underline{p}(x)\leq\tilde{p}(x)\leq p(x)+\overline{p}(x)\ \ \forall\ x\in U,\ \ \tilde{p}(x)=p(x)\ \ \forall\ x\in X\backslash U\right\} \tag{2}\]
It is worth emphasizing that incorporating the set \(U\) is somewhat redundant, since its impact can be reproduced by setting \(\overline{p}(x)=\underline{p}(x)=0\) for all \(x\in X\backslash U\). The upper and lower bounds \(\overline{p}\) and \(\underline{p}\), which define the range of uncertainty, will be referred to as "error bars".
The robustness of a treatment plan can be evaluated by checking that all of the constraints in our formulation are met regardless of which PDF from the set \(P_{U}\) is realized. During the optimization process, the set \(P_{U}\) comprises all PDFs that must be protected against. Simple linear "smoothness" requirements can be incorporated into the definition of \(P_{U}\) to alleviate concerns about conservative techniques allowing for implausible, highly oscillatory PDFs, e.g., \(|\tilde{p}(x)-\tilde{p}(y)|\leq\epsilon\) whenever \(|x-y|\leq\delta\), with suitable values for \(\epsilon\) and \(\delta\). The primary challenge here is to ensure that \(P_{U}\) encompasses a wide enough range of PDF variations to account for realistic patient-specific breathing patterns while preventing an excessive margin that would sacrifice critical patient information.
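To make this uncertainty model concrete, the short Python sketch below builds a nominal breathing-phase PDF with error bars on a subset \(U\) and tests whether a candidate realized PDF belongs to \(P_{U}\). The phase bins, nominal probabilities, and error-bar values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Breathing cycle discretized into |X| = 10 phase bins (illustrative values).
X = np.arange(10)
p_nominal = np.array([0.02, 0.06, 0.10, 0.16, 0.20, 0.18, 0.12, 0.08, 0.05, 0.03])

# Uncertainty is allowed only on the subset U (here, the mid-cycle bins).
U = np.array([3, 4, 5])
p_lower = np.zeros_like(p_nominal)   # lower error bars, underline{p}
p_upper = np.zeros_like(p_nominal)   # upper error bars, overline{p}
p_lower[U] = 0.05
p_upper[U] = 0.05

def in_uncertainty_set(p_real, tol=1e-9):
    """Check whether a candidate realized PDF lies in P_U (Eqs. 1-2)."""
    if np.any(p_real < -tol) or not np.isclose(p_real.sum(), 1.0):
        return False                                   # not a valid PDF
    if np.any(p_real < p_nominal - p_lower - tol):
        return False                                   # violates a lower error bar
    if np.any(p_real > p_nominal + p_upper + tol):
        return False                                   # violates an upper error bar
    outside = np.setdiff1d(X, U)                       # outside U the PDF must match p
    return np.allclose(p_real[outside], p_nominal[outside])

# Example: shift some probability mass from bin 4 to bin 3 (stays inside P_U).
p_candidate = p_nominal.copy()
p_candidate[3] += 0.04
p_candidate[4] -= 0.04
print(in_uncertainty_set(p_candidate))   # True
```

A robust plan is then one whose dose constraints hold for every PDF accepted by such a membership test, i.e., for the worst case over \(P_{U}\).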
## 4 Optimization
The optimization techniques employed in this research are detailed in this section, together with their pseudocode.
### Cuckoo Search Optimization
The cuckoo bird's brood-parasitic behaviour served as inspiration for the CSO technique, which was first presented in [19]. Cuckoos use this behaviour to ensure that their eggs are hatched by host birds [20]. Researchers studied this natural process and developed the CSO method for optimization. The terminology within the CSO algorithm is metaphorically associated with familiar concepts in general optimization [21]. When dealing with single-objective function challenges, a "nest" or an "egg" represents an individual solution. An individual (nest) may contain many solutions (i.e., several eggs) in the arena of multi-objective function challenges. However, the primary focus of this research is on challenges with a single objective function. The set of nests represents the entire population of possible solutions. The event of a host bird abandoning its nest upon discovering a cuckoo's egg corresponds to the removal of an unsatisfactory solution. Conversely, the act of a cuckoo laying a new egg(s) in one or more nests represents the introduction of fresh solution(s) to the population. The CSO technique generates improved solutions using the following formula:
\[x_{p}^{t+1}=x_{p}^{(t)}+\alpha\bigotimes\mathit{Levy}(\lambda) \tag{3}\]
Here, \(x_{p}^{t+1}\) represents a fresh solution for a cuckoo labeled as \(p\) acquired during a new iteration denoted as \(t+1\). This solution is derived from a prior solution, \(x_{p}^{(t)}\), obtained in the preceding iteration \(t\). To update these solutions, the Levy flight distribution algorithm called \(\mathit{Levy}(\lambda)\) is employed. Here, \(\lambda\) denotes the Levy walk parameter, while \(\alpha\) corresponds to the step size, which is determined by the scale of the particular issue being addressed. Furthermore, the symbol \(\bigotimes\) represents element-wise multiplication. The Levy flight function approach allows for a stochastic walk with random step lengths derived from a Levy distribution. The Mantegna algorithm is typically used to estimate this distribution in the following manner:
\[\mathit{Levy}(\lambda)\sim\frac{u}{|v|^{1/\lambda}} \tag{4}\]
Where:
\(u\ \sim\ N(0,\sigma_{u}^{2}\ )\)
\(v\ \sim\ N(0,\sigma_{v}^{2}\ )\)
\(\sigma_{u}^{2}=\left[\frac{\Gamma(1+\lambda)\sin\left(\frac{\pi\lambda}{2}\right)}{\Gamma\left(\frac{1+\lambda}{2}\right)\lambda\,2^{\frac{\lambda-1}{2}}}\right]^{2/\lambda},\qquad\sigma_{v}^{2}=1\)
Here, \(\Gamma\) denotes the Gamma function, and the parameter \(\lambda\) falls within the range \(1<\lambda\leq 3\). The pseudo-code for CSO is given below.
PSEUDOCODE FOR CSO
_Define an objective function \(f(X)\), where \(X\) represents the vector \(X=(x_{1},x_{2},...,x_{d})^{T}\)._
_Initiate the population of host nests as \(X_{i}(i=1,2,...,n)\)_
_While \(t<Max\_iterations\)_:
_Pick a cuckoo at random using Levy flights._
_Assess its quality or fitness denoted as_ \(F_{i}\)__
_Randomly pick one nest among the_ \(n\) _nests (example j)._
_If \(F_{i}>F_{j}\),_
_Replace j with the new solution._
_End if_
_Abandon a fraction_ (\(pa\)) _of the less-fit nests and construct new ones in their place._
_Keep the best solutions._
_Determine the best solution by ranking them._
_Continue until the maximum number of iterations_ (\(Max\_iterations\)_) _is reached._
_End While_
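As a concrete illustration of the steps above, the following Python sketch implements a minimal CSO loop with Mantegna's Levy-flight step. It is a schematic example rather than the implementation used in this study: the sphere function stands in for the treatment-plan objective, the parameter values are illustrative defaults, and the scaling of the Levy step by the distance to the best nest follows a common variant of the algorithm.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, lam=1.5):
    """Levy-distributed step via Mantegna's algorithm (Eq. 4)."""
    sigma_u = (gamma(1 + lam) * sin(pi * lam / 2) /
               (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def cuckoo_search(f, dim, n_nests=25, pa=0.25, alpha=0.01, max_iter=500, bounds=(-5.0, 5.0)):
    """Minimize f over [bounds]^dim with a basic cuckoo search loop."""
    lo, hi = bounds
    nests = np.random.uniform(lo, hi, (n_nests, dim))
    fitness = np.apply_along_axis(f, 1, nests)
    best = nests[fitness.argmin()].copy()
    for _ in range(max_iter):
        # New solutions by Levy flights (Eq. 3), compared against randomly chosen nests.
        for i in range(n_nests):
            step = alpha * levy_step(dim) * (nests[i] - best)
            candidate = np.clip(nests[i] + step, lo, hi)
            j = np.random.randint(n_nests)
            if f(candidate) < fitness[j]:          # better (lower) objective replaces nest j
                nests[j], fitness[j] = candidate, f(candidate)
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        n_abandon = max(1, int(pa * n_nests))
        worst = np.argsort(fitness)[-n_abandon:]
        nests[worst] = np.random.uniform(lo, hi, (n_abandon, dim))
        fitness[worst] = np.apply_along_axis(f, 1, nests[worst])
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

# Usage with a stand-in objective in place of the dose objective.
best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5)
```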
### Flower Pollination Algorithm
The characteristics of the pollination process, pollinator behaviour, and flower constancy can be distilled into a set of rules in order to better understand them [22]:
1. Pollen-carrying insects engage in Levy flights, allowing for the possibility of biotic and cross-pollination to occur on a global scale.
2. Conversely, self and abiotic pollination are examples of local pollination techniques.
3. The possibility of reproduction between two flowers is related to how similar they are to one another, and this is what we mean by flower constancy.
4. A switch probability, represented as \(p\), ranging from 0 to 1 affects both local and global pollination probability. Local pollination assumes significance in overall pollination activities, influenced by factors like physical proximity and wind. The fraction \(p\) signifies the contribution of local pollination to the entire pollination process.
Flowers can produce billions of pollen gametes, and a single plant can have dozens of flowers. For the purpose of simplification, we presume that each plant has a single flower and that this flower produces just one gamete of pollen. So, we can think of a solution \(x_{i}\) as a gamete of pollen or a flower. This simplification could be further developed in the future to account for situations with multiple pollen gametes or numerous flowers in multi-objective optimization challenges. Based on these idealized features, we may create a flower-based algorithm called the FPA. The two main phases of this method are called global and local pollination, respectively [23].
In the process of global pollination, insects and other long-distance travelers carry flower pollen from one location to another. Pollination and reproduction of the fittest solution, represented by \(g_{*}\), are therefore ensured. This process can be represented mathematically as follows, factoring in the first rule and flower constancy:
\[x_{i}^{t+1}=x_{i}^{t}+L\left(x_{i}^{t}-g_{*}\right) \tag{5}\]
In this equation, \(x_{i}^{t}\) represents solution vector \(x_{i}\) at iteration \(t\), and \(L\) signifies the pollination strength, serving as a step size. To mimic the variable step lengths observed in insects, a Levy flight mechanism is employed, with \(L>0\) drawn from a Levy distribution of the form:
\[L\sim\frac{\lambda\,\Gamma(\lambda)\,\sin\left(\frac{\pi\lambda}{2}\right)}{\pi}\,\frac{1}{s^{1+\lambda}},\qquad(s\gg s_{0}>0) \tag{6}\]
Here, \(\Gamma(\lambda)\) represents the standard gamma function. Flower constancy and local pollination (Rule 2) could be depicted as follows:
\[x_{i}^{t+1}=x_{i}^{t}+\epsilon\left(x_{j}^{t}-x_{k}^{t}\right) \tag{7}\]
Flowers of identical species tend to remain consistent in appearance from one location to the next, and the pollen terms \(x_{j}^{t}\) and \(x_{k}^{t}\) mimic this flower constancy. Mathematically, if \(x_{j}^{t}\) and \(x_{k}^{t}\) come from the same species or population, Eq. (7) becomes a local random walk with \(\epsilon\) sampled from a uniform distribution on [0, 1]. Pollination of flowers occurs on both the local and global levels. Flower patches situated nearby, or flowers within relatively close proximity, are more likely to undergo local pollination than those positioned farther away. To this end, we use a switch probability, denoted by \(p\), to toggle between local and global pollination (Rule 4) [24]. The pseudo-code for FPA is given below.
PSEUDOCODE FOR FPA
_The objective is to minimize or maximize the function \(f(x)\), where \(x\) represents a \(d\)-dimensional vector
\((x_{1},x_{2},...,x_{d})\). Here are the steps of the algorithm:_
1. _To begin, create a population of_ \(n\) _pollen gametes or flowers, each of which will have a different random solution._
2. _Find the optimal solution,_ \(g_{*}\)_, among these possibilities._
3. _Choose a switch probability_ \(p\) _between zero and one._
4. _Proceed with iterations as long as the number of iterations \(t\) remains below the maximum allowable generations denoted as "MaxGeneration." For each of the \(n\) flowers in the population:_
   * _If a randomly generated number falls below \(p\), execute global pollination:_
     * _Create a \(d\)-dimensional step vector \(L\) following a Levy distribution._
     * _Update the global position using the equation: \(x_{i}^{t+1}=x_{i}^{t}+L(g_{*}-x_{i}^{t})\)._
   * _If the randomly generated number exceeds or equals \(p\), perform local pollination:_
     * _Generate a random value \(\epsilon\) from a uniform distribution on [0, 1]._
     * _Randomly select two solutions, \(j\) and \(k\), from the population._
     * _Update the local position with the formula: \(x_{i}^{t+1}=x_{i}^{t}+\epsilon\,(x_{j}^{t}-x_{k}^{t})\)._
   * _Evaluate the newly obtained solutions._
   * _If these fresh solutions demonstrate superiority over their predecessors, substitute them and update the current best solution, \(g_{*}\)._
5. _Continue this process until the maximum generation is reached._
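A minimal Python sketch of the FPA loop is given below. The switch probability \(p=0.8\) and the other parameter values are illustrative defaults, the Levy step reuses Mantegna's algorithm, and the sphere function again stands in for the treatment-planning objective.

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, lam=1.5):
    """Levy-distributed step lengths (Mantegna's algorithm)."""
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def flower_pollination(f, dim, n_flowers=25, p_switch=0.8, max_gen=500, bounds=(-5.0, 5.0)):
    """Minimize f with a basic flower pollination loop (global/local pollination)."""
    lo, hi = bounds
    pop = np.random.uniform(lo, hi, (n_flowers, dim))
    fit = np.apply_along_axis(f, 1, pop)
    g_best = pop[fit.argmin()].copy()
    for _ in range(max_gen):
        for i in range(n_flowers):
            if np.random.rand() < p_switch:
                # Global pollination (Rule 1): Levy flight toward the best flower.
                x_new = pop[i] + levy(dim) * (g_best - pop[i])
            else:
                # Local pollination (Rules 2-3): random walk between two flowers.
                eps = np.random.rand()
                j, k = np.random.choice(n_flowers, 2, replace=False)
                x_new = pop[i] + eps * (pop[j] - pop[k])
            x_new = np.clip(x_new, lo, hi)
            f_new = f(x_new)
            if f_new < fit[i]:                       # keep improvements only
                pop[i], fit[i] = x_new, f_new
        g_best = pop[fit.argmin()].copy()
    return g_best, fit.min()

best_x, best_f = flower_pollination(lambda x: float(np.sum(x ** 2)), dim=5)
```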
### Bat Search Optimization
The bat algorithm is a bio-inspired technique rooted in echolocation, where bats employ sonar waves for navigation. It is a straightforward yet highly effective optimization method [25]. This approach draws inspiration from microbats' echolocation mechanisms, which these tiny creatures employ extensively to locate prey, identify obstacles in dark environments, and navigate through tight spaces, like stone cracks. The process of globally searching for a solution involves the position and velocity of virtual microbats undergoing random movements. Here, the position, referred to as \(x_{i}\), represents the current value of the solution, while the velocity, \(v_{i}\), indicates the transition from the current solution to potentially better solutions. At each iteration, the best current solution is indicated by \(x_{*}\). The exploration of solutions involves adjusting parameters like frequency (wavelength) \(f_{i}\), pulse emission rate \(r_{i}\), and loudness \(A_{i}\) at each iteration. The effectiveness of this approach in locating global solutions depends on the precise management of frequency or wavelength to regulate the behavior of virtual microbats and achieve an optimal equilibrium between exploration and exploitation [26]. The mathematical equations governing the updates of location and velocity for each microbat in the group are outlined below:
\[f_{i}=f_{min}+(f_{max}-f_{min})\beta\] [8] \[V_{i}^{t}=V_{i}^{t-1}+(X_{i}^{t-1}-X_{*})f_{i}\] [9] \[X_{i}^{t}=X_{i}^{t-1}+V_{i}^{t}\] [10]
Here, \(\beta\in[0,1]\) is a random vector drawn from a uniform distribution. The parameter \(f_{i}\), signifying frequency (or wavelength), governs the rhythm and extent of the virtual bat's movement (both position and velocity) towards the local solution \(x_{*}\) in each iteration and, ultimately, the best global solution once the objective is met. Additionally, the Bat Algorithm's efficiency is influenced by parameters like loudness and pulse emission rate. The update expressions for loudness and pulse emission rate are similar in form, as illustrated below:
\[A_{i}^{t+1}=\alpha A_{i}^{t}\] [11] \[r_{i}^{t+1}=r_{i}^{0}[1-\exp(-\gamma t)]\] [12]
where \(0<\alpha<1\) and \(\gamma>0\).
The Bat Algorithm operates under three key assumptions:
* All bats within the swarm employ echolocation for distance detection and the distinction between food and other objects.
* Bats navigate through random flight patterns, tuning their frequency (or wavelength) and pulse emission rate (\(r\)) of sonar signals to determine subsequent positions and velocities.
* While the loudness value \(A_{i}\) can fluctuate, it must remain within the range spanning from a high positive value \(A_{0}\) to its minimum threshold, \(A_{min}\).
To gain a clearer understanding of the Bat Algorithm, the optimization approach is summarized in the pseudo-code below.
PSEUDOCODE FOR BSO
_Begin by initializing the bat population represented by \(x_{i}\) and \(v_{i}\) (\(i=1,2,...,n\))_
_Set the initial values for frequencies \(f_{i}\), pulse rates \(r_{i}\), and loudness \(A_{i}\)._
_While (\(t<\text{maxvalue}\)), proceed with the following steps:_
_Generate novel solutions by adjusting the frequency using the formula:_
\(f_{i}=f_{min}+(f_{max}-f_{min})\beta\)__
_Update the velocities and locations/solutions as follows:_
\(V_{i}^{t}=V_{i}^{t-1}+(X_{i}^{t-1}-X_{*})f_{i}\)__
\(X_{i}^{t}=X_{i}^{t-1}+V_{i}^{t}\)__
_If (\(rand>r_{i}\)), then_
_Choose one solution from the good solutions available._
_Produce a local solution in the vicinity of the selected good solution (\(x_{*}\))_
_End if_
_Produce a new solution through random flight._
_If (\(rand<A_{i}\) & \(f(x_{i})<f(x_{*})\)), then_
_Keep the new solutions._
_Increase \(r_{i}\) and decrease \(A_{i}\)._
_End If_
_Rank the solutions and identify the current best solution._
_End While_
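The Python sketch below illustrates the bat algorithm updates of Eqs. (8)-(12). It is a schematic implementation with assumed parameter values (initial loudness, pulse rates, and the local random-walk scale) and the sphere function as a stand-in objective; acceptance is tested against each bat's current fitness, a common variant of the condition in the pseudocode above.

```python
import numpy as np

def bat_search(f, dim, n_bats=25, f_min=0.0, f_max=2.0, alpha=0.9, gamma_=0.9,
               max_iter=500, bounds=(-5.0, 5.0)):
    """Minimize f with a basic bat-algorithm loop."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_bats, dim))     # positions x_i
    v = np.zeros((n_bats, dim))                      # velocities v_i
    A = np.ones(n_bats)                              # loudness A_i
    r0 = 0.5 * np.ones(n_bats)                       # initial pulse rates r_i^0
    r = r0.copy()
    fit = np.apply_along_axis(f, 1, x)
    x_best = x[fit.argmin()].copy()
    for t in range(1, max_iter + 1):
        for i in range(n_bats):
            beta = np.random.rand()
            freq = f_min + (f_max - f_min) * beta            # Eq. (8)
            v[i] = v[i] + (x[i] - x_best) * freq             # Eq. (9)
            x_new = np.clip(x[i] + v[i], lo, hi)             # Eq. (10)
            if np.random.rand() > r[i]:
                # Local random walk around the current best solution.
                x_new = np.clip(x_best + 0.01 * A.mean() * np.random.randn(dim), lo, hi)
            f_new = f(x_new)
            if np.random.rand() < A[i] and f_new < fit[i]:
                x[i], fit[i] = x_new, f_new
                A[i] *= alpha                                # Eq. (11)
                r[i] = r0[i] * (1 - np.exp(-gamma_ * t))     # Eq. (12)
        x_best = x[fit.argmin()].copy()
    return x_best, fit.min()

best_x, best_f = bat_search(lambda x: float(np.sum(x ** 2)), dim=5)
```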
## 5 Results and Discussion
In this section, we present and discuss the results of IMRT on a tumor located in the lower left lung, considering the effects of respiratory uncertainty. Three optimization techniques, namely CSO, FPA, and BSO, were employed to optimize the treatment plan. The primary goal of the research is to guarantee that the correct dose is delivered to the tumor while minimizing the dose to healthy tissues, particularly the lung and heart, which are prone to radiation-induced damage because of breathing fluctuations. To limit dose deviations, a constraint was imposed to keep the tumor dose between 72 Gy and 80 Gy.
In Figure 1, we see the Dose-Volume Histograms (DVH) for the tumor area after applying the three different optimization strategies. The red line indicates BSO, the blue line FPA, and the black line CSO. All three optimization strategies yielded positive outcomes, suggesting that the model effectively administered the prescribed radiation dose to the tumor while avoiding over- and under-dosing; the three strategies are therefore adequate for guaranteeing the appropriate tumor dose.
Figure 2 depicts the DVH assessment for the lung area containing the tumor. The blue plot represents the dose distribution by BSO, the yellow plot represents FPA, and the red plot represents CSO. When the results of the three procedures are compared, it is clear that CSO delivered the lowest dose to the lung. This finding suggests that CSO is the most successful approach for limiting radiation exposure to healthy lung tissue, hence reducing possible radiation-induced damage.
Figure 1: DVH on tumor
Radiation can also affect the heart, hence Figure 3 displays the DVH analysis for that area as well. The dose delivered by the BSO is represented by the violet plot, FPA by the orange plot, and CSO by the blue plot. Similar to the lung region, CSO delivered the minimum radiation dose to the heart when compared to the other two optimization techniques. This result underscores the effectiveness of CSO in safeguarding the heart from excessive radiation exposure.
Table 1 presents the doses delivered in Gy by the three optimization techniques to both the tumor and healthy organs. BSO and CSO delivered doses of 71 Gy and 70.32 Gy to the tumor, which are slightly lower than the minimum dose constraint of 72 Gy but within an acceptable range. However, FPA delivered a lower dose of 67.2 Gy to the tumor, indicating a deviation from the desired dose. With respect to healthy organs, CSO established its superiority by delivering the lowest doses of 29.4 Gy and 7.98 Gy to the lung and heart, respectively, while FPA delivered the second lowest doses.
In conclusion, the results show that CSO is the best optimization strategy for IMRT in the circumstance of respiratory uncertainty. It efficiently delivers a sufficient dose to the tumor while minimizing radiation exposure to critical organs, particularly the lungs, and heart.
## 6 Conclusion
In this study, we investigated the use of three bio-inspired optimization techniques--CSO, FPA, and BSO--to overcome the issues of optimizing radiotherapy under the conditions of respiratory uncertainty.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Optimization Method & CSO & FPA & BSO \\ \hline Tumor & 70.32 & 67.2 & 71 \\ \hline Lung & 29.4 & 30.554 & 35.924 \\ \hline Heart & 7.98 & 10.25 & 12.57 \\ \hline \end{tabular}
\end{table}
Table 1: Dose delivered (Gy) to the tumor and other organs
Figure 3: DVH on Heart
Figure 2: DVH on Lung
Our main goal was to obtain a balanced dose distribution, assuring that the tumor received a sufficient amount of radiation while minimizing the dose given to healthy organs. Several significant inferences can be drawn from the results of this research and analysis. CSO and BSO were shown to be the most effective of the three bio-inspired approaches in terms of providing an adequate dosage to the tumor area. These methods were successful in confining the radiation dose to the tumor, which is essential for treating cancer. In particular, CSO proved to be the best method for radiation therapy planning when respiratory uncertainty was present. It not only provided the necessary dose to the tumor, but also reduced radiation exposure to vital healthy organs such as the lung and heart. This capability is crucial in protecting surrounding tissues from radiation-induced damage. As we conclude this study, it is worth noting that while we focused on addressing respiration uncertainty, other sources of uncertainty in radiation therapy planning may exist. In future research, it is imperative to identify and tackle these additional uncertainties using bio-inspired or hybrid algorithms. By broadening the scope of optimization techniques and considering various uncertainties, we can further refine radiation therapy planning and enhance its precision and safety.
|
2301.00101 | Material vs. structure: Topological origins of band-gap truncation
resonances in periodic structures | While resonant modes do not exist within band gaps in infinite periodic
materials, they may appear as in-gap localized edge modes once the material is
truncated to form a finite periodic structure. Here, we provide an analysis
framework that reveals the topological origins of truncation resonances,
elucidating formally the conditions that influence their existence and
properties. Elastic beams with sinusoidal and step-wise property modulations
are considered as classical examples of periodic structures. Their non-trivial
topological characteristics stem from the consideration of a phason parameter
that produces spatial shifts of the property modulation while continuously
varying how the boundaries are truncated. In this context, non-trivial band
gaps are characterized by an integer topological invariant, the Chern number,
which is equal to the number of truncation resonances that traverse a band gap
as the phason is varied. We highlight the existence of multiple chiral edge
states that may be localized at opposite boundaries, and illustrate how these
can be independently tuned by modified boundary-specific phason parameters.
Furthermore, we show that the frequency location of a truncation resonance is
influenced by the modulation volume fraction, boundary conditions, and number
of cells comprising the finite structure, thus quantifying its robustness to
these factors. Non-topological in-gap resonances induced by a defect are also
demonstrated, showing that these can be coupled with topological modes when the
defect is located at an edge. Finally, experimental investigations on
bi-material phononic-crystal beams are conducted to support these findings. The
tunability of truncation resonances by material-property modulation may be
exploited in applications ranging from vibration attenuation and thermal
conductivity reduction to filtering and flow control by phononic subsurfaces. | Matheus I. N. Rosa, Bruce L. Davis, Liao Liu, Massimo Ruzzene, Mahmoud I. Hussein | 2022-12-31T02:52:59Z | http://arxiv.org/abs/2301.00101v1 | Material vs. structure: Topological origins of band-gap truncation resonances in periodic structures
###### Abstract
While resonant modes do not exist within band gaps in infinite periodic materials, they may appear as in-gap localized edge modes once the material is truncated to form a finite periodic structure. Here, we provide an analysis framework that reveals the topological origins of truncation resonances, elucidating formally the conditions that influence their existence and properties. Elastic beams with sinusoidal and step-wise property modulations are considered as classical examples of periodic structures. Their non-trivial topological characteristics stem from the consideration of a phason parameter that produces spatial shifts of the property modulation while continuously varying how the boundaries are truncated. In this context, non-trivial band gaps are characterized by an integer topological invariant, the Chern number, which is equal to the number of truncation resonances that traverse a band gap as the phason is varied. We highlight the existence of multiple chiral edge states that may be localized at opposite boundaries, and illustrate how these can be independently tuned by modified boundary-specific phason parameters. Boundary phasons modify the truncation of only one boundary at a time. Furthermore, we show that the frequency location of a truncation resonance is influenced by the modulation wavelength, modulation volume fraction, boundary conditions, and number of cells comprising the finite structure, thus quantifying its robustness to these factors. Non-topological in-gap resonances induced by a defect are also demonstrated, with their frequency dependence on the phason investigated to elucidate their contrast to truncation resonances. A coupling between topological and non-topological modes is shown to be possible when the defect is located at an edge. Finally, experimental investigations on bi-material phononic-crystal beams are conducted to support these findings. Our results provide a fundamental perspective on the topological character of truncation resonances in periodic structures and how this character relates to the underlying periodic material properties. The tunability of these unique structural resonances through material-property modulation may be exploited both in applications where in-gap resonances are not desired, such as vibration attenuation and thermal conductivity reduction, or where in-gap resonances provide a functional role, such as filtering, waveguiding, energy harvesting, and flow control by phononic subsurfaces.
keywords: Phononic materials, band-gap resonances, topological protection, phasons, experimental phononics
## 1 Introduction
The study of elastic wave propagation in a continuous periodic medium is a classical problem in mechanics that can be traced back to Rayleigh in 1887 [1]. With the advent of composite materials, the interest in this problem surged with early contributions in the 1950s [2] and 1960s [3] formulating dispersion relations for wave propagation in laminated composites, and other forms of periodic media [4; 5], followed by extension to multi-dimensional composites in the 1970s [6]. The field re-emerged in the early 1990s with the study of phononic crystals [7; 8] and the establishment of formal connections with lattice dynamics in crystals [9], and gathered further pace with the rise of acoustic and elastic metamaterials [10]. In all these studies, periodicity is utilized enabling dynamic characterization by considering a representative unit cell, as commonly done in condensed matter physics [11]. Calculating the dispersion relation, or the band structure, using the Floquet/Bloch theorem [12; 13] formally enforces the assumption of an extended medium with an _infinite_ number of unit cells. This is not only computationally rewarding, but physically provides a fundamental description of the modal wave propagation properties of the medium under investigation\(-\)removing any influence of overall size and external boundary conditions. In this framework, the medium under consideration is rendered a _material_ with characteristic _intrinsic_ properties, such as band gaps (whose locations may be predicted analytically [14; 15; 16; 17]) and other key features revealed
by the nature of the band structure. The thermal conductivity, for example, is an intrinsic material property that is directly influenced by the band structure\(-\)determined by analysis of only a single atomic-scale unit cell [18; 19]. Effective dynamic properties, such as effective density and Young's modulus [20], provide another example of intrinsic material properties. On the other hand, unless a medium practically comprises thousands or millions of unit cells (as in a bulk crystal for example), realistic realizations are formed from a relatively small _finite_ number of unit cells, yielding a periodic _structure_, rather than a material, with _extrinsic_ properties. This is particularly the case in engineering problems such as sound [21] and vibration [22] isolation, and other similar applications [23; 24], and also the case in nanoscale thermal transport [25] where unique dynamical properties emerge primarily from the presence of finite size along the direction of transport.
### Truncation resonances
A periodic structure in practice may still consist, in some cases, of a relatively large but tractable number of unit cells, and in other cases, of only a few unit cells along the direction of vibration transmission. The number of cells impacts the degree of attenuation within a band gap [26]. However, the contrast between the material and structure behavior may not be limited to only quantitative differences but also to fundamental qualitative distinctions. One noticeable anomaly between the material and structure responses is the possibility of existence of resonances inside band gaps, i.e., resonance peaks in the frequency response function (FRF) of a finite periodic structure that appear within band-gap frequency ranges of the corresponding infinite periodic material. These resonances are often referred to as _truncation resonances_[27; 28] because they emerge from the truncation of a medium that is otherwise formed from an infinite number of unit cells. These resonances are associated with mode shapes that localize at the truncation junction, and are thus also commonly referred to as _edge_ or _surface modes_[29; 30; 31; 32; 33; 34; 35; 36; 37]. The presence of these modes has been uncovered theoretically by Wallis [29] in his study of a finite discrete diatomic chain of atoms with free ends. This followed the work of Born on finite atomic chains [38] which was motivated by the study of the influence of lattice vibrations on X-ray scattering. Recent studies extended Wallis' theory of finite discrete chains to more general conditions [28; 39; 40] and experiments on chains of discrete-like coupled spheres validated the theory [35].
The problem of truncation resonances in continuous periodic media\(-\)the focus of this paper\(-\)has also been investigated extensively. Early studies examined one-dimensional wave propagation in periodically layered/laminated composites, also referred to as superlattices. Existence conditions for truncation resonances were derived for semi-infinite superlattices for out-of-plane [30; 32; 34] and in-plane [31; 33; 37] waves. It was shown that surface modes in some instances may appear below the lowest bulk band, i.e., the band that hosts conventional resonances. Investigations of the truncation phenomenon were also done on finite layered phononic crystals examining transverse waves [36; 41], on finite beam-based phononic crystals [42; 43] and locally resonant elastic metamaterials [44; 45], and on rod-based phononic crystals [46; 47]. Among the factors that influence the frequency location of the truncation resonances are the unit-cell symmetry and the boundary conditions [41; 43; 45; 46; 47]. When there is more than one layer in the unit cell, the number of surface states increases [34; 37]. Techniques proposed for control of the truncation resonances also include tuning of unit-cell spatial material distribution or volume fraction [42], and the anomalous addition of a "cap layer" [32; 34] or a "tuning layer" [42; 48] at the edge of the structure. A cap layer is simply a homogeneous layer, whereas a tuning layer is a purposefully truncated single unit cell. The concept of truncation resonances is also relevant to other areas in applied physics such as photonic crystals [49] and quantum lattices [41].
### Connection to topological physics
The principle of a truncation resonance is fundamentally connected to the periodic structure's topological properties; this connection forms the core focus of the present study. Inspired by the emergence of topological insulators in condensed matter physics [50], classical analogues have been developed in photonics [51] and phononics [52], demonstrating the features of robust topological waves. In passive elastic materials, topological interface modes are created by contrasting two materials with band gaps existing at the same frequencies, but characterized by different topological invariants. Examples include interface modes in one-dimensional (1D) structures [53; 54; 55; 56] in analogy to the Su-Schrieffer-Heeger model [57], and waveguiding along interfaces in two-dimensional (2D) materials in analogy to the Quantum Spin Hall Effect [58; 59] or to the Quantum Valley Hall Effect [53; 60; 61]. These effects rely on symmetry breaking by interfacing two domains whose unit cells have opposite symmetries, which results in contrasting topological properties in the reciprocal space. Hence, an actual interface between two materials is required, which presents a contrast to the truncation resonances we explore in this paper. We will show an intriguing connection that stems from a stronger type of topological effect associated with the Quantum Hall Effect (QHE) [62; 63]. The QHE manifests in 2D lattices of electrons under the presence of a strong magnetic field, which leads to robust edge waves that propagate along the boundaries of a finite sample (structure), without backscattering at corners or defects. It is therefore sufficient to exploit the interface between a single material medium and vacuum. However, such a strong effect requires breaking time reversal symmetry, which in
the quantum case is achieved through the magnetic field. Emulating similar features on 2D elastic materials is possible through active components that break time reversal symmetry, such as rotating frames [64] or gyroscope spinners [65; 66]. An alternative that has emerged later, and which we adopt here, is to map the QHE to 1D passive structures that have extended dimensionality emanating from their parameter spaces [67; 68]. This has been achieved by using patterned mechanical spinners [69], spring-mass lattices [70], acoustic waveguides [71; 72], and continuous phononic crystals or elastic metamaterials with modulations of inclusions such as ground springs [73], stiffeners [74], and resonators [75; 76]. In these examples, edge states localized at the boundaries of 1D periodic and quasi-periodic finite domains are observed to appear in correspondence to non-zero topological invariants called _Chern numbers_. The boundary at which the localization occurs can be determined by a phason parameter that is associated with spatial shifts in the medium's modulated properties. This feature leads to possibilities for topological pumping by varying the phason parameter continuously along time [77; 78; 79] or along a second spatial dimension [70; 80], inducing a transition of the edge states from being localized at one boundary to the other. Thus, energy can be "pumped" between two boundaries of a system through a transition of a topological edge state. The application of the field of topology to elastic and acoustic material systems has been attracting much interest in recent years [81; 52; 82].
In this paper, we provide a formal framework for the identification of the topological character of truncation resonances in periodic structures, drawing on concepts from the QHE. We consider a family of periodic elastic beams with either sinusoidal or step-wise property modulations. The modulations offer key parameters that expand the structure's property space and allow us to readily apply the concepts of topological band theory. In particular, the variation of a periodic beam's spectral properties with respect to the modulation wavelength allows us to extract the Chern numbers of the band-gaps and identify the locations of truncation resonances. Then, the phason parameters associated with spatial shifts of the modulations further characterize the truncation resonances as topological edge states spanning the band gaps. The frequency dependence of the location of a truncation resonance on the phason has recently been predicted, for periodic rods, by means of a closed-form transfer-matrix-based mathematical formulation [47]. Here, we investigate, for periodic flexural beams, the topological origins of this class of relations. We show that the number of truncation resonances within a gap is equal to the predicted Chern number, for any set of boundary conditions, although the particular features of their branches as they traverse the gaps may vary. We elucidate how additional _boundary phason_ parameters can be defined, formalizing the notion of the tuning layer [42; 48], to manipulate the edge states localized at different boundaries independently. Furthermore, we examine the convergence of the truncation resonant frequencies as a function of the number of unit cells\(-\)a matter of significant practical importance especially when this number is relatively small. The fundamental differences, and the possibility of coupling, between truncation resonances and corresponding non-topological _defect-mode_ resonances are then investigated. Next, we provide laboratory results using a bi-material phononic-crystal beam as experimental validation of some of the key features of truncation resonances and their association with topological theory. Finally, we use our experiments to explore yet another important factor in the design space, namely the role of the materials' volume fraction within the unit cell in influencing the frequency locations of the truncation resonances.
The paper is organized as follows: following this introduction, Section 2 provides a description of the considered periodic flexural beams and their boundary truncation through phasons. Next, Section 3 develops the theory and computational analysis to characterize the topological properties of truncation resonances and those of non-topological defect resonances, and the coupling of the two types of resonances, followed by Section 4 which provides experimental results and further analysis. Finally, Section 5 provides a general discussion on the key findings and their broader implications for related areas of research, and Section 6 provides a closing summary and outlines possible future research directions.
## 2 Modulated phononic-crystal beams: Truncation characterization by phasons
We consider elastic beams undergoing flexural motion described by transverse displacement \(w=w(x)\) and angle of rotation \(\varphi=\varphi(x)\), where \(x\) is the axial position, as classical examples of 1D periodic materials or structures. The properties of the beam are the Young's modulus \(E=E(x)\), shear modulus \(G=G(x)\), density \(\rho=\rho(x)\), cross-sectional area \(A=A(x)\), and second moment of area \(I=I(x)\). These properties are modulated in space as illustrated in Fig. 1. Two scenarios are considered; in the first the Young's modulus is modulated according to a cosine function, i.e. \(E(x)=E_{0}[1+\alpha\cos(2\pi\theta x-\phi)]\), while other parameters remain constant (Fig. 1(a)). This cosine-modulated phononic crystal (CM-PnC) serves as an idealized continuous periodic waveguide used to illustrate the behavior of interest in a simple setting. It is characterized by a unit cell of length \(a=1/\theta\), where \(\alpha\) is the amplitude of the modulation with respect to the mean value \(E_{0}\) and \(\theta\) may be viewed as the modulation wavenumber. The second case corresponds to a beam modulated in a step-wise fashion, which we refer to as step-wise modulated phononic crystal (SM-PnC). It generically represents a periodic material of two alternating layers of lengths \(a_{1}\) and \(a_{2}\), with different constituent material or geometrical (e.g. cross-sectional area)
properties. In this case, the material or geometrical properties are modulated through a step-wise function of period \(a=1/\theta=a_{1}+a_{2}\) that takes two different values in the intervals of length \(a_{1}\) and \(a_{2}\).
The appearance of in-gap resonances stems from the truncation of the boundaries. The truncation details are here characterized by _phason_ parameters that are connected to non-trivial topological properties. The most natural choice of the phason is simply the phase \(\phi\) of the property modulations, which rigidly shifts the modulation in space. Thus it results in a simultaneous change of the local properties of the beam at both boundaries. This is illustrated in the schematics of Fig. 1 for both the sinusoidal and step-wise modulations. The blue boxes highlight the region of the modulations selected to form the properties of the finite beams. From a given initial configuration, a change in phason over the range \(0<\phi<2\pi\) (higher values of \(\phi\) do not need to be considered due to the periodicity) can be interpreted as simultaneously adding a segment of length \(\phi a/2\pi\) to the left boundary, while removing the same length from the right boundary. This will naturally influence any vibration mode localized at either boundary. It's effect can be further understood as the superposition of two independent parameters which we call boundary phasons. A change in the right boundary phason \(\phi_{r}\) corresponds to removing a length \(\phi_{r}a/2\pi\) from the right boundary while keeping the left boundary unchanged, while a change in the left boundary phason \(\phi_{l}\) corresponds to adding a length \(\phi_{l}a/2\pi\) to the left boundary while keeping the right boundary unchanged. Hence, changing the phason \(\phi\) corresponds to changing both the left and right boundary phasons by the same amount (as illustrated in the figure). As we will show, the boundary phasons independently tune the topological truncation resonances at their respective boundary, and their superimposed effect leads to the variation of the resonances with respect to the conventional phason \(\phi\).
Herein, the flexural motion of the beam is modeled through Timoshenko theory as governed by the following two coupled equations:
\[\rho A\frac{\partial^{2}w}{\partial t^{2}}-q(x,t)=\frac{\partial}{ \partial x}\left[\kappa_{\mathrm{s}}AG\left(\frac{\partial w}{\partial x}- \varphi\right)\right], \tag{1a}\] \[\rho I\frac{\partial^{2}\varphi}{\partial t^{2}}=\frac{\partial}{ \partial x}\left[EI\frac{\partial\varphi}{\partial x}\right]+\kappa_{\mathrm{s }}AG\left(\frac{\partial w}{\partial x}-\varphi\right), \tag{1b}\]
where \(\kappa_{\mathrm{s}}\) denotes the shear coefficient, and \(t\) and \(q=q(x,t)\) represent time and the external forcing, respectively. Equations 1a and 1b are combined to yield a single fourth-order partial differential equation with only \(w\) as the dependent variable [83]. In our investigation, we consider three types of problems: a Bloch dispersion analysis problem for a unit-cell representing an infinite material, an eigenvalue analysis problem for a finite structure with arbitrary boundary conditions (BCs), and a harmonic forced-response problem for a finite structure with arbitrary BCs. In the first two problems, we set
Figure 1: Elastic periodic beams with (a) sinusoidal and (b) step-wise property modulation whose spatial distribution is defined by a phason \(\phi\) or boundary phasons \(\phi_{r}\) and \(\phi_{l}\). A modulation characterized by \(\phi\) is a superposition of modulations characterized by \(\phi_{r}\) and \(\phi_{l}\).
\(q=0\) and
\[w(x,t)=\hat{w}e^{i(\mu x-\omega t)}, \tag{2}\]
where \(\omega\) denotes the frequency. In Eq. 2, we set \(0\leq\mu\leq\pi/a\) for the Bloch dispersion problem, whereas \(\mu=0\) is used for the finite periodic-structure eigenvalue problem with arbitrary BCs. The results are obtained by a finite-element discretization of the equations of motion. The implementation details of these methods are omitted here for brevity since they are widely available in the literature (for example, see Ref. [84]).
Motivated by the experimental portion of this work (see Section 4), we select the following parameters. The SM-PnC consists of a bi-material beam composed of alternating layers of Aluminum (Al) and the polymer acrylonitrile butadiene styrene (ABS). These materials are selected due to the contrast of mechanical properties leading to wide band gaps. Their properties are as follows: Young's moduli \(E_{\mathrm{Al}}=68.9\) GPa and \(E_{\mathrm{ABS}}=2.4\) GPa, shear moduli \(G_{\mathrm{Al}}=25.9\) GPa and \(G_{\mathrm{ABS}}=0.872\) GPa, and densities \(\rho_{\mathrm{Al}}=2700\) kg/m\({}^{3}\) and \(\rho_{\mathrm{ABS}}=1040\) kg/m\({}^{3}\), respectively. While we will allow the unit-cell length to vary through the \(\theta\) parameter, the ABS polymer length filling fraction is fixed as \(a_{\mathrm{ABS}}/a=0.2\); this ratio will be changed only in Section 4.3. For purposes of comparison, the properties of the CM-PnC are then chosen to make it statically equivalent [26] to the SM-PnC by selecting a fixed density \(\rho_{0}=(0.2\rho_{\mathrm{ABS}}+0.8\rho_{\mathrm{Al}})\) and elastic modulus modulation with a mean value of \(E_{0}=(0.2/E_{\mathrm{ABS}}+0.8/E_{\mathrm{Al}})^{-1}\). We consider a Poisson's ratio of \(\nu=0.33\), which consequently determines the shear modulus through the relation \(G=E/(2(1+\nu))\). Throughout this paper, the CM-PnC modulation amplitude is fixed at \(\alpha=0.9\), and the beams have a square cross-section geometry with side length \(h=2.54\)cm. The finite-element analysis follows by discretizing the beams with linear Timoshenko beam elements with a shear coefficient of \(5/6\). The beam element length varies according to the case studied but does not exceed a maximum length of \(\hat{a}/100\), where \(\hat{a}=203\) mm is the unit-cell size of the experimental beams and is used as a reference unit-cell length throughout the paper.
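As a quick numerical check of the statically equivalent properties described above, the short Python snippet below evaluates \(\rho_{0}\) and \(E_{0}\) from the stated mixing rules and samples the cosine-modulated Young's modulus of the CM-PnC over one unit cell; the sampling grid and the choice \(\phi=0\) are purely illustrative.

```python
import numpy as np

# Constituent properties as given in the text (SI units).
E_Al, E_ABS = 68.9e9, 2.4e9            # Young's moduli [Pa]
rho_Al, rho_ABS = 2700.0, 1040.0       # densities [kg/m^3]
f_ABS = 0.2                            # ABS length filling fraction a_ABS/a

# Statically equivalent homogenized properties used for the CM-PnC.
rho_0 = f_ABS * rho_ABS + (1 - f_ABS) * rho_Al          # arithmetic average of densities
E_0 = 1.0 / (f_ABS / E_ABS + (1 - f_ABS) / E_Al)        # harmonic average of moduli

# Cosine-modulated Young's modulus E(x) = E_0 [1 + alpha cos(2 pi theta x - phi)].
a = 0.203                              # reference unit-cell length [m]
theta, alpha, phi = 1.0 / a, 0.9, 0.0
x = np.linspace(0.0, a, 201)           # one unit cell, illustrative sampling
E_x = E_0 * (1.0 + alpha * np.cos(2.0 * np.pi * theta * x - phi))

print(f"rho_0 = {rho_0:.1f} kg/m^3, E_0 = {E_0 / 1e9:.2f} GPa")
```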
Figure 2 presents a comparison between the properties of the CM-PnC and SM-PnC for the reference unit-cell size \(\hat{a}=203\) mm, highlighting the contrast between material and structure. Panels (a) and (b) display their dispersion diagrams in a frequency range of interest from 0-9 kHz, which is a material feature. Both CM-PnC and SM-PnC exhibit the same long-wave static limit that approaches the dispersion of the homogenized beam with material property constants \(\rho_{0},E_{0}\) (dashed lines), but display different band-gaps (shaded gray regions). In particular, the SM-PnC has wider gaps due to its discrete nature and the contrast of both densities and elastic moduli, while the CM-PnC has smaller gaps due to a fixed density and a continuous variation of the elastic modulus only. On the right side of the dispersion diagrams, the eigenfrequencies of representative finite beams with 15 unit cells and free-free BCs are plotted as black dots, with \(\phi=0.2\pi\) and \(\phi=0.4\pi\) selected for the CM-PnC and SM-PnC beams, respectively. Truncation resonances are observed to appear in band gaps, a feature which is unique to the structure, non-existent at the material level. An arbitrary phason value is chosen here to produce a large number of truncation resonances as an example, but the behavior with the full range of \(\phi\) will later be explored and explained. The truncation resonances are localized at one of the two boundaries of the finite beams, with selected mode shapes displayed in Figs. 2(c-f). By looking at such isolated cases (as has been largely done in previous studies), there is no apparent reason or pattern pertaining to the appearance of in-gap resonances, why they are localized at one boundary instead of the other, and why these features can change by selecting different BCs or different numbers of unit cells, etc. In the following sections, we will shed light on all of these questions by illustrating the topological character of in-gap truncation resonances associated with non-zero Chern numbers, and consequently how they can be manipulated through the phason and other parameters or design features.
## 3 Topological properties of modulated phononic-crystal beams
In this section we develop the theoretical tools for the topological characterization of truncation resonances by examining their behavior inside band gaps. We begin by investigating the effect of the modulation wavenumber \(\theta\), which allows us to extract the topological invariants (Chern numbers). We then show how the Chern numbers are related to in-gap truncation resonances through the variation of the phason parameters. We also study the effect of the number of unit cells comprising the finite structure on the convergence of the truncation resonance frequencies. Finally, we provide a comparison between topological truncation resonances and non-topological defect resonances, highlighting their key differences and demonstrating the possibility of their coupling as a defect is moved towards a boundary.
### Topological characterization by the Chern number
In principle, the Chern number characterizes the topology of a vector field defined over a two-dimensional torus. For 2D periodic materials the torus is composed of two orthogonal wavenumber coordinates \(\kappa_{x}\) and \(\kappa_{y}\) and describes the reciprocal space Brillouin zone [53; 58; 59; 61; 85]. For 1D modulated materials such as the considered beams, the phason \(\phi\) serves as an additional dimension and replaces the missing wavenumber component to form a torus based on \(\kappa\) and \(\phi\) [70]. The eigenvector field is the Bloch mode displacement \(\hat{w}_{n}(\kappa,\phi)\) corresponding to the \(n_{\mathrm{th}}\) band defined over the torus \((\kappa,\phi)\in\mathbb{T}^{2}=[0,2\pi]\times[0,2\pi]\), recalling that the dispersion is \(2\pi\)-periodic in both \(\phi\) and \(\kappa\), with \(\kappa=\mu a\) defined as the
non-dimensional wavenumber. Due to the continuous nature of the beams, the dispersion frequency bands are invariant with \(\phi\), which only produces a shift in the choice of the unit cell. However, the variation of \(\phi\) produces changes in Bloch eigenvectors, which may reflect in non-trivial topological properties. The Chern number \(C_{n}\) for the \(n_{\text{th}}\) band is defined as
\[C_{n}=\frac{1}{2\pi i}\int_{\mathcal{D}}\beta_{n}\,d\mathcal{D}, \tag{3}\]
where \(\mathcal{D}=\mathbb{T}^{2}\), \(\beta_{n}=\nabla\times\mathbf{A}_{n}\) is called the Berry curvature, and \(\mathbf{A}_{n}=\hat{w}_{n}^{*}\cdot\nabla\hat{w}_{n}\) is the Berry connection, with \((\cdot)^{*}\) denoting the complex conjugate. The Chern number is an integer that quantifies the topological properties of the bands; these are robust to small perturbations in the system's unit cell as long as these perturbations do not close the gaps separating the bands. Among other features, the Chern number is related to discontinuities (or vorticities) in the eigenvector field [85], localization of the Berry curvature [53], and to phase accumulation of the Bloch modes along cyclic paths in the torus Brillouin zone [70; 62].
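When Eq. (3) is evaluated directly, a standard route is the discretized link-variable (Fukui-Hatsugai-type) scheme on the \((\kappa,\phi)\) torus. The sketch below assumes a separate dispersion solver provides the Bloch eigenvector of band \(n\) at any \((\kappa,\phi)\); the function `bloch_mode` is a placeholder for that solver.

```python
# Discretized evaluation of the band Chern number of Eq. (3) on the
# (kappa, phi) torus via plaquette link variables (Fukui-Hatsugai scheme).
# `bloch_mode(n, kappa, phi)` is a placeholder returning the n-th Bloch
# eigenvector as a complex numpy array.
import numpy as np

def chern_number(bloch_mode, n, N=40):
    ks = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    ps = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    U = np.empty((N, N), dtype=object)            # normalized eigenvectors on the grid
    for i, k in enumerate(ks):
        for j, p in enumerate(ps):
            v = bloch_mode(n, k, p)
            U[i, j] = v / np.linalg.norm(v)

    def link(u, v):                               # U(1) link variable between grid points
        z = np.vdot(u, v)
        return z / abs(z)

    C = 0.0
    for i in range(N):
        for j in range(N):
            u00, u10 = U[i, j], U[(i + 1) % N, j]
            u11, u01 = U[(i + 1) % N, (j + 1) % N], U[i, (j + 1) % N]
            # Berry phase accumulated around one plaquette of the torus
            C += np.angle(link(u00, u10) * link(u10, u11) *
                          link(u11, u01) * link(u01, u00))
    return C / (2.0 * np.pi)                      # tends to an integer for an isolated band
```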
Of particular relevance to the present work is the bulk-boundary correspondence principle that relates the existence of in-gap edge states in finite systems to the Chern numbers [86]. This is done through the computation of a gap label \(C_{g}\) given by the summation of the Chern numbers of the bands below the gap, i.e. \(C_{g}^{(r)}=\sum_{n=1}^{r}C_{n}\), which is equal to the number of truncation resonances found inside such a gap when the phason \(\phi\) varies in an interval of \(2\pi\) (see Section 3.2 for more details). However, the computation of the Chern number as given by Eq. (3) is often challenging due to phase or gauge ambiguities [87]. Furthermore, it has to be done for each \(\theta\) value that defines a different unit-cell size (see, for example, Refs. [80; 70]). Here, we take an alternative, and more generic, approach that produces the gap labels \(C_{g}\) without direct computation of the band Chern numbers \(C_{n}\), and for all \(\theta\) values at once. Such an approach relies on density of states computations based on the spectral variation with \(\theta\), which has been developed using mathematical principles of K-theory in the context of periodic and aperiodic topological insulators [88; 89], and later extended to quasi-periodic acoustic/elastic metamaterials [71; 72; 73; 74; 75; 76]. This approach has not yet been extended to continuous elastic periodic waveguides such as the
Figure 2: Material versus structure properties. Dispersion diagrams (material) for the CM-PnC and the SM-PnC models are displayed in (a-b) as solid lines, while dashed lines correspond to the homogenized beam dispersion. Band-gap frequency ranges are shaded grey. A finite structure with 15 unit cells exhibits in-gap truncation resonances as illustrated alongside the dispersion diagrams, with selected mode shapes displayed in (c-f). For both models, the unit-cell length is \(\tilde{a}=203\) mm.
beams studied here.
#### 3.1.1 Extraction of the Chern number by varying the modulation wavenumber
To begin, we investigate the variation of the beams' spectral properties as a function of the modulation wavenumber \(\theta\). The procedure relies on a large finite structure of fixed size \(L=100\tilde{a}\), and the computation of its eigenfrequencies under periodic boundary conditions (PBCs). The results are illustrated in Fig. 3(a,b) for the CM-PnC and SM-PnC configurations, where the eigenfrequencies are plotted as a function of \(\theta\) as black dots. In the computation, the considered range of \(\theta\) is discretized in intervals of \(\Delta\theta=1/L\), i.e., \(\theta_{n}=n/L\), such that each considered structure has an integer number \(n\) of unit cells. By doing so, the resulting eigenfrequencies sample the Bloch dispersion bands defined for the considered \(\theta\) value, and no frequencies are found inside the gaps due to the PBCs and the "perfect" periodicity emanating from an integer number of unit cells [73]. The resulting spectrum provides a map for the location of the bands (black regions) and band gaps (white regions) as a function of \(\theta\), and consequently of unit-cell length \(a=1/\theta\). We note that the SM-PnC produces a more complex spectrum (Fig. 3(b)) with a larger number of gaps when compared to the CM-PnC (Fig. 3(a)), in particular for lower values of \(\theta\) as illustrated in the zoomed view of Fig. 3(c).
The band-gap Chern numbers can be extracted by computing the Integrated Density of States (IDS) of the spectrum. It is defined as
\[\mathrm{IDS}(\theta,f)=\lim_{L\to\infty}\frac{\sum_{n}[f_{n}\leq f]}{L}, \tag{4}\]
where \([\cdot]\) denotes the Iverson bracket, which evaluates to 1 whenever the argument is true and to 0 otherwise. In simple terms, for a given \(\theta\) and frequency \(f\), the IDS is the number of eigenfrequencies at or below \(f\), normalized by the structure size \(L\). It theoretically converges as the structure size tends to infinity, but it is practically sufficient to consider large structures such as the one with \(L=100\tilde{a}\) used in our investigation. The IDS is displayed for the CM-PnC medium in Fig. 3(d), and for the SM-PnC medium in Fig. 3(e) with a zoomed view for the lower \(\theta\) range in (f). In this representation, the \(z\)-axis and the associated colormap represent frequency \(f\) as a function of IDS and \(\theta\). The insets in (d,e) illustrate the 3D views highlighting sharp discontinuities in the surface plot, which are visualized as straight lines in the top view colormaps. Each straight line is associated with a band gap and occurs since the IDS does not change inside the gap.
Figure 3: Eigenfrequencies of finite beam with \(L=100\tilde{a}\) and PBCs for (a) sinusoidal and (b) step-wise modulation, with zoomed view in (c). Black dots represent eigenfrequencies while white areas denote band gaps. The corresponding IDS plots are displayed in the bottom panels (d-f), where selected fitted lines have colors corresponding to the gaps marked and labeled in (a-c).
Hence, a jump in frequency (color) occurs as the IDS changes from the last mode before the gap to the first mode right after the gap. According to the theory [71], and confirmed by our findings, the variation of the IDS with \(\theta\) inside the gaps identifies straight lines expressed as
\[\mathrm{IDS}(f)=n_{0}+C_{g}\theta, \tag{5}\]
with the gap Chern number \(C_{g}\) corresponding to the slope. The lines of the most prominent gaps in Fig. 3 are fitted and overlaid on the IDS plots, allowing the extraction of the Chern gap labels from the slopes as marked in the top panels, with different colors used to represent different gaps. These gap labels are defined generically for any \(\theta\) value that defines the band gap, and are related to the truncation resonances as described in the following section.
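Computationally, the IDS-based extraction of Eqs. (4)-(5) reduces to counting eigenfrequencies and fitting a straight line. A minimal sketch is given below, assuming a routine `freqs_pbc(theta)` (a placeholder for the PBC eigenvalue solver of the large periodic structure) is available.

```python
# Sketch: IDS of Eq. (4) and gap-label extraction from the slope in Eq. (5).
# `freqs_pbc(theta)` is a placeholder returning the eigenfrequencies (in Hz) of
# the large structure under periodic BCs for modulation wavenumber theta.
import numpy as np

L = 100 * 0.203          # structure length used in Fig. 3 [m]

def ids(freqs, f):
    """Integrated density of states at frequency f, Eq. (4)."""
    return np.count_nonzero(np.asarray(freqs) <= f) / L

def gap_label(freqs_pbc, thetas, f_gap):
    """Fit IDS(f_gap) = n0 + Cg*theta across theta values; the slope is Cg."""
    vals = np.array([ids(freqs_pbc(t), f_gap) for t in thetas])
    Cg, n0 = np.polyfit(thetas, vals, 1)
    return int(round(Cg)), n0

# Hypothetical use: pick a frequency f_gap that stays inside one gap over the
# whole theta interval of interest, e.g.
# Cg, n0 = gap_label(freqs_pbc, np.linspace(4.0, 5.0, 21), 3500.0)
```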
### Topological edge states and their control by phasons
The non-zero Chern gap labels indicate the presence of in-gap edge states existing for structures with truncated boundaries, i.e., the truncation resonances. Their properties are illustrated in Figs. 4 and 5 for the CM-PnC and SM-PnC configurations, respectively. The figures display the frequencies of a finite structure of fixed length \(L=15\bar{a}\) as a function of modulation wavenumber \(\theta\) and phason \(\phi\), for different BCs such as free-free and pinned-pinned. The frequencies are color-coded according to a localization factor \(p\) to identify modes localized at the boundaries, which is defined as
\[p=\frac{\int_{\mathcal{L}_{r}}|w|dx-\int_{\mathcal{L}_{l}}|w|dx}{\int_{ \mathcal{L}}|w|dx}, \tag{6}\]
where \(\mathcal{L}\) denotes the domain of the beam, and \(\mathcal{L}_{r}\) and \(\mathcal{L}_{l}\) correspond to a smaller portion of length \(0.15L\) at the right and left boundaries, respectively. With this definition, positive (red) and negative (blue) \(p\) values indicate modes localized at the right and left boundary, respectively, while values that are close to zero (black) indicate non-localized bulk modes.
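A direct numerical transcription of Eq. (6), for a mode shape sampled on a uniform grid over the beam, is sketched below (the 15% edge fraction follows the definition above).

```python
# Localization factor p of Eq. (6) for a sampled mode shape w(x).
import numpy as np
from scipy.integrate import trapezoid

def localization_factor(x, w, edge_fraction=0.15):
    absw = np.abs(w)                              # mode shapes may be complex
    span = x[-1] - x[0]
    left = x <= x[0] + edge_fraction * span       # domain L_l near the left end
    right = x >= x[-1] - edge_fraction * span     # domain L_r near the right end
    total = trapezoid(absw, x)
    return (trapezoid(absw[right], x[right]) - trapezoid(absw[left], x[left])) / total

# p -> +1: right-localized (red); p -> -1: left-localized (blue); p ~ 0: bulk mode.
```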
The left panels in Figs. 4 and 5 display the eigenfrequencies of the finite beam as a function of \(\theta\), for different BCs as illustrated by the schematics. The spectra are overall similar to the bulk spectra exhibited in Fig. 3, with black regions also defining the bulk bands, but with additional modes appearing inside the band gaps. These modes are the topological edge states, corresponding to the truncation resonances which are localized at one of the boundaries of the beam. The modes localized at the right boundary (red) traverse the band gaps multiple times as they migrate from the band above to the band below their respective gaps. Although not the focus of the present investigation, this behavior stems from the positive gap labels \(C_{g}>0\) and can be explained by density of states arguments [73]. Furthermore, the modes localized at the left boundary (blue) do not migrate between bands and instead remain inside the gap for the considered range of \(\theta\). The different behavior between left- and right-localized modes occurs due to the way the finite structure is constructed, where the change in \(\theta\) produces a qualitative change at the right boundary (the modulation is truncated at different places for different \(\theta\)), but not at the left boundary (the modulation is always truncated at the same place).
The gap label \(C_{g}\) dictates the number of left- and right-localized edge modes that span the band gap as the phason \(\phi\) varies within an interval of \(2\pi\), for a fixed \(\theta\) value. This is illustrated for selected \(\theta\) values (marked as vertical dashed green lines) in the middle and right panels of Figs. 4 and 5, which display the variation of the eigenfrequencies with the phason \(\phi\). As previously mentioned, variations of \(\phi\) do not affect the frequencies of the dispersion bands, and therefore the boundaries of the band gaps (material property) remain unchanged with \(\phi\). However, the phason influences how both boundaries of a finite structure are truncated (Fig. 1), and its variation causes the eigenfrequency branches of the truncation resonances to traverse the gaps. The first selected value \(\theta_{1}=1/\bar{a}\) corresponds to the modulation wavenumber for the reference unit-cell size \(\bar{a}\). In the CM-PnC case (Figs. 4(b,e)), this unit-cell size produces two small gaps with Chern labels \(C_{g}=1\) and \(C_{g}=2\), which were extracted from the procedure in Fig. 3. For both types of BCs (free-free in (b) and pinned-pinned in (e)), one left- and one right-localized edge state traverse the first gap, and two edge states traverse the second gap, as the phason \(\phi\) varies from \(0\) to \(2\pi\). In the SM-PnC case (Figs. 5(b,e)), the choice \(\theta_{1}=1/\bar{a}\) corresponds to the case investigated in the experimental section of this paper (see Section 4), which produces three band-gaps with \(C_{g}\) values ranging from \(1\) to \(3\). Regardless of the type of boundary condition, the number of left- and right-localized edge modes spanning the band gaps is equal to the corresponding Chern gap label. In addition, the gap label sign is related to the direction the edge modes cross the gap [89]. A positive \(C_{g}>0\) indicates that \(|C_{g}|\) left-localized branches will cross the gap from the lower band to the upper band, and an equal number of right-localized states will cross from the upper band to the lower band. Although no examples are found in this paper, a negative sign \(C_{g}<0\) produces transitions in opposite directions [70]. Also note that the eigenfrequencies have a periodic behavior with \(\phi\), and are actually continuous at \(\phi=0=2\pi\). Therefore a few branches of the truncation resonances traverse the gap through that point; for example, see the second right-localized mode in the second gap of Fig. 5(e). Indeed, the phason variable \(\phi\) defines a continuous ring, with no start or end point, with the beginning and end at \(\phi=0\) and \(\phi=2\pi\), respectively, being arbitrary choices for the plots.
Other examples are shown to demonstrate the generality of the approach and give more insights into the behavior of the edge states. The case of \(\theta_{2}=2.5/\tilde{a}\) (panels (c,f) in Figs. 4 and 5) corresponds to a unit-cell size 2.5 times smaller than the reference \(\tilde{a}\), and therefore the finite length \(L=15\tilde{a}\) now comprises 37.5 unit cells. Even without an integer number of unit cells, the number of edge states inside each gap matches the corresponding gap labels, for both CM-PnC and SM-PnC, and both types of BCs considered. In fact, this behavior is general and holds for any arbitrary \(\theta\) value. The last row in Fig. 5 focuses on the lower \(\theta\) range, where the SM-PnC features additional gaps with higher Chern gap labels. The examples \(\theta_{3}=2\) m\({}^{-1}\) and \(\theta_{4}=3\) m\({}^{-1}\) correspond to unit cell sizes of 0.5 m and 0.33 m, respectively, and form finite structures with 6.09 and 9.135 unit cells for the fixed length \(L=15\tilde{a}\). They feature gap labels as high as \(C_{g}=8\), and the behavior of the edge states spanning the gaps with \(\phi\) is in agreement with the extracted gap labels, again even without an integer number of unit cells. Among many edge states, two transitions experienced by the modes as a function of \(\phi\) are highlighted by thicker lines and dots in Fig. 4(f) and in Fig. 5(h), and have their mode shape variation displayed in Figs. 6(a,b) respectively. These examples illustrate a transition between a right- and left-localized mode that occurs as a function of \(\phi\), with an intermediate state as a non-localized bulk mode when the eigenfrequency branch tangentially approaches the boundary of the gap. This type of transition has been exploited for topological pumping applications, where the phason \(\phi\) is varied along an additional spatial [70; 80] or temporal [77; 78; 79] dimension to induce a migration of localized modes between two boundaries.
These results reveal that the truncation resonances are in fact topological edge states that traverse the band gaps for variations of the phason \(\phi\). The number of truncation resonances that traverse a gap is equal to the corresponding gap label \(C_{g}\). This holds true for any set of BCs, although the particular shape of the branches of the edge states as they traverse the gap may be different. In addition, while the number of in-gap resonances can be predicted, one cannot guarantee the existence of truncation resonances for a particular phason value \(\phi\), but only that \(|C_{g}|\) branches will traverse the gap when \(\phi\) varies in an interval of \(2\pi\). For example, the finite structure considered in Fig. 2(a) corresponds to a phason value \(\phi=0.2\pi\), which intersects both the right- and left-localized edge state branches of Fig. 4(b), and therefore one resonance localized at each boundary is found in this case. In contrast, for a phason value \(\phi=\pi\), the same gap in Fig. 4(b) does not exhibit any edge states, and therefore no truncation resonances would be found. Similarly, the modes I and II in Fig. 2(b) are intersections of the left- and right-localized edge state branches in the first and third gap of Fig. 5(b), respectively, for
Figure 4: Eigenfrequencies of finite CM-PnC structure with length \(L=15\tilde{a}\) and free-free (top) or pinned-pinned (bottom) BCs. The left panels (a,d) display the variation of the eigenfrequencies with \(\theta\), while the middle (b,e) and right (c,f) panels display the variation with \(\phi\) for the selected \(\theta\) values highlighted as vertical dashed green lines in (a,d). The frequencies are color-coded according to the polarization \(p\), and the gap labels \(C_{g}\) are added for reference.
\(\phi=0.4\pi\), while other phason choices would define different truncation resonances or their absence. Therefore, to better understand the behavior of the truncation resonances one needs to consider the entire family of structures defined for variations of \(\phi\), instead of separately considering particular cases.
#### 3.2.1 Boundary phasons
As described, the phason \(\phi\) simultaneously modifies the properties of both boundaries of a finite structure (Fig. 1), and therefore influences the truncation resonances localized at both boundaries. A higher degree of control over the truncation resonances is achieved by using the right- and left-boundary phasons introduced in Fig. 1, which modify only one boundary at a time. This is equivalent to adding a tuning layer at one end of the structure as done in Refs. [42; 48]. The effect of boundary phasons is demonstrated in Fig. 7, which repeats the eigenfrequency variation with \(\phi\) of Fig. 4(f) and Fig. 5(h) in the left panels, and compares them to the variation as a function of right-boundary phason \(\phi_{r}\) and left-boundary phason \(\phi_{l}\) displayed in the middle and right panels, respectively. The plots clearly show how the boundary phason only causes the edge states localized at the corresponding boundary to traverse the gap, while the superimposed effect of both boundary phasons leads to the effect caused by the phason \(\phi\). Indeed, as \(\phi_{r}\) varies (Figs. 7(b,e)), only the
Figure 5: Eigenfrequencies of the finite SM-PnC structure with length \(L=15\tilde{a}\) and free-free (top row) or pinned-pinned (middle row) BCs. The left panels (a,d) display the variation of the eigenfrequencies with \(\theta\), while (g) displays a zoom of (d) in the low \(\theta\) range. The middle (b,e,h) and right (c,f,i) panels display the variation with \(\phi\) for the selected \(\theta\) values highlighted as vertical dashed green lines in (a,d,g). The frequencies are color-coded according to the polarization \(p\), and the gap labels \(C_{g}\) are added for reference.
right-localized modes traverse the gaps, producing the same branches as the ones in Figs. 7(a,d). Any left-localized modes that were defined for \(\phi=0\) (the starting point) appear as roughly flat bands inside the gap, since the left boundary is not changing with \(\phi_{r}\). A similar effect is observed for the variation with \(\phi_{l}\) in Figs. 7(c,f). For a structure that has a sufficient number of unit cells (i.e., has reached convergence as described in Section 3.3 to follow), the right- and left-localized edge states form a set of decoupled chiral bands [89], the number of which corresponds to the gap label magnitude \(|C_{g}|\) and whose slopes are associated with the gap label sign.
### Effect of number of unit cells on frequency convergence of topological truncation resonances
Next, we investigate the effect of the number of unit cells on the behavior of the truncation resonances. As shown earlier, truncation modes exhibit an exponential decay away from the boundary since their frequency lies inside a band gap, and therefore corresponds to a complex wavenumber. For structures with a large number of unit cells, the in-gap truncation modes are only mildly affected by further addition of unit cells since their displacement tends to zero away from the boundary. In that scenario, a further increase in number of unit cells will produce a larger number of bulk modes, while the branches of the edge states spanning the band gaps with \(\phi\) will remain the same. However, for structures with a small number of unit cells, the truncation resonances are more likely to be influenced by the opposing edge and by other effects such as mode coupling and veering with bulk modes or another edge state.
This behavior and the convergence with the number of unit cells are elucidated by the results of Fig. 8. The SM-PnC structure with \(\theta_{1}=1/\hat{a}\) is chosen to exemplify these features, with the first and second row corresponding to free-free and pinned-pinned BCs, respectively. The panels (a,d) display the variation of the eigenfrequencies with \(\phi\) for a structure with 5 unit cells, while the right panels (c,f) correspond to a larger structure comprising 15 cells. In the middle panels (b,e), the variation of the frequencies with the number of unit cells is displayed for the fixed phason value highlighted by the vertical dashed-line intersections in the other panels. Overall, the number of bulk modes increases with the number of unit cells as expected, and the edge state branches traversing the gaps are similar but exhibit small differences. These differences are amplified for phason values that are close to mode couplings as illustrated in the top row. At the selected phason value, there is a strongly coupled avoided crossing between the right- and left-localized edge states for the case with 5 unit cells shown in (a), and therefore the eigenfrequencies defined for that phason value are more separated when compared to the structure shown in (c) with 15 cells and without the avoided crossing. Therefore, the frequencies of the edge states for this phason value vary as a function of the number of unit cells and converge to a fixed value at approximately 10 unit cells as illustrated in Fig. 8(b). In contrast, in the case of the bottom row with pinned-pinned BCs, the chosen phason value intersects the edge state mode and an adjacent mode that is well isolated, and therefore the truncation frequency converges more quickly, at around four unit cells. These results illustrate that while convergence is always achieved, the required number of unit cells may vary between different structures depending on the BCs and the presence of coupling effects at the phason value of interest.
Figure 6: Examples of mode shape transitions as a function of phason \(\phi\) for the (a) CM-PnC and (b) SM-PnC structures, corresponding to the branches highlighted in Fig. 4(f) and Fig. 5(h), respectively.
### Topological truncation resonance versus non-topological defect resonance
Truncation resonances, with their topological character, are not the only type of resonances that appear due to truncation or breakage of symmetry in a periodic medium. Another type of resonance, that is also of localized nature, is that associated with defect modes [90; 91; 92]. Under the developed framework, the band gaps characterized by non-zero Chern labels are guaranteed to support \(|C_{g}|\) truncation resonances spanning the gaps as a function of phason or boundary phason parameters. Although we do not present an example in this paper, in some cases a band gap may be characterized by \(C_{g}=0\), which is referred to as a topologically trivial band gap. In this case, the presence of in-gap resonances is not guaranteed, although they may appear. Since there is no topological explanation or origin to their appearance, these truncation resonances are usually categorized as defect modes. One example can be found in reference [75], where a central trivial gap with \(C_{g}=0\) does not exhibit in-gap resonances under pinned-pinned BCs (Fig. 2a), but exhibits truncation resonances under clamped-free BCs (Fig. 3a). Note that the truncation resonances in this second case do not traverse the band gap, which is a key feature expected from topological modes as we highlight in this work.
We here illustrate another important scenario where a physical defect is introduced to a finite structure in order to create an in-gap resonance, although in this case it is a non-topological resonance, as we will show. As an example, we consider a finite SM-PnC structure comprising 15 unit cells with \(\theta_{1}=1/\bar{a}\), and introduce a defect initially located at the 8th unit cell by "skipping" the ABS portion within this unit cell, making it entirely out of aluminum. The results displayed in Fig. 9 show the variation of the eigenfrequencies with \(\phi\), with the defect unit cell highlighted in the schematics at the top and identified by the larger white segment, which represents aluminum. As the phason varies, material is added to the left boundary and removed from the right boundary (Fig. 1), which causes the defect to continuously drift towards the right boundary. The defect moves by one unit cell with every change of \(2\pi\) in \(\phi\); these increments are marked by the vertical dashed lines in the figure. After a change in phason of \(14\pi\), the defect is at the last unit cell, and finally for \(16\pi\) it exits the structure and a perfect periodic domain is restored. In a defect-free structure, the variation of the eigenfrequencies with \(\phi\) is trivially periodic in intervals of \(2\pi\). With the inclusion of the defect, additional modes are found inside the gaps and co-exist with the truncation resonances. The interplay between the in-gap defect mode and the truncation resonances is highlighted by the selected mode shapes displayed in the bottom panels. In the initial configuration, the in-gap defect resonance is localized at the center (8th unit cell) of the structure and is completely decoupled from the truncation resonances, as evidenced by the plots in stage I. As the phason varies, the trajectories of the defect modes remain almost flat inside the
Figure 7: Eigenfrequency variation as a function of phason \(\phi\) (a,d), right-boundary phason \(\phi_{r}\) (b,e) and left-boundary phason \(\phi_{l}\) (c,f) for finite beam with \(L=15\bar{a}\) and pinned-pinned BCs. The top row consists of a CM-PnC structure with \(\theta_{2}=2.5/\bar{a}\) while the bottom row consists of a SM-PnC structure with \(\theta_{3}=2\) m\({}^{-1}\).
gaps, in sharp contrast to the behavior of the topological states which traverse the gaps. Indeed, the truncation resonances exhibit the expected periodic behavior as their branches traverse the gaps in a pattern that repeats periodically in intervals of \(2\pi\). However, as the physical position of the defect approaches the right boundary, the in-gap defect modes progressively couple with the truncation resonances localized at the right boundary; this is seen in all three gaps shown in the figure. Focusing on the third band gap as an example, the frequency curves in stage II exhibit a weak coupling, while in stage III a larger coupling is observed causing an avoided crossing with relatively strong repulsion between the defect and truncation resonances. As the defect moves within the last unit cells (13th-15th), it slowly transforms to capture the characteristics of a truncation resonance localized at the right boundary, with a mode shape example displayed for stage IV. At this last stage, the branches of the right-localized truncation resonances are very different from the periodic pattern of the perfect periodic structure, since they are created by a truncation near a defect.
These results highlight key differences between the truncation resonances and defect modes. The defect mode defines a flat branch inside the gap as a function of \(\phi\), until it starts to couple with the topological truncation resonances, which happens as the position of the defect nears the boundary. It is interesting to note that when the coupling takes place, the shape of the coupled truncation resonance branch changes as it traverses the gap. However, the counting principle given by the gap Chern labels is still valid. This can be verified since, in every interval of \(2\pi\) in \(\phi\), there is a net number of 1, 2 and 3 right-localized modes traversing the first, second, and third gap, respectively. Therefore, the truncation resonances retain this key topological property even with the interference of a defect at the boundary. We should also stress that the topological classification of an in-gap mode is always relative to a given set of parameters. The defect mode introduced here is non-topological in the context of the phason degree of freedom, which causes it to remain confined inside the gap as a flat band. However, in some cases this type of defect mode might find a topological classification under a different set of parameters and analysis framework [93].
## 4 Experimental investigation of modulated phononic crystal beams
### Experimental set-up and measurements
For the experimental investigation, we focus on the SM-PnC beam structure, again composed of alternating layers of Al and ABS with a ratio of layer lengths of 4:1 (Al:ABS) for the baseline unit-cell configuration. The unit-cell length
Figure 8: Eigenfrequency variation with \(\phi\) for structure with \(\theta_{1}=1/\hat{a}\) comprising 5 cells (a,d) and 15 cells (c,f). The middle panels (b,e) show the variation with the number of unit cells for the fixed phason values highlighted as vertical dashed lines in the other panels. Top and bottom rows correspond to free-free and pinned-pinned BCs, respectively. Band-gap frequency ranges are shaded grey.
and cross-sectional area are selected as \(\bar{a}=203\) mm and \(A=645\) mm\({}^{2}\), except in Section 4.3 where the unit-cell length is varied. The values of these geometric parameters are chosen to allow for the generation of several band gaps below 9 kHz for practical reasons; however, all conclusions are scale invariant and hence applicable to periodic structures that are orders of magnitude smaller in size (with the limit that they are appropriately represented by continuous models). In this section, we show additional FE results for direct comparison with the experiments, where we use the same FE model details as in Section 3 with specifically 100 finite elements being used per unit-cell. For our experimental set-up, a set of Al and ABS solid blocks were fabricated and connected to each other by an adhesive to form the periodic structure. The test articles were suspended using thin nylon wires to simulate free-free BCs as depicted in Fig. 10(a).
First, we show the complex band structure of the unit cell in Fig. 10(b)--the real part of which is identical to Fig. 2(b). This calculation shows that three relatively large band gaps exist between 0 and 9 kHz. Figure 10(c) shows a corresponding FRF obtained theoretically (solid line) and experimentally (dashed line) for a 5-unit-cell version of the structure, in which the "input" force excitation and the "output" displacement evaluation are at the extreme ends. For the experimental results, the test article was excited at the tip of the structure using a force hammer. The impulse forcing data \(F\) from the force hammer was used in conjunction with the response data \(U\) obtained by a sensing accelerometer connected at the other end of the structure, to generate the receptance \(U/F\) over the frequency range 0-9 kHz. The amplitude of the experimental response was calibrated to match the average of all theoretical data points over the 0-9 kHz frequency range. An excellent correlation is observed between the theoretical and experimental FRF curves. It can be seen, however, that the correlation generally degrades at higher frequencies along with an increasing level of noise. This is due to the difficulty of exciting high frequencies with a force hammer as well as the reduced resolution when using a constant sampling rate over all frequencies.
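The receptance estimate from the hammer and accelerometer signals amounts to a standard H1 frequency-response estimate; a minimal sketch is given below (the sampling rate, window length and variable names are ours, not taken from the experimental setup).

```python
# H1-type receptance estimate U/F from hammer force f(t) and accelerometer a(t).
# The accelerance A/F is converted to receptance by dividing by -(2*pi*f)^2.
import numpy as np
from scipy.signal import csd, welch

def receptance(force, accel, fs, nperseg=4096):
    f, S_ff = welch(force, fs=fs, nperseg=nperseg)        # force auto-spectrum
    _, S_fa = csd(force, accel, fs=fs, nperseg=nperseg)   # force-response cross-spectrum
    H_acc = S_fa / S_ff                                    # H1 accelerance estimate
    with np.errstate(divide="ignore", invalid="ignore"):
        H_disp = H_acc / (-(2.0 * np.pi * f) ** 2)         # receptance U/F
    return f, H_disp

# Hypothetical use: f, H = receptance(hammer_signal, accel_signal, fs=25600)
# A log-magnitude plot of |H| against f gives curves of the kind in Fig. 10(c).
```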
### Effects of modulation wavenumber, boundary phasons, and number of unit cells by experiment
In Fig. 5(a), we have shown the effect of the modulation wavenumber (i.e., unit-cell length) on the locations of the truncation resonances. Here we repeat our computational investigation focusing on the range \(0.18\leq a\leq 0.22\) m and overlay the data of the experimental case of \(a=0.2\) m (\(\theta=5\)). The results, which are shown in the inset of Fig. 10(c), indicate very good agreement between theory and experiments. Another approach that keeps the unit-cell geometric configuration intact is the addition of a single tuning layer (or a partial unit-cell) at the end of the finite periodic structure [42; 48], as demonstrated in Section 3.2.1. As illustrated in Fig. 1, the addition of a tuning layer corresponds to the application of a
Figure 9: Eigenfrequencies as a function of phason \(\phi\) for a finite SM-PnC structure with \(\theta_{1}=1/\bar{a}\), \(L=15\bar{a}\) and a defected unit cell. The location of the defect changes by one unit-cell increments with every change of \(2\pi\) in \(\phi\), as marked by the vertical dashed lines and illustrated in the top schematics. Band-gap frequency ranges are shaded grey. Selected mode shapes are displayed in the bottom panels, whose colors correspond to the polarization of the mode, with dashed and solid lines representing the mode with open and closed circle markers, respectively.
boundary phason \(\phi_{l}\). The material and geometrical configuration of the tuning layer should be chosen such that it would generally form a physically cropped unit-cell, i.e., it would form a partial unit-cell when its length is less than \(a\) and a full unit-cell when its length is \(a\). Figure 10(d) displays a plot of the resonant frequencies as a function of the length of the tuning layer, denoted by \(I_{\mathrm{TL}}\) and ranging from \(I_{\mathrm{TL}}=0\) (\(\phi_{l}=0\), 5 unit-cells) to \(I_{\mathrm{TL}}=a\) (\(\phi_{l}=2\pi\), 6 unit-cells) for the same baseline design of Fig. 10(a); this corresponds partially to the results shown in Fig. 5(b) but now with the addition of experimental data points. With the addition of a tuning layer, band-gap resonances rapidly traverse the band gaps. However, once they reach the band-gap boundaries they behave like regular structural resonances (bulk modes) with slower levels of variation as a function of \(I_{\mathrm{TL}}\).
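Under the convention of Fig. 10(d), where the tuning layer grows as a cropped unit cell and transitions from ABS to Al at \(\phi_{l}=0.4\pi\), the layer composition follows from \(\phi_{l}\) as in the small helper below (a sketch of that stated convention, not of the authors' code).

```python
# Composition of the left tuning layer as a function of the boundary phason,
# following the convention of Fig. 10(d): the layer is a cropped unit cell,
# all ABS up to phi_l = 0.4*pi and ABS + Al beyond.
import numpy as np

def tuning_layer(phi_l, a=0.203, f_abs=0.2):
    l_tl = (phi_l / (2.0 * np.pi)) * a      # total tuning-layer length
    l_abs = min(l_tl, f_abs * a)            # ABS portion saturates at 0.2*a
    l_al = l_tl - l_abs                     # remainder is aluminum
    return l_tl, l_abs, l_al

# tuning_layer(0.4 * np.pi) -> (0.0406, 0.0406, 0.0)     # layer is entirely ABS
# tuning_layer(2.0 * np.pi) -> (0.2030, 0.0406, 0.1624)  # one full extra unit cell
```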
Given the localization nature of truncation resonances, the measured amplitude at the far end of the SM-PnC structure is expected to be less than at the edge where the mode is localized and where the excitation is applied. In Fig. 11, we show using both theory and experiment an FRF comparison between 5- and 6-unit-cell structures in (a) and 5- and 15-unit-cell structures in (b). A truncation resonance peak clearly exists inside the second band gap. We also observe a stronger
Figure 10: Experimental validation: (a) Photograph of the experimental setup showing a 5-unit-cell SM-PnC beam structure consisting of layers of Aluminum and ABS polymer with an ABS volume fraction of 20% and \(\hat{a}=203\) mm. The structure was excited on the far left side (on the first ABS polymer layer) with a force hammer and measured with an accelerometer on the other far end. (b) Frequency band diagram of the infinite (material) constituent of the SM-PnC beam and (c) corresponding FRF response of the finite structure. Inset: Resonance frequency (thin solid lines, theory; dots, experiment) versus unit-cell length \(a\) for the 5-unit-cell periodic beam structure. (d) Corresponding resonance frequency (solid lines, theory; dots, experiment) versus left boundary phason (i.e., length of a tuning layer attached at the far left end). At \(\phi_{\mathrm{l}}=0.4\pi\), the tuning layer transitions from ABS to Al. At \(\phi_{\mathrm{l}}=2\pi\), the tuning layer is a full regular unit cell and the total structure is rendered a 6-unit-cell structure. In (b), the solid lines represent propagation modes, and the dashed lines represent attenuation modes. Band-gap frequency ranges are shaded grey.
attenuation from edge-to-edge as the number of unit cells (and total structure length) increases. As for the effect of the number of unit cells on the frequency of the truncation resonance, we note that there is a negligible shift from 5 to 15 unit cells. These results are to be compared with the eigenfrequency versus number-of-unit-cells plot shown in Fig. 8(b) for free-free BCs. It is shown in that figure that beyond 5 unit cells, the change in the frequency of the truncation resonances becomes negligible. In contrast, the frequencies of the conventional resonances demonstrate substantial shifts, as shown in both Fig. 8(b) and Fig. 11. We also observe in Fig. 11(b) that while the amplitude of the truncation resonance peak drops significantly as the number of unit cells is increased from 5 to 15, the amplitudes of all the conventional resonances do not experience any noticeable drops.
### Effect of unit-cell material volume fraction by experiment
In addition to the property modulation wavenumber and phasons, an alternative approach for controlling the frequency locations of truncation resonances is alteration of the unit-cell design, e.g., by changing its material composition and/or
Figure 11: Frequency response function comparison for the finite SM-PnC beam structure with different number of unit cells. The results show a truncation resonance in the second band gap. Compared to the baseline case of 5 unit cells, the truncation resonance is observed to experience negligible shift in frequency for (a) a 6 unit-cell structure and (b) a 15 unit-cell structure. Strong spatial attenuation in displacement amplitude across the structure is observed as the number of unit cells is increased. These results are for the same unit-cell configuration considered in Fig. 10. Band-gap frequency ranges are shaded grey.
Figure 12: Experimental validation: Resonance frequency (thin solid lines, theory; dots, experiment) versus ABS length-fraction for the 5-unit-cell SM-PnC beam structure with \(\hat{a}=203\) mm. The experimental data points correspond to an ABS length-fraction of 0.1, 0.15, 0.2, 0.25 and 0.3, respectively. The thick solid lines represent the band-gap boundaries for the corresponding infinite periodic materials.
spatial distribution or its geometry. This can result in achieving a total exit of a truncation resonance from a band-gap frequency range, as illustrated in Fig. 12 for a 5-unit-cell SM-PnC structure, which shows that when \(a_{\text{ABS}}/a\) is set to 0.25 or higher, no in-gap resonances appear in any of the three gaps covered by both computation and experiment. In this figure, we consider the full range of \(a_{\text{ABS}}/a\), which at one extreme (\(a_{\text{ABS}}/a=0\)) represents a homogeneous Al beam, and at the other extreme (\(a_{\text{ABS}}/a=1\)) represents a beam composed of only ABS polymer. This figure also allows us to examine the sensitivity of the truncation resonances' frequencies to smooth variations in the material volume fraction. It can be seen that the truncation resonances are noticeably more sensitive to varying the unit-cell layer dimensions than the conventional resonances. Once they exit the band gaps, however, these unique resonances become less sensitive to varying \(a_{\text{ABS}}/a\), and their sensitivity becomes similar to that of the conventional resonances.
## 5 Further reflection on the material vs. structure theme
The distinction and interconnection between a material and a structure may be examined and classified at various levels. A basic distinction is that of intrinsic versus extrinsic properties or characteristics, e.g., the Young's modulus and density being intrinsic material properties in contrast to the stiffness and total mass as extrinsic structural characteristics. The distinction may also be made based on physical response. In this context, an elementary classification may be based on the behavior of static deformation, such as the length scale of deformation or spatial span of tangible force interactions. For example, consider a lattice configuration of beams forming a truss that lies at the core of a larger structural frame. If the length scale of deformation at, say, the center of the core is much larger than the individual beam elements and negligible force interaction occurs with the boundaries formed by the frame, then this deformation may be viewed as a form of material behavior. On the other hand, if the length scale of the deformation is on the order of the beam elements, and non-negligible interaction occurs with the boundaries, then the "periodic network of beams behaves as a structure, such as a frame in a building or a truss in a bridge [94]."
In this work, we have addressed the material-versus-structure correlation problem at a more fundamental level; that is, by examining the characteristics pertaining to finite size in comparison to the properties associated with idealized infinite size, and doing so from a topological elastodynamics perspective. Here, the dispersion curves represent material properties and the natural frequencies represent structural characteristics. In this context, finite size along the direction where the physical phenomenon of interest takes effect (in this case, wave propagation) is what distinguishes the material versus structure character. Finite extent in the other, lateral dimensions (such as the thickness of a beam, for example) may play a significant role in altering the material properties or structural characteristics, but not in altering the classification of material versus structure. As a periodic material is truncated, and rendered a structure, both bulk and truncation resonances emerge; the latter are intimately connected to the nature of the truncation. This investigation focuses specifically on this aspect.
## 6 Conclusions
In this paper, we have investigated, using theory and experiments, the fundamental question of the relation and interplay between material and structure. We provided a formal connection between topological physics and truncation resonances in finite periodic structures. Periodic structures can be understood and topologically characterized using property modulation parameters such as the modulation wavenumber \(\theta\) and phason \(\phi\). These parameters expand the physical space and allow for a rigorous study of the nature of truncation resonances.
The Chern number is a material property obtained from unit-cell analysis, here by considering a large number of unit cells with periodic boundary conditions applied. It allows us to predict the behavior of a periodic medium through the bulk-boundary correspondence principle, which is itself a manifestation of the interconnection between the notion of a material and a structure; the principle originated in the quantum realm, and we bring it here to elastic media. In quantum Hall effect (QHE) theory, for example, the Chern number is a material invariant that predicts the existence of edge currents propagating along the edges of truncated finite samples. Similarly, for our elastic structures, the gap labels predict the number of truncation resonances that span a band gap as \(\phi\) is varied for a finite structure with any prescribed BCs.
We have shown that the existence of in-gap truncation resonances cannot be guaranteed for any \(\phi\) and that the topological character is understood only when sweeping through \(\phi\). This brings a more comprehensive perspective than analyzing particular truncation cases, and provides a methodology for designing for truncation resonances or their absence. The boundary phasons, a concept we introduce in this work, provide an additional tool to control the truncation resonances at each boundary independently. We have also investigated the effect of the number of unit cells in a finite structure, elucidating that the left- and right-boundary phasons become independent only when a sufficient number of unit cells is present. We similarly demonstrated that the frequency locations of the truncation resonances converge only when the structure comprises a sufficiently large number of unit cells, at least five cells in most cases.
Mode couplings--whose locations are influenced by the boundary conditions among other factors--impact the rate of convergence of the truncation resonances. The impact of the unit-cell constituent material composition was also studied, showing that a truncation resonance may be forced to exit a band gap with an appropriate choice of material volume fraction.
We have also examined another important type of localized mode in finite structures, the defect mode. We have shown it to be non-topological, since it remains flat with change of \(\phi\) inside the band gap unless it couples with a truncation resonance. In a perfect "undefected" periodic structure, there can only be one mode localized at each boundary for any given phason value. By coupling with a defect, it is possible to have two modes localized at the same boundary for a given structure, living inside a band gap, with different frequencies.
This study, we expect, will inspire future work on multiple fronts. For example, similar principles may be extended to 2D and 3D periodic structures and their truncation resonances, which may manifest as localized modes at points, edges, and surfaces, having connections to topological physics and possibly to higher-order Chern numbers and higher-order topological modes (such as corner modes). Another domain of potential applicability is coiled phononic crystals for space saving [95]. A further angle to be explored in the question of material versus structure is the static regime, where similar connections may be established for topological floppy modes [96]. Other areas to be investigated are the interplay with nonlinearities [97], the applicability to damage mechanics such as the effect of number of unit cells on the fracture toughness [98], and the role of size effects in nanoscience where small finite dimensions have profound impact on thermal transport [25] and other physical properties. Implications to quasiperiodic media [73; 74; 75; 76; 99] or nonperiodic media described statistically by representative volume elements may also be explored. Finally, the framework presented for connecting between topology and truncation may potentially be applied to finite systems in other branches of physics, such as photonics [49] and quantum mechanics [41].
## Acknowledgement
The authors acknowledge the students Andrew S. Tomchek and Edgar A. Flores for their assistance in conducting the experiments.
|
2309.14756 | On quantifying and improving realism of images generated with diffusion | Recent advances in diffusion models have led to a quantum leap in the quality
of generative visual content. However, quantification of realism of the content
is still challenging. Existing evaluation metrics, such as Inception Score and
Fr\'echet inception distance, fall short on benchmarking diffusion models due
to the versatility of the generated images. Moreover, they are not designed to
quantify realism of an individual image. This restricts their application in
forensic image analysis, which is becoming increasingly important in the
emerging era of generative models. To address that, we first propose a metric,
called Image Realism Score (IRS), computed from five statistical measures of a
given image. This non-learning based metric not only efficiently quantifies
realism of the generated images, it is readily usable as a measure to classify
a given image as real or fake. We experimentally establish the model- and
data-agnostic nature of the proposed IRS by successfully detecting fake images
generated by Stable Diffusion Model (SDM), Dalle2, Midjourney and BigGAN.
We further leverage this attribute of our metric to minimize an IRS-augmented
generative loss of SDM, and demonstrate a convenient yet considerable quality
improvement of the SDM-generated content with our modification. Our efforts
have also led to Gen-100 dataset, which provides 1,000 samples for 100 classes
generated by four high-quality models. We will release the dataset and code. | Yunzhuo Chen, Naveed Akhtar, Nur Al Hasan Haldar, Ajmal Mian | 2023-09-26T08:32:55Z | http://arxiv.org/abs/2309.14756v1 | # On quantifying and improving realism of images generated with diffusion
###### Abstract
Recent advances in diffusion models have led to a quantum leap in the quality of generative visual content. However, quantification of realism of the content is still challenging. Existing evaluation metrics, such as Inception Score and Frechet inception distance, fall short on benchmarking diffusion models due to the versatility of the generated images. Moreover, they are not designed to quantify realism of an individual image. This restricts their application in forensic image analysis, which is becoming increasingly important in the emerging era of generative models. To address that, we first propose a metric, called Image Realism Score (IRS), computed from five statistical measures of a given image. This non-learning based metric not only efficiently quantifies realism of the generated images, it is readily usable as a measure to classify a given image as real or fake. We experimentally establish the model- and data-agnostic nature of the proposed IRS by successfully detecting fake images generated by Stable Diffusion Model (SDM), Dalle2, Midjourney and BigGAN. We further leverage this attribute of our metric to minimize an IRS-augmented generative loss of SDM, and demonstrate a convenient yet considerable quality improvement of the SDM-generated content with our modification. Our efforts have also led to Gen-100 dataset, which provides 1,000 samples for 100 classes generated by four high-quality models. We will release the dataset and code.
## 1 Introduction
Generative models, including Variational Autoencoders (VAEs) [17, 37], Energy-Based Models (EBM) [29, 32], Generative Adversarial Network (GANs) [13, 53] and normalizing flow [41] have historically attracted significant attention from the research community, only to be eventually surpassed by diffusion models [25, 8]. Diffusion models have recently provided a quantum leap to the generative visual content quality [8]. Moreover, they are claimed to also overcome challenges such as matching posterior distributions in VAEs, managing unpredictability in GAN objectives, high computational demands of Markov Chain Monte Carlo [21] techniques in EBMs, and network limitations of the normalizing flows. Naturally, we can expect an even higher popularity of the diffusion models in generative computer vision in the future.
Owing to the importance of generative modeling in vision, a number of metrics have been proposed to evaluate the abilities of generative models [56, 24, 48, 6]. However, due to the high quality and versatility of the content generated by diffusion models, these metrics are now falling short on providing meaningful evaluation of the diffusion model generated content. Not to mention, these metrics have widely known intrinsic weaknesses. For instance, the popular Frechet Inception Distance (FID) [24] is known to suffer from notable bias [6]. The Inception Score (IS) [48] is also known to be often suboptimal [4]. Moreover, both metrics are dataset- or model-dependent in the sense that they rely on a reference dataset or a model to compute their scores. For example, IS [48] uses an ImageNet [46] trained Inception model [51] to provide a meaningful evaluation score. This is particularly problema
Figure 1: We manually group images into high and low quality sets and measure their realism scores. FID [24] and IS [48] are model/data-dependent metrics and produce counter-intuitive results on these sets. The proposed Image Realism Score (IRS) is sample-specific and performs intuitive and reliable discrimination between the low and high quality samples. Unlike FID and IS, the proposed IRS can also be used as a loss to improve the generative ability of a model.
tic for generative content, where normally a single sample is available and the task is to adjudge its authenticity in a model- and data-agnostic manner.
Nowadays, we are witnessing many new generative diffusion models surfacing on a daily basis, each with better content quality and flexibility than the previous ones [8]. With public access to these models, this can lead to serious societal problems if the generated content is used with negative intent [42]. The inability of the current evaluation metrics for generative modeling to verify the authenticity of individual (generated) images is an obvious shortcoming that needs to be addressed for the forensic treatment of the content [11]. This work fills in this gap by proposing an Image Realism Score (IRS) metric that is more suited to benchmark the content quality of modern diffusion models.
The proposed IRS is a 'non-learning' based metric which allows it to be dataset- and model-agnostic, while computing intuitive scores according to image quality, see Fig. 1. The proposed metric relies on well-established concepts in image processing, including Canny Edge Density [52], GLCM Contrast [49], GLCM Energy [49], Variance of Laplacian [23], and Mean Spectrum [20] to determine the realism of an image. Leveraging the sample-specific nature of the statistics used by IRS, we can delineate between natural and fake content easily with our metric. To facilitate further efforts towards the quantification of realism in diffusion-generated content, we also introduce a 100-category dataset in this work, called Gen-100. Each category in this dataset contains 1,000 images generated with ChatGPT prompts using Stable Diffusion Model (SDM) [43], Dalle2 [14], Midjourney [40] and BigGAN [9]. Our dataset is mainly used to validate the model-agnostic and data-agnostic nature of IRS. We eventually exploit the same property of IRS to minimize the generative loss of SDMs, which allows us to considerably improve the quality of SDM-generated content. To summarize, this paper makes the following three main contributions.
* It introduces Image Realism Score (IRS), a first-of-its-kind non-learning based sample specific metric to quantify realism of an image for differentiating natural images from those generated by generative models.
* It leverages IRS to benchmark realism of popular generative models and establishes that the metric is well-suited to forensic analysis by employing it for fake image detection. In the process, it also proposes Gen-100 dataset that contains 1,000 images of 100 classes generated by four models.
* By regulating the training loss of Stable Diffusion Model (SDM) [43] to further minimize the proposed metric score, it demonstrates a considerable improvement in the quality of images generated by SDM.
## 2 Related Work
In the last decade, high-quality sample generation of Generative Adversarial Networks (GANs) [7] has enabled deep generative models to receive widespread attention. Nevertheless, with the emergence of diffusion models [6, 43, 48, 56, 24] GANs are no longer the dominant force in this field. Diffusion models have gained rapid popularity in recent years owing to their stability and superior generation quality. They address some of the common challenges associated with GANs, such as mode collapse, the overhead of adversarial learning, and convergence failure [2]. The training strategy of diffusion models involves systematically corrupting the training data by gradually adding Gaussian noise, followed by learning to retrieve the original data from the noisy version [8]. Additionally, since their training approach makes small changes to the original data and then corrects those changes, they manage to learn a data distribution where samples closely follow the original data, pro
Figure 2: Overall framework of the proposed IRS. The left side illustrates how IRS quantifies realism in an image through pentagon area; larger area means higher realism. IRS maximizes the area difference between real and generated images through a special arrangement of five statistical measures followed by a calibration process. The right side shows that IRS is used to modify the loss function to improve realism in the generative model outputs, taking SDM as an example.
These strengths of diffusion models have led to their significant achievements in the field of image generation technologies [2, 39]. Moreover, diffusion models have been widely applied in various domains, including image denoising [34] and repair [18], image super-resolution [5, 30], and text-to-image generation [44, 47].
The diffusion model training process consists of two steps. First, there is a predefined forward process that transforms the data distribution into a Gaussian distribution. Second, the corresponding reverse process employs a trained neural network to simulate a deterministic or stochastic stepwise reversal of the forward process. Diffusion modeling provides a more stable training target and higher generative quality compared to VAEs, EBMs, and normalizing flows [13, 37, 35, 17]. However, because the prior distribution is iteratively transformed into a complex data distribution, a significant number of function evaluations are required in the reverse process. Consequently, diffusion models inherently suffer from a more time-consuming sampling process. Researchers have proposed various solutions, such as introducing new forward processes to stabilize sampling [15, 26], and recent studies have addressed this issue through dimensionality reduction [22, 27].
Although several evaluation metrics have been proposed to measure the performance of deep generative models, there is no global agreement on the best metrics for generative models. Currently, popular metrics include the Inception Score (IS), the Frechet Inception Distance (FID), Maximum Mean Discrepancy (MMD) [19] and the Activation Maximization Score [57]. Arguably, IS [48] is the most widely used metric for evaluating generative models. It employs a pre-trained neural network to evaluate the desired properties of generated samples, such as high classifiability and diversity of class labels. This metric shows a reasonable correlation with the quality and diversity of the generated images. However, IS is also known to have multiple limitations [55]. First, IS is sensitive to over-fitting. Second, it may favor models that learn from diverse images. Moreover, operations such as mixing natural images from completely different distributions can cheat this metric.
Another widely used evaluation metric for generative models is the FID score [24]. To compute this metric, images are embedded into a feature space. Thereafter, the mean and covariance of both the generated and real data embeddings are computed, and FID measures image quality by comparing these two distribution parameters. While FID excels in discriminability, robustness, and speed, it assumes that the data features follow a Gaussian distribution, which is not always the case. It is also notable that both IS and FID are model- or dataset-specific metrics. This makes them less suitable for quantifying the quality of individual images.
## 3 Proposed Approach
Numerous works highlight the importance of texture, edges, and frequency for detecting fake images [1, 10, 12, 16, 33, 45]. However, most proposed fake detection methods are still 'learned' models. They face the intrinsic limitations of demanding careful training while falling short on generalizing to unseen generative techniques [38]. In this work, we develop a non-learning based metric called Image Realism Score (IRS) to quantify realism in generated images, as illustrated in Fig. 2. This metric incorporates five image statistics and leads to a convenient detection of synthetic images generated by contemporary diffusion models as well as BigGAN. In this section, we first introduce the mathematical principles behind the employed statistics and discuss how they contribute to our metric. Subsequently, we describe how they are fused into IRS. Later, we discuss the use of the proposed IRS for fake content detection and for improving the popular Stable Diffusion Model (SDM) [43] using the same metric.
### Image Statistical Measures
The Gray-Level Co-occurrence Matrix (GLCM) [49] is used in image processing to extract textural features of an image. One of the statistical measures that can be derived from a GLCM is "Energy", also known as "Uniformity" or "Angular Second Moment". Given a GLCM \(P\in\mathbb{R}^{N\times N}\) (where \(N\) is the number of gray levels in the image), the Energy, \(\text{GLCM}_{E}\), is defined as
\[\text{GLCM}_{E}=\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}[P(i,j)]^{2}, \tag{1}\]
where \(P(i,j)\) represents the joint probability of occurrence of pixel pairs with intensity values \(i\) and \(j\) at a specified spatial relationship. A higher Energy (E) score suggests that there are fewer variations in intensity in an image, indicating more uniform or repetitive patterns. In the process of removing noise or irregularities while generating images, generative techniques may blur some texture details. This can leave their signature in the GLCM\({}_{E}\).
The second measure, GLCM Contrast, quantifies the intensity difference between adjacent pixels in the image. Such differences are indicative of the texture contrast in the image. The Contrast, \(\text{GLCM}_{C}\), is defined as
\[\text{GLCM}_{C}=\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}(i-j)^{2}P(i,j), \tag{2}\]
where \(P(i,j)\) represents the probability that a pixel with intensity \(i\) co-occurs with a neighboring pixel of intensity \(j\) in a specified spatial relationship. A higher value of Contrast implies more variations in intensity between a pixel and its neighboring pixels across the image. Smooth and blurred generated textures are expected to result in lower GLCM Contrast values for generated images.
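The two GLCM statistics can be computed with standard image-processing tooling. The following is a minimal sketch using scikit-image; the gray-level count and the single-pixel horizontal offset are illustrative assumptions, since the parameters are not specified here (note that scikit-image's "energy" property is the square root of Eq. 1, so the "ASM" property is used instead).

```python
# Sketch of the two GLCM statistics (Eqs. 1 and 2) using scikit-image.
# The gray-level count (64) and the single-pixel horizontal offset are
# illustrative assumptions, not parameters taken from the paper.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_energy_contrast(gray_u8: np.ndarray, levels: int = 64):
    # Quantize the 8-bit image to `levels` gray levels to keep the matrix small.
    quantized = (gray_u8.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    energy = graycoprops(glcm, "ASM")[0, 0]        # Eq. 1: sum of squared P(i, j)
    contrast = graycoprops(glcm, "contrast")[0, 0]  # Eq. 2: sum of (i - j)^2 P(i, j)
    return float(energy), float(contrast)
```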
Canny Edge Density [52] refers to the proportion of pixels in an image that are identified as edges using the Canny edge detection technique. Given the total number of pixels \(I\) and the number of edge pixels \(E\) detected by the Canny edge detector, the Canny Edge Density (CED) can be defined as
\[\text{CED}=\frac{E}{I}. \tag{3}\]
Here, \(\text{CED}\in[0,1]\) represents the proportion of edge pixels in the image; a higher CED value indicates a higher edge density. Even visually appealing generated images can be expected to differ from natural images in terms of their edge pixel distribution. Hence, Canny Edge Density is another helpful statistic for quantifying realism in images.
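A minimal sketch of Eq. 3 with OpenCV follows; the Canny thresholds are illustrative assumptions rather than values from the paper.

```python
# Sketch of Eq. 3: fraction of pixels marked as edges by the Canny detector.
import cv2
import numpy as np

def canny_edge_density(gray_u8: np.ndarray) -> float:
    edges = cv2.Canny(gray_u8, 100, 200)                  # thresholds are illustrative
    return float(np.count_nonzero(edges)) / edges.size    # E / I in Eq. 3
```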
Variance Blur Measure (VBM) [23] is used to estimate the sharpness or blurriness of an image. The process involves computing the variance of an image after applying a Laplacian filter. For an image with dimensions \(M\times N\), the VBM is measured as,
\[\text{VBM}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}(L(i,j)-\mu)^{2}, \tag{4}\]
where \(L(i,j)\) is the pixel value at position \((i,j)\) in the Laplacian-filtered image and \(\mu\) represents the mean pixel value of the Laplacian-filtered image. Due to the differences in smoothness between real and fake images, VBM is expected to yield different scores for fake images.
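Eq. 4 reduces to the variance of the Laplacian-filtered image; a minimal sketch:

```python
# Sketch of the Variance Blur Measure (Eq. 4) via OpenCV.
import cv2
import numpy as np

def variance_blur_measure(gray_u8: np.ndarray) -> float:
    laplacian = cv2.Laplacian(gray_u8, cv2.CV_64F)  # L(i, j) in Eq. 4
    return float(laplacian.var())                   # population variance over all pixels
```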
The Mean Spectrum (MS) [20] is a concept used in the frequency domain analysis of images. When an image undergoes a Fourier Transform, it produces a spectrum with both magnitude and phase components for each frequency. The MS provides an average measure of the spectrum magnitude. For an image with dimensions \(M\times N\) and its Fourier Transform \(F(u,v)\), the Mean Spectrum is given as
\[\text{MS}=\frac{1}{M\times N}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}|F(u,v)|. \tag{5}\]
Here, \(|F(u,v)|\) represents the magnitude at frequency coordinates \((u,v)\). Image generation usually involves blending faces or objects from various sources, often introducing subtle artifacts [54]. By leveraging the Mean Spectrum, these inconsistencies can be brought to light.
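Eq. 5 is likewise a one-liner on top of a 2-D Fourier transform; a minimal sketch:

```python
# Sketch of the Mean Spectrum (Eq. 5): average magnitude of the 2-D FFT.
import numpy as np

def mean_spectrum(gray_u8: np.ndarray) -> float:
    spectrum = np.abs(np.fft.fft2(gray_u8.astype(np.float64)))  # |F(u, v)|
    return float(spectrum.mean())                                # Eq. 5
```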
### Image Realism Score
Real images exhibit natural attributes, which directly result from natural scenes. On the contrary, attributes of generated images are dependent on the processes underlying the generative model. This can lead to unnatural image statistics for the generated images. The key intuition behind our Image Realism Score (IRS) is to scrutinize the primitive image attributes to quantify the realism of the image content. We do so by combining the above-mentioned five image statistics in our IRS. Our technique enables leveraging the numerical differences in the values of the image statistics to distinguish between real and fake images.
#### 3.2.1 Sort Order of the Measures
The five measures we adopt (see Section 3.1) to define IRS capture largely unrelated primitive statistics of images. To support this argument, we report the correlations between the five measures over ten thousand random images from the ImageNet dataset [46] in Table 1. Since the chosen measures eventually just provide numerical values, simply combining them using basic arithmetic operations is ineffective for our ultimate objective. Hence, we devise a unique strategy to maximize the collective information we can extract from our measures. The computation of our IRS requires defining a graph using the measures. For \(n=5\) measures that have equal importance for IRS, we choose a pentagon as the base graph geometry. Each radius of the pentagon signifies one of the used measures, see Fig. 2.
The radii can be used to construct the five triangles comprising the pentagon. Eventually, we use the areas of these triangles to compute the area of the pentagon that defines IRS. For our geometric graph shape, each triangle has the central angle \(\theta=2\pi/5\). The area of the triangle delineated by two radii, say \(m_{a}\) and \(m_{b}\), can be computed as
\[A_{\triangle_{a,b}}=\frac{m_{a}\cdot m_{b}}{2}\cdot\sin\left(\theta\right). \tag{6}\]
Denote the radii of the pentagon as \(m_{1},m_{2},...,m_{5}\). Since each radius signifies a unique measure, their order matters for the ultimate IRS value. The order of the radii can vary. The number of combinations of two radii that are adjacent to, say, \(m_{1}\) can be calculated as
\[C=\frac{(n-1)!}{x!(n-x-1)!}, \tag{7}\]
where \(x=2\) is the number of adjacent radii and \(n=5\) is the total number of radii. This leads to \(C=6\) combinations. For each of these combinations, the two adjacent radii can be arranged in \(2\) ways. Hence, the total number of distinct arrangements is \(6\times 2=12\).
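This count is easy to verify programmatically; the small sketch below enumerates the arrangements of five distinct radii that remain distinct under rotation and reflection of the pentagon (the measure names are placeholders).

```python
# Quick check of the counting argument: five distinct radii admit 12 distinct
# pentagon arrangements up to rotation and reflection.
from itertools import permutations

def distinct_arrangements(items=("CED", "GLCMc", "GLCMe", "VBM", "MS")) -> int:
    seen = set()
    for perm in permutations(items):
        rotations = [perm[i:] + perm[:i] for i in range(len(perm))]
        mirrored = [tuple(reversed(r)) for r in rotations]
        canonical = min(rotations + mirrored)  # one representative per equivalence class
        seen.add(canonical)
    return len(seen)

print(distinct_arrangements())  # -> 12
```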
To find a suitable order from the 12 arrangements of measures in the pentagon, we leverage their mutual correlations. Based on the empirical results in Table 1, we arrange the two measures with the largest correlation as adjacent radii and continue in descending order. The intuition here is that this systematic arrangement can lead to larger areas, which are eventually desirable for the discriminative ability of IRS.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & CED & GLCM\({}_{C}\) & GLCM\({}_{E}\) & VBM & MS \\ \hline CED & - & 0.76 & -0.28 & 0.05 & 0.21 \\ GLCM\({}_{C}\) & 0.76 & - & -0.13 & 0.20 & 0.14 \\ GLCM\({}_{E}\) & -0.28 & -0.13 & - & 0.21 & -0.35 \\ VBM & 0.05 & 0.20 & 0.21 & - & -0.07 \\ MS & 0.21 & 0.14 & -0.35 & -0.07 & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Correlation coefficients between the five image statistical measures. CED, GLCM\({}_{C}\), GLCM\({}_{E}\), VBM, and MS represent Canny Edge Density, GLCM Contrast, GLCM Energy, Variance Blur Measure, and Mean Spectrum, respectively.
To validate this intuition, we conducted experiments on 10,000 random ImageNet images, calculating and ranking the areas for all 12 arrangements. We recorded that the chosen arrangement resulted in the maximum area in \(\sim 37\%\) of the cases, whereas the remaining 63% were distributed among the other 11 arrangements. Finally, we express the area of a pentagon formed by a fixed sequence of radii as
\[A=\sum_{(a,b)\in S}A_{\triangle_{a,b}}, \tag{8}\]
where \(S=\{(1,2),(1,5),(2,3),(3,4),(4,5)\}\) is the set of adjacent radius pairs.
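A minimal sketch of Eqs. 6-8, assuming the five calibrated measure values are already arranged in the chosen order:

```python
# Pentagon area (Eqs. 6-8) from the five ordered radii.
import math

ADJACENT_PAIRS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # S in Eq. 8 (0-indexed)

def pentagon_area(radii):
    """`radii` is the ordered list [m1, ..., m5] of measure values."""
    theta = 2.0 * math.pi / 5.0          # central angle of each triangle
    return sum(radii[a] * radii[b] / 2.0 * math.sin(theta)   # Eq. 6
               for a, b in ADJACENT_PAIRS)                    # summed over S (Eq. 8)
```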
#### 3.2.2 Calibration of the Statistical Measures
We have carefully chosen the measures to be incorporated in our metric. However, each measure has its own numerical variability. This calls for a calibration before combining them into IRS. In Fig. 3(row-1), we show the (red) pentagons of fake images that get formed for the four generative models used. We also provide a (blue) reference pentagon in each sub-figure that corresponds to the normalized values of the five measures resulting from real images. It can be noticed that the radii GLCM\({}_{E}\), VBM and MS of the (red) generated image pentagons often exceed the radii of the reference (blue) pentagon. The reason behind this behavior is as follows.
It is known that generative techniques do not capture all subtle textural details found in real images [3]. This results in relatively smoother images, which in turn leads to higher GLCM\({}_{E}\) as well as higher VBM. Similarly, image generation can lead to inconsistencies in the noise distribution, which get more pronounced in the frequency domain [31]. This causes relatively higher MS values for the generated images. The values of the other two measures are loosely upper-bounded by their real image values. Hence, we take multiplicative inverses of VBM, MS and GLCM\({}_{E}\). This leads to polygons that are confined much better within the real image pentagons, see Fig. 3(row-2). Before further processing, we normalize the (red) pentagons of fake images and scale the (blue) pentagons of real images by the same proportion, as shown in Fig. 3(row-3). These two steps are helpful because our technique uses polygon area comparison for fake detection in Section 3.3.
In Table 2, we report the average values of the measures 'before' and 'after' their calibration. More precisely, the 'before' calibration case corresponds to row-2 of Fig. 3, which already accounts for VBM, MS and GLCM\({}_{E}\) variability. However, notice that the difference between the average areas of real and fake images is still only 0.88. To amplify this difference, we re-scale the measures for fake images to 1.0, and also re-scale the measures of real images correspondingly (see Fig. 3(row-3)) which increases the difference between the average areas to 2.30. After this re-calibration, the eventual IRS value is computed as
\[\mathrm{IRS}=\sum_{(a,b)\in S}w_{a}\cdot w_{b}\cdot A_{\triangle_{a,b}}, \tag{9}\]
where \(w_{a}\) and \(w_{b}\) are the weights resulting from the re-scaling of the corresponding measures.
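The calibration and the weighted score of Eq. 9 can be sketched as follows. The reference fake-image averages and the radius order used here are placeholders; in practice they would come from the correlation-based arrangement of Sec. 3.2.1 and the averages reported in Table 2.

```python
# Sketch of the calibration (Sec. 3.2.2) and the weighted IRS of Eq. 9.
import math

INVERTED = {"GLCMe", "VBM", "MS"}   # measures that are inverted before calibration

def calibrate(measures, fake_means):
    """Invert the upper-bounded measures, then rescale so the reference
    fake-image averages (of the already-inverted measures) map to 1.0
    (Fig. 3, row 3); the rescaling factors play the role of the weights w."""
    calibrated = {}
    for name, value in measures.items():
        v = 1.0 / value if name in INVERTED else value
        calibrated[name] = v / fake_means[name]
    return calibrated

def irs(calibrated, order=("GLCMc", "CED", "MS", "VBM", "GLCMe")):
    """Pentagon area of the calibrated radii; equivalent to Eq. 9 because the
    triangle areas are bilinear in the radii. `order` is a placeholder."""
    radii = [calibrated[name] for name in order]
    theta = 2.0 * math.pi / 5.0
    pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    return sum(radii[a] * radii[b] / 2.0 * math.sin(theta) for a, b in pairs)
```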
### Fake Detection
Our proposed metric can be evaluated on a single image, making it particularly suitable for detecting fake content on a sample-by-sample basis. This type of detection is not possible using traditional metrics such as IS and FID, which typically rely on large datasets. For fake content detection, the IRS adopts a thresholding technique. The last column in Table 2 shows that the average areas of real and fake images generally vary greatly under our proposed scheme. We capitalize on this observation, and use a threshold \(\delta:\mathrm{IRS}<\delta\implies\) Fake, where \(\delta=3\) is set empirically in our experiments. This simple yet effective approach
\begin{table}
\begin{tabular}{l c c c c c|c} \hline Metrics & GLCM\({}_{C}\) & GLCM\({}_{E}\) & CED & VBM & MS & IRS \\ \hline \hline \multicolumn{7}{c}{**Before Calibration**} \\ \hline Real Images & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 2.38 \\ Fake Images & 0.43 & 0.97 & 0.64 & 0.90 & 0.98 & 1.50 \\ \hline \multicolumn{7}{c}{**After Calibration**} \\ \hline Real Images & 2.31 & 1.02 & 1.57 & 1.11 & 1.02 & 4.68 \\ Fake Images & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 2.38 \\ \hline \end{tabular}
\end{table}
Table 2: Calibration of the statistical measures. Average values are reported for all. After calibration, the difference between the average IRS of real and fake images increases considerably.
Figure 3: In row 1 and row 2, the blue normalized pentagon is computed as the average of 10,000 ImageNet (real) images. Red pentagons are formed by the average of 1,000 images generated by the four models. In row 2, the MS, VBM, and GLCM\({}_{E}\) values are inverted for generated images. In row 3, the red and blue pentagons are re-scaled by the same ratio so that the red pentagons become uniform, which makes the blue pentagons nonuniform.
ensures efficient fake image detection without the need for complex calculations.
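The decision rule on top of an already-computed score is then a single comparison:

```python
# Per-image decision rule of Sec. 3.3.
DELTA = 3.0  # empirical threshold

def classify(irs_score: float) -> str:
    """An IRS below the threshold flags the image as fake."""
    return "fake" if irs_score < DELTA else "real"
```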
### Gen-100: Dataset of Generated Images
Another important contribution of this paper is the creation of a new dataset, Gen-100. In addition to being used to evaluate the effectiveness of our method, this dataset can also be used for benchmarking future evaluation metrics. The Gen-100 data is generated using several popular image generation models, including SDM, BigGAN, Dalle2, and Midjourney. It consists of 100 object categories, all of which follow the CIFAR100 [28] categories. We use the aforementioned models to generate 1,000 images for each category. The real counterparts of the synthetic images are extracted from the same class labels of ImageNet [46]. We use ChatGPT [36] to generate 10 prompts for each category, which are used to generate the text-conditioned images from the models. This process allows us to guarantee diversity in the dataset, while also capturing the advanced abilities of the models. Since many recent diffusion-based generative models are closed-source and require payment for access, there is a lack of comparative datasets comprising images generated by different diffusion models. Our dataset fills this gap. The dataset will be made public after acceptance.
### Improving Image Generation
Diffusion models (DMs) [50] learn data distributions through a unique approach. The core idea is to learn the distribution \(p(x)\) by consecutively denoising a variable that follows a normal distribution. This progressive denoising can be conceived as backtracking a Markov Chain of fixed length \(T\). An objective function is defined to gauge the denoising process as follows
\[L_{DM}=\mathbb{E}_{x,t,\epsilon}\left[||\epsilon-\epsilon_{\theta}(x_{t},t)||^{2}\right], \tag{10}\]
where \(t\) is a time step, uniformly chosen from the range {1,..., T}, \(x_{t}\) represents the noisy version of the input at time \(t\), \(\epsilon_{\theta}(x_{t},t)\) signifies the model's noise prediction at time \(t\), and \(L_{DM}\) is the model loss. Following DMs, a Latent Diffusion Model (LDM) [43] generates images by iteratively denoising data in a latent representation space, and then decoding the result into a full image. Its loss objective is defined as
\[L_{LDM}=\mathbb{E}_{\mathcal{E}(x),t,\epsilon}\left[||\epsilon-\epsilon_{\theta}(z_{t},t)||^{2}\right], \tag{11}\]
where \(z_{t}\) is the noisy representation at time \(t\) in the latent space of the encoder \(\mathcal{E}\), abstracting away the input's finer details.
The widely popular Stable Diffusion Model [43] is an LDM. To improve the realism of its generated images, we incorporate our IRS metric into the model's training loss objective. Specifically, we minimize \(\frac{1}{\text{IRS(image)}}\) because real images have large IRS values, see Table 2. This encourages the model to generate more realistic images as it minimizes the training loss. It is notable that we are able to compute IRS values on a per-image basis here, which allows us to easily modify the training objective of SDM. The improved training objective of the model is defined as
\[L_{IRS}=\mathbb{E}_{\mathcal{E}(x),t,\epsilon}\left[||\epsilon-\epsilon_{\theta}(z_{t},t)||^{2}\right]+\frac{\xi}{\text{IRS}(d(z_{t}))}, \tag{12}\]
where \(d\) is the decoding function of the original SDM approach, and \(\xi\) is a scaling factor for our regularization.
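A schematic PyTorch-style training step for Eq. 12 is sketched below. All objects (`unet`, `vae`, `scheduler`, `irs`) are placeholders rather than a specific library API, and the IRS term only yields gradients if the five statistics are re-implemented with differentiable tensor operations; the sketch merely illustrates where the regularizer enters the loss.

```python
# Schematic training step for Eq. 12 (all objects/callables are placeholders).
import torch
import torch.nn.functional as F

def irs_regularized_loss(unet, vae, scheduler, latents, timesteps, irs, xi=0.1):
    noise = torch.randn_like(latents)
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)  # forward diffusion to step t
    pred_noise = unet(noisy_latents, timesteps)                     # epsilon_theta(z_t, t)
    ldm_loss = F.mse_loss(pred_noise, noise)                        # Eq. 11
    decoded = vae.decode(noisy_latents)                             # d(z_t) in Eq. 12
    realism_penalty = xi / irs(decoded)                             # small IRS -> large penalty
    return ldm_loss + realism_penalty                               # Eq. 12
```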
## 4 Experiments
Benchmarking generative models: We first benchmark the popular generative models using our metric. In the experiment, we employ our Gen-100 dataset - see Sec. 3.4. The average IRS values for the SDM, Dalle2, Midjourney and BigGAN models are reported in Table 3. The table also includes the average value of 10K real images as a reference. It can be seen that the IRS values are generally in accordance with the known abilities of the models. Interestingly, whereas Midjourney is popular for its high quality images, it scores lower than SDM on our metric. This is because, despite their high quality, Midjourney images lack in 'realism' as compared to SDM, and IRS is intended to quantify realism. In Fig. 4, we show representative images generated by each model for high and low IRS values. It is easily noticeable that images with high IRS values indeed contain details that make them appear highly realistic. On the other hand, images with low IRS values are of a rather cartoonish nature, lacking in realism.
Fake Detection: In Table 4, we report the results of using IRS for fake image detection. For this experiment, we use 500 random samples for each model from the Gen-100 dataset and, following Sec. 3.3, we use \(\delta=3.0\) as the threshold value. The results are reported with the standard metrics of accuracy, F1 score, recall and precision. It is noticeable that the results are generally in accordance with the realism quality reported in Table 3. That is, the model with the highest IRS score from Table 3, i.e., SDM, has the lowest detection rate. However, Dalle2 is still able to maintain a lower detection rate than BigGAN despite its smaller IRS value in Table 3. We find that this is due to the larger versatility of Dalle2 images, which lets more of its samples score a relatively high IRS, rather than a few images scoring a very large IRS and inflating the average.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model Name & SDM & Dalle2 & Midjourney & BigGAN & Real \\ \hline IRS score & 2.29 & 1.58 & 2.03 & 1.74 & 4.68 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average IRS values of different generative models computed on 1K images from Gen-100 dataset. The value for Real images is computed for 10K ImageNet samples.
This observation is in line with the general understanding that diffusion-based models are more versatile than GANs.
**Improving SDM:** We improved SDM with our technique following Sec. 3.5, and compare the results with the original SDM. For a fair comparison, we use the exact same configuration for both models. As shown in Fig. 5, the IRS-augmented SDM first produces images with sharper edges and clearer separation from the background, which makes the main subject in the image easier to identify. Secondly, the color transitions are more natural, without abrupt color blocks or obvious artificial traces. In addition, the shadow and lighting effects presented by the improved model are more realistic. The images generated by the IRS-augmented SDM present finer textures, and both the subject and the background clearly show their unique texture features.
## 5 Further Discussion
Currently, there are two popular metrics to measure generative model quality, namely the Inception Score (IS) [48] and the Frechet Inception Distance (FID) [24]. However, both these metrics have their limitations [4, 11, 35]. For instance, the underlying classification model of IS adopts the Inception V3 architecture and is trained on the ImageNet dataset. Therefore, for an unbiased evaluation using IS, generative models and classifiers are best trained on the ImageNet dataset. Additionally, IS is sensitive to sample size, and insufficient samples may adversely affect its results. Similar concerns are also valid for FID. In contrast, our IRS eliminates the need for training, mitigating the risk of overfitting. Its design premise allows evaluation based on only a single image, avoiding sample size concerns.
In addition to the above-noted well-known shortcomings of IS and FID that have been verified in other papers, we also test the rotation invariance of these metrics. We performed small random rotations on the dataset and evaluated their impact on the IS and FID scores. As shown in Table 5, after rotation the value of IS decreases and the value of FID increases. This means that for IS and FID, the same image can become more "fake" after rotation. In contrast, IRS uses only the most basic information of the image itself, which is not affected by rotations. This is an added advantage of our proposed metric.
## 6 Conclusion
In recent years, the development of diffusion models has made it more difficult for humans to distinguish real from fake visual content. Therefore, exploring metrics for evaluating the authenticity of generated visual content is important. Our proposed Image Realism Score (IRS) addresses the shortcomings of existing metrics, such as the inability to analyze the authenticity of individual images and the difficulty of performing well on datasets different from the training sets. IRS avoids the limitations of current metrics by computing a fusion of five statistical measures of the input image. IRS is a non-learning based metric that does not rely on heavy computational resources. Using IRS, we also successfully detected fake images generated by Stable Diffusion Model (SDM), Dalle2, Midjourney, and BigGAN (GAN), establishing the model-agnostic and data-independent nature of IRS. Furthermore, when IRS was incorporated in the loss function of SDM, the model performance was shown to improve.
## 7 Acknowledgments
This research was supported by National Intelligence and Security Discovery Research Grants (project\(\#\) NS220100007), funded by the Department of Defence Australia. Professor Ajmal Mian is the recipient of an Australian Research Council Future Fellowship Award (project number FT210100268) funded by the Australian Government.
\begin{table}
\begin{tabular}{l c c c c} \hline Dataset Name & BigGAN & SDM & Dalle2 & Midjourney \\ \hline Accuracy & 0.85 & 0.76 & 0.81 & 0.79 \\ F1 Score & 0.87 & 0.68 & 0.79 & 0.77 \\ Recall & 0.95 & 0.71 & 0.77 & 0.81 \\ Precision & 0.81 & 0.73 & 0.81 & 0.78 \\ \hline \end{tabular}
\end{table}
Table 4: Fake detection results. The dataset for each model consists of 500 Gen-100 samples and 500 real images from ImageNet.
Figure 4: Representative generated images for different generative models that led to high and low IRS values. Images with higher IRS are clearly more realistic.
\begin{table}
\begin{tabular}{l c c c c} \hline Dataset source & Midjourney & BigGAN & SDM & Dalle2 \\ \hline IS before rotation & 4.84 & 3.45 & 3.70 & 3.78 \\ IS after rotation & 4.35 & 3.15 & 3.60 & 3.34 \\ \hline FID before rotation & 3135.00 & 2234.86 & 2769.05 & 2550.84 \\ FID after rotation & 3389.25 & 2700.14 & 2980.22 & 2784.39 \\ \hline \end{tabular}
\end{table}
Table 5: Image rotation test with IS and FID. Each test dataset contains 1,000 generated images from the Gen-100 dataset.
Figure 5: Comparison of generated images. One set is produced with the IRS-augmented SDM, and the other generated by the original SDM. Prompts used to generate the images are also given. Images generated by the proposed modified loss showcase richer details, aligning with the essence of our technique. |
2309.04545 | Multi-Octave Frequency Comb from an Ultra-Low-Threshold Nanophotonic
Parametric Oscillator | Ultrabroadband frequency combs coherently unite distant portions of the
electromagnetic spectrum. They underpin discoveries in ultrafast science and
serve as the building blocks of modern photonic technologies. Despite
tremendous progress in integrated sources of frequency combs, achieving
multi-octave operation on chip has remained elusive mainly because of the
energy demand of typical spectral broadening processes. Here we break this
barrier and demonstrate multi-octave frequency comb generation using an optical
parametric oscillator (OPO) in nanophotonic lithium niobate with only
femtojoules of pump energy. The energy-efficient and robust coherent spectral
broadening occurs far above the oscillation threshold of the OPO and detuned
from its linear synchrony with the pump. We show that the OPO can undergo a
temporal self-cleaning mechanism by transitioning from an incoherent operation
regime, which is typical for operation far above threshold, to an ultrabroad
coherent regime, corresponding to the nonlinear phase compensating the OPO
cavity detuning. Such a temporal self-cleaning mechanism and the subsequent
multi-octave coherent spectrum has not been explored in previous OPO designs
and features a relaxed requirement for the quality factor and relatively narrow
spectral coverage of the cavity. We achieve orders of magnitude reduction in
the energy requirement compared to the other techniques, confirm the coherence
of the comb, and present a path towards more efficient and wider spectral
broadening. Our results pave the way for ultrashort-pulse and ultrabroadband
on-chip nonlinear photonic systems for numerous applications. | Ryoto Sekine, Robert M. Gray, Luis Ledezma, Selina Zhou, Qiushi Guo, Alireza Marandi | 2023-09-08T18:23:37Z | http://arxiv.org/abs/2309.04545v1 | # Multi-Octave Frequency Comb from an Ultra-Low-Threshold Nanophotonic Parametric Oscillator
###### Abstract
Ultrabroadband frequency combs coherently unite distant portions of the electromagnetic spectrum. They underpin discoveries in ultrafast science and serve as the building blocks of modern photonic technologies. Despite tremendous progress in integrated sources of frequency combs, achieving multi-octave operation on chip has remained elusive mainly because of the energy demand of typical spectral broadening processes. Here we break this barrier and demonstrate multi-octave frequency comb generation using an optical parametric oscillator (OPO) in nanophotonic lithium niobate with only femtojoules of pump energy. The energy-efficient and robust coherent spectral broadening occurs far above the oscillation threshold of the OPO and detuned from its linear synchrony with the pump. We show that the OPO can undergo a temporal self-cleaning mechanism by transitioning from an incoherent operation regime, which is typical for operation far above threshold, to an ultrabroad coherent regime, corresponding to the nonlinear phase compensating the OPO cavity detuning. Such a temporal self-cleaning mechanism and the subsequent multi-octave coherent spectrum has not been explored in previous OPO designs and features a relaxed requirement for the quality factor and relatively narrow spectral coverage of the cavity. We achieve orders of magnitude reduction in the energy requirement compared to the other techniques, confirm the coherence of the comb, and present a path towards more efficient and wider spectral broadening. Our results pave the way for ultrashort-pulse and ultrabroadband on-chip nonlinear photonic systems for numerous applications.
Broadband optical frequency combs are among the great achievements of modern optics [1; 2]. Recently, increasing efforts are focused on the realization of broadband frequency combs in nanophotonic platforms [3; 4; 5] with applications including dual-comb spectroscopy [6], optical communications [7], optical frequency synthesis [8; 9], and laser ranging [10]. However, the spectral coverage of integrated frequency comb sources remains far behind their table-top counterparts using high-pulse-energy lasers and discrete components, which have recently surpassed six-octave spectra [11; 12]. Such multi-octave frequency combs are valuable for applications such as ultrashort pulse synthesis [13], attosecond science [14], and bio-chemical sensing and imaging [15; 16; 17].
Integrated sources of short-pulse frequency combs typically generate picojoules or femtojoules of pulse energies [2; 4; 18; 19; 20] and their spectral coverage barely reaches an octave [21; 22]. This has necessitated further spectral broadening stages for many applications, which so far have been realized strictly using table-top systems with discrete amplifiers and components [1; 8; 23]. A femtojoule-level multi-octave coherent spectral broadening mechanism has so far been beyond the reach of current photonic technologies, and hence, a path towards a fully integrated multi-octave frequency comb has remained elusive.
Substantial spectral broadening is typically achieved by passing femtosecond or picosecond pulses with 0.1-10 nJ of energy through waveguides, crystals or fibers with quadratic (\(\chi^{(2)}\)) or Kerr (\(\chi^{(3)}\)) nonlinearity with various designs [1; 24; 25; 26; 27; 28]. Among these schemes, waveguides with quadratic nonlinearity are becoming increasingly efficient, especially because of the recent progress on quasi-phase matching and dispersion engineering [24; 26; 29], and show superior performance over their cubic counterparts. However, to reach an octave of coherent spectrum and beyond, they still need tens of picojoules of energy [29], which is far beyond the current capability of integrated frequency comb sources.
Resonant enhancement of spectral broadening is expected to improve the energy requirements. However, such experiments have so far remained below an octave [23; 30; 31]. This is mainly because of the overly constrained dispersion requirements of cubic coherent spectral broadening schemes especially when combined with high-Q requirements. In fact, even linear components in nanophotonics with multi-octave spectral response are still challenging to design and realize [32]. In contrast, quadratic nonlinearity not only leads to lower energy requirements in single-pass configurations, but it also offers a wider range of nonlinear processes for ultrawide coherent spectral broadening resulting from nonlinear interactions of distant portions of the spectrum [11; 12]. However, a proper resonator design is necessary to enable an operation regime where a sequence of quadratic nonlinear processes can yield coherent spectral broadening towards multi-octave operation.
A promising path towards such a multi-octave nonlinear resonator is based on synchronously (sync-) pumped degenerate OPOs, which so far have been successfully
used in bulk optics for efficient phase-locked down-conversion via half-harmonic generation of broadband frequency combs [15; 33; 34; 35]. Recent studies indicate the potential of sync-pumped OPOs for extreme pulse shortening and spectral broadening while preserving the coherence properties of the pump [36]. However, lack of dispersion engineering in bulk nonlinear crystals, low parametric gain bandwidths, and multi-picojoule thresholds have put limitations on their applicability for compact and ultrabroadband frequency comb applications. Recent developments of dispersion-engineered optical parametric amplifiers (OPAs) [37] and narrowband sync-pumped OPOs [38] in lithium niobate nanophotonics promise a path towards overcoming these limitations and accessing a new regime of ultrabroadband ultra-low-energy nonlinear optics that has not been accessible before.
In this work, in sharp contrast to previous realizations of nonlinear photonic resonators, we judiciously design and realize an on-chip sync-pumped OPO featuring a low-finesse resonator which couples only frequencies near the half-harmonic of the pump while leaving the pump and its high-harmonics non-resonant. The cavity is dispersion-engineered to near-zero dispersion for the pump and its half-harmonic. The nanophotonic sync-pumped OPO operates with a record-low threshold of \(\sim\)18 fJ. Due to its low-energy, intense, phase-sensitive amplification, we discovered an operation regime of the OPO where the nonlinear phase compensates the cavity detuning, yielding temporal self-cleaning and a multi-octave coherent spectrum. We measured a two-octave frequency comb at \(\sim\)109 fJ of pump energy and experimentally confirmed its coherence. We numerically replicate the broadband nonlinear dynamics associated with such a multi-octave broadening and provide design guidelines for even broader outputs.
**Operating principle and design**
Figure 1a illustrates the design of the on-chip sync-pumped OPO, with the fabricated device shown in Fig. 1b. The input/output couplers are designed to allow resonance only around the half-harmonic of the pump (see supplementary section I), and the cavity is designed to be minimally dispersive for these wavelengths. To phase and frequency lock the OPO, the OPO is nearly sync-pumped at degeneracy, requiring a cavity round-trip time of 4 ns for a pump comb with a 250 MHz repetition rate. With the effective index of our nanophotonic lithium niobate waveguides, this amounts to a 53-cm-long cavity.
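As a quick consistency check of these numbers (the group index used below, about 2.26, is inferred from the quoted round-trip time and cavity length rather than reported in the text):

```python
# Back-of-the-envelope check of the quoted cavity length.
c = 299_792_458.0     # speed of light, m/s
f_rep = 250e6         # pump repetition rate, Hz
T_rt = 1.0 / f_rep    # required cavity round-trip time: 4 ns
n_g = 2.26            # assumed group index of the TFLN waveguide (inferred, not reported)
L = c * T_rt / n_g    # physical cavity length
print(f"{T_rt * 1e9:.0f} ns round trip -> {L * 100:.0f} cm cavity")  # 4 ns -> ~53 cm
```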
To achieve the ultra-high, ultra-broad, phase-sensitive gain at fJ pump pulse energies that enables coherent
Figure 1: **Principle and design of the multi-octave nanophotonic OPO.****a,** Illustration of the sync-pumped OPO on thin-film lithium niobate with key features highlighted. **b,** Microscope image of several devices when the one in the center is pumped at 1 \(\mu\)m. The chip glows green due to second harmonic generation (SHG). The top inset is a scanning electron microscope image of the spiral region and the bottom is a picture of the entire chip containing 16 OPOs. **c,** Illustration showcasing how short pump pulses can take advantage of near-zero-dispersion-engineered OPAs. The simulated gain profiles are shown in the top for a waveguide with 60 fs\({}^{2}\)/mm half-harmonic GVD and 26 fs/mm GVM and in the bottom for a near-zero-dispersion waveguide. The solid orange line marks the center wavelength of the pump and the orange shaded regions mark the 3-dB bandwidth (BW) of the 100-fs source. **d,** Depiction of the different regimes of operation of the OPO as a function of pump pulse energy, along with the roundtrip-to-roundtrip temporal output of the OPO in each regime.
broadband comb generation, the OPO includes a 10.8 mm OPA with proper dispersion engineering and quasi phase matching (QPM). Specifically, we target minimizing the group velocity dispersion (GVD) of the pump and signal, as well as the group velocity mismatch (GVM) between the pump and signal [37]. Figure 1c illustrates the large gain bandwidth that can be accessed when coupling a 100-fs pump to a near-zero dispersion engineered waveguide, as opposed to one with large dispersion that is favored for broadly tunable OPOs [38; 39]. The designs for the poling period, cavity length, and couplers for sync-pumped operation can be found in the Supplementary, Section I.
Figure 1d illustrates the different regimes of operation of this nanophotonic OPO. At low pump pulse energies, the OPO goes above threshold when the gain overcomes the loss inside the cavity. This is conventionally the regime where OPOs are operated to yield coherent outputs phase-locked to the pump [34]. At higher pump pulse energies a degenerate OPO is known to transition to an unstable operation regime where the phase-locked operation diminishes [40; 41]. Here, however, we find that far above threshold, the OPO can undergo a transition to the phase-locked regime as a result of the nonlinear phase being compensated by the cavity. This emergence of coherence is akin to the spatial self-cleaning in multimode fibers [42], which is emphasized in the accompanying time domain plots as a temporal self-cleaning mechanism,
Figure 2: **OPO characterization.****a,** Oscillation peaks of the OPO as the pump repetition rate is modulated by a piezoelectric transducer (PZT) in the pump laser cavity at 600 Hz. **b,** Signal spectrum at 35 fJ of pump energy for three different roundtrip detunings and **c,** the corresponding OPO signal growth as a function of pump energy for different oscillation peaks and their slope efficiencies, \(\eta_{\mathrm{SL}}\). **d,** Output spectra from the OPO cavity at 54 fJ, 109 fJ, and 380 fJ of pump. **e,** Spectral overlap between the broadened pump using a PCF and the SHG output of the integrated device with a cavity. **f,** The resulting radio-frequency (RF) beatnote at three different pump \(f_{\mathrm{CEO}}\) frequencies when the chip is pumped at 380 fJ. **g,** Example beatnote between a free space and on-chip OPO pumped at 109 fJ and \(\sim\)8 fs of cavity detuning. **h**, Interference of these two coherent OPOs filtered around 2.1 \(\mu\)m as their relative delay is scanned. The interference is observed only when their \(f_{\mathrm{CEO}}\) frequencies are the same, while the beatnote is observed only when they are different by \(f_{\mathrm{rep}}/2\).
where after a finite number of roundtrips the output pulse intensity is seen to stabilize with ultrashort features in the multi-octave case.
**Experimental results**
In Fig. 2a-c, we show the near-threshold performance of the nanophotonic OPO. Scanning the repetition rate of the pump by 600 Hz, we observe the oscillation peaks of the OPO as depicted in Fig. 2a. These peaks are characteristic of doubly-resonant operation [34]. We can actively lock the pump repetition rate to the center of each of these peaks, and the near-threshold signal spectra of three such peaks at distinct detunings between the pump repetition period and cavity round-trip time, \(\Delta T_{RT}\), are shown in Fig. 2b. In Fig. 2c we show the measured input-output pulse energy growth of these same peaks. We can extrapolate the threshold and slope efficiencies, \(\eta_{SL}\), and define the peak with the lowest threshold as the zero cavity detuned state. For this peak we estimate an OPO threshold of \(\sim\)18 fJ.
In Fig. 2d, we show three characteristic output spectra of the OPO. At 54 fJ of pump we observe conventional OPO behavior. The pump, half-harmonic and second-harmonic are all spectrally broadened, and there is noticeable sum frequency generation (SFG) between the pump and half harmonic. At 109 fJ of pump, we observe continuous spectra from 600 nm to 2710 nm, and at 380 fJ, we observe three-octave-spanning spectra from 362 nm to 3261 nm. Notably, for the spectra above 2.5 \(\mu\)m, we observe molecular absorption features, here predominantly due to ambient H\({}_{2}\)O. The dip at 2.8 \(\mu\)m is associated with the OH absorption peak in the LN and/or the buffer layer [39; 43], and kinks near 680 nm and 1135 nm are due to mode crossings (see Supplementary Section II).
First, we investigate the coherence of the second-harmonic portions of these spectra using a spectrally broadened output of the pump by a photonic crystal fiber. We interfere this broadened pump with the second-harmonic portion of the on-chip OPO, the spectral overlap of which is shown in Fig. 2e. We show sample beatnotes of the resultant carrier-envelope offset frequency, \(f_{\mathrm{CEO}}\), along with the pump repetition rate, \(f_{\mathrm{rep}}\), at 250 MHz when pumping at 380 fJ in Fig. 2f. By observing the shifting of these beatnotes as the pump \(f_{\mathrm{CEO}}\) is tuned, we verify that these beatnotes correspond to \(f_{\mathrm{CEO}}\). We observe similar \(f_{\mathrm{CEO}}\) beatnotes at 54 and 109 fJ of pump (see Supplementary Section II). We also note that, as expected, these beatnotes were present irrespective of the exact detuning of the cavity for all three pump pulse energies.
To investigate the coherence of the longer-wavelength side of the on-chip OPO output, we interfere it with that of a free-space OPO pumped by the same laser using a filter centered around 2.1 \(\mu\)m. A degenerate OPO above threshold can have two possible CEO frequencies which differ by \(f_{\mathrm{rep}}/2\) depending on the oscillation peak [34]. When the on-chip OPO has a different CEO from the free-space OPO, upon spatially and temporally overlapping their outputs, beatnotes at \(f_{\mathrm{rep}}/2\) are observed in the case that the on-chip OPO is pumped at 54 fJ with zero detuning. At 109 fJ of pump energy, while we did not observe the \(f_{\mathrm{rep}}/2\) beatnote at zero detuning, we did observe it at a cavity detuning of \(\sim\)8 fs, and this is shown in Fig. 2g. At this same detuning, when both OPOs operate with equivalent CEO frequencies, no \(f_{\mathrm{rep}}/2\) beatnote is observed, and as the relative delay between their outputs is scanned, the two
Figure 3: **Simulation results showing different operation regimes of the nanophotonic OPO.****a**, Transition from (i) near-threshold coherent operation to (ii) incoherent operation and (iii) back to coherent operation when the pump energy is increased. The roundtrip temporal evolution (i-iii) and output spectra (iv-vi) are shown for three different pump intensities using experimental parameters and at a cavity detuning of -10.5 fs. **b**, A three-octave coherent OPO. The same experimental parameters are used except that the last one mm of the PPLN was replaced with a chirped poling period. The pump pulse energy was at 250 fJ.
OPOs constructively/destructively interfere, resulting in the fringes in Fig. 2h (see Supplementary Section III). From these measurements, we confirm that the down-converted combs at these two pump energies are coherent with respect to the pump comb. In particular, at 109 fJ of pump, because both the half-harmonic and second harmonic combs are coherent with respect to the pump and all frequency portions of our spectrum are generated through parametric processes of these three combs [29], we conclude that at this detuning the continuous two-octave wide spectrum as well as the second harmonic comb are coherent. However, for the case of 380 fJ pumping, the beatnote and spectral fringes of Fig. 2g and h were not observed for any roundtrip detuning, and hence we consider this three-octave spectrum incoherent.
To explain the dynamics of this OPO far above threshold and how coherence can be established over such a broad spectrum, we turn to numerical simulations. To capture the multi-octave nonlinear interactions occurring in the OPO, we modeled the electric field in the nanophotonic cavity as a single envelope in the frequency domain, which is evolved using the split-step Fourier method for propagation in the PPLN region and a linear filter for the cavity feedback (see Supplementary Section III for details). In Fig. 3a, we show how this captures distinct regimes of operation when using parameters matching those of the experiment. At 16 fJ the OPO goes above threshold and stabilizes after \(\sim\)20 roundtrips. At this point, all the frequency-translated components (OPO, SHG, SFG of the pump and OPO) are coherent with respect to the pump and they remain unchanged from roundtrip to roundtrip. As the pump pulse energy is increased, fewer roundtrips are required for the OPO to form, and at 137 fJ of pump (\(\sim\)9\(\times\) above threshold) we see that the OPO output is incoherent.
At roughly 204 fJ of pump (\(\sim\)13\(\times\) above threshold), however, the half-harmonic is seen to acquire a \(\pi\) phase shift through the nonlinear interaction with the pump in each single pass through the PPLN region. This can be compensated by detuning the cavity by an odd number of OPO peaks, or by adding a constant phase offset of \(\pi\) between the pump and cavity, corresponding to the carrier-envelope offset phase, \(\phi_{\mathrm{CEO}}\), of the pump (see Supplementary Section III). The former case is shown in Fig. 3a(iii) and shows a two-octave coherent continuous comb that stabilizes after roughly twenty roundtrips with temporal features as short as 4 fs (see Supplementary Section III). The output spectrum is also very similar to the detuned 109 fJ experimental result of Fig. 2d.
In simulation, we further investigate how to extend the coherent operation of the OPO to even broader spectra. By replacing the last one mm of the PPLN region with a chirped poling period for efficient second harmonic and sum-frequency generation, we achieve a coherent three-octave continuous frequency comb with \(\sim\)250 fJ of pump energy as shown in Fig. 3b.
**Conclusion and discussion**
In Fig. 4 we compare our results with other integrated spectral broadening schemes and sync-pumped OPOs. The figure highlights how our nanophotonic OPO design and its operation regime enable orders-of-magnitude improvement in the energy efficiency of coherent spectral broadening. Our work represents the lowest threshold sync-pumped OPO which is enabled by its near-zero
Figure 4: **Performance comparison of (a), integrated spectral broadening, and (b), frequency comb sync-pumped OPOs**. **a**, Wavelength coverage and pump pulse energies of integrated frequency comb spectral broadening schemes. The arrows indicate the pump wavelength. **b**, Comb repetition rates and pump threshold energies of sync-pumped OPOs. The marker shapes denote the different cavity and nonlinear (NL) element compositions for each OPO, the categories being free space, fiber, integrated and bulk, fiber, nanophotonic respectively. In both figures, the top legend denotes the material of the nonlinear element. Abbreviations, TFLN: thin-film lithium niobate, OP: orientation patterned, MF: microstructured fiber, HNLF: highly nonlinear fiber.
dispersion design. This ultralow-threshold operation enabled accessing a previously unexplored operation regime of the OPO far above threshold, where ultrabroad coherent spectral broadening is established as a consequence of the balance between cavity detuning and nonlinear phase shift.
In summary, we have experimentally demonstrated a nearly sync-pumped nanophotonic OPO operating in the near zero-GVM, zero-GVD, fs-pumped, high-gain low-finesse regime resulting in an ultra-broadband coherent output with only \(\sim\)109 fJ of energy. The two-octave frequency comb enables unprecedented opportunities for on-chip applications including wavelength division multiplexing [7], dual-comb spectroscopy [44], and frequency synthesis [5]. We show the OPO transitions from an incoherent to coherent operation regime and demonstrate a path towards much broader frequency comb sources in the femtojoule regime.
## Methods
**Device fabrication.** Our device was fabricated on 700-nm-thick X-cut MgO-doped thin-film lithium niobate on a SiO\({}_{2}\)/Si substrate (NANOLN). Following the procedure in [37], we pattern Cr/Au poling electrodes with 16 fixed poling periods ranging from 4.955-5.18 \(\mu\)m using lift-off and apply a voltage to periodically flip the ferroelectric domains. Upon poling, we remove the electrodes and subsequently etch the waveguides using Ar-milling and Hydrogen Silsesquioxane (HSQ) as the etch mask. Finally, the waveguide facets are mechanically polished to allow for butt coupling. Each OPO has a footprint of 0.5 mm \(\times\) 13 mm.
**Optical measurements.** The measurements were performed using a Menlo Orange HP10 Yb mode-locked laser (MLL) centered at 1045 nm. It outputs 100-fs-long pulses at 250 MHz with a \(\pm\)1 MHz tuning range. Light was coupled to and from the chip using Newport 50102-02 reflective objectives, chosen for their minimal chromatic aberration. All of the results in this paper were obtained on a device with a 5.075 \(\mu\)m poling period at 26\({}^{\circ}\)C, regulated by a thermoelectric cooler (TEC). The lowest OPO threshold was obtained from a pump repetition rate of 250.1775 MHz, which we define as the zero detuned state. This device has a total throughput loss of 43.4 dB, and following the methodology in [37], we measured the input and output coupling losses to be 35.7 dB and 7.7 dB respectively. For the results in Fig. 3a, the spectra were collected by two different optical spectrum analyzers (OSA), specifically a Yokogawa AQ6374 (350-1750 nm) and AQ6376 (1500-3400 nm). For the three-octave spectra, the wavelengths below 700 nm were taken with a high-OH silica fiber (Thorlabs M133) and the spectra above 1750 nm with an InF\({}_{3}\) fiber (Thorlabs MF12). All the other measurements were taken using a low-OH silica fiber (Thorlabs M72). The RF spectra were collected by an electronic spectrum analyzer (Rohde & Schwarz FSW), combined with a high-speed silicon avalanche photodiode (Menlo Systems APD210) in Fig. 2f and an InGaAs high speed photodiode (DSC2-40S) in Fig. 2g.
**Numerical simulations.** We used commercial software (Lumerical Inc.) to solve for the waveguide modes shown in Sections I and II of the Supplementary that allowed us to dispersion engineer and quasi-phase-match our device. For the nonlinear optical simulation, we solved an analytical nonlinear envelope equation as described in Section III of the Supplementary. The simulations were performed with no constant phase offset between the pump and cavity unless specifically mentioned otherwise. This parameter effectively acts as a carrier-envelope offset phase of the pump, \(\phi_{\text{CEO}}\). As the simulations were performed with a time window of 1.7 ps, it should be mentioned that a large portion of the short wavelength side of the spectrum walked out of the time window of our simulation. For example, the simulated GVM between our simulation reference frame at the half-harmonic signal wavelength of 2090 nm and the second harmonic of the pump at 522 nm is 721 fs/mm. As a result, the up-converted portions of the spectrum in simulation tend to be smaller than what was measured experimentally. In these simulations we have only incorporated the effects of \(\chi^{(2)}\) nonlinearity and have not considered the effects of \(\chi^{(3)}\). Especially given the low pulse energies and low-finesse nature of our cavity, we believe this to be a good approximation, yet it could be one additional reason for small discrepancies between experiment and simulation.
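A highly simplified skeleton of such a round-trip model is sketched below; the single-pass split-step propagation and the coupler response are placeholders, and the loop is only an illustration of the feedback structure, not the authors' code.

```python
# Illustrative round-trip loop of a sync-pumped OPO model.
import numpy as np

def run_opo(t_grid, pump_pulse, propagate_ppln, feedback_filter,
            delta_T=0.0, phi_ceo=0.0, n_roundtrips=200):
    """`propagate_ppln` stands in for the single-pass split-step solver of the
    nonlinear envelope equation; `feedback_filter` is the linear spectral
    response of the cavity, nonzero only near the half-harmonic of the pump."""
    dt = t_grid[1] - t_grid[0]
    omega = 2.0 * np.pi * np.fft.fftfreq(t_grid.size, dt)        # angular-frequency grid
    rng = np.random.default_rng(0)
    field = 1e-6 * (rng.standard_normal(t_grid.size)
                    + 1j * rng.standard_normal(t_grid.size))      # weak noise seed
    for _ in range(n_roundtrips):
        field = propagate_ppln(field + pump_pulse)                # nonlinear single pass
        spectrum = np.fft.fft(field) * feedback_filter             # only ~half-harmonic resonates
        spectrum *= np.exp(1j * (omega * delta_T + phi_ceo))       # round-trip detuning + phase offset
        field = np.fft.ifft(spectrum)
    return field                                                   # steady-state intracavity envelope
```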
## Data availability
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
## Code availability
The computer code used to perform the nonlinear simulations in this paper is available from the corresponding author upon reasonable request.
## Acknowledgements
The device nanofabrication was performed at the Kavli Nanoscience Institute (KNI) at Caltech. The authors thank Dr. Mahmood Bagheri for loaning equipment. The authors gratefully acknowledge support from ARO grant no. W911NF-23-1-0048, NSF grant no. 1846273 and 1918549, AFOSR award FA9550-20-1-0040, the center for sensing to intelligence at Caltech, and NASA/JPL.
The authors wish to thank NTT Research for their financial support.
## Authors contributions
R.S. and A.M. conceived the project; R.S. fabricated the devices with assistance from L.L., S.Z, and Q.G. R.G. and R.S. performed the measurements and R.G. carried out the simulations with initial input from L.L. R.S. and A.M. wrote the manuscript with inputs from all authors. A.M. supervised the project.
## Competing interests
L.L., and A.M. are inventors on granted U.S. patent 11,226,538 that covers thin-film optical parametric oscillators. R.S., R.G., L.L., A.R., and A.M. are inventors on a U.S. provisional patent application filed by the California Institute of Technology (application number 63/466,188). R.G., L.L., and A.M. are inventors on a U.S. provisional patent application filed by the California Institute of Technology (application number 63/434,015) on 20 December 2022. L.L. and A.M. are involved in developing photonic integrated nonlinear circuits at PINC Technologies Inc. L.L. and A.M. have an equity interest in PINC Technologies Inc. The other authors declare that they have no competing interests.
|
2309.10530 | Laser-driven pointed acceleration of electrons with preformed plasma
lens | The simultaneous laser-driven acceleration and angular manipulation of the
fast electron beam is experimentally demonstrated. The bunch of multi-MeV
energy charged particles is generated during the propagation of the femtosecond
laser pulse through the near-critical plasma slab accompanied by plasma
channeling. Plasma is formed by the controlled breakdown of a thin-tape target
by a powerful nanosecond prepulse. The electron beam pointing approach is based
on the refraction of a laser pulse in the presence of a strong radial density
gradient in the breakdown of the tape with a small displacement of the
femtosecond laser beam relative to the breakdown symmetry axis. A shift of
several micrometers makes it possible to achieve beam deflection by an angle up
to 10 degrees with acceptable beam charge and spectrum conservation. This opens
up opportunities for in-situ applications for scanning objects with an electron
beam and the multistage electron beam energy gain in consecutive laser
accelerators without bulk magnetic optics for particles. Experimental findings
are supported by numerical Particle-In-Cell calculations of laser-plasma
acceleration and hydrodynamic simulations. | K. Ivanov, D. Gorlova, I. Tsymbalov, I. Tsygvintsev, S. Shulyapov, R. Volkov, A. Savelev | 2023-09-19T11:16:37Z | http://arxiv.org/abs/2309.10530v2 | # Laser-driven pointed acceleration of electrons with preformed plasma lens
###### Abstract
The simultaneous laser-driven acceleration and angular manipulation of the fast electron beam is experimentally demonstrated. The bunch of multi-MeV energy charged particles is generated during the propagation of the femtosecond laser pulse through the near-critical plasma slab accompanied by plasma channeling. Plasma is formed by the controlled breakdown of a thin-tape target by a powerful nanosecond prepulse. The electron beam pointing approach is based on the refraction of a laser pulse in the presence of a strong radial density gradient in the breakdown of the tape with a small displacement of the femtosecond laser beam relative to the breakdown symmetry axis. A shift of several micrometers makes it possible to achieve beam deflection by an angle up to 10 degrees with acceptable beam charge and spectrum conservation. This opens up opportunities for in-situ applications for scanning objects with an electron beam and the multistage electron beam energy gain in consecutive laser accelerators without bulk magnetic optics for particles. Experimental findings are supported by numerical Particle-In-Cell calculations of laser-plasma acceleration and hydrodynamic simulations.
## I Introduction
The generation of energetic collimated electron beams with a high charge using laser-plasma accelerators has become one of the main areas of application for terawatt (TW) and petawatt (PW) laser systems [1; 2; 3]. Thanks to Laser Wakefield Acceleration (LWFA [4]) and Direct Laser Acceleration (DLA [5; 6]) in a relativistic plasma channel, together with the rapid development of high-repetition-rate TW-class lasers, new phenomena related to astrophysics [7], nuclear photonics [8], biomedicine and diagnostics [9; 10], etc. have become available to a wide community of researchers literally within the laboratory. The latter circumstance imposes fairly severe restrictions on the size of laboratory sources utilizing compact laser systems with peak power of up to 100 TW (including new-generation systems with peak power of a few TW and repetition rates up to the kHz level [11]). Besides, the design of the electron beam generator itself (target assembly and adjacent elements) must have reasonable dimensions and provide simple and stable operation as well as control over the beam. One should also consider the bunch acceleration efficiency in terms of mean energy and charge. Instead of a single accelerating stage driven by an extremely powerful laser pulse, the use of a few laser pulses can provide consecutive bunch energy gain in a number of acceleration stages driven by less-demanding and modest laser systems [12; 13; 14].
For various application scenarios the angular targeting and pointing of the electron beam are of key importance, whether it concerns the optimal injection into the subsequent stage of acceleration [15; 16], injection schemes [17] relevant to the AWAKE experiment [18], scanning of a sample, directing the bunch into different detectors, radio medicine applications including FLASH therapy [19], etc. For this, standard accelerator technology can be used: magnetic lenses and dipoles. But their key drawback is high energy selectivity, whereas laser acceleration of a high-charge particle beam leads to a rather wide energy spectrum [20]. Their bulkiness and slow adjustment are also worth noting. Therefore, the issue of controlling the beam pointing is of great importance not only from the point of view of compactness, but also the preservation of a high charge. For these purposes, several purely plasma methods have been proposed for the LWFA scheme. Among them, we can single out the waveguide propagation of a pulse, when a laser beam is introduced onto the axis of an electron bunch through a curved plasma channel [21; 22]. It was also proposed to use a pulse with an inclined wave front, which will turn in a low-density plasma and take the electron beam behind it in the Wakefield acceleration mode [23; 24].
In this work, we experimentally demonstrate the possibility of controlling the ejection angle of an electron bunch with particle energy exceeding a few MeV, accelerated by a 1 TW laser pulse in a plasma channel. The latter is formed in a plasma sheet of near-critical density when a thin (15 \(\mu\)m) mylar tape is hole-bored by an additional nanosecond prepulse (100 mJ, 8 ns, \(10^{13}\) W cm\({}^{-2}\)), arriving ahead of the main accelerating femtosecond pulse (50 mJ, 50 fs, \(5\times 10^{18}\) W cm\({}^{-2}\)) by a few nanoseconds. The prepulse forms a breakdown with a strong radial density gradient. Precise adjustment of the breakdown axis with respect to the femtosecond pulse axis makes it possible to control the trajectory of the relativistic plasma channel due to laser beam refraction. The accelerated electron beam preserves its characteristics (energy, spectrum, divergence) to a high degree. The findings are confirmed by simulations and an analytical model.
2309.06352 | Lighter-Than-Air Autonomous Ball Capture and Scoring Robot -- Design,
Development, and Deployment | This paper describes the full end-to-end design of our primary scoring agent
in an aerial autonomous robotics competition from April 2023. As open-ended
robotics competitions become more popular, we wish to begin documenting
successful team designs and approaches. The intended audience of this paper is
not only any future or potential participant in this particular national Defend
The Republic (DTR) competition, but rather anyone thinking about designing
their first robot or system to be entered in a competition with clear goals.
Future DTR participants can and should either build on the ideas here, or find
new alternate strategies that can defeat the most successful design last time.
For non-DTR participants but students interested in robotics competitions,
identifying the minimum viable system needed to be competitive is still
important in helping manage time and prioritizing tasks that are crucial to
competition success first. | Joseph Prince Mathew, Dinesh Karri, James Yang, Kevin Zhu, Yojan Gautam, Kentaro Nojima-Schmunk, Daigo Shishika, Ningshi Yao, Cameron Nowzari | 2023-09-12T16:16:47Z | http://arxiv.org/abs/2309.06352v1 | # Lighter-Than-Air Autonomous Ball Capture and Scoring Robot Design, Development, and Deployment
###### Abstract
This paper describes the full end-to-end design of our primary scoring agent in an aerial autonomous robotics competition from April 2023. As open-ended robotics competitions become more popular, we wish to begin documenting successful team designs and approaches. The intended audience of this paper is not only any future or potential participant in this particular national Defend The Republic (DTR) competition, but rather anyone thinking about designing their first robot or system to be entered in a competition with clear goals. Future DTR participants can and should either build on the ideas here, or find new alternate strategies that can defeat the most successful design last time. For non-DTR participants but students interested in robotics competitions, identifying the minimum viable system needed to be competitive is still important in helping manage time and prioritizing tasks that are crucial to competition success first.
## I Introduction: Defend The Republic
Defend The Republic (DTR) is a national Lighter-Than-Air (LTA) robotics competition that pits two teams (Red versus Blue) against each other in a 60-minute head-to-head match in which fleets of autonomous robots must capture Green and Purple neutrally buoyant balls and move them through Circle, Square, and Triangle goals suspended from the ceiling. An overview of the game with all the elements are shown in Fig. 1. The environment being shape- and color-coded allows easier perception so the teams can focus on advancing multi-agent control and interaction problems.
The complete rules of the game allow flexibility for a myriad of different control and game strategies including heterogeneous teams (where completely different agents or robots are used cooperatively like a defending robot passing a ball to an attacking robot that is perhaps faster and more suitable for finding and scoring goals). However, this paper focuses on a single minimally viable agent design and deploying a homogeneous team of them to autonomously play DTR. It is expected that future competitions will see specialized agents that can perform certain tasks better than others as the competition evolves.
In considering the design and deployment of a new robot to enter a competition in less than one year, it is critical to identify the most critical aspects of the design and build a full minimum viable system as soon as possible. This allows the team to become competitive as quickly as possible and once a base functional design is found, future iterations need only to start building on previously tried and tested methods.
The minimum capabilities required for a single LTA agent to successfully capture a ball and move it through a goal are a basic level of aerial locomotion, perception, and interaction with the environment in the form of a method to move around neutrally buoyant balls.
It should be emphasized that the evolving design of this agent has been a very iterative process with lots of trial and error, as with any robotics competition. More specifically, while we identify the critical areas and problems that need solutions as fast as possible, we only present our solution at the time of the April 2023 competition and do not discuss earlier implementation ideas in detail. We will comment on a few earlier design choices for contrast, but otherwise we do not dwell on intermediate design decisions and instead focus on George Mason University's combination of technologies that happened to be successful in April 2023.
In Section II we identify the minimum viable components necessary to perform all the tasks needed to capture and score a ball in a game of DTR.
1. Move around in 3D space for 30 minutes (one half);
2. Find and capture green/purple circular objects;
3. Find orange/yellow, square/circle/triangle-shaped goals and release the ball through it.
In Section III we formalize the sequence of tasks our robots need to complete in a conceptual model and present our software that ties all the hardware together.
Fig. 1: Defend The Republic (DTR) game scenario showing two teams (Red versus Blue) with green and purple balls scattered in the environment and goals (one triangular, circular and square per side) in fixed locations. The agents must capture these green balls and score by completely passing the ball through the suspended goals.
## II Minimum Viable System and Hardware
We partition the required hardware into four components:
The **sensors** provide all the raw information available for perception and control.
The **actuators** provide all the mechanisms for the agent to move and interact with the world.
The **envelope** is the main helium-filled containment or balloon providing nearly all the lift to the entire agent.
The **gondola** is the main structure that maintains the integrity of the entire agent and potentially houses any required electronics.
A major challenge of most robotic competitions is the seamless integration of all subsystems actually working together, at the same time. The most sophisticated capturing and scoring robot cannot score a single goal if its envelope isn't large enough to support the helium needed to lift the robot into the air. Being an LTA robot competition, a significant challenge of our design problem is weight. The \(200\mathrm{cu\,ft}\) of helium limit per team means the heavier a single robot is, the fewer robots we can have playing on our team. The total weight of components used for the gondola, sensors, and actuators must all be supported by an envelope large enough to provide the required lift.
### _Sensing_
The color-coded game encourages visual data as a primary way of making sense of the environment. In order to be able to identify different shapes and colors in a large environment, an RGB camera is the natural first choice of sensor. In our case we use a monocular USB camera (OV5640) mounted directly at the front of our robot. In terms of minimum viability, no other perception is needed (or used) in our design. Again it should be noted that it is expected future designs can integrate other sensing mechanisms to further improve individual agent performance and ultimately overall team performance.
For instance, a past design included a separate single point LIDAR sensor used solely for detecting whether the robot was currently 'holding' a game ball or not. Although this was useful and successful, the added weight was not worth the minor improvements in performance as the single camera is still able to help determine whether a ball is currently being held or not, albeit not as reliably.
### _Actuation_
To enable our robot to reach any arbitrary point in a 3D environment, we use a 4-motor/propeller combination to allow unicycle-like control (2D position and orientation) coupled with a propeller to allow vertical motion similar to [1, 2]. Fig. 2 shows the motor configuration.
To control the yaw of the robot, a differential drive mechanism was created using a pair of motors and propellers \(m_{1},m_{2}\) placed 1000mm apart. Altitude/height control is performed using a third motor/propeller combo \(m_{3}\) mounted at the bottom of the robot that can produce vertical thrust. Although the above three motors are sufficient for minimally endowing the agent with its required capabilities, we are able to simplify our software problem formalized in Section III-A by adding a fourth motor \(m_{4}\) to the back of the robot as the primary forward-thrust mechanism. This allows us to decouple the yaw control and thrust control by delegating them to different motors.
In order to effectively capture neutrally buoyant balls, we have one additional brushless DC servo motor that actuates the gate or door of the cage shown hanging under the envelope in Fig. 2. The cage is used to capture, hold, and maneuver balls around the environment. Details about the cage design are in Section II-C. The location of \(m_{4}\) in Fig. 2 now enables a dual use not only by providing the primary thrust to the agent, but also serving to blow balls out of the cage to score a goal when operated in reverse.
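As a rough sketch of how these decoupled inputs could be mapped to the four thrusters, the snippet below mixes a forward command, a yaw command, and a vertical command into motor throttles; the gain `k_yaw`, the normalized throttle range, and the function itself are illustrative assumptions rather than the team's flight code.

```python
def clamp(x, lo=-1.0, hi=1.0):
    """Limit a throttle command to the assumed normalized range."""
    return max(lo, min(hi, x))

def mix_motors(u1, u2, u3, k_yaw=0.5):
    """Map kinematic inputs to the four thrusters described above (illustrative sketch).

    u1 -> rear thrust motor m4, u2 -> differential pair m1/m2 (yaw), u3 -> bottom motor m3.
    """
    m1 = clamp(+k_yaw * u2)   # left motor of the differential pair
    m2 = clamp(-k_yaw * u2)   # right motor spins opposite, producing a yaw moment
    m3 = clamp(u3)            # altitude motor
    m4 = clamp(u1)            # rear motor; run in reverse to blow a held ball out of the cage
    return m1, m2, m3, m4
```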
### _Gondola Structure_
The gondola is a mechanical structure on the blimp which holds all the electronics and actuators on the blimp. This is important in maintaining the structural integrity of the robots and ensuring repeatability and uniformity (reducing idiosyncrasy) across our fleet of agents. We use a \(4\times 4\times 1000\)mm carbon fiber rod and 3D printed components to mount the 4 motors and the processing unit. Additionally, we have designed a cage that hangs underneath the envelope as shown in Fig. 2. The cage is used by the robot to capture balls and maneuver them around. Further details on the cage design are in Appendix B.
### _Envelope Design_
The envelope that holds the helium providing the primary lift for the agent is, as dictated by the competition, made out of metalized film [3, 4]. In order to have enough lift to support the payload on the agent, a custom envelope is used so that we can choose its volume and shape. One side of the film is shiny, silver, and uncolored, made with Linear Low-Density PolyEthylene (LLDPE), while the other side has the color of our agent (either light blue or red) and is made with metalized nylon.
Fig. 2: Simplified blimp diagram with motors and cage.
When the silver sides of the film are in contact and heated, an airtight seal is created.
The total weight of our agent and its components is shown in Table II-D and came out to \(645\) g. Since helium can in general lift about 1 gram per liter (under 'normal' conditions of 25 degrees C and 1 atm pressure) and the game must be played in sometimes varying conditions, we added a 20% safety factor and created our envelope to hold about \(770\) liters. This ensures we have sufficient lift and can reach any point in 3D space, even in moderately varying environmental conditions. To deal with varying conditions, we use pliable playdough to get our agent as close as possible to neutrally buoyant, while also being able to very easily control the center of gravity of the agent. Fig. 3 shows the final envelope used. Details on how these envelopes are designed and produced are in Appendix A.
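The lift budget above can be reproduced with a one-line calculation; the sketch below simply restates the numbers from the text (645 g payload, roughly 1 g of lift per liter of helium, 20% margin) and is not a substitute for a proper buoyancy analysis.

```python
# Back-of-the-envelope buoyancy budget using only the values quoted in the text.
payload_g = 645.0         # total agent mass
lift_g_per_liter = 1.0    # approximate helium lift under 'normal' conditions
safety_factor = 1.20      # 20% margin for varying conditions

required_liters = payload_g * safety_factor / lift_g_per_liter
print(f"Required envelope volume ~ {required_liters:.0f} L")  # ~774 L, close to the ~770 L used
```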
## III Minimum Viable System Conceptual Model
With our basic hardware selected to enable the minimum capabilities required to be viable, we now formalize our problem and discuss our simple 4-mode controller to autonomously play DTR depending on the evolving situation. The intuitive idea is to sequentially move through the following 4 modes of operation:
* **Ball Search.** (\(\zeta=1\)) Simply spin in circles until a green or purple object is found.
* **Ball Capture.** (\(\zeta=2\)) Use a tuned PD controller to drive the robot to the found object.
* **Goal Search.** (\(\zeta=3\)) Simply spin in circles until the desired orange or yellow object is found.
* **Goal Score.** (\(\zeta=4\)) Use a tuned PD controller to drive the robot to the goal and release the ball.
Clearly, these operations should not always happen in a seamless sequence. For instance, if capturing a found ball fails in Step 2 and the robot loses sight of the ball, the agent should go back to Step 1. To simplify perception as much as possible, we rely on a subsumption-like control architecture to create a 4-state finite automaton, with higher levels of operation subsuming the lower levels of operation [5, 6].
Thus rather than thinking about the tasks as an always sequential operation, we instead enable simple perception to determine which behavior should be driving the robot at any given time. We describe this architecture in levels where the highest level controller should always take over the lower level ones when they are able to.
**Level 1: Search.** When no target is available to the robot, it should wander around until it finds something of interest.
**Level 2: Go To.** When a target of interest is found, it should go towards it.
**Level 3: Capture.** If the target is a ball, the robot should capture it in its actuated cage when close enough.
**Level 4: Shoot.** If the target is a goal and the robot has a ball, it should shoot the ball when it is close enough.
Let \(\gamma\in\{0,1\}\) indicate whether the agent currently has a ball in its cage or not.
Let \(\sigma\in\{0,1\}\) indicate whether there is at least one ball in the agent's Field of View (FOV) or not.
Let \(\chi\in\{0,1\}\) indicate whether there is at least one goal in the agent's FOV or not.
Fig. 3: Completed and inflated 3-fold envelope design.
The current mode of operation of the agent can then be determined by Algorithm 1.
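A minimal sketch of the mode selection just described is given below; it is written directly from the definitions of \(\gamma,\sigma,\chi\) and the four modes, and the exact Algorithm 1 in our implementation may differ in detail.

```python
def select_mode(gamma: int, sigma: int, chi: int) -> int:
    """Return the operating mode zeta in {1, 2, 3, 4} from the three binary indicators.

    gamma: 1 if a ball is currently held in the cage, sigma: 1 if a ball is in the FOV,
    chi: 1 if a goal is in the FOV.
    """
    if gamma == 0:
        return 2 if sigma == 1 else 1   # Ball Capture if a ball is seen, otherwise Ball Search
    return 4 if chi == 1 else 3         # Goal Score if a goal is seen, otherwise Goal Search
```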
### _Reduced Agent Model_
Here we present a simple kinematic 2.5D model that ignores pitch and roll of our agent. It should be noted that this model is not a good representation of the true system and is only utilized to formally frame our problems and show exactly how we implement their solutions on our real hardware.
The state of the robot is given by its 3D position \((x,y,z)\) and its 2D orientation \(\theta\). Fig. 4 shows the simplified model. Rather than thinking of direct motor inputs, we consider the kinematics
\[\begin{split}\dot{x}&=u_{1}\cos\theta\\ \dot{y}&=u_{1}\sin\theta\\ \dot{\theta}&=u_{2}\\ \dot{z}&=u_{3}\end{split} \tag{1}\]
where \(u_{1},u_{2},u_{3}\) are all bounded by their hardware limits. The couplings are immediate: motor \(m_{3}\) realizes the kinematic input \(u_{3}\) (height) and \(m_{4}\) realizes \(u_{1}\) (forward thrust), while the motors \(m_{1},m_{2}\), used as a differential drive, command the yaw through \(u_{2}\).
In equation (1), \(\dot{x},\dot{y},\dot{z}\), and \(\dot{\theta}\) are the velocities of the robot in the x-, y-, and z-directions and the angular velocity about the z-axis in a global frame. It should be noted that this global state will **never** be available to the agents in general and is only used for us to formalize the control problems and present our solutions.
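For illustration only, the reduced model (1) can be integrated with a simple forward-Euler step as sketched below; the time step and the function itself are assumptions used for exposition, not part of the deployed software.

```python
import math

def step(state, u, dt=0.05):
    """One forward-Euler step of the reduced kinematic model in equation (1).

    state = (x, y, z, theta) in the global frame; u = (u1, u2, u3) are the bounded inputs.
    The real agent never has access to this global state.
    """
    x, y, z, theta = state
    u1, u2, u3 = u
    x += u1 * math.cos(theta) * dt
    y += u1 * math.sin(theta) * dt
    theta += u2 * dt
    z += u3 * dt
    return (x, y, z, theta)
```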
### _Mode Detection and Basic Perception_
Here we discuss how the agent estimates what mode it is in and how to process the information needed in each mode.
Our simple control strategy only requires a minimal level of perception, allowing the agents to know which of the 4 operational modes it should currently be in by estimating the three binary variables \(\gamma,\sigma,\chi\), and where exactly the ball or goal of interest is depending on the mode.
1. **Have Ball? \(\gamma\in\{0,1\}\).** The first thing the agent needs to keep track of is whether it currently is holding a ball in its cage or not. One way to do this using the camera, depending on the position of the camera relative to the cage, is to determine whether the pixels in the camera frame that capture the cage are green or purple (the color of game balls). Simple color detection algorithms can be used to determine the RGB values of specified pixels and compare them against a tuned threshold [7]. However, to have more robust detection in the presence of background noise, we deployed a trained yolov5 model [8] to detect balls and their relative size and position in the frame. The details of the detection system are given in Appendix C.
2. **See Ball? \(\sigma\in\{0,1\}\)**. If the robot is not holding a ball \(\gamma=0\), then it needs to determine if it sees a ball or not. This is as simple as checking whether it sees any green or purple in its video stream. Again here we rely on the yolov5 object detection model to detect the green/purple balls. The training set is tailored to the specific environment so as to enable reliable detection. If a ball is seen \(\sigma=1\), we need a method of determining both the lateral/yaw offset \(e_{\text{yaw}}\) and vertical offset \(e_{\text{vert}}\) from the center of the ball to the center of the camera frame. Note for simplicity we are assuming the center of the camera frame aligns with the center of the cage to properly capture the ball, but depending on the relative position of camera and cage, the offset may be measured from a different point in the camera frame. Fig. 5 shows a screenshot of a real camera frame taken from a blimp during a live game and how \(e_{\text{yaw}},e_{\text{vert}}\) are measured. Exactly how we do this using the camera is detailed in Appendix C.
3. **See Goal? \(\chi\in\{0,1\}\)**. If the robot is holding a ball \(\gamma=1\), then it needs to determine if it sees a goal or not. This is as simple as checking whether it sees any yellow or orange in its video stream. Similarly to when a ball is found, we want to estimate the offsets \(e_{\text{yaw}},e_{\text{vert}}\). However, estimating these quantities for the goals is more challenging than for the balls because they are hollow objects. Fig. 5 shows a screenshot of a real frame where \(e_{\text{yaw}},e_{\text{vert}}\) must be estimated with respect to the center of the hollow orange objects. In addition to the offsets, the size of the goal must also be estimated from the video stream, which will be useful when scoring the goal. The details of how we do this are in Appendix C; a small illustrative sketch of the offset computation is given after this list.
Fig. 4: Kinematic model of the robot.
Fig. 5: Getting the errors \(e_{\text{yaw}}\) and \(e_{\text{vert}}\) that our PD controllers drive to 0 (to effectively center the object of interest in the center of the camera frame). Here we show 2 objects of interest: Green ball and Orange Square Goal.
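The sketch below illustrates how the offsets \(e_{\text{yaw}}\), \(e_{\text{vert}}\) and a size estimate can be computed from a detection bounding box; the (x1, y1, x2, y2) box format and the sign conventions are assumptions for illustration and are not necessarily identical to our deployed perception code in Appendix C.

```python
def pixel_offsets(box, frame_w, frame_h):
    """Compute (e_yaw, e_vert, size) of a detected ball or goal relative to the frame center.

    box: (x1, y1, x2, y2) bounding box in pixels (assumed yolov5-style output).
    Positive e_yaw means the target is right of center; positive e_vert means below center.
    """
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    e_yaw = cx - frame_w / 2.0
    e_vert = cy - frame_h / 2.0
    size = (x2 - x1) * (y2 - y1)   # proximity proxy, e.g. for deciding when to open the gate
    return e_yaw, e_vert, size
```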
### _Control_
Thanks to the design of our agent, we have a very simple controller for all 4 modes of operation.
Intuitively, the input \(u_{1}\) is generally always set to \(\bar{u}_{1}\), a desired forward velocity that allows the agent to move forward like a Dubins vehicle. The only time it changes is to \(u_{1}=-\bar{u}_{1}\), to blow the balls out when scoring a goal.
The input \(u_{2}\) is used to steer or control the yaw of the robot. When searching for a ball or goal, we simply full throttle \(u_{2}=\bar{u}_{2}\) to spin around until it sees something of interest in modes \(\zeta=1,3\). When a target (either ball or goal) is available, \(u_{2}\) uses a simple PD controller to drive \(e_{\text{yaw}}\to 0\).
The input \(u_{3}\) is used to control the altitude or height of the robot. When searching for a ball or goal, we simply toggle \(u_{3}\in\{-\bar{u}_{3},\bar{u}_{3}\}\) randomly to move up and down using a selected velocity in the 3D environment until it sees something of interest in modes \(\zeta=1,3\). When a target is available, \(u_{3}\) uses a simple PD controller to drive \(e_{\text{vert}}\to 0\).
Finally, the cage gate control is actuated to be open only in modes \(\zeta=2,4\). In \(\zeta=2\) the gate remains open all the time, facilitating ball capture. In \(\zeta=4\), we have the gate open only when the agent is close to a goal. This can be detected when the size of the goal being tracked is above a preset threshold determined through experimentation.
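A compact sketch of the mode-dependent control laws described above follows; the PD gains and the cruise speeds are illustrative placeholders and should not be read as our tuned values.

```python
import random

class PD:
    """Minimal PD loop used to drive a pixel error toward zero (gains are assumed)."""
    def __init__(self, kp, kd):
        self.kp, self.kd, self.prev = kp, kd, 0.0

    def update(self, error, dt):
        derivative = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.kd * derivative

U1_BAR, U2_BAR, U3_BAR = 0.6, 1.0, 0.4               # assumed normalized cruise/search speeds
yaw_pd, vert_pd = PD(0.004, 0.001), PD(0.004, 0.001)  # assumed gains

def control(zeta, e_yaw, e_vert, dt):
    """Return (u1, u2, u3) for the current mode zeta, following the rules described above."""
    if zeta in (1, 3):                             # Search: spin at full yaw rate, wander up/down
        return U1_BAR, U2_BAR, random.choice((-U3_BAR, U3_BAR))
    u1 = U1_BAR                                    # forward toward the target (reversed only to shoot)
    u2 = -yaw_pd.update(e_yaw, dt)                 # drive e_yaw -> 0
    u3 = -vert_pd.update(e_vert, dt)               # drive e_vert -> 0
    return u1, u2, u3
```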
Coupled with the basic perception system determining the mode of operation \(\zeta\in\{1,2,3,4\}\), this autonomous behavior of the robot is described using a simple finite automaton with the 4 modes shown in Fig. 8. This is a continuous process until the robot powers off, or a human interrupt is invoked.
The fully built agent in action is shown in Figs. 6 and 7.
first time). More specifically, we relied on in-situ labeling and training of objects of interest in the middle of the competition week to greatly improve our perception capabilities.
### _Limitations and Room for Improvement_
#### Iv-B1 System Setup Time
The setup time for our entire fleet amounted to more than 2 days. This includes inflating the envelopes, assembling the agents, balancing the agents, testing basic motion and finally performing autonomous capture and score tests. The small idiosyncrasies between each agent mean that we spend a lot of time making minor tweaks so that all agents perform the ball capture and score to a minimal degree. One of the major factors here is the experimental nature of the project where the design of the system is constantly evolving and so there are very few standardized parts. This leads to changing assembly procedures that lead to these differences observed in agents.
Needing to manually tune each agent separately due to the small idiosyncrasies is not scalable. New approaches for both (i) consistency and speed of designing and deploying agents and (ii) self-tuning methods are desired.
#### Iv-B2 Variance in Lighting Condition vs Detection Performance
We have observed that the detection performance was heavily dependent on the lighting conditions of the arena. Naturally, relying on in-situ labeling and training is also not a scalable approach, both in terms of setup time and, more importantly, the non-transferability of robots: slight changes in the environment may render algorithms trained earlier useless.
Using only a single sensor (an RGB camera in this case) is a clear limitation. It is desired to integrate other sensing mechanisms to aid in perception, especially in scenarios where the RGB camera is weak. For example, the goals are covered in retro-reflective tape, and using an IR light source shining directly onto the goal together with an IR camera for detection makes it much easier to detect from farther away; this may serve as a more coarse-grained sensor to help the robot get to positions where the camera can do its job properly.
#### Iv-B3 Aerodynamics of the Envelope Design
Admittedly, one of our most lacking areas is good mechanical engineering design. Controllability, specifically stability about the vertical axis (yaw control), was an issue present in all agents. Without proper mechanical engineering analysis and design methods (for instance CFD analysis), we ended up with a much heavier agent than we would like. Propellers are also notoriously inefficient, and the efficiency and controllability of our agents should be greatly improved through the use of different modes of propulsion such as control surfaces.
It is desired to integrate better mechanical engineering practices and aerodynamic trade-offs into our design choices.
#### Iv-B4 Heterogeneous Fleets
While our team has explored a lot of different agent designs and even some specialized roles, our team was basically carried in terms of actual point scoring by the single agent design described in this paper.
The main strategy of our team was to deploy as many agents as possible capable of executing the intended sequence with a non-zero probability. Unfortunately this introduced plenty of issues such as 3 teammates fighting over the same ball due to their low perception capabilities.
Besides just better coordination among a fleet of homogeneous agents, it is desired to have a heterogeneous team of agents where different agents have specialized roles and can work together to play DTR.
While our single agent here focused on reliably moving to precise locations to capture balls and score goals with little room for errors, novel methods that can deal with bad perception in other ways should be explored. While our approach relied heavily on simply improving perception in any way possible to allow our PD controllers to drive agents/balls to within a 5-10cm error tolerance, other methods can rely on much larger capturing mechanisms to tolerate much larger errors in perception (e.g., an aerial pursing net [9, 10]).
Fig. 8: State Machine for Autonomous Behaviors
Fig. 7: Image showing the motor on the back of the cage and the gate in the front of the cage.
## Acknowledgements
This work was supported in part by the Department of the Navy, Office of Naval Research (ONR), under federal grants N00014-20-1-2507 and N00014-23-1-2222.
|
2302.00479 | Characterising Solutions of Anomalous Cancellation | Anomalous cancellation of fractions is a mathematically inaccurate method
where cancelling the common digits of the numerator and denominator correctly
reduces it. While it appears to be accidentally successful, the property of
anomalous cancellation is intricately connected to the number of digits of the
denominator as well as the base in which the fraction is represented. Previous
work have been mostly surrounding three digit solutions or specific properties
of the same. This paper seeks to get general results regarding the structure of
numbers that follow the cancellation property (denoted by $P^*_{\ell; k}$) and
an estimate of the total number of solutions possible in a given base
representation. In particular, interesting properties regarding the saturation
of the number of solutions in general and $p^n$ bases (where $p$ is a prime)
have been studied in detail. | Satvik Saha, Sohom Gupta, Sayan Dutta, Sourin Chatterjee | 2023-01-31T17:43:08Z | http://arxiv.org/abs/2302.00479v1 | [
###### Abstract
Anomalous cancellation of fractions is a mathematically inaccurate method where cancelling the common digits of the numerator and denominator correctly reduces it. While it appears to be _accidentally_ successful, the property of anomalous cancellation is intricately connected to the number of digits of the denominator as well as the base in which the fraction is represented. Previous work has mostly been concerned with three digit solutions or specific properties of the same. This paper seeks to get general results regarding the structure of numbers that follow the cancellation property (denoted by \(\mathbf{P^{*}_{\mathbf{\ell};\,\mathbf{k}}}\)) and an estimate of the total number of solutions possible in a given base representation. In particular, interesting properties regarding the _saturation_ of the number of solutions in general and \(\mathbf{p^{n}}\) bases (where \(\mathbf{p}\) is a prime) have been studied in detail.
# Characterising Solutions of Anomalous Cancellation
Keywords: Anomalous cancellation, Diophantine equation
Satvik Saha, Sohom Gupta, Sayan Dutta, Sourin Chatterjee
## 1 Introduction
A zeal for some interesting mathematical problems brought us to a very peculiar problem, a quest to find all odd digit integers [\(a_{1}a_{2}\ldots a_{2k+1}\)] (all \(a_{i}\)'s are
digits, \(a_{1}\neq 0\)) such that the following property holds.
\[[a_{1}a_{2}\ldots a_{k}]\cdot[a_{k+1}a_{k+2}\ldots a_{2k+1}]=[a_{1}a_{2}\ldots a_{k+1}]\cdot[a_{k+2}a_{k+3}\ldots a_{2k+1}]\]
An elementary example is the number \(164\), which has the property as shown: \(1\times 64=16\times 4\). Similarly, \(24\times 996=249\times 96\) implies that \(24996\) also fits our problem requirement. Now the question is, can one find a way to generate all such numbers? A brute force algorithm always works, but it is never satisfying to leave things at that - a proof of being a mathematics aspirant. This prompted us to go through existing literature which, though scarce, hides a gold mine of information. Adding on to previous work, we arrived at several interesting results that demand a place in this paper. The paper has been structured so that the general reader can be presented with all the beautiful results, while the seasoned reader can move to the appendix to get a flavour of the relevant proofs. All proofs use elementary number theory techniques and hence this paper is directed at undergraduate students and beyond.
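The two motivating examples can be verified mechanically; the short sketch below (ours, for illustration) checks the defining product identity for arbitrary digit tuples in base 10.

```python
def has_property(digits, k, B=10):
    """Check the cancellation property for digits = (a_1, ..., a_k, b, c_1, ..., c_k) in base B."""
    def val(ds):
        v = 0
        for d in ds:
            v = v * B + d
        return v
    left = val(digits[:k + 1]) * val(digits[k + 1:])   # [a_1...a_k b] * [c_1...c_k]
    right = val(digits[:k]) * val(digits[k:])          # [a_1...a_k] * [b c_1...c_k]
    return left == right

assert has_property((1, 6, 4), 1)          # 1 * 64 == 16 * 4
assert has_property((2, 4, 9, 9, 6), 2)    # 24 * 996 == 249 * 96
```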
The anomalous cancellation property for fractions of the form \([ab]_{B}/[bc]_{B}\), where the numerator and denominator are two digit integers in base \(B\), has been studied extensively [2]. Finding such fractions amounts to solving the following Diophantine equation in \(a,b,c\).
\[(aB+b)c=a(bB+c).\]
We shall quickly state some nice results in literature, to keep readers up to date.
Apart from trivial solutions such as \(a=b=c\) or \(b=c=0\), it has been shown that \(b\) is the largest digit. With this, the transformation \((a,b,c)\mapsto(b-c,b,b-a)\) is an involution on the set of non-trivial solutions [5]. Ekhad [6] has examined a much more general anomalous cancellation property where a digit is allowed to be cancelled from anywhere in the numerator and denominator. Fixing the base, denominator, and the digit indices used during cancellation reduces the problem of finding all solutions to a linear Diophantine equation, which can be solved easily.
It should be good to go dive in our work now. For no future confusion, we shall take some time to sort out the notations used frequently. All other symbols have their usual meaning.
Notation.Let \(B\geq 2\) be an integer base, and let \(x_{1},x_{2},\ldots,x_{k}\) be integer digits, with each \(0\leq x_{i}<B\). Then, we denote their concatenation as
\[x=[x_{1}x_{2}\ldots x_{k}]\;\equiv\;\sum_{i=1}^{k}x_{i}B^{k-i}.\]
In our problem, we consider natural numbers \(N_{\ell;\ k}\) in base \(B\), where \(\ell,k\geq 1\). Its digits are denoted as
\[N_{\ell;\ k}=[a_{1}a_{2}\ldots a_{\ell}\,b\,c_{1}c_{2}\ldots c_{k}]=[abc],\]
with blocks
\[a=[a_{1}a_{2}\ldots a_{\ell}],\qquad c=[c_{1}c_{2}\ldots c_{k}].\]
Note that we may write
\[N_{\ell;\ k}=aB^{k+1}+bB^{k}+c.\]
**Definition 1**: We say that the number \(N_{\ell;\ k}\) has property \(P_{\ell;\ k}\), or is a solution of \(P_{\ell;\ k}\), if
\[[a_{1}a_{2}\ldots a_{\ell}\,b]\times[c_{1}c_{2}\ldots c_{k}]=[a_{1}a_{2}\ldots a _{\ell}]\times[b\,c_{1}c_{2}\ldots c_{k}].\]
Equivalently,
\[\frac{[a_{1}a_{2}\ldots a_{\ell}\,\not{b}]}{[\not{b}\,c_{1}c_{2}\ldots c_{k}]}=\frac{[a_{1}a_{2}\ldots a_{\ell}]}{[c_{1}c_{2}\ldots c_{k}]},\]
i.e. the common digit \(b\) may be cancelled from the numerator and denominator without changing the value of the fraction. When both blocks have the same length we write \(P_{k}\) for \(P_{k;\,k}\). A solution is called _trivial_ if one of the blocks \(a,b,c\) is zero or all of its digits are equal; we write \(P^{*}_{\ell;\,k}\) (and \(P^{*}_{k}\)) for the property of being a non-trivial solution of \(P_{\ell;\,k}\) (and \(P_{k}\)).
**Proposition 1**: _Let \(B\) be an arbitrary integer base, and let_
\[N=[a_{1}a_{2}\ldots a_{\ell}\,b\,c_{1}c_{2}\ldots c_{k}],\qquad N^{+}=[a_{1}a_{2} \ldots a_{\ell}\,b\,b\,b\,c_{1}c_{2}\ldots c_{k}].\]
_Then, \(N\) has property \(P_{\ell;\;k}\) if and only if \(N^{+}\) has property \(P_{\ell+1;\;k+1}\). We say that \(N^{+}\) is an extension of \(N\)._
_Proof_ The number \(N^{+}\) has property \(P_{\ell+1;\;k+1}\) precisely when
\[(aB^{2}+bB+b)(bB^{k}+c)=(aB+b)(bB^{k+1}+bB^{k}+c),\]
which after expanding and cancelling reduces to
\[(aB+b)cB=aB(bB^{k}+c),\qquad(aB+b)c=a(bB^{k}+c),\]
which is precisely the statement that \(N\) has property \(P_{\ell;\;k}\). \(\Box\)
This shows that the number of solutions of \(P_{k}^{*}\) in a particular base \(B\) cannot decrease with increasing \(k\). A natural question is whether the number of such solutions gets arbitrarily large; can we keep producing solutions of the \(P_{k}\) problem that aren't merely extensions of old ones? To answer this, we show that the digits of any such solution must obey very rigid rules, culminating in the following.
**Theorem 2**: _Let \(B\) be an arbitrary integer base and let \(N_{k}\) have property \(P_{k}^{*}\). Then, the digits must satisfy the following constraints._
1. \(a_{1}<B/2\)_._
2. \(b=c_{1}=c_{2}=\cdots=c_{k-1}>c_{k}>1\)_._
3. \(\gcd(c_{k},B)>1\)_._
4. \(\gcd(a_{k}-b,B)>1\) _if_ \(a_{k}\neq b\)_._
The proof of Theorem 2 will follow by combining Corollary 12 and Lemmas 14, 15, 17, which we prove later.
An immediate consequence is that all solutions of \(P_{k}^{*}\) look like
\[[a_{1}a_{2}\ldots a_{k}\,b\,b\ldots bc_{k}].\]
This means that the blocks are of the form
\[a=\frac{bM}{B}+\frac{bc_{k}B^{k-1}}{bB-(B-1)c_{k}},\qquad c=bM+c_{k},\qquad M= \frac{B^{k}-B}{B-1}.\]
Theorem 2 guarantees that when the base \(B\) is even, plugging in \(b=B-1\), \(c_{k}=B-2\) corresponds to the largest solution of \(P_{k}^{*}\),
\[a=\frac{1}{2}B^{k}-1,\qquad b=B-1,\qquad c=B^{k}-2.\]
For example, in base \(B=10\), the largest solution of \(P_{3}^{*}\) is \(4999998\).
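As a quick sanity check (an illustrative snippet of ours, not part of the paper), the claimed largest base-10 solution for \(k=3\) can be verified directly from the defining equation.

```python
# Verify that (a, b, c) = (B^k/2 - 1, B - 1, B^k - 2) solves P_k^* for B = 10, k = 3, i.e. 4999998.
B, k = 10, 3
a, b, c = B**k // 2 - 1, B - 1, B**k - 2
assert (a * B + b) * c == a * (b * B**k + c)   # 4999 * 998 == 499 * 9998
```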
Theorem 2 also gives us a nice criterion for determining whether solutions of \(P_{k}^{*}\) exist in the first place.
**Theorem 3**: _The \(P_{k}^{*}\) problem admits solutions if and only if the base \(B\) is composite._
Proof: If the base \(B=mn\) for \(m,n>1\), then \(N_{k}=[abc]\) with
\[(a,b,c)=(mB^{k-1}-1,B-1,B^{k}-n)\]
has property \(P_{k}^{*}\). Conversely, if the base \(B=p\) for prime \(p\), then any solution \(N_{k}\) of \(P_{k}^{*}\) must satisfy \(0<c_{k}<p\) and \(\gcd(c_{k},p)>1\) simultaneously by Theorem 2, a contradiction.
## 3 Saturation of solutions.
In Section 2, we have shown that every candidate solution of \(P_{k}^{*}\) can be completely written in terms of the two single digits \(b\) and \(c_{k}\). Here, we say that \((b,c_{k})\)_generates_ such a solution of \(P_{k}^{*}\). Figure 1 visualizes the solution space of \(P_{101}^{*}\) in base \(B=126\) via such generating tuples. Thus, finding solutions amounts to testing pairs \((b,c_{k})\), with \(1<c_{k}<b<B\). Importantly, this search space depends only on \(B\), not \(k(!)\) Thus, for a given base \(B\), the number of solutions of \(P_{k}^{*}\)_saturates_.
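A minimal search over generating tuples, written from the block formulas stated after Theorem 2, is sketched below; it is an illustrative enumeration of ours rather than the code used to produce the figures in this paper.

```python
def solutions(B, k):
    """Enumerate solutions of P_k^* in base B via generating tuples (b, c_k).

    Uses c = bM + c_k and a = b(bM + c_k) / (bB - (B-1)c_k) with M = (B^k - B)/(B - 1),
    keeping tuples for which a is an integer with exactly k digits.
    """
    M = (B**k - B) // (B - 1)
    out = []
    for b in range(3, B):
        for ck in range(2, b):                    # Theorem 2 forces b > c_k > 1
            den = b * B - (B - 1) * ck
            num = b * (b * M + ck)                # equals b * c
            if num % den == 0:
                a = num // den
                if B**(k - 1) <= a < B**k:        # a must be a k-digit block
                    out.append((a, b, b * M + ck))
    return out

print(solutions(10, 1))   # [(1, 6, 4), (2, 6, 5), (1, 9, 5), (4, 9, 8)] -> 164, 265, 195, 498
```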
**Proposition 4**: _Let \(B\) be an arbitrary integer base. There are at most_
\[\frac{(B-2)(B-3)}{2}\]
_solutions of \(P_{k}^{*}\), for any \(k\geq 1\)._
Proof: For each \(b\), we have \(2\leq c_{k}<b\), i.e. \(b-2\) choices. Putting \(3\leq b<B\), we obtain a total of \(1+2+\cdots+(B-3)=(B-2)(B-3)/2\) candidate tuples \((b,c_{k})\).
The above estimate is clearly very crude: we have only used the relation \(b=c_{i<k}>c_{k}\). Indeed, numerical evidence suggests that a much sharper bound can be established. To illustrate this, the only solutions of \(P_{k}^{*}\) for base \(B=10\) are extensions of the following (in order of appearance with increasing \(k\)).
\[164,\,265,\,195,\,498,\,21775,\,24996,\,1249992,\,340277776.\]
We next describe when the full gamut of solutions for a base \(B\) is achieved.
**Theorem 5**: _Let \(B\) be an arbitrary integer base. Then, the number of solutions of \(P_{k}^{*}\) become constant beyond_
\[k=\max\{5,\,2\log_{2}{(B-1)}+2\}.\]
This means that for \(k\) beyond the above _saturation point_, all solutions of \(P_{k}^{*}\) are mere extensions of old ones. A proof of Theorem 5 is supplied in Appendix D.
The bound on \(k\) described in Theorem 5 is especially bad for one family of bases in particular. When \(B=p^{n}\) for prime \(p\), \(n\geq 2\), the solutions of \(P_{k}^{*}\) saturate in the very first step, \(k=1\).
**Theorem 6**: _Let \(B=p^{n}\) for prime \(p\). Then, all solutions of \(P_{k}^{*}\) are extensions of those of \(P_{1}^{*}\)._
_Proof_ By Lemma 18, any solution \(N_{k}\) of \(P_{k}^{*}\) looks like
\[N_{k}=[a_{1}b\ldots b\,b\,b\ldots bc_{k}].\]
Proposition 1 can be used \(k-1\) times to reduce this to the solution \([a_{1}bc_{k}]\) of \(P_{1}^{*}\).
Figure 1: Generating tuples for \(P_{\ell;\ k}^{*}\) with \(B=126\), \(k=101\), represented by blue dots. Points above the orange curve generate solutions of \(P_{k}^{*}\) with the candidate block \(a\geq B^{k-1}\). Points underneath the orange curve generate solutions of \(P_{\ell;\ k}^{*}\) with \(\ell<k\).
For example, the only solutions of \(P_{k}^{*}\) for base \(B=3^{2}\) are of the form
\[14\ldots 43,\quad 28\ldots 86.\]
## 4 Discussion.
Our main goal was to characterize all solutions of the anomalous cancellation problem \(P_{k}^{*}\), which still seems to be a long way off. However, we have successfully characterized solutions in certain special bases, and given rudimentary bounds on their number. Our major findings are summarized below.
* There are no solutions of \(P_{k}^{*}\) for prime bases.
* The only solutions of \(P_{k}^{*}\) for prime-power bases \(p^{n}\) are extensions of solutions of \(P_{1}^{*}\).
* There are at least as many solutions of \(P_{k}^{*}\) for composite bases \(B\) as there are non-trivial factors of \(B\). These are of the form \([abc]\) with \[(a,b,c)=(mB^{k-1}-1,B-1,B^{k}-n)\] for \(B=mn\), \(m,n>1\), \(M=(B^{k}-B)/(B-1)\).
Some very interesting and peculiar observations were made from numerical solutions; while these haven't been proved in this paper, there is a lot of scope for future expansion.
* Suppose in base \(B\), we have no new non-trivial solutions (except extensions) in \((2k+1)\) digits. We have observed that there would then be no new solutions in \((2k+3)\) digits. In other words, as long as the number of solutions does not saturate, it will keep on increasing.
* Currently we have a quadratic bound on the total number of solutions possible in a given base \(B\), but observed solution counts are much smaller than that.
* An interesting result discussed is that composite bases are guaranteed to have solutions that are of the form \((a,B-1,c)\). Therefore, finding out no solutions of this form guarantees primality. While the current search space is only as good as a brute force method, some better ideas relating \(b\) and \(c_{k}\) might lead to a much faster primality test.
Acknowledgments.The authors would like to thank Prof. Soumya Bhattacharya for his careful reading of the manuscript and many helpful discussions on the topic of this paper.
## Appendix A Trivial solutions.
This section focuses on trivial solutions of \(P_{\ell;\ k}\). We supply a few criteria which can be used to swiftly identify candidate solutions as trivial, based on a subset of their digits.
**Lemma 7**: _Let \(N_{\ell;\ k}\) have property \(P_{\ell;\ k}\). Then,_
1. \(a\mid bc\)_._
2. \(b\mid ac(B-1)\)_._
3. \(c\mid abB^{k}\)_._
_Proof_ Using the fact that \(N\) has the \(P_{\ell;\ k}\) property, we have
\[(aB+b)c=a(bB^{k}+c).\]
This can be rewritten by collecting each of \(a,b,c\) successively on one side, giving
\[a(bB^{k}-(B-1)c)=bc,\qquad b(aB^{k}-c)=ac(B-1),\qquad c(a(B-1)+b)=abB^{k},\]
from which the desired rules follow. \(\Box\)
**Lemma 8**: _Let \(N_{\ell;\ k}\) have property \(P_{\ell;\ k}\). If any one of \(a,b,c=0\), then at least one of the others is also \(0\), i.e. \(N\) is a trivial solution for the \(P_{\ell;\ k}\) problem._
_Proof_ This follows from the divisibility conditions in Lemma 7. \(\Box\)
**Lemma 9**: _Let \(N_{k}\) have property \(P_{k}\) and at least one of the following hold._
1. \(a=c\)_._
2. \(a_{i}=b\) _for all_ \(1\leq i\leq k\)_._
3. \(c_{i}=b\) _for all_ \(1\leq i\leq k\)_._
_Then, all the digits \(a_{i}=c_{i}=b\), i.e. \(N\) is a trivial solution for the \(P_{k}\) problem._
_Proof_ Let \(N\) have property \(P_{k}\), whence \((aB+b)c=a(bB^{k}+c)\). Denote
\[I=[\underbrace{11\ldots 1}_{k}]=\frac{B^{k}-1}{B-1}.\]
1. Putting \(a=c\), \[a=\frac{B^{k}-1}{B-1}\cdot b,\qquad\sum_{i=1}^{k}a_{i}B^{k-i}=\sum_{i=1}^{k}bB ^{k-i}.\]
By the uniqueness of representation in the base \(B\), each \(a_{i}=b\).
2. Putting \(a=bI\), \[c=\frac{abB^{k}}{a(B-1)+b}=\frac{b^{2}IB^{k}}{b(I(B-1)+1)}=bI.\]
3. Putting \(c=bI,\) \[a=\frac{bc}{bB^{k}-(B-1)c}=\frac{b^{2}I}{bB^{k}-b(B-1)I}=\frac{b^{2}I}{b(B^{k}-(B ^{k}-1))}=bI.\]
## Appendix B Uneven blocks.
Although we primarily deal with solutions of \(P_{k}^{*}\) in this paper, it is necessary to make a short detour and examine a few aspects of the more general \(P_{\ell;\;k}^{*}\) problem in order to prove Theorem 2.
**Lemma 10**: _Let \(B\) be an arbitrary integer base and let \(N_{\ell;\;k}\) have property \(P_{\ell;\;k}.\) If \(c_{k}=0\), then the number_
\[N^{-}=[a_{1}a_{2}\ldots a_{\ell}\,b\,c_{1}c_{2}\ldots c_{k-1}]\]
_has property \(P_{\ell;\;k-1}\)_
Proof.: Denote
\[c^{\prime}=[c_{1}c_{2}\ldots c_{k-1}],\qquad c=c^{\prime}B+c_{k}=c^{\prime}B.\]
Since \(N\) has property \(P_{\ell;\;k},\) we have \((aB+b)c=a(bB^{k}+c),\) hence
\[(aB+b)c^{\prime}B=a(bB^{k}+c^{\prime}B)\qquad(aB+b)c^{\prime}=a(bB^{k-1}+c^{ \prime}),\]
which is precisely the statement that \(N^{-}\) has property \(P_{\ell;\;k-1}.\)
**Lemma 11**: _Let \(B\) be an arbitrary integer base and let \(N_{\ell;\;k}\) have property \(P_{\ell;\;k}^{*}\). Then, \(\ell\leq k.\) In other words, there are no solutions of \(P_{\ell;\;k}^{*}\) when \(\ell>k.\)_
Proof.: If \(N_{\ell;\;k}\) has property \(P_{\ell;\;k}^{*},\) then \(a,b,c>0,\)
\[(aB+b)c=a(bB^{k}+c),\qquad a(bB^{k}+c-Bc)=bc.\]
Put \(bB^{k}+c-Bc=bc/a=d,\) which is a positive integer. Suppose that \(1\leq d<B,\) i.e. \(d\) is a single digit in base \(B.\) Expanding \(bB^{k}+c=cB+d\) gives us
\[bB^{k}+c_{1}B^{k-1}+\cdots+c_{k-1}B+c_{k}=c_{1}B^{k}+c_{2}B^{k-1}+\cdots+c_{k} B+d.\]
By the uniqueness of representation of integers in the base \(B,\) we equate the coefficients \(b=c_{1},\)\(c_{1}=c_{2},\)\(\ldots,\)\(c_{k-1}=c_{k},\)\(c_{k}=d;\) specifically, \(b=d.\) Thus, \(a=bc/d=c,\) hence the solution is trivial by Lemma 9.
This means that for \(N\) to be a non-trivial solution, we must have \(d\geq B.\) Now, \(0<b<B\) and \(0<c<B^{k},\) hence
\[a=\frac{bc}{d}<\frac{B\cdot B^{k}}{B}=B^{k}.\]
This shows that \(a\) can have at most \(k\) digits, hence \(\ell\leq k.\)
**Corollary 12**: _If \(N_{k}\) has property \(P_{k}^{*}\), then \(c_{k}\neq 0\)._
Proof: If \(c_{k}=0\), we see that
\[N^{-}=[a_{1}a_{2}\ldots a_{k}\,b\,c_{1}c_{2}\ldots c_{k-1}]\]
has property \(P_{k;\;k-1}\), and hence must be a trivial solution by Lemma 11. Furthermore, it must be trivial in the sense that one of \(a,b,c=0\); if not, then \(a<B^{k-1}\) from the lemma contradicts the fact that \(a\) is a \(k\)-digit number. Thus, the original number \(N_{k}\) is also a trivial solution.
**Corollary 13**: _If \(N_{k}\) has property \(P_{\ell;\;k}^{*}\), then the integer \(d=bc/a\geq B\)._
The technique used in Lemma 11 can be employed to obtain an even sharper bound on the leading block \(a\) of a solution \(N_{k}\).
**Lemma 14**: _Let \(B\) be an arbitrary integer base and let \(N_{k}\) have property \(P_{k}^{*}\). Then, \(a<B^{k}/2\). As a result, the leading digit \(a_{1}<B/2\)._
Proof: Continuing along the same lines as the proof of Lemma 11, suppose that \(N_{k}\) has property \(P_{k}^{*}\). Then \(a,b,c>0\),
\[(aB+b)c=a(bB^{k}+c),\qquad a(bB^{k}+c-Bc)=bc,\]
and \(bB^{k}+c-Bc=bc/a=d\) is a positive integer. We saw that when \(1\leq d<B\), the solution \(N\) is trivial. Furthermore, when \(d\geq 2B-2\), observe that
\[a=\frac{bc}{d}\leq\frac{(B-1)\cdot(B^{k}-1)}{2B-2}=\frac{1}{2}(B^{k}-1).\]
We now examine the remaining case \(B\leq d<2B-2\). Setting \(d^{\prime}=d-B\), we have \(0\leq d^{\prime}<B-2\), i.e. \(d^{\prime}\) is a single digit in base \(B\). Expanding \(bB^{k}+c=cB+d\) gives us
\[bB^{k}+c_{1}B^{k-1} +\cdots+c_{k-1}B+c_{k}\] \[=c_{1}B^{k}+c_{2}B^{k-1}+\cdots+c_{k}B+(B+d^{\prime}).\]
This implies \(B\mid c_{k}-d^{\prime}\); but \(0\leq|c_{k}-d^{\prime}|<B\) forcing \(c_{k}=d^{\prime}\). Since \(k\geq 1\), we can subtract \(c_{k}=d^{\prime}\) and divide \(B\), yielding
\[bB^{k-1}+c_{1}B^{k-2} +\cdots+c_{k-1}\] \[=c_{1}B^{k-1}+c_{2}B^{k-2}+\cdots+c_{k-1}B+d^{\prime}+1.\] ( \[\star\] )
Since \(d^{\prime}<B-2\), the number \(d^{\prime}+1<B-1\) is a single digit, so we can equate coefficients and see that \(b=c_{1}\), \(c_{1}=c_{2}\),..., \(c_{k-2}=c_{k-1}\), \(c_{k-1}=d^{\prime}+1\), hence \(b=c_{1}=\cdots=c_{k-1}=d^{\prime}+1\). In other words, all the digits of \(c+1\) are exactly \(d^{\prime}+1\), so if we set
\[I=[1_{1}1_{2}\ldots 1_{k}]=\frac{B^{k}-1}{B-1},\]
then \(c+1=(d^{\prime}+1)I\). Then,
\[a=\frac{bc}{d}=\frac{(d^{\prime}+1)[(d^{\prime}+1)I-1]}{B+d^{\prime}}<\frac{(d^{\prime}+1)\cdot[(B-1)I-1]}{2(d^{\prime}+1)}=\frac{1}{2}[(B-1)I-1]\]
hence putting \((B-1)I=B^{k}-1\) gives
\[a<\frac{1}{2}(B^{k}-2).\]
## Appendix C The trailing block.
This section deals with the trailing block \(c\) of a solution \(N_{k}\), which can be almost completely described in terms of the central block \(b\).
**Lemma 15**: _Let \(B\) be an arbitrary integer base and let \(N_{k}\) have property \(P_{k}^{*}\). Then, the digits in the last block satisfy \(c_{k}<c_{i<k}=b\). In other words, all solutions of \(P_{k}^{*}\) look like_
\[N=[a_{1}a_{2}\ldots a_{k}\,b\,bb\ldots bc_{k}]\]
Proof: As before, suppose that \(N_{k}\) has property \(P_{k}^{*}\). Then \(a,b,c>0\),
\[(aB+b)c=a(bB^{k}+c),\qquad a(bB^{k}+c-Bc)=bc,\]
and \(bB^{k}+c-Bc=bc/a=d\) is a positive integer. By Corollary 13, we have \(d\geq B\). Since
\[d=\frac{bc}{a}<\frac{(B-1)\cdot B^{k}}{B^{k-1}}<(B-1)B<B^{2},\]
we have \(d=qB+d^{\prime}\), with \(0<q,d^{\prime}<B\); the fact that \(q>0\) follows from \(d\geq B\). Expanding \(bB^{k}+c=cB+d\), we have
\[bB^{k}+c_{1}B^{k-1} +\cdots+c_{k-1}B+c_{k}\] \[=c_{1}B^{k}+c_{2}B^{k-1}+\cdots+c_{k}B+(qB+d^{\prime}).\] ( \[\star\] )
We have \(B\mid c_{k}-d^{\prime}\), forcing \(c_{k}=d^{\prime}\).
Consider the case \(k=1\), where our equation now reads \(bB+c=cB+d\), hence \(bB+d^{\prime}=d^{\prime}B+qB+d^{\prime}\), so \(bB=(d^{\prime}+q)B\). Thus, \(b=d^{\prime}+q>d^{\prime}=c\) as desired.
Now let \(k\geq 2\). Subtracting \(c_{k}=d^{\prime}\) from both sides of (\(\star\)) and dividing by \(B\) gives
\[bB^{k-1}+c_{1}B^{k-2} +\cdots+c_{k-2}B+c_{k-1}\] \[=c_{1}B^{k-1}+c_{2}B^{k-2}+\cdots+c_{k-1}B+d^{\prime}+q.\]
Note that \(2\leq d^{\prime}+q\leq 2B-2<2B\), so expand \(d^{\prime}+q=q^{\prime}B+r\) for some \(0\leq r<B\), and \(q^{\prime}=0,1\). If \(q^{\prime}=0\), then
\[bB^{k-1}+c_{1}B^{k-2} +\cdots+c_{k-2}B+c_{k-1}\] \[=c_{1}B^{k-1}+c_{2}B^{k-2}+\cdots+c_{k-1}B+r,\]
hence we can equate coefficients yielding \(b=c_{1}\), \(c_{1}=c_{2}\),..., \(c_{k-2}=c_{k-1}\), \(c_{k-1}=r=d^{\prime}+q=c_{k}+q>c_{k}\). In other words, all \(b=c_{i<k}>c_{k}\).
Otherwise, \(q^{\prime}=1\), and
\[bB^{k-1}+c_{1}B^{k-2} +\cdots+c_{k-2}B+c_{k-1}\] \[=c_{1}B^{k-1}+c_{2}B^{k-2}+\cdots+(c_{k-1}+1)B+r.\]
This gives \(B\mid c_{k-1}-r\), hence \(c_{k-1}=r\) anyways. Now if \(r=B-1\), we would have \(d^{\prime}+q=q^{\prime}B+r=2B-1\); this contradicts \(q^{\prime}+d\leq 2B-2\). Thus, \(r\leq B-2\), so \(c_{k-1}+1=r+1\leq B-1\) is a single digit in base \(B\). Equating coefficients, \(b=c_{1}\), \(c_{1}=c_{2}\),..., \(c_{k-2}=c_{k-1}=r\). Furthermore, \(c_{k}=d^{\prime}=(q^{\prime}B+r)-q=(B-q)+r<r=c_{k-1}\). Thus, we again have \(b=c_{i<k}>c_{k}\).
**Corollary 16**: _Let \(B\) be an arbitrary integer base and let \(N_{k}\) have property \(P_{k}^{*}\). Then,_
\[a=\frac{bM}{B}+\frac{bc_{k}B^{k-1}}{bB-(B-1)c_{k}},\qquad c=bM+c_{k},\qquad M= \frac{B^{k}-B}{B-1}.\]
_Since \(bM/B=b(B^{k-1}-1)/(B-1)\) is an integer for \(k>1\), so is \(bc_{k}B^{k-1}/(bB-(B-1)c_{k})\)._
_Proof_ Recall that
\[(aB+b)c=a(bB^{k}+c),\qquad a(bB^{k}+c-Bc)=bc.\]
Since \(b=c_{i<k}\) by Lemma 15, we can write
\[c=[bb\dots b\,c_{k}]=bB^{k-1}+\cdots+bB+c_{k}=b\frac{B^{k}-B}{B-1}+c_{k}=bM+c_ {k}.\]
Thus,
\[a=\frac{bc}{bB^{k}-(B-1)c}=\frac{b(bM+c_{k})}{bB^{k}-(b(B^{k}-B)+(B-1)c_{k})}= \frac{b(bM+c_{k})}{bB-(B-1)c_{k}}.\]
Now, note that
\[(bB-(B-1)c_{k})\frac{M}{B}=bM-(B^{k-1}-1)c_{k}=bM+c_{k}-B^{k-1}c_{k}.\]
Thus,
\[\frac{bM+c_{k}}{bB-(B-1)c_{k}}=\frac{M}{B}+\frac{c_{k}B^{k-1}}{bB-(B-1)c_{k}},\]
whence
\[a=\frac{b(bM+c_{k})}{bB-(B-1)c_{k}}=\frac{bM}{B}+\frac{bc_{k}B^{k-1}}{bB-(B-1 )c_{k}}.\]
\(\Box\)
The following observation regarding the final digit \(c_{k}\) ties up the proof of Theorem 2. This requires diving into the prime factorisation of the base \(B\).
**Lemma 17**: _Let \(B\) be an arbitrary integer base and let \(N_{k}\) have property \(P_{k}^{*}\). Then, \(\gcd(c_{k},B)>1\), i.e. the last digit \(c_{k}\) must share some factor \(p>1\) with \(B\). Furthermore, if \(a_{k}\neq b\), then \(gcd(a_{k}-b,B)>1\)._
Proof: Suppose that \(N_{k}\) has property \(P_{k}^{*}\). Then \(a,b,c>0\), \(c_{k}>0\), and
\[(aB+b)c=a(bB^{k}+c),\qquad(ac-abB^{k-1})B=(a-b)c.\]
This gives \(B\mid(a-b)c\); writing
\[a^{\prime} =[a_{1}a_{2}\ldots a_{k-1}], a=a^{\prime}B+a_{k},\] \[c^{\prime} =[c_{1}c_{2}\ldots c_{k-1}], c=c^{\prime}B+c_{k},\]
we have \(B\mid(a-b)(c^{\prime}B+c_{k})\), hence \(B\mid(a-b)c_{k}=(a^{\prime}B+a_{k}-b)c_{k}\), hence \(B\mid(a_{k}-b)c_{k}\). Let
\[B=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\ldots p_{m}^{\alpha_{m}}\]
be the prime factorization of \(B\), with each \(\alpha_{i}\geq 1\). Then, for each prime factor \(p_{i}\) of \(B\), \(p_{i}^{\alpha_{i}}\mid(a_{k}-b)c_{k}\). Let \(\beta_{i}\) be the greatest integer such that \(p_{i}^{\beta_{i}}\mid c_{k}\). Then, we must have \(p_{i}^{\alpha_{i}-\beta_{i}}\mid a_{k}-b\); if not, the greatest power of \(p_{i}\) dividing \((a_{k}-b)c_{k}\) would have been strictly less than \(\beta_{i}+(\alpha_{i}-\beta_{i})=\alpha_{i}\), a contradiction.
It is clear that we cannot have all \(\beta_{i}\geq\alpha_{i}\); if so, we would have all \(p_{i}^{\alpha_{i}}\mid c_{k}\), hence the product \(p_{1}^{\alpha_{1}}\ldots p_{m}^{\alpha_{m}}=B\mid c_{k}\), but \(0<c_{k}<B\), a contradiction. Thus, there must be some \(\beta_{j}<\alpha_{j}\) corresponding to which \(p^{\alpha_{j}-\beta_{j}}\mid a_{k}-b\), hence \(\gcd(a_{k}-b,B)\geq p_{j}>1\). This proves the second part of our lemma.
To prove the first part, we use induction on the block size \(k\). Consider \(k=1\), where \(a_{k}=a\), \(c_{k}=c\), and suppose that all \(\beta_{i}=0\). This forces all \(p_{i}^{\alpha_{i}-\beta_{i}}=p_{i}^{\alpha_{i}}\mid a-b\), hence their product \(p_{1}^{\alpha_{1}}\ldots p_{m}^{\alpha_{m}}=B\mid a-b\). But \(0\leq|a-b|<B\), forcing \(a-b=0\). By Lemma 9, this gives a trivial solution, a contradiction. Thus, there must be some \(\beta_{j^{\prime}}>0\), hence \(\gcd(c,B)\geq p_{j^{\prime}}^{\beta_{j^{\prime}}}>1\).
Next, suppose that the statement holds for some \(k\geq 1\), and let
\[N=[a_{1}a_{2}\ldots a_{k}a_{k+1}\,b\,c_{1}c_{2}\ldots c_{k+1}]=[a\,b\,c]\]
have property \(P_{k+1}^{*}\). Then, \(a,b,c>0\), \(c_{k+1}>0\), and \(B\mid(a_{k+1}-b)c_{k+1}\). Again, if all \(\beta_{i}=0\), then we must have all \(p_{i}^{\alpha_{i}-\beta_{i}}=p_{i}^{\alpha_{i}}\mid a_{k+1}-b\), hence their product \(B\mid a_{k+1}-b\). Since \(0\leq|a_{k+1}-b|<B\), we have \(a_{k+1}-b=0\). But we also have \(b=c_{1}>c_{k+1}\) by Lemma 15. By Proposition 1, the number with the digits \(a_{k+1},c_{1}=b\) removed, i.e.
\[N^{\prime}=[a_{1}a_{2}\ldots a_{k}\,b\,c_{2}\ldots c_{k+1}],\]
must have the \(P_{k}^{*}\) property (each block remains non-zero, and \(b>c_{k+1}\) so not all digits are equal). Applying our induction hypothesis, we have \(\gcd(c_{k+1},B)>1\).
Thus, by induction, our statement holds for all \(k\geq 1\).
## Appendix D Estimate of saturation points.
We are now ready to prove Theorem 5 using Corollary 16.
Proof of Theorem 5: Suppose that the digits \(b,c_{k}\) generate a solution of the \(P_{k}^{*}\) problem as per Lemma 15; further suppose that they do _not_ generate a solution for the \(P_{k-1}^{*}\) problem. Then, Corollary 16 guarantees that
\[\frac{bc_{k}B^{k-1}}{bB-(B-1)c_{k}}\]
is an integer. Since \(b,c_{k}\) do not generate a solution for the \(P_{k-1}^{*}\) problem, either
\[\frac{bc_{k}B^{k-2}}{bB-(B-1)c_{k}}\]
is not an integer, or the first block \(a^{\prime}\) is too small, i.e. the block
\[a^{\prime}=\frac{b(B^{k-2}-1)}{B-1}+\frac{bc_{k}B^{k-2}}{bB-(B-1)c_{k}}<B^{k-2}.\]
The divisibility conditions in the first case are enough to obtain certain relations between \(k\) and the prime factors of \(B\) and \(bc_{k}\). The bound on \(k\) in the second case requires only direct algebraic manipulation.
Consider the former case, and factorize
\[B=p_{1}^{\alpha_{1}}\cdots p_{r}^{\alpha_{r}},\qquad bc_{k}=p_{1}^{\beta_{1}} \cdots p_{r}^{\beta_{r}},\]
where \(p_{1},\ldots,p_{r}\) are primes, and each \(\alpha_{i},\beta_{i}\geq 0\). Then, the denominator
\[bB-(B-1)c_{k}\mid bc_{k}B^{k-1}=p_{1}^{\alpha_{1}(k-1)+\beta_{1}}\cdots p_{r} ^{\alpha_{r}(k-1)+\beta_{r}},\]
hence its prime factorization cannot have any primes apart from \(p_{1},\ldots,p_{r}\). Write
\[bB-(B-1)c_{k}=p_{1}^{\gamma_{1}}\cdots p_{r}^{\gamma_{r}}\]
for \(\gamma_{i}\geq 0\). On the other hand, the denominator
\[bB-(B-1)c_{k}\nmid bc_{k}B^{k-2}=p_{1}^{\alpha_{1}(k-2)+\beta_{1}}\cdots p_{r} ^{\alpha_{r}(k-2)+\beta_{r}}.\]
This means that we must have some \(j\) for which \(\gamma_{j}>\alpha_{j}(k-2)+\beta_{j}\). Furthermore, \(\gamma_{j}\leq\alpha_{j}(k-1)+\beta_{j}\) so as to satisfy the first divisibility, ensuring that \(\alpha_{j}\neq 0\). Now,
\[(B-1)^{2}\geq(B-1)(B-c_{k})\geq bB-(B-1)c_{k}\geq p_{j}^{\gamma_{j}}>p_{j}^{ \alpha_{j}(k-2)+\beta_{j}},\]
which gives
\[(B-1)^{2}>p_{j}^{\alpha_{j}(k-2)+\beta_{j}}\geq 2^{\alpha_{j}(k-2)},\qquad 2 \log_{2}(B-1)>\alpha_{j}(k-2).\]
Since \(\alpha_{j}\geq 1\),
\[k-2<\frac{2\log_{2}(B-1)}{\alpha_{j}}\leq 2\log_{2}(B-1),\qquad k<2\log_{2}(B-1)+2.\]
Next, we consider the latter case where the block \(a^{\prime}\) for the \(P_{k-1}^{*}\) problem is too small, hence
\[a^{\prime}=\frac{b(B^{k-2}-1)}{B-1}+\frac{bc_{k}B^{k-2}}{bB-(B-1)c_{k}}<B^{k-2}.\]
However, the corresponding block \(a\) for the \(P_{k}^{*}\) problem is of the right size, hence
\[a=\frac{b(B^{k-1}-1)}{B-1}+\frac{bc_{k}B^{k-1}}{bB-(B-1)c_{k}}\geq B^{k-1}.\]
Collecting the powers of \(B\), we have the equations
\[B^{k-2}\left[\frac{b^{2}B}{(B-1)(bB-(B-1)c_{k})}-1\right]<\frac {b}{B-1},\] \[B^{k-1}\left[\frac{b^{2}B}{(B-1)(bB-(B-1)c_{k})}-1\right]\geq \frac{b}{B-1}.\]
Note that the term inside the square bracket must be positive by the second equation. Thus, we can divide the first equation by this and obtain the estimate
\[B^{k-2} <\frac{b}{B-1}\cdot\frac{(B-1)(bB-(B-1)c_{k})}{b^{2}B-(B-1)(bB-(B-1 )c_{k})}\] \[=\frac{b^{2}B-(B-1)bc_{k}}{b^{2}B-bB(B-1)+(B-1)^{2}c_{k}}\] \[\leq b^{2}B-(B-1)bc_{k}\] \[\leq(B-1)^{3}.\]
Thus,
\[(B-1)^{k-2}<B^{k-2}\leq(B-1)^{3},\qquad k-2<3,\qquad k<5.\]
This means that new solutions cannot appear with increasing \(k\), when
\[k\geq 2\log_{2}(B-1)+2,\quad\text{ and }\quad k\geq 5,\]
that is,
\[k\geq\max\{5,\,2\log_{2}(B-1)+2\}.\]
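This saturation estimate is easy to probe numerically: once \(k\) reaches \(\max\{5,\,2\log_{2}(B-1)+2\}\), any digit pair \((b,c_{k})\) generating a \(P_{k}^{*}\) solution should already generate a \(P_{k-1}^{*}\) solution. The sketch below (our own code, with the \(P_{k}^{*}\) conditions reconstructed from Lemma 15 and Corollary 16 as in the earlier snippet) checks this for a few small bases.

```python
import math

def generates(B, k, b, ck):
    """Does the digit pair (b, c_k) generate a P_k* solution in base B?"""
    if not (1 <= ck < b < B):
        return False
    c = b * (B**k - B) // (B - 1) + ck
    den = b * B - (B - 1) * ck
    return (b * c) % den == 0 and B**(k - 1) <= (b * c) // den < B**k

def saturation_point(B, k_extra=3):
    """Verify that no new generating pair appears at or beyond the bound."""
    k_sat = max(5, math.ceil(2 * math.log2(B - 1) + 2))
    for k in range(k_sat, k_sat + k_extra):
        for b in range(1, B):
            for ck in range(1, b):
                if generates(B, k, b, ck):
                    assert generates(B, k - 1, b, ck)
    return k_sat

for B in range(3, 13):
    print(B, saturation_point(B))
```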
## Appendix E Powers of primes.
In this section, we examine solutions of \(P_{k}^{*}\) where the base \(B=p^{n}\) for prime \(p\), \(n\geq 2\).
**Lemma 18**: _Let \(B=p^{n}\) where \(p\) is prime, \(n>1\). If \(N_{k}\) has property \(P_{k}^{*}\) for \(k\geq 1\), then \(a_{i\neq 1}=b=c_{i\neq k}\). Furthermore, \(p\mid c_{k}\)._
_Proof_ Suppose that \(N_{k}\) has property \(P_{k}^{*}\) for \(k>1\). Then, \(a,b,c>0\) via Lemma 8, the last digit \(c_{k}>0\), and \(p\mid c_{k}\) using Lemma 17. By Corollary 16, write
\[(a,b,c)=\left(\frac{bM}{B}+\frac{bc_{k}B^{k-1}}{bB-(B-1)c_{k}},b,bM+c_{k} \right),\qquad M=\frac{B^{k}-B}{B-1}.\]
Specifically,
\[\frac{bc_{k}B^{k-1}}{bB-(B-1)c_{k}}=q\]
is an integer. Write \(B=p^{n}\), and \(c_{k}=p^{r}c_{k}^{\prime}\), \(q=p^{s}q^{\prime}\), \(b=p^{t}b^{\prime}\) with \(p\nmid c_{k}^{\prime},q^{\prime},b^{\prime}\). Since \(c_{k}<B=p^{n}\) is a single digit, we must have \(r<n\). Now, we have
\[bc_{k}p^{n(k-1)}=q(bp^{n}-(p^{n}-1)c_{k})=q((b-c_{k})p^{n}+c_{k}),\]
hence
\[bc_{k}^{\prime}p^{r}p^{n(k-1)}=q((b-c_{k})p^{n}+p^{r}c_{k}^{\prime}),\qquad bc _{k}^{\prime}p^{n(k-1)}=q((b-c_{k})p^{n-r}+c_{k}^{\prime}).\]
Now,
\[b^{\prime}c_{k}^{\prime}p^{t}p^{n(k-1)}=q^{\prime}p^{s}((b-c_{k})p^{n-r}+c_{k }^{\prime}),\]
hence
\[b^{\prime}c_{k}^{\prime}p^{n(k-1)+t-s}=q^{\prime}(b-c_{k})p^{n-r}+q^{\prime}c _{k}^{\prime}.\]
Note that we have integers on both sides. Since \(r<n\), we have \(p\mid q^{\prime}(b-c_{k})p^{n-r}\); but by construction, \(p\nmid q^{\prime}c^{\prime}_{k}\), so \(p\) does not divide the right hand side. Thus, \(p\) does not divide the left hand side \(b^{\prime}c^{\prime}_{k}p^{n(k-1)+t-s}\) either. Again, \(p\nmid b^{\prime}c^{\prime}_{k}\), hence we have \(s=n(k-1)+t\). Thus,
\[\frac{bc_{k}}{bB-(B-1)c_{k}}=\frac{q}{p^{n(k-1)}}=q^{\prime}p^{s-n(k-1)}=q^{ \prime}p^{t}\]
is an integer.
Now, the number \(N_{*}=[a_{*}\,b_{*}\,c_{*}]\) where
\[(a_{*},b_{*},c_{*})=\left(\frac{bc_{k}}{bB-(B-1)c_{k}},b,c_{k}\right)\]
has property \(P^{*}_{1}\). To see this, note that \(c_{k}<b\) gives
\[a_{*}=\frac{bc_{k}}{bB-(B-1)c_{k}}<\frac{bc_{k}}{bB-(B-1)b}=c_{k}<B\]
ensuring that \(a_{*}\) is a single digit, and that
\[\frac{1}{a_{*}}+\frac{B-1}{b_{*}}=\frac{bB-(B-1)c_{k}}{bc_{k}}+\frac{B-1}{b}= \frac{B}{c_{k}}=\frac{B^{1}}{c_{*}},\]
satisfying the \(P_{1}\) property.
By Proposition 1, its extension
\[N_{*}^{+}=[a_{*}b_{*}\ldots b_{*}\,b_{*}\,b_{*}\ldots b_{*}c_{*}]=[a_{*}b\ldots b\,b\,b\ldots bc_{k}]=[a_{*}^{+}\,b\,c]\]
has property \(P^{*}_{k}\). However, the digits \(b,c_{k}\) uniquely determine the first block \(a_{*}^{+}\), and we already have a solution \(N=[a\,b\,c]\) generated by \(b,c_{k}\). This forces \(a=a_{*}^{+}\), hence all \(a_{i\neq 1}=b\) as desired. In other words, our original solution \(N\) for \(P_{k}\) is an extension of the solution \(N_{*}\) for \(P_{1}\).
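For small prime-power bases, Lemma 18 can be checked directly with the same Corollary 16 construction. A short sketch (our own code; the encoding of \(P_{k}^{*}\) is the one reconstructed above from Lemma 15 and Corollary 16, so the assertions are only as faithful as that encoding):

```python
def base_digits(m, B, width):
    """Base-B digits of m, most significant first, padded to the given width."""
    out = []
    for _ in range(width):
        out.append(m % B)
        m //= B
    return out[::-1]

def check_lemma18(p, n, k):
    """For B = p**n, every P_k* solution built from (b, c_k) as in Corollary 16
    should satisfy p | c_k and have all digits of a after the first equal to b."""
    B = p**n
    M = (B**k - B) // (B - 1)
    for b in range(1, B):
        for ck in range(1, b):
            c = b * M + ck
            den = b * B - (B - 1) * ck
            if (b * c) % den:
                continue
            a = (b * c) // den
            if not (B**(k - 1) <= a < B**k):
                continue
            assert ck % p == 0
            assert all(d == b for d in base_digits(a, B, k)[1:])

check_lemma18(2, 2, 3)   # B = 4
check_lemma18(3, 2, 2)   # B = 9
check_lemma18(2, 3, 4)   # B = 8
```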
|
2301.00129 | Parameter-free analytic continuation for quantum many-body calculations | We develop a reliable parameter-free analytic continuation method for quantum
many-body calculations. Our method is based on a kernel grid, a causal spline,
a regularization using the second-derivative roughness penalty, and the L-curve
criterion. We also develop the L-curve averaged deviation to estimate the
precision of our analytic continuation. To deal with statistically obtained
data more efficiently, we further develop a bootstrap-averaged analytic
continuation method. In the test using the exact imaginary-frequency Green's
function with added statistical error, our method produces the spectral
function that converges systematically to the exact one as the statistical
error decreases. As an application, we simulate the two-orbital Hubbard model
for various electron numbers with the dynamical-mean field theory in the
imaginary time and obtain the real-frequency self-energy with our analytic
continuation method, clearly identifying a non-Fermi liquid behavior as the
electron number approaches the half filling from the quarter filling. Our
analytic continuation can be used widely and it will facilitate drawing clear
conclusions from imaginary-time quantum many-body calculations. | Mancheon Han, Hyoung Joon Choi | 2022-12-31T05:49:58Z | http://arxiv.org/abs/2301.00129v1 | # Parameter-free analytic continuation for quantum many-body calculations
###### Abstract
We develop a reliable parameter-free analytic continuation method for quantum many-body calculations. Our method is based on a kernel grid, a causal spline, a regularization using the second-derivative roughness penalty, and the L-curve criterion. We also develop the L-curve averaged deviation to estimate the precision of our analytic continuation. To deal with statistically obtained data more efficiently, we further develop a bootstrap-averaged analytic continuation method. In the test using the exact imaginary-frequency Green's function with added statistical error, our method produces the spectral function that converges systematically to the exact one as the statistical error decreases. As an application, we simulate the two-orbital Hubbard model for various electron numbers with the dynamical-mean field theory in the imaginary time and obtain the real-frequency self-energy with our analytic continuation method, clearly identifying a non-Fermi liquid behavior as the electron number approaches the half filling from the quarter filling. Our analytic continuation can be used widely and it will facilitate drawing clear conclusions from imaginary-time quantum many-body calculations.
## I Introduction
Numerical simulations of quantum many-body systems in real time often suffer from the instability caused by oscillatory real-time evolution of \(\exp(-iHt)\). The severe dynamical sign problem in the real-time quantum Monte Carlo simulation is an example of such instability [1, 2, 3]. This instability can be reduced by exploiting the imaginary time by changing \(\exp(-iHt)\) to \(\exp(-H\tau)\)[4]. However, it is not straightforward to compare the imaginary-time Green's function with experimental results, so the real-frequency Green's function needs to be obtained from the imaginary-frequency one. The relation between the imaginary-frequency Green's function \(G(i\omega_{n})\) and the real-frequency spectral function \(A(x)\) is
\[G(i\omega_{n})=\int\frac{A(x)}{i\omega_{n}-x}dx,\quad A(x)\geq 0. \tag{1}\]
Using Eq. (1), one needs to find \(A(x)\) from numerically calculated \(G(i\omega_{n})\). This procedure is called the numerical analytic continuation, and it is severely ill-posed [5].
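For reference, the forward map of Eq. (1), from a trial spectral function to Matsubara data, is straightforward to evaluate numerically; it is the inverse map that is ill-posed. A minimal sketch assuming a fermionic Green's function (our own helper, with simple trapezoidal quadrature):

```python
import numpy as np

def giwn_from_spectrum(x, A, T, n_max=100):
    """Forward map of Eq. (1): G(iw_n) = int dx A(x)/(iw_n - x),
    for the first n_max fermionic Matsubara frequencies at temperature T."""
    wn = (2 * np.arange(n_max) + 1) * np.pi * T
    return np.array([np.trapz(A / (1j * w - x), x) for w in wn])
```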
Strong demands for the numerical analytic continuation led to the development of many methods despite the ill-posed nature. Among the various methods, two categories are popular. One category is estimates of a function \(G(z)\) of a complex variable \(z\) which interpolates \(G(i\omega_{n})\). This category contains the Pade approximant [6, 7, 8] and the Nevanlinna analytic continuation [9]. The interpolation approach is independent of the real-frequency grid and can produce the entire spectral function [8, 9], but it is very sensitive to numerical precision and the accuracy of input data [8].
The other category is estimates of the spectral function \(A(x)\) using \(\chi^{2}=\sum_{i\omega_{n}}|G(i\omega_{n})-\int\frac{A(x)}{i\omega_{n}-x}dx|^{2}\). One way to use \(\chi^{2}\) is to obtain \(A(x)\) by averaging various spectral functions with the weight of \(\exp(-\chi^{2})\)[10, 11, 12, 13]. Another way to use \(\chi^{2}\) is to regularize it by adding a regularization parameter \(\lambda\) times a functional \(R[A]\) so that \(A(x)\) is obtained by minimizing \(\chi^{2}+\lambda R[A]\). A representative example of the regularization approach is the maximum entropy method [14, 15, 16, 17, 5, 18], where \(R[A]\) is the entropy of the spectral function with respect to the default model \(D(x)\).
The regularization approach is stable but its implementations so far require many control parameters. The maximum entropy method requires the real-frequency grid \(\{x_{i}\}\), \(D(x)\), and \(\lambda\). These parameters can affect the resulting \(A(x)\) substantially [18, 19, 15]. To find optimal parameters, one needs to test several values of \(\{x_{i}\},D(x),\) and \(\lambda\). While the optimal \(\lambda\) can be found with some criteria [18, 15, 20], methods to determine \(\{x_{i}\}\) and \(D(x)\) are not established yet. In addition, when applied to a metallic system, the maximum entropy method requires the preblur [17, 5, 20, 21], which makes it not straightforward to obtain \(A(x)\) for metallic and insulating phases on equal footing.
Quantum many-body calculations are often conducted with the imaginary-time quantum Monte Carlo method [22, 23, 24, 25, 26], which yields imaginary-frequency data with statistical errors. To consider statistical errors, it is typical to scale \(\chi^{2}\) with the standard deviation or the covariance matrix [14, 15, 5, 18], which can be estimated with resampling methods such as the jackknife approach [27] or the bootstrap approach [28, 29]. Because statistical errors can induce artifacts in the analytically continued spectral function, the analytic continuation requires careful consideration of statistical errors.
In this work, we develop a reliable parameter-free analytic continuation method. Our method is based on the regularization approach, where we remove any arbitrary selection of control parameters as follows. First, we develop a real-frequency kernel grid which can be used generally and can support the precise description of corresponding imaginary-frequency data. Second, we use the second-derivative roughness penalty [30], which ensures our method does not need the default model \(D(x)\). Then, the proper regularization parameter \(\lambda\) is found by the L-curve criterion [31, 32]. We also develop the L-curve averaged deviation to estimate the precision of our analytic continuation. In addition, to deal with statistical errors more carefully, we develop a bootstrap-averaged analytic continuation method.
## II Parameter-free analytic continuation
### Kernel grid
The analytic continuation finds a spectral function \(A(x)\) which satisfies Eq. (1) for given \(G(i\omega_{n})\). Numerical implementation of this procedure requires a continuous description of \(A(x)\) using a finite number of values. For this, we used the natural cubic spline [33; 34] interpolation, where \(A(x)\) is represented as \(A(x)=C_{i,0}+C_{i,1}(x-x_{i})+C_{i,2}(x-x_{i})^{2}+C_{i,3}(x-x_{i})^{3}\) in the \(i\)th interval of \(x_{i}{\leq}x{\leq}x_{i+1}\) for \(i=1,2,\cdots,n_{x}-1\), with \(A^{\prime\prime}(x_{1})=A^{\prime\prime}(x_{n_{x}})=0\). Here \(n_{x}\) is the number of grid points and \(A^{\prime\prime}(x)\) is the second derivative of \(A\). For appropriate grid points, we develop a kernel grid which depends only on the temperature \(k_{B}T\), the real-frequency cutoff \(x_{\text{max}}\), and the number of grid points \(n_{x}\). The accurate analytic continuation requires the real-frequency grid which describes \(G(i\omega_{n})\) of Eq. (1) accurately, so the grid should be dense near \(x\) where \(A(x)\) contributes greatly to \(G(i\omega_{n})\). Thus, for a single \(i\omega_{n}\), the appropriate grid density should be proportional to \(\left|\delta G(i\omega_{n})/\delta A(x)\right|^{2}=1/(\omega_{n}^{2}+x^{2})\). Hence, to describe \(G(i\omega_{n})\) for all \(i\omega_{n}\), we use the grid density \(\rho(x)\) such that
\[\rho(x)\propto\sum_{n=0}^{\infty}\left|\frac{\delta G(i\omega_{n})}{\delta A (x)}\right|^{2}=\frac{1}{4k_{B}Tx}\tanh\left(\frac{x}{2k_{B}T}\right), \tag{2}\]
where \(\omega_{n}=(2n+1)\pi k_{B}T\) for the fermionic Green's function. Then, grid points are determined by the equidistribution principle [34], \(\int_{x_{i}}^{x_{i+1}}\rho(x)dx=C/(n_{x}-1)\) with \(x_{1}=-x_{\text{max}}\), \(x_{n_{x}}=x_{\text{max}}\), and \(C=\int_{-x_{\text{max}}}^{x_{\text{max}}}\rho(x)dx\). Here \(x_{\text{max}}\) and \(n_{x}\) are determined to be large enough to make the obtained spectral function converge.
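A minimal numerical sketch of this construction (our own function names, with \(k_{B}=1\) assumed as in the temperature units used throughout) tabulates the cumulative integral of \(\rho(x)\) on a fine auxiliary mesh and inverts it by interpolation; the \(x\to 0\) limit of Eq. (2), \(\rho(0)\propto 1/(8T^{2})\), is inserted by hand:

```python
import numpy as np

def kernel_grid(T, x_max, n_x, n_fine=100001):
    """Kernel grid: points equidistributed with respect to the density of Eq. (2),
    rho(x) = tanh(x/(2T))/(4*T*x), with rho(0) = 1/(8*T**2)."""
    x = np.linspace(-x_max, x_max, n_fine)
    rho = np.full_like(x, 1.0 / (8.0 * T**2))
    nonzero = np.abs(x) > 1e-12
    rho[nonzero] = np.tanh(x[nonzero] / (2.0 * T)) / (4.0 * T * x[nonzero])
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x))))
    return np.interp(np.linspace(0.0, cdf[-1], n_x), cdf, x)  # equidistribution
```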
We compare the performance of our kernel grid with that of a uniform grid in Fig. 1. Figures 1(a) and 1(c) show two different spectral functions \(A(x)\). The spectral function \(A(x)\) shown in Fig. 1(a) has a sharp peak at the Fermi level (\(x=0\)), while that shown in Fig. 1(c) has no peak at the Fermi level. Figures 1(b) and 1(d) show \(G(i\omega_{n})\) corresponding to \(A(x)\) shown in Figs. 1(a) and 1(c), respectively. In the case that \(A(x)\) has a sharp peak at the Fermi level (\(x=0\)), our kernel grid describes \(A(x)\) accurately enough to produce \(G(i\omega_{n})\) correctly, while the uniform grid does not [Fig. 1(b)]. In the case that \(A(x)\) does not have a sharp peak, both our kernel grid and the uniform grid describe \(A(x)\) accurately enough to produce \(G(i\omega_{n})\) correctly [Fig. 1(d)].
### Causal cubic spline
Finding the spectral function \(A(x)\) from \(G(i\omega_{n})\) by minimizing \(\chi^{2}[A]=\sum_{i\omega_{n}}|G(i\omega_{n})-\int\frac{A(x)}{i\omega_{n}-x} dx|^{2}\) is extremely ill-posed [5]. This ill-posedness can be significantly weakened by imposing the causality condition \(A(x)\geq 0\)[35]. Since \(A(x_{i})\geq 0\), \(i=1,\cdots,n_{x}\), satisfies the causality only at the grid points \(x_{i}\), we develop conditions that impose the causality for all \(x\) as follows. The cubic spline can be expressed as a linear combination of cubic B-splines which are non-negative functions [36]. Thus, \(A(x)\geq 0\) for all \(x\) if expansion coefficients are non-negative [36]:
\[A(x_{1})\geq 0,\quad A(x_{n_{x}})\geq 0,\] \[A(x_{i})+\frac{1}{3}(x_{i\pm 1}-x_{i})A^{\prime}(x_{i})\geq 0. \tag{3}\]
Here \(A^{\prime}(x)\) is the derivative of \(A\). The cubic spline constrained by Eq. (3) satisfies \(A(x)\geq 0\) not only at grid points but also throughout intervals between grid points. This cubic spline, which we call the _causal_ cubic spline, weakens the ill-posedness of the analytic continuation significantly, but it does not resolve the ill-posedness completely so minimization of \(\chi^{2}[A]\) with the constraint Eq. (3) produces \(A(x)\) still having spiky behavior due to overfitting to numerical errors [35]. To obtain smooth and physically meaningful \(A(x)\), we employ an appropriate roughness penalty and the L-curve criterion.
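The constraints of Eq. (3) are inexpensive to test for any candidate spline. A small sketch (our own helper; `A` and `Ap` hold the spline values and first derivatives on the grid `x`):

```python
import numpy as np

def satisfies_eq3(x, A, Ap):
    """Check the non-negativity conditions of Eq. (3) for a cubic spline."""
    ok = (A[0] >= 0.0) and (A[-1] >= 0.0)
    right = A[:-1] + (x[1:] - x[:-1]) * Ap[:-1] / 3.0   # neighbour x_{i+1}
    left = A[1:] + (x[:-1] - x[1:]) * Ap[1:] / 3.0      # neighbour x_{i-1}
    return bool(ok and np.all(right >= 0.0) and np.all(left >= 0.0))
```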
### The roughness penalty and the L-curve criterion
To avoid the spiky behavior in \(A(x)\) caused by overfitting to numerical errors, we use the second-derivative roughness penalty [30]. The roughness penalty \(R[A]\) is defined as
\[R[A]=\int_{-x_{\text{max}}}^{x_{\text{max}}}\left|A^{\prime\prime}(x)\right|^{2 }dx. \tag{4}\]
Then, we obtain \(A(x)\) by minimizing a regularized functional
\[Q_{\lambda}[A]=\chi^{2}[A]+\lambda R[A], \tag{5}\]
with a regularization parameter \(\lambda\). We use the interior-point method [37; 38] to implement this minimization. Then, the
Figure 1: Comparison of our kernel grid and a uniform grid in describing the real-frequency spectral function \(A(x)\). (a) and (c) \(A(x)\) versus the real frequency \(x\). (b) and (d) The imaginary part of Green’s function \(G(i\omega_{n})\) versus the Matsubara frequency \(\omega_{n}\) corresponding to \(A(x)\) shown in (a) and (c), respectively. Our kernel grid and the uniform grid are generated with \(n_{x}=51\) and \(x_{\text{max}}=5\). Temperature is \(0.01\). In (a) and (c), values of \(A(x)\) at our kernel grid (at the uniform grid) are shown by red (green) dots. In (b) and (d), values of the imaginary part of \(G(i\omega_{n})\) calculated from values of \(A(x)\) at our kernel grid (at the uniform grid) are shown by red (green) dots. In (a)–(d), exact values are shown by black lines.
optimal \(\lambda\) that balances \(\chi^{2}[A]\) and \(R[A]\) is found by the L-curve criterion [31, 32], which is a popular approach to determine the regularization parameter in various cases as follows. For each \(\lambda\), one finds \(A_{\lambda}\) that minimizes \(Q_{\lambda}[A]\) in Eq. (5). Then, let \(\chi^{2}(\lambda)=\chi^{2}[A_{\lambda}]\) and \(R(\lambda)=R[A_{\lambda}]\). The L-curve is the plot of \(\log_{10}[R(\lambda)]\) versus \(\log_{10}[\chi^{2}(\lambda)]\). The L-curve criterion is to choose \(\lambda\) that corresponds to the corner of the L-curve as the optimal value, \(\lambda_{\text{opt}}\) as illustrated in Fig. 2(a). This procedure can be performed stably and efficiently by using a recently developed algorithm [39] which typically requires minimizations of \(Q_{\lambda}[A]\) at about \(20\) different values of \(\lambda\). For a very small \(\lambda\), \(\chi^{2}[A]\) dominates \(Q_{\lambda}[A]\) in Eq. (5), resulting in unphysical peaks in \(A_{\lambda}(x)\), as shown in Fig. 2(b). For a very large \(\lambda\), \(R[A]\) dominates \(Q_{\lambda}[A]\) in Eq. (5), resulting in too much broadening in \(A_{\lambda}(x)\), as shown in Fig. 2(d). On the other hand, the spectral function computed with \(\lambda_{\text{opt}}\) matches excellently the exact one, as shown in Fig. 2(c). With this criterion for \(\lambda\) and with large enough values of \(n_{x}\) and \(x_{\text{max}}\) (see Appendix A for the convergence test with respect to \(n_{x}\) and \(x_{\text{max}}\)), our analytic continuation does not have any arbitrarily chosen parameter that can affect the real-frequency result significantly, so we call our method a _parameter-free_ method. Our analytic continuation method can be applied to the self-energy or other Matsubara frequency quantities which can be represented in a way similar to Eq. (1).
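To make the fitting step concrete, the sketch below minimizes a discretized version of Eq. (5). It is deliberately simplified relative to the procedure described here: the spectral function is represented by its values on the grid with trapezoid-like quadrature weights, the roughness penalty uses finite differences instead of the exact spline integral, non-negativity is imposed only at the grid points rather than through the full Eq. (3), and a generic bound-constrained optimizer stands in for the interior-point method of Refs. [37; 38].

```python
import numpy as np
from scipy.optimize import minimize

def fit_spectrum(x, wn, G, lam, A0=None):
    """Minimize a discretized Q_lambda[A] = chi^2[A] + lam * R[A] with A >= 0 on the grid."""
    K = 1.0 / (1j * wn[:, None] - x[None, :])   # kernel of Eq. (1)
    w = np.gradient(x)                          # approximate quadrature weights
    h = np.diff(x)

    def Q(A):
        chi2 = np.sum(np.abs(G - K @ (w * A))**2)
        d2 = 2.0 * np.diff(np.diff(A) / h) / (h[:-1] + h[1:])  # A'' at interior points
        return chi2 + lam * np.trapz(d2**2, x[1:-1])

    A0 = np.full(x.size, 1e-3) if A0 is None else A0
    res = minimize(Q, A0, method="L-BFGS-B", bounds=[(0.0, None)] * x.size)
    return res.x
```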
We can use the L-curve to estimate the precision of the analytic continuation as well. In the L-curve, \(\lambda<\lambda_{\text{opt}}\) produces \(A_{\lambda}(x)\) which is more fitted to \(G(i\omega_{n})\), so the precision of \(A_{\lambda_{\text{opt}}}(x)\) can be estimated by comparing it with \(A_{\lambda}(x)\). In this regard, we define the L-curve averaged deviation (LAD),
\[\text{LAD}(x)=\frac{\int_{C}[A_{\lambda}(x)-A_{\lambda_{\text{opt}}}(x)]ds}{ \int_{C}ds}, \tag{6}\]
where \(C\) is the L-curve from \(\lambda=0\) to \(\lambda=\lambda_{\text{opt}}\), and we use it as an error estimator.
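In practice both the corner selection and Eq. (6) can be evaluated from a finite set of \(\lambda\) values. The sketch below is our own simplified version: a discrete maximum-curvature estimate replaces the algorithm of Ref. [39], and the line integrals of Eq. (6) are approximated by straight segments between successive L-curve points (with `A_list` sorted by increasing \(\lambda\)).

```python
import numpy as np

def lcurve_corner(lambdas, chi2, R):
    """Index of the L-curve corner: maximum curvature of (log10 chi2, log10 R)."""
    t = np.log10(lambdas)
    u, v = np.log10(chi2), np.log10(R)
    du, dv = np.gradient(u, t), np.gradient(v, t)
    d2u, d2v = np.gradient(du, t), np.gradient(dv, t)
    curvature = (du * d2v - dv * d2u) / (du**2 + dv**2)**1.5
    return int(np.argmax(np.abs(curvature)))

def lad(A_list, chi2, R, i_opt):
    """L-curve averaged deviation of Eq. (6) along the curve up to lambda_opt."""
    pts = np.column_stack([np.log10(chi2[:i_opt + 1]), np.log10(R[:i_opt + 1])])
    ds = np.linalg.norm(np.diff(pts, axis=0), axis=1)        # segment lengths
    dev = np.asarray(A_list[:i_opt + 1]) - A_list[i_opt]     # A_lambda(x) - A_opt(x)
    avg = 0.5 * (dev[:-1] + dev[1:])                         # segment midpoints
    return np.tensordot(ds, avg, axes=(0, 0)) / ds.sum()
```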
### Bootstrap-averaged analytic continuation
The Monte Carlo approach [22, 23, 24, 25, 26] is often used to calculate the imaginary-frequency Green's function \(G(i\omega_{n})\). As a result, the calculated \(G(i\omega_{n})\) has statistical errors. The spectral function calculated by the analytic continuation of such data can exhibit artifacts from statistical errors in \(G(i\omega_{n})\) [see Fig. 3(a) for an example]. Here we devise a bootstrap-averaged analytic continuation method. The bootstrap approach [28, 29] is a widely used resampling method in statistics. Suppose we have \(N\) independent data \(G(i\omega_{n})_{j}\), where \(j=1,\cdots,N\), and we repeat the bootstrap sampling \(N_{\text{B}}\) times. For the \(k\)th bootstrap sampling, we randomly sample \(N\) data from \(G(i\omega_{n})_{j}\) with replacement and calculate their average \(g_{k}^{\text{B}}(i\omega_{n})=\frac{1}{N}\sum_{j=1}^{N}n_{kj}G(i\omega_{n})_{j}\), where \(n_{kj}\) is the number of repetitions of \(G(i\omega_{n})_{j}\) in the \(k\)th bootstrap sampling. Then, we obtain the analytically continued spectral function \(A[g_{k}^{\text{B}}]\) for \(g_{k}^{\text{B}}\). Finally, the spectral function \(A(x)\) is calculated by \(A(x)=\frac{1}{N_{\text{B}}}\sum_{k=1}^{N_{\text{B}}}A[g_{k}^{\text{B}}](x)\), which converges as \(N_{\text{B}}\) increases. In our present work, we used \(N_{\text{B}}=256\), which is large enough to obtain converged results. Similarly, we can also obtain bootstrap-averaged LAD(\(x\)) for the error estimation of bootstrap-averaged \(A(x)\).
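A minimal sketch of the bootstrap average (our own helper names; `continue_fn` stands for any analytic-continuation routine, such as the regularized fit described above):

```python
import numpy as np

def bootstrap_continue(G_samples, continue_fn, n_boot=256, seed=None):
    """Bootstrap-averaged analytic continuation.
    G_samples: array of shape (N, n_freq) with independent measurements of G(iw_n).
    continue_fn: maps an averaged G(iw_n) to a spectral function A(x)."""
    rng = np.random.default_rng(seed)
    N = G_samples.shape[0]
    spectra = []
    for _ in range(n_boot):
        idx = rng.integers(0, N, size=N)            # resample with replacement
        spectra.append(continue_fn(G_samples[idx].mean(axis=0)))
    return np.mean(spectra, axis=0)
```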
Figure 3 compares the results of our analytic continuation without and with the bootstrap average. We consider an exact spectral function \(A_{\text{exact}}(x)\) which has a narrow peak at \(x=0\) and a broad peak at \(x=-4\) and another broad peak at \(x=5\), as shown by the black line in Fig. 3. We obtain the exact Green's function \(G_{\text{exact}}(i\omega_{n})\) from \(A_{\text{exact}}(x)\). We used a temperature of \(0.01\) and the first \(100\) Matsubara frequencies. The kernel grid is generated with \(x_{\text{max}}=10\) and \(n_{x}=101\). To perform our analytic continuation without bootstrap average, we obtain the Green's function \(G(i\omega_{n})\) by adding a Gaussian error of the standard deviation of \(10^{-6}\) to \(G_{\text{exact}}(i\omega_{n})\). Then
Figure 2: The L-curve criterion to find the optimal regularization parameter \(\lambda\). (a) The L-curve. Spectral functions \(A(x)\) versus the real frequency \(x\), shown by green lines, computed by minimizing Eq. (5), with (b) \(\lambda=10^{-16}\), (c) \(\lambda=\lambda_{\text{opt}}=6.21\times 10^{-11}\), and (d) \(\lambda=1\). In (b)–(d), black lines show the exact spectral function for comparison. The imaginary-frequency Green’s function \(G(i\omega_{n})\) is generated by adding Gaussian errors with a standard deviation of \(10^{-6}\) to the exact one. Temperature is \(0.01\). We used the first \(100\) Matsubara frequencies and the kernel grid with \(x_{\text{max}}=10\) and \(n_{x}=101\), which are large enough for converged results.
Figure 3: Comparison of the spectral functions \(A(x)\) from our analytic continuation method without and with the bootstrap average. (a) \(A(x)\) obtained without the bootstrap average. (b) \(A(x)\) obtained with the bootstrap average. See the text for detailed procedures of our analytic continuation without and with the bootstrap average. In (a) and (b), green lines are \(A(x)\) obtained by our analytic continuation and black lines are the exact spectral function \(A_{\text{exact}}(x)\).
we obtain the spectral function \(A(x)\) by applying our analytic continuation method to \(G(i\omega_{n})\). Figure 3(a) shows the obtained \(A(x)\), which deviates slightly from \(A_{\text{exact}}(x)\) at around \(x=5\). Next, to perform our bootstrap-averaged analytic continuation, we obtain \(N=100\) independent values of \(G(i\omega_{n})\) by adding a Gaussian error of the standard deviation of \(10^{-5}\) to \(G_{\text{exact}}(i\omega_{n})\). Then, we obtain the spectral function \(A(x)\) by applying our bootstrap-averaged analytic continuation method with \(N_{\text{B}}=256\). Figure 3(b) shows the obtained \(A(x)\), which agrees excellently with \(A_{\text{exact}}(x)\). The deviation of \(A(x)\) from \(A_{\text{exact}}(x)\) at around \(x=5\) in Fig. 3(a) is due to overfitting of \(A(x)\) to \(G(i\omega_{n})\) with statistical errors, and it is avoided by the bootstrap average, as shown in Fig. 3(b). So the bootstrap average is useful for avoiding the overfitting to data with statistical errors. This bootstrap average can be used with any analytic continuation method.
## III Benchmarks and applications
### Tests with exact results
Figure 4 demonstrates the statistical-error dependence of the spectral function \(A(x)\) and LAD\((x)\) obtained with our method. We consider an exact \(A(x)\) consisting of three peaks, from which we obtain the exact \(G(i\omega_{n})\). Then, we add statistical errors to the exact \(G(i\omega_{n})\) and apply our method to obtain \(A(x)\) and LAD\((x)\). Here the standard deviation \(\delta\) of the statistical errors is independent of \(\omega_{n}\). (See Appendix B for \(\delta\) varying with \(\omega_{n}\).) As shown in Fig. 4(a), when statistical errors in \(G(i\omega_{n})\) are large, the obtained \(A(x)\) is broader than the exact one, and LAD\((x)\) is large. As the statistical errors are reduced, \(A(x)\) converges to the exact one, and LAD\((x)\) diminishes [Figs. 4(b)-(d)]. These results show that our method behaves well with respect to the statistical errors, reproducing the exact spectral function if the statistical errors are small enough.
For a more detailed analysis, we fit the obtained \(A(x)\) in Fig. 4 with three Gaussian functions. Table 1 shows the obtained peak centers and peak widths. These results show explicitly that peak centers and peak widths converge to the corresponding exact values as statistical errors in \(G(i\omega_{n})\) decrease and show that peak centers converge faster than peak widths.
In addition, we tested our analytic continuation method with various spectral functions (Fig. 5). Here we performed the bootstrap average with \(N_{\text{B}}=256\), using \(N=100\) independent values of \(G(i\omega_{n})\). Each independent value of \(G(i\omega_{n})\) was obtained by adding a Gaussian error of the standard deviation of \(10^{-5}\) to the exact Green's function \(G_{\text{exact}}(i\omega_{n})\) ob
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Left peak} & \multicolumn{2}{c}{Middle peak} & \multicolumn{2}{c}{Right peak} \\ \(A(x)\) & Center & Width & Center & Width & Center & Width \\ \hline Fig. 4(a) & \(-3.24\) & \(1.28\) & \(1.34\) & \(0.670\) & \(2.67\) & \(1.04\) \\ Fig. 4(b) & \(-3.01\) & \(0.981\) & \(1.30\) & \(0.657\) & \(2.88\) & \(1.18\) \\ Fig. 4(c) & \(-3.01\) & \(0.813\) & \(1.02\) & \(0.466\) & \(3.05\) & \(0.687\) \\ Fig. 4(d) & \(-3.01\) & \(0.813\) & \(1.00\) & \(0.414\) & \(3.02\) & \(0.609\) \\ Exact & \(-3.00\) & \(0.800\) & \(1.00\) & \(0.400\) & \(3.00\) & \(0.600\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Analysis of peak centers and peak widths in the spectral functions shown by green lines in Fig. 4.
Figure 4: The statistical-error dependence of the spectral function \(A(x)\), shown by green lines, obtained with our bootstrap-averaged analytic continuation method. The standard deviation \(\delta\) of the statistical error in imaginary-frequency data is (a) \(10^{-2}\), (b) \(10^{-3}\), (c) \(10^{-4}\), and (d) \(10^{-5}\). Black lines show the exact spectral function, and gray lines show the deviation of \(A(x)\) by LAD\((x)\). In (a) and (b), Gaussian fits for \(A(x)\) are shown in dotted lines. Temperature is \(0.01\). We used the first \(100\) Matsubara frequencies and a kernel grid with \(x_{\text{max}}=10\) and \(n_{x}=101\), which are large enough for converged results.
Figure 5: Tests of our bootstrap-averaged analytic continuation method for various spectral functions \(A(x)\) consisting of (a) a broad peak at \(x<0\), (b) two broad peaks at \(x<0\) and \(x=0\), (c) a narrow peak at \(x=0\) and a broad peak at \(x>0\), and (d) a narrow peak at \(x=0\) and two broad peaks at \(x<0\) and \(x>0\). In (a)–(d), green lines show \(A(x)\) obtained with our bootstrap-averaged analytic continuation, while black lines show exact spectral functions \(A_{\text{exact}}(x)\). We used a temperature of \(0.01\), the first \(100\) Matsubara frequencies, and a kernel grid with \(x_{\text{max}}=10\) and \(n_{x}=101\).
tained from the exact spectral function \(A_{\text{exact}}(x)\). Figure 5 confirms analytically continued spectral functions \(A(x)\) agree excellently with corresponding \(A_{\text{exact}}(x)\). We also compared our method with the maximum entropy method (Appendix C).
### Dynamical mean-field theory simulation of the two-orbital Hubbard model
As an application of our method, we consider the two-orbital Hubbard model [40] described by the Hamiltonian
\[H= -\sum_{\langle ij\rangle,ab\sigma}t^{ab}_{ij}d^{\dagger}_{ia\sigma}d_{jb\sigma}-\sum_{i\sigma}\mu n_{i\sigma}+\sum_{ia}Un_{ia\uparrow}n_{ia\downarrow}\] \[+\sum_{i,a<b,\sigma}\{(U-2J)n_{ia\sigma}n_{ib\bar{\sigma}}+(U-3J)n_{ia\sigma}n_{ib\sigma}\}\] \[-\sum_{i,a\neq b}J(d^{\dagger}_{ia\downarrow}d^{\dagger}_{ib\uparrow}d_{ib\downarrow}d_{ia\uparrow}+d^{\dagger}_{ib\uparrow}d^{\dagger}_{ib\downarrow}d_{ia\uparrow}d_{ia\downarrow}) \tag{7}\]
in the infinite-dimensional Bethe lattice with a semicircular noninteracting density of states. Here \(d_{ia\sigma}(d^{\dagger}_{ia\sigma})\) is the annihilation (creation) operator of an electron of spin \(\sigma\) in the \(a\)th (\(a=1,2\)) orbital at the \(i\)th site, \(n_{ia\sigma}=d^{\dagger}_{ia\sigma}d_{ia\sigma}\), \(t^{ab}_{ij}\) is the nearest-neighbor hopping energy, \(\mu\) is the chemical potential, \(U\) is the local Coulomb interaction, and \(J\) is the Hund's coupling. With the Pade approximant, the imaginary part of the self-energy in this model shows a peak at around the Fermi level in the non-Fermi-liquid phase [41]. Our analytic continuation method makes it possible to analyze the existence and evolution of peaks in detail, as shown below.
We simulate the two-orbital Hubbard model of Eq. (7) with the dynamical mean field theory (DMFT) [42; 43; 44; 45; 46]. We implemented the hybridization-expansion continuous-time quantum Monte Carlo method [25] and used it as an impurity solver. Both orbitals have the same bandwidth of \(4\) in our energy units. We simulated the case with \(U=8\), \(J=U/6\), and temperature \(T=0.02\), and considered the paramagnetic phase. We applied our analytic continuation method to obtain the real-frequency self-energy \(\Sigma^{R}(x)\) from the first \(100\) imaginary-frequency self-energies \(\Sigma(i\omega_{n})\). We used \(n_{x}=101\) and \(x_{\text{max}}=30\) to form the kernel grid, which are large enough for converged results. For the bootstrap average, we used \(1.28\times 10^{4}\) independent sets of \(\Sigma(i\omega_{n})\) obtained with \(2\times 10^{6}\) Monte Carlo steps. After obtaining \(\Sigma^{R}(x)\), we computed the spectral function \(A(x)=-\frac{1}{\pi}\operatorname{Im}[G(x+i\eta)]\) by using \(G(x+i\eta)=\int_{-\infty}^{\infty}D(\epsilon)/(x+i\eta+\mu-\epsilon-\Sigma^{R}(x))d\epsilon\). Here \(D(\epsilon)=\sqrt{4-\epsilon^{2}}/(2\pi)\).
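The last step, from \(\Sigma^{R}(x)\) to \(A(x)\) through the semicircular density of states, is a simple quadrature. A sketch (our own function; `x` and `sigma_R` are equal-length arrays of real frequencies and retarded self-energies, and `eta` is a small positive broadening):

```python
import numpy as np

def spectral_from_sigma(x, sigma_R, mu, eta=0.01, n_eps=4001):
    """A(x) = -Im G(x + i*eta)/pi with G(x + i*eta) = int deps D(eps) /
    (x + i*eta + mu - eps - Sigma^R(x)) and D(eps) = sqrt(4 - eps^2)/(2*pi)."""
    eps = np.linspace(-2.0, 2.0, n_eps)
    D = np.sqrt(np.clip(4.0 - eps**2, 0.0, None)) / (2.0 * np.pi)
    z = x + 1j * eta + mu - sigma_R
    G = np.trapz(D[None, :] / (z[:, None] - eps[None, :]), eps, axis=1)
    return -G.imag / np.pi
```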
Figure 6 shows the spectral function as a function of the electron number \(n\) per site, \(A(x,n)\). The spectral function varies continuously except for the half filling (\(n=2\)), where the insulating phase appears. The particle-hole symmetry, \(A(x,n)=A(-x,4-n)\), is clearly observed in Fig. 6, although it is not enforced. At \(n\leq 1\) or \(n\geq 3\), the spectral function shows a quasiparticle peak at the Fermi level and two Hubbard bands. As \(n\) is increased from \(1.2\) or decreased from \(2.8\), a shoulder appears in the Fermi-level quasiparticle peak.
To investigate the origin of this shoulder, we plot \(\operatorname{Im}[\Sigma^{R}(x,n)]\) in Fig. 7. At \(n\leq 1\) or \(n\geq 3\), \(\operatorname{Im}[\Sigma^{R}(x)]\) shows two peaks corresponding to the lower and upper Hubbard bands, which we call the lower and upper Hubbard peaks. As \(n\) is increased from \(1.2\) (decreased from \(2.8\)), the lower (upper) Hubbard peak splits into two peaks, resulting in the shoulder of the quasiparticle peak in \(A(x)\). These Hubbard-peak splittings induce non-Fermi-liquid behavior, as discussed be
Figure 7: The imaginary part of the self-energy of the two-orbital Hubbard model as a function of the electron number \(n\) per site, obtained by applying our analytic continuation method to imaginary-frequency DMFT results. The imaginary part of the self-energy is plotted (a) for \(0{\leq}n{\leq}4\) continuously with an intensity map and (b) for several selected \(n\). In (b), self-energies are offset with a step of \(-7\) for clarity. The self-energy diverges at the Fermi level (\(x=0\)) in the case of \(n=2\), which is marked with a black dot in (a). The lower (upper) Hubbard peak splits into two peaks at about \(n=1.2\) (\(n=2.8\)), which is marked with a dotted line in (a).
Figure 6: The spectral function of the two-orbital Hubbard model as a function of the electron number \(n\) per site, obtained by applying our analytic continuation method to imaginary-frequency DMFT results. The spectral function is plotted (a) for \(0{\leq}n{\leq}4\) continuously with an intensity map and (b) for several selected \(n\). In (b), spectral functions are offset with a step of \(0.4\) for clarity. The shoulder of the quasiparticle peak appears at about \(n=1.2\) and \(n=2.8\), which are marked with dotted lines in (a).
low.
In Fig. 8, we compare the self-energies for \(n=0.85\) and \(n=1.55\). For \(n=0.85\), \(\mathrm{Im}[\Sigma(i\omega_{n})]\) is proportional to \(\omega_{n}\) at small \(\omega_{n}\), which indicates a Fermi-liquid behavior [47]. In the real frequency, the Fermi-liquid behavior, \(\mathrm{Im}[\Sigma^{R}(x)]=C+\alpha x^{2}\) for small \(x\)[48], is obtained from our analytic continuation [Fig. 8(b)]. On the other hand, for \(n=1.55\), \(\mathrm{Im}[\Sigma(i\omega_{n})]\) is almost proportional to \(\sqrt{\omega_{n}}\), indicating a non-Fermi-liquid behavior [47]. This non-Fermi-liquid behavior appears in the real frequency \(x\) as linear dependence of \(\mathrm{Im}[\Sigma^{R}(x)]\) on \(x\) near the Fermi level (\(x=0\)) [Fig. 8(d)]. This linear dependence comes from splitting of Hubbard peaks in \(\mathrm{Im}[\Sigma^{R}(x)]\). Non-Fermi-liquid behavior in \(\mathrm{Im}[\Sigma(i\omega_{n})]\) is known to appear at the spin-freezing crossover [49, 47, 41, 50]. Our results show that the spin-freezing crossover (or, equivalently, the spin-orbital separation [51]) occurs with the Hubbard-peak splitting in \(\mathrm{Im}[\Sigma^{R}(x)]\).
To show the spin-freezing crossover, we obtain the local magnetic susceptibility \(\chi_{\text{loc}}\) and the dynamic contribution \(\Delta\chi_{\text{loc}}\) to the local magnetic susceptibility. With the operator \(S_{z}=(1/2)\sum_{a=1}^{2}(n_{a\uparrow}-n_{a\downarrow})\), we define the local magnetic susceptibility as
\[\chi_{\text{loc}}=\int_{0}^{\beta}\langle S_{z}(\tau)S_{z}(0)\rangle d\tau, \tag{8}\]
and the dynamic contribution as
\[\Delta\chi_{\text{loc}}=\int_{0}^{\beta}[\langle S_{z}(\tau)S_{z}(0)\rangle- \langle S_{z}(\beta/2)S_{z}(0)\rangle]d\tau. \tag{9}\]
Here \(\beta=1/k_{B}T\). Figure 9 shows \(\chi_{\text{loc}}\) and \(\Delta\chi_{\text{loc}}\) as functions of the electron number \(n\) per site. As the electron number approaches the half filling (\(n=2\)), \(\chi_{\text{loc}}\) increases monotonically. On the other hand, \(\Delta\chi_{\text{loc}}\), which represents the fluctuation of the local spin moment, is maximal near \(n=1.6\) and \(2.4\). This indicates the spin-freezing crossover [47, 50].
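Both quantities follow directly from the measured imaginary-time correlator on a \(\tau\) grid spanning \([0,\beta]\). A minimal sketch (our own helper; `Szz` holds \(\langle S_{z}(\tau)S_{z}(0)\rangle\) on the grid `tau`, with `tau[0] = 0` and `tau[-1] = beta`):

```python
import numpy as np

def spin_susceptibilities(tau, Szz):
    """chi_loc of Eq. (8) and its dynamic part Delta chi_loc of Eq. (9)."""
    beta = tau[-1]
    chi_loc = np.trapz(Szz, tau)                      # Eq. (8)
    Szz_half = np.interp(0.5 * beta, tau, Szz)        # <S_z(beta/2) S_z(0)>
    return chi_loc, chi_loc - beta * Szz_half         # Eq. (8), Eq. (9)
```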
## IV Summary
In summary, we developed a reliable parameter-free analytic continuation method, tested it with exact cases, and studied the two-orbital Hubbard model as an application. We developed a kernel grid which is suitable for the numerical analytic continuation and employed the causal cubic spline, the second-derivative roughness penalty, and the L-curve criterion. With these, we developed a reliable parameter-free analytic continuation method and an error estimator. We also developed a bootstrap-averaged analytic continuation. We demonstrated that our method reproduces the exact spectral function as statistical errors in imaginary-frequency data decrease. As an application, we computed real-frequency quantities from imaginary-frequency DMFT results of the two-orbital Hubbard model, where we found that peaks in \(\mathrm{Im}[\Sigma^{R}(x)]\) split as the electron number approaches the half filling from the quarter filling. We verified that this peak splitting corresponds to non-Fermi-liquid behavior considered to be the signature of the spin-freezing crossover in previous works [47, 49, 50, 41]. Our analytic continuation method does not depend on any specific detail of the system under consideration, so it can be used widely to carry out a clear real-frequency analysis from various imaginary-time quantum many-body calculations.
###### Acknowledgements.
This work was supported by NRF of Korea (Grants No. 2020R1A2C3013673 and No. 2017R1A5A1014862) and the KISTI supercomputing center (Project No. KSC-2021-CRE-0384).
## Appendix A Convergence test of our kernel grid
Our analytic continuation uses the kernel grid which is a set of non-uniform \(n_{x}\) points in the range of \(-x_{\text{max}}\leq x\leq x_{\text{max}}\), as described in the main text. We test the convergence of the spectral function \(A_{\lambda_{\text{opt}}}(x)\) calculated from our analytic continuation method with respect to \(n_{x}\) and \(x_{\text{max}}\) by using the
Figure 8: Imaginary part of \(\Sigma(i\omega_{n})\), shown by red dots, and \(\Sigma^{R}(x)\), shown by blue lines, of the two-orbital Hubbard model for (a) and (b) \(n=0.85\) and (c) and (d) \(n=1.55\). In (a) and (c), solid lines are fitted to the lowest three points of \(\mathrm{Im}[\Sigma(i\omega_{n})]\). In (b) and (d), dotted lines are fitted to low-frequency part of \(\mathrm{Im}[\Sigma^{R}(x)]\). Gray lines show the deviation of \(\mathrm{Im}[\Sigma^{R}(x)]\) by \(\mathrm{LAD}(x)\) applied to the self-energy.
Figure 9: Spin-freezing crossover of the two-orbital Hubbard model. (a) Local magnetic susceptibility \(\chi_{\text{loc}}\) and (b) dynamic contribution \(\Delta\chi_{\text{loc}}\) to the local magnetic susceptibility as a function of electron number \(n\) per site. See the text for computational details.
data and the spectral function considered in Fig. 2. To quantify the difference between \(A_{\lambda_{\text{opt}}}(x)\) and \(A_{\text{exact}}(x)\), we define a norm \(||dA||=\{\int_{-\infty}^{\infty}(A_{\lambda_{\text{opt}}}(x)-A_{\text{exact}}( x))^{2}dx\}^{1/2}\). Figure 10 shows \(||dA||\) versus \(n_{x}\) and \(x_{\text{max}}\), confirming that \(A_{\lambda_{\text{opt}}}(x)\) converges to \(A_{\text{exact}}(x)\) as \(n_{x}\) and \(x_{\text{max}}\) increase. At large enough \(n_{x}\) and \(x_{\text{max}}\), \(||dA||\) may have small nonzero values, as shown in Fig. 10, which are due to (i) statistical errors in the imaginary-frequency data used for the analytic continuation and (ii) the presence of the roughness penalty \(R[A]\) in Eq. (4) in the regularized functional \(Q_{\lambda}[A]\) of Eq. (5).
If the Green's function \(G(i\omega_{n})\) is well represented with a spectral function so that \(Q_{\lambda=0}[A]\) is minimized to a tiny value comparable to the computer precision (as in the case of an exact Green's function without any statistical errors), it is difficult to find \(\lambda_{\text{opt}}\) by using the L-curve. In that case, it is suitable to find and use \(\lambda\) at which \(Q_{\lambda}[A_{\lambda}]=2Q_{\lambda=0}[A_{\lambda=0}]\).
## Appendix B Test with statistical errors proportional to \(\omega_{n}\)
The standard deviation of the statistical error in the imaginary-frequency data \(G(i\omega_{n})\), or \(\Sigma(i\omega_{n})\), may vary with the Matsubara frequency \(\omega_{n}\), in general. For instance, in the two-orbital Hubbard model considered in Sec. III.2, we used a quantum Monte Carlo method to calculate the self-energy \(\Sigma(i\omega_{n})\), and the obtained \(\Sigma(i\omega_{n})\) has larger statistical errors at larger \(\omega_{n}\).
As an explicit test of the case where the statistical error varies with \(\omega_{n}\), we consider again the exact spectral function in Fig. 4 and add Gaussian errors to the exact \(G(i\omega_{n})\) whose standard deviation is proportional to \(\omega_{n}\). This mimics the self-energy calculated by using the Dyson equation. Let \(\delta_{n}\) be the standard deviation of the statistical error in \(G(i\omega_{n})\) and \(\bar{\delta}\) be the averaged value \(\bar{\delta}=\frac{1}{N_{\omega_{n}}}\sum_{\omega_{n}}\delta_{n}\). Figure 11 shows the results of our analytic continuation with different values of \(\bar{\delta}\), confirming that our method works well even when the statistical error is proportional to \(\omega_{n}\). We also note \(\bar{\delta}\) plays the role of \(\delta\) in Fig. 4.
## Appendix C Comparison with the maximum entropy method
We compare the results of our analytic continuation method with those of the maximum entropy method. For this comparison, we implemented the maximum entropy method [14; 5; 15], which obtains the spectral function \(A(x)\) by minimizing \(\chi^{2}/2-\alpha S[A]\). Here \(\chi^{2}=\sum_{i\omega_{n}}|G(i\omega_{n})-\int\frac{A(x)}{i\omega_{n}-x}dx|^ {2}\), and the relative entropy \(S[A]\) is
\[S[A]=-\int A(x)\ln\left(\frac{A(x)}{D(x)}\right)dx, \tag{12}\]
Figure 11: The statistical-error dependence of the spectral function \(A(x)\), shown by green lines, obtained with our analytic continuation method. Here \(G(i\omega_{n})\) have statistical errors whose standard deviation is proportional to \(\omega_{n}\). The averaged value \(\bar{\delta}\) of the standard deviation is (a) \(10^{-2}\), (b) \(10^{-3}\), (c) \(10^{-4}\), and (d) \(10^{-5}\). Black lines show the exact spectral function, and gray lines show the deviation of \(A(x)\) by \(\text{LAD}(x)\). In (a) and (b), Gaussian fits for \(A(x)\) are shown by dotted lines. Temperature is \(0.01\). We used the first \(100\) Matsubara frequencies and a kernel grid with \(x_{\text{max}}=10\) and \(n_{x}=101\), which are large enough for converged results.
Figure 12: Comparison of our method with the maximum entropy method. Green lines are spectral functions from our analytic continuation method, and red lines are those from the maximum entropy method. The standard deviation \(\delta\) of the statistical error in imaginary-frequency data is (a) \(10^{-2}\), (b) \(10^{-3}\), (c) \(10^{-4}\), and (d) \(10^{-5}\). Black lines are exact spectral functions. Temperature is \(0.01\), and we used the first \(100\) Matsubara frequencies.
where \(D(x)\) is the default model. The optimal value for \(\alpha\) is obtained by finding the value of \(\alpha\) that maximizes the curvature in the plot of \(\log_{10}\chi^{2}\) versus \(0.2\log_{10}\alpha\)[18]. As a test example, we considered an exact spectral function \(A_{\text{exact}}(x)\) which consists of two Gaussian peaks: one at \(x=-1\) with a standard deviation of \(0.6\) and the other at \(x=1\) with a standard deviation of \(0.5\). In both our method and the maximum entropy method, we used the kernel grid with \(x_{\text{max}}=10\) and \(n_{x}=101\). We generated \(256\) bootstrap samples with constant Gaussian error with a standard deviation of \(\delta\) which varied from \(10^{-2}\) to \(10^{-5}\), and we averaged analytically continued spectral functions over bootstrap samples. For the maximum entropy method, we used the Gaussian default model that consists of a single broad Gaussian peak at \(x=0\) with a standard deviation of \(3\), and we did not apply any pre-blur process [20; 17; 21]. As shown in Fig. 12, spectral functions calculated with our method and the maximum entropy method converge to the exact one if statistical errors are small enough [Fig. 12(d)]. If statistical errors are not small enough, the maximum entropy method produces some cusps near the Fermi level [Fig. 12(a)-(c)], as reported in the literature [20; 17; 5; 21]. While these cusps from the maximum entropy method become more pronounced with larger statistical errors in the imaginary-frequency data, our method does not produce such behaviors even for large statistical-error cases.
|
2301.13633 | QCD equation of state at finite isospin density from the linear sigma
model with quarks: The cold case | We use the two-flavor linear sigma model with quarks to study the phase
structure of isospin asymmetric matter at zero temperature. The meson degrees
of freedom provide the mean field chiral- and isospin-condensates on top of
which we compute the effective potential accounting for constituent quark
fluctuations at one-loop order. Using the renormalizability of the model, we
absorb the ultraviolet divergences into suitable counter-terms that are added
respecting the original structure of the theory. These counter-terms are
determined from the stability conditions which require the effective potential
to have minima in the condensates directions at the classical values, as well
as the transition from the non-condensed to the condensed phase to be smooth as
a function of the isospin chemical potential. We use the model to study the
evolution of the condensates as well as the pressure, energy and isospin
densities and the sound velocity as functions of the isospin chemical
potential. The approach does a good average description up to isospin chemical
potentials values not too large as compared to the vacuum pion mass. | Alejandro Ayala, Aritra Bandyopadhyay, Ricardo L. S. Farias, Luis A. Hernández, José Luis Hernández | 2023-01-31T13:48:31Z | http://arxiv.org/abs/2301.13633v2 | QCD equation of state at finite isospin density from the linear sigma model with quarks: The cold case
###### Abstract
We use the two-flavor linear sigma model with quarks to study the phase structure of isospin asymmetric matter at zero temperature. The meson degrees of freedom provide the mean field chiral- and isospin-condensates on top of which we compute the effective potential accounting for quark fluctuations at one-loop order. Using the renormalizability of the model, we absorb the ultraviolet divergences into suitable counter-terms that are added respecting the original structure of the theory. These counter-terms are determined from the stability conditions which require the effective potential to have minima in the condensates directions at the classical values, as well as the transition from the non-condensed to the condensed phase to be smooth as a function of the isospin chemical potential. We use the model to study the evolution of the condensates as well as the pressure, energy and isospin densities and the sound velocity as functions of the isospin chemical potential. The approach does a good average description up to isospin chemical potentials values not too large as compared to the vacuum pion mass.
Quantum Chromodynamics, Linear Sigma Model with Quarks, Isospin Asymmetry
## I Introduction
Multiple implications of the remarkably rich phase structure of Quantum Chromodynamics (QCD) have been extensively explored over the last years. QCD at finite density is usually characterized by the baryon \(\mu_{B}\) and the isospin \(\mu_{I}\) chemical potentials. Nature provides us with physical systems at finite baryon densities with non zero \(\mu_{I}\) in the form of isospin asymmetric matter, for example, compact astrophysical objects such as neutron stars. Because of this, along with the imminent arrival of new generation relativistic heavy-ion collision experiments at the FAIR [1] and NICA [2] facilities, the study of the phase structure in the temperature \(T\) and the chemical potentials \(\mu_{B}\) and \(\mu_{I}\) has become an ideal subject of scrutiny within the heavy-ion and astroparticle physics communities [3; 4].
A typical \(T-\mu_{B}-\mu_{I}\) phase diagram is anticipated to be full of rich phase structures [5]. However, from the theoretical perspective, systems with finite \(\mu_{B}\) are not straightforwardly accessible to the first-principle methods of Lattice QCD (LQCD), due to the well-known fermion determinant sign problem [6; 7]. Hence, studies on the \(\mu_{B}-\mu_{I}\) plane have been performed mainly using low energy effective models. These models have revealed the existence of an exciting phase structure that includes Gapless Pion Condensates (GPC), a Bose-Einstein Condensed (BEC) phase with gaped single particle excitations, a BEC-BCS crossover, etc [8; 9].
On the other hand, LQCD calculations for vanishing and even small \(\mu_{B}\) do not suffer from the sign problem. These calculations have predicted the existence of a superfluid pion condensate phase for high enough \(\mu_{I}\)[10; 11; 12; 13; 14; 15]. At zero temperature, they show that a second order phase transition at a critical isospin chemical potential (corresponding to the vacuum pion mass), separates the hadron from the pion condensate phase [14]. In addition to LQCD, these phases are also found using chiral perturbation theory (\(\chi\)PT) [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28], Hard Thermal Loop perturbation theory (HTLPt) [29], the Nambu-Jona-Lasinio (NJL) model [9; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51] and its Polyakov loop (PNJL) extended version [46; 47], the quark meson model (QMM) [48; 49; 50; 51] and other low energy effective models exploiting functional RG studies [52]. Calculations using a LQCD equation of state for finite \(\mu_{I}\) have investigated the viability of the existence of pion stars, with a pion condensate as the dominant core constituent [24; 53]. Since LQCD calculations with \(\mu_{I}\neq 0,\;\mu_{B}=\mu_{s}=T=0\) can be carried out without being hindered by the sign problem, they can be used as a benchmark to test effective model predictions. For example, recently, the NJL model has been used in this domain and it has been found that results agree exceptionally well with LQCD results [54; 55].
In this work we study another effective QCD model, the Linear Sigma Model with quarks (LSMq), extended
to consider a finite \(\mu_{I}\) to describe the properties of strongly interacting systems with an isospin imbalance. The LSMq is a renormalizable theory that explicitly implements the QCD chiral symmetry. It has been successfully employed to study the chiral phase transition at finite \(T\) and \(\mu_{B}\)[56; 57; 58; 59], as well as in the presence of a magnetic field [60; 61; 62; 63; 64; 65; 66; 67]. The Linear Sigma Model has been used at finite \(\mu_{I}\), albeit considering the meson degrees of freedom as an effective classical background, in the Hartree or Hartree Fock approximations within the Cornwall-Jackiw-Tomboulis (CJT) formalism [68]. In contrast, in the LSMq mesons are treated as dynamical fields able to contribute to quantum fluctuations. Part of the reason for other models to avoid considering mesons as dynamical fields, for example the QMM, is that when mesons become true quantum fields and chiral symmetry is only spontaneously broken, their masses are subject to change as a result of medium effects. During this change, the meson square masses can become zero or even negative. At zero temperature, this drawback is avoided by considering an explicit symmetry breaking term that provides pions with a vacuum finite mass. At finite temperature, the plasma screening effects need to also be included.
In this work we use the LSMq to describe the evolution of the chiral and isospin (pion) condensates, as well as thermodynamical quantities such as pressure, isospin and energy densities and the sound velocity at zero temperature and finite \(\mu_{I}\). We restrict ourselves to considering only the effects of fermion quantum fluctuations, reserving for a future work the inclusion of meson quantum fluctuation effects. We make use of the renormalizability of the LSMq and describe in detail the renormalization procedure which is achieved by implementing the stability conditions. The results thus obtained are valid for the case where \(\mu_{I}^{2}\) is small compared to the sum of the squares of the chiral and isospin condensates multiplied by the square of the boson-fermion coupling constant \(g\).
The work is organized as follows: In Sec. II we write the LSMq Lagrangian using degrees of freedom appropriate to describe an isospin imbalanced system. We work with an explicit breaking of the chiral symmetry introducing a vacuum pion mass and expanding the charged pion fields around the values of their condensates. The effective potential is constructed by adding to the tree-level potential the one-loop contribution from the fermion degrees of freedom. Renormalization is carried out by introducing counter-terms to enforce that the tree-level structure of the effective potential is preserved by loop corrections. We first work out explicitly the treatment in the condensed phase to then work out the non-condensed phase. In Sec. III we study the condensates evolution with \(\mu_{I}\) as well as that of the pressure, isospin and energy density and the sound velocity, and compare to recent LQCD results. We finally summarize and conclude in Sec. IV. We reserve for a follow up work the computation of the meson quantum fluctuations as well as finite temperature effects. The appendix is devoted to the explicit computation of the one-loop fermion contribution to the effective potential.
## II LSMq at finite isospin chemical potential
The LSMq is an effective theory that captures the approximate chiral symmetry of QCD. It describes the interactions among small-mass mesons and quarks. We start with a Lagrangian invariant under \(SU(2)_{L}\times SU(2)_{R}\) chiral transformations
\[\mathcal{L} =\frac{1}{2}(\partial_{\mu}\sigma)^{2}+\frac{1}{2}(\partial_{\mu }\vec{\pi})^{2}+\frac{a^{2}}{2}(\sigma^{2}+\vec{\pi}^{2})-\frac{\lambda}{4}( \sigma^{2}+\vec{\pi}^{2})^{2}\] \[+\,i\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi-ig\bar{\psi}\gamma^ {5}\vec{\tau}\cdot\vec{\pi}\psi-g\bar{\psi}\psi\sigma, \tag{1}\]
where \(\vec{\tau}=(\tau_{1},\tau_{2},\tau_{3})\) are the Pauli matrices,
\[\psi_{L,R}=\begin{pmatrix}u\\ d\end{pmatrix}_{L,R}, \tag{2}\]
is a \(SU(2)_{L,R}\) doublet, \(\sigma\) is a real scalar field and \(\vec{\pi}=(\pi_{1},\pi_{2},\pi_{3})\) is a triplet of real scalar fields. \(\pi_{3}\) corresponds to the neutral pion whereas the charged ones are represented by the combinations
\[\pi_{-}=\frac{1}{\sqrt{2}}(\pi_{1}+i\pi_{2}),\quad\pi_{+}=\frac{1}{\sqrt{2}}( \pi_{1}-i\pi_{2}). \tag{3}\]
The parameters \(a^{2}\), \(\lambda\) and \(g\) are real and positive definite. Equation (1) can be written in terms of the charged and neutral-pion degrees of freedom as
\[\mathcal{L} =\frac{1}{2}[(\partial_{\mu}\sigma)^{2}+(\partial_{\mu}\pi_{0})^ {2}]+\partial_{\mu}\pi_{-}\partial^{\mu}\pi_{+}+\frac{a^{2}}{2}(\sigma^{2}+ \pi_{0}^{2})\] \[+\,a^{2}\pi_{-}\pi_{+}-\frac{\lambda}{4}(\sigma^{4}+4\sigma^{2} \pi_{-}\pi_{+}+2\sigma^{2}\pi_{0}^{2}+4\pi_{-}^{2}\pi_{+}^{2}\] \[+\,4\pi_{-}\pi_{+}\pi_{0}^{2}+\pi_{0}^{4})+i\bar{\psi}\bar{ \partial}\psi-g\bar{\psi}\psi\sigma-ig\bar{\psi}\gamma^{5}(\tau_{+}\pi_{+}\] \[+\,\tau_{-}\pi_{-}+\tau_{3}\pi_{0})\psi, \tag{4}\]
where we introduced the combination of Pauli matrices
\[\tau_{+}=\frac{1}{\sqrt{2}}(\tau_{1}+i\tau_{2}),\quad\tau_{-}=\frac{1}{\sqrt{2 }}(\tau_{1}-i\tau_{2}). \tag{5}\]
The Lagrangian in Eq. (4) possesses the following symmetries: A \(SU(N_{c})\) global color symmetry, a \(SU(2)_{L}\times SU(2)_{R}\) chiral symmetry and a \(U(1)_{B}\) symmetry. The sub-index of the latter emphasizes that the conserved charge is the baryon number \(B\). A conserved isospin charge can be added to the LSMq Hamiltonian, multiplied by the isospin chemical potential \(\mu_{I}\). The result is that the Lagrangian gets modified such that the ordinary derivative becomes a covariant derivative [69]
\[\partial_{\mu}\to D_{\mu}=\partial_{\mu}+i\mu_{I}\delta^{0}_{\mu},\quad \partial^{\mu}\to D^{\mu}=\partial^{\mu}-i\mu_{I}\delta^{\mu}_{0}, \tag{6}\]
As a result, Eq. (4) is modified to read as
\[\mathcal{L} = \frac{1}{2}[(\partial_{\mu}\sigma)^{2}+(\partial_{\mu}\pi_{0})^{2}] +D_{\mu}\pi_{-}D^{\mu}\pi_{+}+\frac{a^{2}}{2}(\sigma^{2}+\pi_{0}^{2}) \tag{7}\] \[+ a^{2}\pi_{-}\pi_{+}-\frac{\lambda}{4}\left(\sigma^{4}+4\sigma^{2} \pi_{-}\pi_{+}+2\sigma^{2}\pi_{0}^{2}+4\pi_{-}^{2}\pi_{+}^{2}\right.\] \[+ \left.4\pi_{-}\pi_{+}\pi_{0}^{2}+\pi_{0}^{4}\right)+i\bar{\psi} \not{\partial}\psi-g\bar{\psi}\psi\sigma+\bar{\psi}\mu_{I}\tau_{3}\gamma_{0}\psi\] \[- ig\bar{\psi}\gamma^{5}(\tau_{+}\pi_{+}+\tau_{-}\pi_{-}+\tau_{3} \pi_{0})\psi.\]
Because of the spontaneous breaking of the chiral symmetry in the Lagrangian given in Eq. (7), the \(\sigma\) field acquires a non-vanishing vacuum expectation value
\[\sigma\to\sigma+v.\]
To make better contact with the meson vacuum properties and to include a finite vacuum pion mass, \(m_{0}\), we can add an explicit symmetry breaking term in the Lagrangian such that
\[\mathcal{L}\to\mathcal{L}^{\prime}=\mathcal{L}+h(\sigma+v). \tag{8}\]
The constant \(h\) is fixed by requiring that the model expression for the neutral vacuum pion mass squared in the non-condensed phase, Eq. (11b), corresponds to \(m_{0}^{2}\). This yields
\[h = m_{0}^{2}\sqrt{\frac{a^{2}+m_{0}^{2}}{\lambda}}, \tag{9}\] \[\equiv m_{0}^{2}f_{\pi},\]
where \(f_{\pi}\) is the pion decay constant and we have used its explicit model expression. Equation (9) provides a relation for the model parameters \(a\) and \(\lambda\) in terms of \(f_{\pi}\).
Before diving into the details of the formalism, we first pause to discuss the symmetry properties of the theory. Notice that the introduction of \(\mu_{I}\) and \(h\) modifies the structure of the effective Lagrangian given in Eq. (8). In the presence of a finite \(\mu_{I}\), the \(U(1)_{B}\times SU(2)_{L}\times SU(2)_{R}\) symmetry is reduced to \(U(1)_{B}\times U(1)_{I_{3}L}\times U(1)_{I_{3}R}\) for \(h=0\), and to \(U(1)_{B}\times U(1)_{I_{3}}\) for \(h\neq 0\), thereby representing the explicit breaking of the chiral symmetry [70]. The notation also emphasizes that the third component of the isospin charge, \(I_{3}\), corresponds to the generator of the remaining symmetry \(U(1)_{I_{3}}\). Since in the present work we are interested in the dynamics of the pion fields, further simplifications in the pseudoscalar channels can be obtained using the ansatz \(\langle\bar{\psi}i\gamma_{5}\tau_{3}\psi\rangle=0\) combined with \(\langle\bar{u}i\gamma_{5}d\rangle=\langle\bar{d}i\gamma_{5}u\rangle^{*}\neq 0\)[9]. This further breaks the residual \(U(1)_{I_{3}}\) symmetry and corresponds to a Bose-Einstein condensation of the charged pions. Then, the charged pion fields can be expanded around their condensates as
\[\pi_{+}\to\pi_{+}+\frac{\Delta}{\sqrt{2}}e^{i\theta},\quad\pi_{-}\to\pi_{-}+ \frac{\Delta}{\sqrt{2}}e^{-i\theta}, \tag{10}\]
where the phase factor \(\theta\) indicates the direction of the \(U(1)_{I_{3}}\) symmetry breaking. We take \(\theta=\pi\) for definiteness. The shift in the sigma field implies that the fermions and the neutral bosons acquire masses given by
\[m_{f} = gv \tag{11a}\] \[m_{\pi^{0}}^{2} = \lambda v^{2}-a^{2}+\lambda\Delta^{2}\] (11b) \[m_{\sigma}^{2} = 3\lambda v^{2}-a^{2}+\lambda\Delta^{2}. \tag{11c}\]
The charged pions also acquire masses. However, in the condensed phase (\(\Delta\neq 0\)) they need to be described in terms of the \(\pi_{1,2}\) fields [71]. Since for our purposes pions are not treated as quantum fluctuations, we simply note that, as a consequence of the breaking of the \(U(1)_{I_{3}}\) symmetry, one of these fields becomes a Goldstone boson. In the absence of the explicit symmetry breaking term in the Lagrangian of Eq. (8), this mode's mass would vanish. However, a finite \(h\) prevents this mode from being massless.
### Condensed phase
In the condensed phase the tree-level potential, extracted from Eqs. (7) and (8), can be written as
\[V_{\rm tree}=-\frac{a^{2}}{2}\left(v^{2}+\Delta^{2}\right)+\frac{\lambda}{4} \left(v^{2}+\Delta^{2}\right)^{2}-\frac{1}{2}\mu_{I}^{2}\Delta^{2}-hv. \tag{12}\]
The fermion contribution to the one-loop effective potential becomes
\[\sum_{f=u,d}V_{f}^{1}=-2N_{c}\int\frac{d^{3}k}{(2\pi)^{3}}\left[E_{\Delta}^{u} +E_{\Delta}^{d}\right], \tag{13}\]
with (see Appendix A)
\[E_{\Delta}^{u} = \left\{\left(\sqrt{k^{2}+m_{f}^{2}}+\mu_{I}\right)^{2}+g^{2} \Delta^{2}\right\}^{1/2}, \tag{14a}\] \[E_{\Delta}^{d} = \left\{\left(\sqrt{k^{2}+m_{f}^{2}}-\mu_{I}\right)^{2}+g^{2} \Delta^{2}\right\}^{1/2}, \tag{14b}\]
where we chose that
\[\mu_{d} = \mu_{I}\] \[\mu_{u} = -\mu_{I}. \tag{15}\]
Equation (13) is ultraviolet divergent. Ultraviolet divergences are a common feature of loop vacuum contributions. However, since Eq. (13) depends on \(\mu_{I}\), this divergence needs to be carefully treated given that matter contributions cannot contain ultraviolet divergences. To identify the divergent terms, we work in the approximation whereby the fermion energies, Eqs. (14), are expanded in powers of \(\mu_{I}^{2}/[g^{2}(v^{2}+\Delta^{2})]\). Considering terms up to \(\mathcal{O}(\mu_{I}^{4})\), we obtain
\[\sum_{f=u,d}E_{\Delta}^{f} \simeq 2\sqrt{k^{2}+m_{f}^{2}+g^{2}\Delta^{2}}+\frac{\mu_{I}^{2}g^{2}\Delta^{2}}{(k^{2}+m_{f}^{2}+g^{2}\Delta^{2})^{3/2}}\] \[+\frac{\mu_{I}^{4}\left[4(k^{2}+m_{f}^{2})g^{2}\Delta^{2}-g^{4}\Delta^{4}\right]}{4\left(k^{2}+m_{f}^{2}+g^{2}\Delta^{2}\right)^{7/2}}+\mathcal{O}(\mu_{I}^{6}). \tag{16}\]
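This expansion can be cross-checked symbolically; a minimal sketch, assuming the SymPy library, expands the sum of the two fermion energies in powers of \(\mu_{I}\) and compares with the terms quoted above:

```python
# Sketch: symbolic check of the small-mu_I expansion of E^u_Delta + E^d_Delta, Eq. (16).
import sympy as sp

k, mf, g, D, mu = sp.symbols('k m_f g Delta mu_I', positive=True)
E = sp.sqrt(k**2 + mf**2)                  # free quark energy
A = k**2 + mf**2 + g**2*D**2               # combination controlling the expansion

full = sp.sqrt((E + mu)**2 + g**2*D**2) + sp.sqrt((E - mu)**2 + g**2*D**2)
claimed = (2*sp.sqrt(A)
           + mu**2*g**2*D**2/A**sp.Rational(3, 2)
           + mu**4*(4*(k**2 + mf**2)*g**2*D**2 - g**4*D**4)/(4*A**sp.Rational(7, 2)))

diff = sp.series(full, mu, 0, 5).removeO() - claimed
print(sp.simplify(diff))   # 0, i.e. the two expressions agree through O(mu_I^4)
```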
Notice that the ultraviolet divergent part corresponds only to the first and second terms on the right-hand side of Eq. (16). In this approximation, and up to terms of order \(\mu_{I}^{2}\), the expression for the leading fermion contribution to the one-loop effective potential is given by
\[\sum_{f=u,d}V_{f}^{1} = -2N_{c}\int\frac{d^{3}k}{(2\pi)^{3}}\Big{(}2\sqrt{k^{2}+m_{f}^{2}+ g^{2}\Delta^{2}} \tag{17}\] \[+ \frac{\mu_{I}^{2}g^{2}\Delta^{2}}{(k^{2}+m_{f}^{2}+g^{2}\Delta^{2 })^{3/2}}\Big{)}\]
This expression can be readily computed using dimensional regularization in the \(\overline{\rm MS}\) scheme, with the result (see Appendix A)
\[\sum_{f=u,d}V_{f}^{1} = 2N_{c}\frac{g^{4}\left(v^{2}+\Delta^{2}\right)^{2}}{(4\pi)^{2}} \left[\frac{1}{\epsilon}+\frac{3}{2}+\ln\left(\frac{\Lambda^{2}/g^{2}}{v^{2}+ \Delta^{2}}\right)\right] \tag{18}\] \[- 2N_{c}\frac{g^{2}\mu_{I}^{2}\Delta^{2}}{(4\pi)^{2}}\left[\frac{ 1}{\epsilon}+\ln\left(\frac{\Lambda^{2}/g^{2}}{v^{2}+\Delta^{2}}\right) \right],\]
where \(N_{c}=3\) is the number of colors, \(\Lambda\) is the dimensional regularization ultraviolet scale and the limit \(\epsilon\to 0\) is to be understood. The explicit computation of Eq. (18) is described also in Appendix A. Notice that Eq. (18) contains an ultraviolet divergence proportional to \(\mu_{I}^{2}\Delta^{2}\). Since a term with this same structure is already present in the tree-level potential, Eq. (12), it is not surprising that this ultraviolet divergence can be handled by the renormalization procedure with the introduction of a counter-term with the same structure, as we proceed to show.
To carry out the renormalization of the effective potential up to one-loop order, we introduce counter-terms that respect the structure of the tree-level potential and determine them by accounting for the stability conditions. The latter are a set of conditions satisfied by the tree-level potential and that must be preserved when considering loop corrections. These conditions require that the position of the minimum in the \(v\)- and \(\Delta\)-directions remain the same as the tree-level potential ones.
The tree-level minimum in the \(v\), \(\Delta\) plane is found from
\[\frac{\partial V_{\rm tree}}{\partial v} = \left[\lambda v^{3}-(a^{2}-\lambda\Delta^{2})v-h\right]\biggr{|}_{v_{0},\,\Delta_{0}}=0 \tag{19a}\] \[\frac{\partial V_{\rm tree}}{\partial\Delta} = \Delta\left[\lambda\Delta^{2}-(\mu_{I}^{2}-\lambda v^{2}+a^{2})\right]\biggr{|}_{v_{0},\,\Delta_{0}}=0. \tag{19b}\]
Notice that the second of Eqs. (19) admits a real, non-vanishing solution, only when
\[\mu_{I}^{2}>\lambda v^{2}-a^{2}=m_{0}^{2}, \tag{20}\]
which means that a non-zero isospin condensate is developed only when, for positive values of the isospin chemical potential, the latter is larger than the vacuum pion mass. This is what we identify as the condensed phase. The simultaneous solutions of Eqs. (19) are
\[v_{0} = \frac{h}{\mu_{I}^{2}}, \tag{21a}\] \[\Delta_{0} = \sqrt{\frac{\mu_{I}^{2}}{\lambda}-\frac{h^{2}}{\mu_{I}^{4}}+\frac {a^{2}}{\lambda}}. \tag{21b}\]
Hereafter, we refer to the expressions in Eq. (21) as the classical solution.
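As a concrete illustration, the classical solution can be evaluated and checked against the tree-level conditions numerically. The following minimal sketch assumes NumPy; the parameter values are illustrative placeholders only (MeV units), of the order of those fixed later in Sec. III.

```python
# Sketch: evaluate the classical solution (21) and verify that it solves the
# tree-level conditions (19). Parameter values are illustrative (MeV units).
import numpy as np

lam, a = 8.2, 229.0
h = 135.0**2 * 93.0            # h = m_0^2 f_pi, Eq. (9)
mu_I = 200.0                   # isospin chemical potential above m_0

v0 = h / mu_I**2                                          # Eq. (21a)
Delta0 = np.sqrt(mu_I**2/lam - h**2/mu_I**4 + a**2/lam)   # Eq. (21b)

dV_dv = lam*v0**3 - (a**2 - lam*Delta0**2)*v0 - h                   # Eq. (19a)
dV_dDelta = Delta0*(lam*Delta0**2 - (mu_I**2 - lam*v0**2 + a**2))   # Eq. (19b)

print(v0, Delta0)
print(dV_dv, dV_dDelta)        # both vanish up to rounding
```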
The effective potential, up to one-loop order in the fermion fluctuations, including the counter-terms, can be written as
\[V_{\rm eff} = V_{\rm tree}+\sum_{f=u,d}V_{f}^{1}-\frac{\delta\lambda}{4}(v^{2 }+\Delta^{2})^{2} \tag{22}\] \[+ \frac{\delta a}{2}(v^{2}+\Delta^{2})+\frac{\delta}{2}\Delta^{2} \mu_{I}^{2}.\]
The counter-terms \(\delta\lambda\) and \(\delta\) are determined from the _gap equations_
\[\left.\frac{\partial V_{\rm eff}}{\partial v}\right|_{v_{0},\, \Delta_{0}}=0, \tag{23a}\] \[\left.\frac{\partial V_{\rm eff}}{\partial\Delta}\right|_{v_{0},\, \Delta_{0}}=0. \tag{23b}\]
These conditions suffice to absorb the infinities of Eq. (18). The counter-term \(\delta a\) is determined by requiring that the slope of \(V_{\rm eff}\) vanishes at \(\mu_{I}=m_{0}\),
\[\left.\frac{\partial V_{\rm eff}}{\partial\mu_{I}}\right|_{\mu_{I}=m_{0}}=0, \tag{24}\]
or in other words, that the transition from the non-condensed to the condensed phase be smooth. The effective potential thus obtained is ultraviolet finite as well as \(\Lambda\)-independent.
### Non-condensed phase
In the non-condensed phase, \(0\leq\mu_{I}\leq m_{0}\), the only allowed solution for the second of Eqs. (19) is \(\Delta=0\). For this case, the first of Eqs. (19) becomes a cubic equation in \(v\). The only real solution is
\[\tilde{v}_{0} = \frac{(\sqrt{3}\sqrt{27h^{2}\lambda^{4}-4a^{6}\lambda^{3}}+9h \lambda^{2})^{1/3}}{(18)^{2/3}\lambda} \tag{25}\] \[+ \frac{(2/3)^{1/3}a^{2}}{(\sqrt{3}\sqrt{27h^{2}\lambda^{4}-4a^{6} \lambda^{3}}+9h\lambda^{2})^{1/3}}.\]
In the limit when \(h\) is taken as small one gets
\[\tilde{v}_{0}\simeq\frac{a}{\sqrt{\lambda}}+\frac{h}{2a^{2}}, \tag{26}\]
an approximation that is sometimes considered. However, hereafter we work instead with the full expression given by Eq. (25).
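The quality of the small-\(h\) approximation can be gauged numerically; a minimal sketch, assuming NumPy and illustrative parameter values:

```python
# Sketch: compare the exact real root of lambda*v^3 - a^2*v - h = 0 (Eq. (25))
# with the small-h approximation v ~ a/sqrt(lambda) + h/(2 a^2) (Eq. (26)).
import numpy as np

lam, a = 8.2, 229.0            # illustrative values
h = 135.0**2 * 93.0            # h = m_0^2 f_pi

roots = np.roots([lam, 0.0, -a**2, -h])
v_exact = roots[np.abs(roots.imag) < 1e-9].real.max()   # physical (real, positive) root
v_approx = a/np.sqrt(lam) + h/(2*a**2)                  # Eq. (26)

print(v_exact, v_approx, v_exact - v_approx)
```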
The effective potential \(V_{\rm eff}^{\rm noncond}\) up to one-loop order can be obtained from the corresponding one in the condensed phase, by setting \(\Delta=0\). Therefore, we can write
\[V_{\rm eff}^{\rm noncond} = \frac{\lambda}{4}v^{4}-\frac{a^{2}}{2}v^{2}-hv-\frac{\tilde{\delta }_{1}}{4}v^{4}+\frac{\tilde{\delta}_{2}}{2}v^{2} \tag{27}\] \[+ 2N_{c}\frac{g^{4}v^{4}}{(4\pi)^{2}}\left[\frac{1}{\epsilon}+ \frac{3}{2}+\ln\left(\frac{\Lambda^{2}}{g^{2}v^{2}}\right)\right].\]
In this case, only two conditions are needed to stabilize the vacuum. We take these as the requirement that the position and curvature of \(V_{\rm eff}^{\rm noncond}\) remain at their classical values when evaluated at \(\tilde{v}_{0}\), namely,
\[\frac{\partial V_{\rm eff}^{\rm noncond}}{\partial v}\Bigg{|}_{ \tilde{v}_{0}} = 0 \tag{28a}\] \[\frac{\partial^{2}V_{\rm eff}^{\rm noncond}}{\partial v^{2}} \Bigg{|}_{\tilde{v}_{0}} = 3\lambda\tilde{v}_{0}^{2}-a^{2}, \tag{28b}\]
from where the counter-terms \(\tilde{\delta}_{1}\), \(\tilde{\delta}_{2}\) can be determined. Therefore, in the non-condensed phase, in addition to \(\Delta=0\), the \(v\)-condensate is simply given by the constant \(\tilde{v}_{0}\) given in Eq. (25). As for the case of the condensed phase, in the non-condensed phase the effective potential is ultraviolet finite as well as \(\Lambda\)-independent.
## III Thermodynamics of the condensed phase
Armed with the expressions for the effective potential, we can now proceed to study the dependence of the condensates as well as of the thermodynamical quantities as functions of \(\mu_{I}\). Since the \(\mu_{I}\)-dependence in the non-condensed phase is trivial, we concentrate on the description of the behavior of these quantities in the condensed phase.
The model requires fixing three independent parameters: the boson self-coupling \(\lambda\), the boson-fermion coupling \(g\) and the mass parameter \(a\). For a vacuum pion mass \(m_{0}=135\) MeV, these parameters are fixed by requiring that the pion vacuum decay constant is \(f_{\pi}=93\) MeV, the light quark mass is \(m_{q}=235\) MeV and the sigma mass is \(m_{\sigma}=400\) MeV. The available parameter space is limited since, for certain combinations, the gap equation conditions in the \(v\)-\(\Delta\) plane correspond to saddle points rather than global minima.
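For reference, this parameter fixing amounts to simple algebra if the vacuum is characterized by \(v=f_{\pi}\) and \(\Delta=0\), so that Eqs. (9) and (11) give \(m_{0}^{2}=\lambda f_{\pi}^{2}-a^{2}\), \(m_{\sigma}^{2}=3\lambda f_{\pi}^{2}-a^{2}\) and \(m_{q}=gf_{\pi}\). A minimal sketch assuming these tree-level relations:

```python
# Sketch: fix (lambda, a, g, h) from f_pi, m_0, m_q, m_sigma, assuming the
# tree-level relations m_0^2 = lambda*f_pi^2 - a^2, m_sigma^2 = 3*lambda*f_pi^2 - a^2
# and m_q = g*f_pi (all masses in MeV).
f_pi, m0, mq, msigma = 93.0, 135.0, 235.0, 400.0

lam = (msigma**2 - m0**2) / (2.0 * f_pi**2)
a2 = (msigma**2 - 3.0 * m0**2) / 2.0
g = mq / f_pi
h = m0**2 * f_pi                      # Eq. (9)

print(f"lambda = {lam:.3f}, a = {a2**0.5:.1f} MeV, g = {g:.2f}, h = {h:.3e} MeV^3")
# consistency check against Eq. (9): h = m_0^2 * sqrt((a^2 + m_0^2)/lambda)
print(m0**2 * ((a2 + m0**2) / lam) ** 0.5)
```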
Figure 1 shows the \(v\)- and \(\Delta\)-condensates as functions of the scaled variable \(\mu_{I}/m_{0}\). The behavior is qualitatively as expected: for \(\mu_{I}\geq m_{0}\), the \(v\)-condensate decreases while the \(\Delta\)-condensate increases.
Figure 2 shows the normalized pressure, defined as the negative of the effective potential referred to its value at \(\mu_{I}=m_{0}\), as a function of the scaled variable \(\mu_{I}/m_{0}\) and divided by \(m_{0}^{4}\). Shown are the results obtained by using the tree-level and the fermion one-loop corrected effective potentials, compared to the results from Ref. [54] and the LQCD results from Ref. [72]. Notice that the one-loop improved calculation provides a better description than the tree-level one and that deviations from the LQCD result appear for \(\mu_{I}\gtrsim 1.5\ m_{0}\).
Figure 3 shows the normalized isospin density, \(n_{I}=dP/d\mu_{I}\), divided by \(m_{0}^{3}\) as a function of the scaled variable \(\mu_{I}/m_{0}\) compared to results obtained using the tree-level potential as well as to the results from Ref. [54] together with the LQCD results from Ref. [72]. Notice that the one-loop improved calculation is close to the NJL one up to \(\mu_{I}\sim 1.5\ m_{0}\) but the latter does a better job describing the LQCD results for \(\mu_{I}\gtrsim 1.5\ m_{0}\). However, it is fair to say that neither the current calculation nor the NJL result reproduces the change of curvature that seems to be present in the LQCD result.

Figure 1: \(v\)- and \(\Delta\)-condensates as functions of the scaled variable \(\mu_{I}/m_{0}\). For \(\mu_{I}\geq m_{0}\), the \(v\)-condensate decreases while the \(\Delta\)-condensate increases.

Figure 2: Normalized pressure as a function of the scaled variable \(\mu_{I}/m_{0}\). Shown are the tree-level and one-loop fermion improved pressures compared to the results from Ref. [54] together with the LQCD results from Ref. [72].
Figure 4 shows the normalized energy density, \(\epsilon/m_{0}^{4}\), as a function of the scaled variable \(\mu_{I}/m_{0}\), compared to the results from Ref. [54] together with the LQCD results from Ref. [72]. Although the change in curvature shown by the LQCD results is not described by the present calculation, it is fair to say that neither the NJL calculation captures such trend. The one-loop improved calculation does a better average description of the LQCD result although deviations appear for \(\mu_{I}\gtrsim 1.5~{}m_{0}\).
Figure 5 shows the equation of state, pressure vs. energy density, compared to the results from Ref. [54] together with the LQCD results from Ref. [72]. Notice that for the latter, the vacuum pion mass is taken as \(m_{0}=135\) MeV. As can be seen, the initial increasing trend of the LQCD results is properly described by the low-energy models considered. Given that the accuracy of our results is limited to the low \(\mu_{I}\) domain, the NJL calculation provides a better description of the LQCD results.
Figure 6 shows the square of the speed of sound, \(c_{s}^{2}\), as a function of the scaled variable \(\mu_{I}/m_{0}\). Shown are the one-loop results compared to the results from Ref. [54] together with the LQCD results from Ref. [72]. The apparent peak in the LQCD results is not reproduced by any model. However, notice that for the range of shown \(\mu_{I}\) values, the one-loop improved result is above, although closer to the conformal bound, shown as a horizontal line at \(c_{s}^{2}=1/3\).
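The thermodynamical quantities shown in Figs. 2-6 are obtained from the pressure by numerical differentiation. A minimal sketch of this post-processing is given below, assuming the standard zero-temperature relations \(n_{I}=dP/d\mu_{I}\), \(\epsilon=-P+\mu_{I}n_{I}\) and \(c_{s}^{2}=dP/d\epsilon\); for the sake of a runnable example, the tree-level pressure evaluated on the classical solution is used as input, whereas in practice the one-loop \(V_{\rm eff}\) would be used.

```python
# Sketch: thermodynamics from P(mu_I) at T = 0 by finite differences,
# using the tree-level pressure on the classical solution as an example input.
import numpy as np

f_pi, m0, msigma = 93.0, 135.0, 400.0           # MeV, as quoted above
lam = (msigma**2 - m0**2) / (2*f_pi**2)
a2 = (msigma**2 - 3*m0**2) / 2
h = m0**2 * f_pi

def V_tree(mu):
    v = h / mu**2                               # Eq. (21a)
    D2 = mu**2/lam - h**2/mu**4 + a2/lam        # Delta_0^2, Eq. (21b)
    return -0.5*a2*(v**2 + D2) + 0.25*lam*(v**2 + D2)**2 - 0.5*mu**2*D2 - h*v

mu = np.linspace(1.001*m0, 2.0*m0, 800)
P = V_tree(m0) - V_tree(mu)                     # normalized (tree-level) pressure

n_I = np.gradient(P, mu)                        # isospin density n_I = dP/dmu_I
eps = -P + mu*n_I                               # energy density at T = 0
cs2 = np.gradient(P, mu) / np.gradient(eps, mu) # squared sound speed c_s^2 = dP/deps

print(P[-1]/m0**4, n_I[-1]/m0**3, eps[-1]/m0**4, cs2[-1])
```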
## IV Summary and conclusions
In this work we have used the LSMq, with two quark flavors, to study the phase structure of isospin asymmetric matter at zero temperature. The meson degrees of freedom are taken as providing the mean field on top of which we include quantum quark fluctuations at one-loop order. We have used the renormalizability of the LSMq to absorb the ultraviolet divergences with the addition of counter-terms that respect the original structure of the theory. An interesting aspect of the method is that it allows the proper handling of the disturbing \(\mu_{I}\)-dependent ultraviolet divergence. The one-loop quark contributions are treated in the approximation whereby \(\mu_{I}^{2}\) is taken as small compared to \(g^{2}(v^{2}+\Delta^{2})\) and working up to \(\mathcal{O}(\mu_{I}^{2})\). After determining the model parameters, we have studied the evolution of the chiral and isospin condensates as well as the pressure, energy and isospin densities and the sound velocity. We have compared the model results with a recent NJL calculation of the same quantities and with LQCD data. The model provides a good description for \(\mu_{I}\lesssim 1.5~{}m_{0}\), except perhaps for the sound velocity, for which it does not reproduce the peak seemingly appearing in the LQCD calculations.
The results are encouraging and set the stage to explore whether the method can be used to incorporate the effect of meson fluctuations. The method also lends itself to include in the description higher powers of \(\mu_{I}^{2}\) as well as finite temperature effects. We are currently exploring these avenues and will report on the findings elsewhere in the near future.
###### Acknowledgements.
The authors are grateful to G. Endrodi and B. B. Brandt for kindly sharing their LQCD data in tabular form. Support for this work was received in part by UNAM-PAPIIT IG100322 and by Consejo Nacional de Ciencia y Tecnologia grant number A1-S-7655. L. A. H. acknowledges support from a PAPIIT-DGAPA-UNAM fellowship. This work was partially supported by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Grant No. 309598/2020-6 (R.L.S.F.); Fundacao de Amparo a Pesquisa do Estado do Rio Grande do Sul (FAPERGS), Grants Nos. 19/2551-0000690-0 and 19/2551-0001948-3 (R.L.S.F.). A.B. acknowledges the support from the Alexander von Humboldt Foundation postdoctoral research fellowship in Germany.

Figure 3: Normalized isospin density as a function of the scaled variable \(\mu_{I}/m_{0}\). Shown are the tree-level and one-loop fermion improved effective potentials compared to a recent \(SU(2)\) NJL calculation [54] and the LQCD results from Ref. [72].

Figure 4: Normalized energy density as a function of the scaled variable \(\mu_{I}/m_{0}\). Shown are the tree-level and one-loop fermion improved effective potentials compared to the results from Ref. [54] together with the LQCD results from Ref. [72].
## Appendix A One-loop quark contribution to the effective potential
The thermodynamic potential accounting for the quark contribution at one-loop order is given by
\[V_{f}^{1}=iV^{-1}\ln\bigl{(}\mathcal{Z}_{f}^{1}\bigr{)}, \tag{10}\]
where
\[\ln\left(\mathcal{Z}_{f}^{1}\right)=\ln\left(\det\bigl{\{}\bigl{(}S_{\rm mf}^ {-1}\bigr{)}\bigr{\}}\right), \tag{11}\]
and \(V\) is the space-time volume. Also here, \(S_{\rm mf}^{-1}\) is the inverse propagator of the two light-quark species. Therefore, we are bound to compute the determinant of a matrix \(M\) of the form
\[M=\begin{pmatrix}A&B\\ C&D\end{pmatrix}, \tag{12}\]
where \(A\), \(B\), \(C\), \(D\) can be thought of as \(p\times p\), \(p\times q\), \(q\times p\), and \(q\times q\) complex matrices, respectively. When \(A\) and \(D\) are invertible, the determinant of \(M\) is given by
\[\det\{(M)\}=\det\{(A)\}\det\bigl{\{}(D-CA^{-1}B)\bigr{\}}, \tag{13}\] \[\det\{(M)\}=\det\{(D)\}\det\bigl{\{}(A-BD^{-1}C)\bigr{\}}. \tag{14}\]
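These identities are straightforward to verify numerically for random invertible blocks; a small sketch assuming NumPy:

```python
# Sketch: numerical check of the block-determinant identities (13) and (14)
# for random complex blocks with A and D invertible.
import numpy as np

rng = np.random.default_rng(0)
p, q = 4, 3
A = rng.normal(size=(p, p)) + 1j*rng.normal(size=(p, p))
B = rng.normal(size=(p, q)) + 1j*rng.normal(size=(p, q))
C = rng.normal(size=(q, p)) + 1j*rng.normal(size=(q, p))
D = rng.normal(size=(q, q)) + 1j*rng.normal(size=(q, q))

M = np.block([[A, B], [C, D]])
detM = np.linalg.det(M)
id13 = np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B)
id14 = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C)

print(np.allclose(detM, id13), np.allclose(detM, id14))   # True True
```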
Equation (13) can be written as
\[\det\{(M)\} =\det\{(A)\}\det\bigl{\{}(D-CA^{-1}B)\bigr{\}}\] \[=\det\{(A)\}\det\bigl{\{}(C^{-1}C)\}\det\bigl{\{}(D-CA^{-1}B) \bigr{\}}\] \[=\det\bigl{\{}(-C^{2}A^{-1}BC^{-1}A+CDC^{-1}A)\bigr{\}}, \tag{15}\]
whereas Eq. (14) as
\[\det\{(M)\} =\det\{(D)\}\det\bigl{\{}(A-BD^{-1}C)\bigr{\}}\] \[=\det\{(D)\}\det\bigl{\{}(C^{-1}C)\bigr{\}}\det\bigl{\{}(A-BD^{-1 }C)\bigr{\}}\] \[=\det\bigl{\{}(-CB+CAC^{-1}D)\bigr{\}}. \tag{16}\]
For our purposes, \(B=C=ig\Delta\gamma^{5}\). Thus, from Eqs. (15) and (16), we obtain
\[\det\{(M)\} =\det\bigl{\{}(-C^{2}+CDC^{-1}A)\bigr{\}}, \tag{17}\] \[\det\{(M)\} =\det\bigl{\{}(-C^{2}+CAC^{-1}D)\bigr{\}}. \tag{18}\]
We explicitly compute both expressions. First, we use that the standard spin projectors \(\Lambda_{\pm}\) satisfy
\[\gamma^{0}\Lambda_{\pm}\gamma^{0}=\tilde{\Lambda}_{\mp}, \tag{19}\]
and
\[\gamma^{5}\Lambda_{\pm}\gamma^{5}=\tilde{\Lambda}_{\pm}, \tag{20}\]
Figure 5: Equation of state, pressure vs. energy density. Shown are the tree-level and one-loop fermion improved effective potentials compared to the results from Ref. [54] together with the LQCD results from Ref. [72]. For the latter, the vacuum pion mass is taken as \(m_{0}=135\) MeV.
Figure 6: Square of the speed of sound as a function of the scaled variable \(\mu_{I}/m_{0}\). Shown are the tree-level and one-loop fermion improved effective potentials compared to a recent \(SU(2)\) NJL calculation [54] and the LQCD results from Ref. [72].
with the projectors \(\tilde{\Lambda}_{\pm}\) defined as
\[\tilde{\Lambda}_{\pm}=\frac{1}{2}\left(1\pm\frac{\gamma^{0}(\vec{\gamma}\cdot\vec{ k}-gv)}{E_{k}}\right). \tag{100}\]
Next, we notice that \(A=S_{u}^{-1}\) and \(D=S_{d}^{-1}\). Therefore, working first in the absence of an isospin chemical potential, for which
\[S_{u}^{-1}=S_{d}^{-1}=k_{0}\gamma^{0}-\vec{\gamma}\cdot\vec{k}-gv, \tag{101}\]
we have \[D_{1} \equiv-C^{2}+CDC^{-1}A\] \[=g^{2}\Delta^{2}+(ig\Delta\gamma^{5})S_{d}^{-1}\left(\frac{1}{ig\Delta}\gamma^{5}\right)S_{u}^{-1}\] \[=g^{2}\Delta^{2}-\left[k_{0}^{2}-\left(E_{k}^{u}\right)^{2}\right]\Lambda_{-}-\left[k_{0}^{2}-\left(E_{k}^{d}\right)^{2}\right]\Lambda_{+}, \tag{102}\]
and
\[D_{2} \equiv-C^{2}+CAC^{-1}D\] \[=g^{2}\Delta^{2}+\gamma^{5}S_{u}^{-1}\gamma^{5}S_{d}^{-1}\] \[=g^{2}\Delta^{2}-\left[k_{0}^{2}-\left(E_{k}^{d}\right)^{2} \right]\Lambda_{-}-\left[k_{0}^{2}-\left(E_{k}^{u}\right)^{2}\right]\Lambda_{ +}. \tag{103}\]
Thus, using that \(\Lambda_{+}+\Lambda_{-}=\openone\) and defining \(E_{\Delta}^{q}=\sqrt{\left(E_{k}^{q}\right)^{2}+g^{2}\Delta^{2}}\), we have
\[D_{1} =-\left(k_{0}^{2}-\left(E_{\Delta}^{u}\right)^{2}\right)\Lambda_{-}-\left(k_{0}^{2}-\left(E_{\Delta}^{d}\right)^{2}\right)\Lambda_{+}, \tag{104}\] \[D_{2} =-\left(k_{0}^{2}-\left(E_{\Delta}^{d}\right)^{2}\right)\Lambda_{-}-\left(k_{0}^{2}-\left(E_{\Delta}^{u}\right)^{2}\right)\Lambda_{+}, \tag{105}\]
and
\[\det\bigl{\{}(S_{\text{mf}}^{-1})\bigr{\}}=\det\{(D_{1})\}=\det\{(D_{2})\}. \tag{106}\]
Note that
\[\begin{split}\ln\left(\mathcal{Z}_{f}^{1}\right)&= \ln\left(\det\bigl{\{}\bigl{(}S_{\text{mf}}^{-1}\bigr{)}\bigr{\}}\right)\\ &=\frac{1}{2}\ln\left(\det\Bigl{\{}\bigl{(}S_{\text{mf}}^{-1} \bigr{)}^{2}\Bigr{\}}\right)\\ &=\frac{1}{2}\ln\left(\det\{(D_{1}D_{2})\}\right)\\ &=\frac{1}{2}\text{Tr}\left[\ln\left(D_{1}D_{2}\right)\right], \end{split} \tag{107}\]
and since the product \(D_{1}D_{2}\) is given by
\[D_{1}D_{2}=\left(k_{0}^{2}-\left(E_{\Delta}^{u}\right)^{2}\right)\left(k_{0} ^{2}-\left(E_{\Delta}^{d}\right)^{2}\right), \tag{108}\]
we get
\[\ln\left(\mathcal{Z}_{f}^{1}\right)=\frac{1}{2}\sum_{q=u,d}\text{Tr}\left[\ln \left(k_{0}^{2}-\left(E_{\Delta}^{q}\right)^{2}\right)\right], \tag{109}\]
where the trace is taken in Dirac, color (factors of 4 and \(N_{c}\), respectively), and in coordinate spaces, namely,
\[\begin{split}\ln\left(\mathcal{Z}_{f}^{1}\right)&= 2N_{c}\sum_{q=u,d}\int d^{4}x\Bigl{\langle}x\Bigl{|}\ln\left(k_{0}^{2}-\left(E_ {\Delta}^{q}\right)^{2}\right)\Bigr{|}x\Bigr{\rangle}\\ &=2N_{c}\sum_{q=u,d}\int d^{4}x\int\frac{d^{4}k}{(2\pi)^{4}}\,\ln \left(k_{0}^{2}-\left(E_{\Delta}^{q}\right)^{2}\right).\end{split} \tag{110}\]
Therefore
\[\ln\left(\mathcal{Z}_{f}^{1}\right)=2VN_{c}\sum_{q=u,d}\int\frac{d^{4}k}{(2\pi) ^{4}}\,\ln\left(k_{0}^{2}-\left(E_{\Delta}^{q}\right)^{2}\right). \tag{111}\]
In order to obtain a more compact expression, we integrate and differentiate with respect to \(E_{\Delta}^{q}\) as follows
\[\ln\left(\mathcal{Z}_{f}^{1}\right)=2VN_{c}\sum_{q=u,d}\int\frac{d^{4}k}{(2\pi) ^{4}}\int dE_{\Delta}^{q}\,\frac{E_{\Delta}^{q}}{k_{0}^{2}-(E_{\Delta}^{q})^{ 2}}. \tag{112}\]
Performing a Wick rotation \(k_{0}\to ik_{4}\), we obtain
\[\ln\left(\mathcal{Z}_{f}^{1}\right)=4iVN_{c}\sum_{q=u,d}\int\frac{d^{4}k_{E}}{ (2\pi)^{4}}\int dE_{\Delta}^{q}\,\frac{E_{\Delta}^{q}}{k_{0}^{2}-(E_{\Delta}^{ q})^{2}}, \tag{113}\]
and integrating over \(k_{4}\) and \(E_{\Delta}^{q}\), in this order, we get
\[\ln\left(\mathcal{Z}_{f}^{1}\right)=2iVN_{c}\sum_{q=u,d}\int\frac{d^{3}k}{(2 \pi)^{3}}\,E_{\Delta}^{q}, \tag{114}\]
with \(\text{Re}[(E_{\Delta}^{q})^{2}]\geq 0\). Therefore, the quark contribution to the effective potential at one-loop order is given by
\[V_{f}^{1}=iV^{-1}\ln\left(\mathcal{Z}_{f}^{1}\right). \tag{115}\]
Thus,
\[V_{f}^{1}=-2N_{c}\sum_{q=u,d}\int\frac{d^{3}k}{(2\pi)^{3}}\,E_{\Delta}^{q}. \tag{116}\]
In the presence of an isospin chemical potential for which
\[S_{u}^{-1} =(k_{0}+\mu_{I})\gamma^{0}-\vec{\gamma}\cdot\vec{k}-gv,\] \[S_{d}^{-1} =(k_{0}-\mu_{I})\gamma^{0}-\vec{\gamma}\cdot\vec{k}-gv, \tag{117}\]
and repeating the steps starting from Eq. (102), we obtain Eq. (116), with the energies \(E_{\Delta}^{u}\) and \(E_{\Delta}^{d}\) given by Eqs. (14).
We now proceed to the explicit computation of Eq. (13). In the limit where \(\mu_{I}^{2}/[g^{2}(v^{2}+\Delta^{2})]\) is small, Eq. (116) can be written as in Eq. (17). We use dimensional regularization. The first of the integrals on the right hand side of Eq. (17) is expressed as
\[\int\frac{d^{3}k}{(2\pi)^{3}}\sqrt{k^{2}+g^{2}v^{2}+g^{2}\Delta^{2}} \rightarrow\Lambda^{3-d}\frac{\Gamma\left(-\frac{1}{2}-\frac{d}{2} \right)}{(4\pi)^{\frac{d}{2}}\Gamma\left(-\frac{1}{2}\right)}\] \[\times\,\left(\frac{1}{g^{2}v^{2}+g^{2}\Delta^{2}}\right)^{-\frac {1}{2}-\frac{d}{2}}. \tag{118}\]
Taking \(d\to 3-2\epsilon\) and working in the \(\overline{\rm MS}\) scheme
\[\Lambda^{2}\rightarrow\frac{\Lambda^{2}e^{\gamma_{E}}}{4\pi}, \tag{100}\]
where \(\gamma_{E}\) is the Euler-Mascheroni constant, we get
\[\int\frac{d^{3}k}{(2\pi)^{3}}\sqrt{k^{2}+g^{2}v^{2}+g^{2}\Delta^{2}}\to-\frac {(g^{2}v^{2}+g^{2}\Delta^{2})^{2}}{2(4\pi)^{2}}\left[\frac{1}{\epsilon}+\frac {3}{2}+\ln\left(\frac{\Lambda^{2}}{g^{2}v^{2}+g^{2}\Delta^{2}}\right)\right]. \tag{101}\]
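This result can be cross-checked numerically by evaluating the dimensionally regularized expression at a small but finite \(\epsilon\); a minimal sketch assuming the mpmath library (the numerical values of the mass combination and scale are arbitrary):

```python
# Sketch: check the MS-bar result above by evaluating the dimensionally regularized
# expression at a small but finite epsilon (values of M2 and Lam2 are arbitrary).
from mpmath import mp, mpf, gamma, pi, exp, log, euler

mp.dps = 40
M2 = mpf('0.49')         # stands for g^2 v^2 + g^2 Delta^2
Lam2 = mpf('1.7')        # renormalization scale squared
eps = mpf('1e-10')
d = 3 - 2*eps

msbar = (Lam2*exp(euler)/(4*pi))**eps    # Lambda^(3-d) with Lambda^2 -> Lambda^2 e^gamma_E/(4 pi)
lhs = msbar*gamma(-mpf(1)/2 - d/2)/((4*pi)**(d/2)*gamma(-mpf(1)/2))*M2**((d + 1)/2)
rhs = -M2**2/(2*(4*pi)**2)*(1/eps + mpf(3)/2 + log(Lam2/M2))

print(lhs - rhs)         # tends to zero as epsilon -> 0
```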
The second of the integrals on the right hand side of Eq. (17) is expressed as
\[\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{(k^{2}+g^{2}v^{2}+g^{2}\Delta^{2})^{3/2}} \rightarrow\Lambda^{3-d}\frac{\Gamma\left(\frac{3}{2}-\frac{d}{2}\right)}{(4 \pi)^{\frac{d}{2}}\Gamma\left(\frac{3}{2}\right)}\left(\frac{1}{g^{2}v^{2}+g^{ 2}\Delta^{2}}\right)^{\frac{3}{2}-\frac{d}{2}}. \tag{102}\]
Taking \(d\to 3-2\epsilon\) and working in the \(\overline{\rm MS}\) scheme we get
\[\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{(k^{2}+g^{2}v^{2}+g^{2}\Delta^{2})^{3/2} }\rightarrow\frac{2}{(4\pi)^{2}}\left[\frac{1}{\epsilon}+\ln\left(\frac{ \Lambda^{2}}{g^{2}v^{2}+g^{2}\Delta^{2}}\right)\right], \tag{103}\]
from where the result of Eq. (18) follows.
|
2309.10518 | Unsupervised Landmark Discovery Using Consistency Guided Bottleneck | We study a challenging problem of unsupervised discovery of object landmarks.
Many recent methods rely on bottlenecks to generate 2D Gaussian heatmaps
however, these are limited in generating informed heatmaps while training,
presumably due to the lack of effective structural cues. Also, it is assumed
that all predicted landmarks are semantically relevant despite having no ground
truth supervision. In the current work, we introduce a consistency-guided
bottleneck in an image reconstruction-based pipeline that leverages landmark
consistency, a measure of compatibility score with the pseudo-ground truth to
generate adaptive heatmaps. We propose obtaining pseudo-supervision via forming
landmark correspondence across images. The consistency then modulates the
uncertainty of the discovered landmarks in the generation of adaptive heatmaps
which rank consistent landmarks above their noisy counterparts, providing
effective structural information for improved robustness. Evaluations on five
diverse datasets including MAFL, AFLW, LS3D, Cats, and Shoes demonstrate
excellent performance of the proposed approach compared to the existing
state-of-the-art methods. Our code is publicly available at
https://github.com/MamonaAwan/CGB_ULD. | Mamona Awan, Muhammad Haris Khan, Sanoojan Baliah, Muhammad Ahmad Waseem, Salman Khan, Fahad Shahbaz Khan, Arif Mahmood | 2023-09-19T10:57:53Z | http://arxiv.org/abs/2309.10518v1 | # Unsupervised Landmark Discovery Using Consistency Guided Bottleneck
###### Abstract
We study a challenging problem of unsupervised discovery of object landmarks. Many recent methods rely on bottlenecks to generate 2D Gaussian heatmaps however, these are limited in generating informed heatmaps while training, presumably due to the lack of effective structural cues. Also, it is assumed that all predicted landmarks are semantically relevant despite having no ground truth supervision. In the current work, we introduce a consistency-guided bottleneck in an image reconstruction-based pipeline that leverages landmark consistency - a measure of compatibility score with the pseudo-ground truth - to generate adaptive heatmaps. We propose obtaining pseudo-supervision via forming landmark correspondence across images. The consistency then modulates the uncertainty of the discovered landmarks in the generation of adaptive heatmaps which rank consistent landmarks above their noisy counterparts, providing effective structural information for improved robustness. Evaluations on five diverse datasets including MAFL, AFLW, LS3D, Cats, and Shoes demonstrate excellent performance of the proposed approach compared to the existing state-of-the-art methods. Our code is publicly available at [https://github.com/MamonaAwan/CGB_ULD](https://github.com/MamonaAwan/CGB_ULD).
## 1 Introduction
Object landmark detection is an important computer vision problem. It portrays important information about the shape and spatial configuration of key semantic parts in 3D space
for deformable objects like human and animal faces [B,,,, ]. Many existing works have approached this problem in a fully-supervised manner [B,,,,,,, ] which requires an abundance of annotated images. Acquiring a large dataset of dense annotations for a particular object category may be infeasible. Therefore, the current work aims to discover object landmarks in an unsupervised way. Unsupervised learning of object landmarks is a challenging problem because the landmarks can express diverse configurations even for simple object categories like human faces. Also, recovering the underlying mapping between spatial location and high-level semantic understanding of landmarks without involving human supervision is quite challenging. Finally, the consistency of landmark detection should not be compromised under viewpoint variations, and detected landmarks should capture the shape of the deformable object [C].
Existing approaches to unsupervised landmark detection either impose an equivariance constraint under 2D image transformations [C], [C], or leverage pre-text tasks such as (conditional) image generation [B,,, ]. For instance, [G] uses a softargmax layer [G] to map the label heatmaps to a vector of points, and supervises the model with an equivariant error and a diversity constraint. Recently, Jakab _et al._[B] proposed conditional image generation to guide learning of unsupervised landmark detection. They mapped the output of the softargmax layer to 2D Gaussian-like heatmaps using a _bottleneck_ which is tasked with distillation of object geometry, and hence it learns structured embeddings. These heatmaps are then utilized to reconstruct the input image from its deformed version. The bottleneck is a crucial component in their pipeline as it guides the landmark detector to detect landmarks which are able to effectively reconstruct a deformed version of the same image. Using the same pipeline, Sanchez _et al._[C] approached unsupervised landmark detection from a domain adaptation perspective via learning a projection matrix to adapt to new object categories. A problem inherent to these approaches is that they cannot alleviate the impact of noisy structural cues, which can affect robustness under pose variations (see Fig. 1). We argue that a key reason is the naive formulation of the bottleneck. It assumes that, during training, all landmarks discovered by the detection network are equally meaningful under various variations. This is a strict assumption, as it is likely that at least some discovered landmarks will be noisy. The resulting noisy structural cues can potentially limit the reconstruction ability and affect the robustness of the landmark detector, making it detect semantically irrelevant landmarks lacking appropriate correspondence (see Fig. 1).
In the current work, we address the aforementioned issues by introducing a _consistency-guided bottleneck_ formulation that leverages landmark consistency to generate adaptive heatmaps. We rank the discovered landmarks based on their consistency and hence favour relatively consistent ones. We obtain pseudo-supervision via establishing landmark correspondence across the images. It includes clustering landmarks after estimating their confidence in a KNN affinity graph. This consistency is then used to modulate the uncertainty of the landmark in the generation of adaptive heatmaps. As a result, the adaptive heatmaps favour consistent landmarks over their counterparts, thereby providing effective structural cues while reconstructing the input image. This, in turn, facilitates the landmark detector to produce semantically meaningful landmarks1 (see Fig. 1).
Footnote 1: Note that, the consistency-guided bottleneck facilitates detecting semantically meaningful landmarks and not semantic landmarks as such.
**Contributions: (1)** We introduce a novel _consistency-guided bottleneck_ formulation in the image reconstruction-based unsupervised landmark detection pipeline. It utilizes landmark consistency, a measure of affinity score with the pseudo-ground truth, for the generation
of adaptive heatmaps. Such heatmaps potentially encode better structural information to facilitate an improved discovery of semantically meaningful and stable points. **(2)** We propose a principled way of generating adaptive heatmaps in an unsupervised mode. We first rank landmarks based on their consistencies and then modulate their corresponding uncertainties in the 2D Gaussian heatmaps. **(3)** We also introduce pseudo-supervision via establishing landmark correspondence across images. **(4)** Comprehensive experiments and analysis are performed on five diverse datasets: MAFL, AFLW, LS3D, Cats, and Shoes. Our approach provides significant gains over the existing state-of-the-art methods.
## 2 Related Work
**Unsupervised landmark detection methods** can be broadly categorised into either imposing an equivariance constraint under image transformations, or leveraging image reconstruction as a pre-text task. In the absence of ground truth annotations, the equivariance constraint provides a self-supervisory training signal. In particular, the equivariance constraint requires representations across locations to be invariant to the geometric transformations of the image. Further constraints, based on locality and diversity, are introduced to avoid trivial solutions. The generative methods employ equivariance constraints rather implicitly by considering objects as a deformation of the shape template in tandem with the appearance variation in a disentangled manner. In some works, landmark discovery is formulated as an intermediate step of image representation learning; others cast this as disentangling shape and appearance and introduce equivariance and invariance constraints into the generative framework. Wiles _et al._[4] proposed a self-supervised framework to embed facial attributes from videos and then utilized those to predict landmarks. Most of these methods lack robustness under pose variations.
**Deep clustering** methods employ clustering as a pre-text task to partition the images into different clusters, and a classifier is trained either to identify samples with the same cluster id or by using the cluster assignments as pseudo-labels. For unsupervised landmark discovery, Mallis _et al._[5] recover landmark correspondence via k-means clustering and utilize it to select pseudo-labels for self-training in the first stage. The pseudo-labels are used to learn a landmark detector in a supervised manner in the second stage. In contrast, we obtain pseudo-supervision to quantify landmark consistency. It is then used to modulate its 2D gaussian uncertainty in generating adaptive heatmaps. We do not use a dedicated feature head descriptor for learning landmark representations, and instead extract them directly from the encoder network. Moreover, we realize learning correspondence through clustering landmark representations after estimating their confidence in a KNN affinity graph.

Figure 1: Left: Compared to ours, Jakab _et al._[3] (top) and Sanchez _et al._[4] (middle) are prone to discovering semantically irrelevant landmarks lacking appropriate correspondence across varying poses. Right: Comparison in terms of pose-wise NME(%) based on yaw-angles on the AFLW dataset.
## 3 Proposed Consistency Guided Bottleneck
We aim to train a model capable of detecting landmarks for an arbitrary object category, without requiring ground truth annotations. Similar to the prior works, we adopt an image generation based unsupervised landmark detection pipeline as shown in Fig. 2. It consists of a landmark detector network \(\Psi\), and a generator network \(\Phi\). An important part of this pipeline is conditional image generation to guide the detection network in learning effective landmark representations. The object appearance in the first example image is combined with object landmark configuration in the second example image, where the two example images differ in viewpoint and/or object deformation. Heatmap bottleneck is a crucial component in this pipeline for factorizing appearance and pose. It has a softargmax layer and a heatmap generation process. Specifically, the network \(\Psi\) is terminated with a layer that ensures the output of \(\Psi\) is a set of \(k\) landmark detections. First, \(k\) heatmaps are formed, one for each landmark, then each heatmap is renormalised to a probability distribution via spatial Softmax and condensed to a point by computing the spatial expected value. Finally, each heatmap is replaced with a Gaussian-like function centred at landmark location with a particular standard deviation depending upon the consistency of that landmark. Although this unsupervised landmark detection pipeline shows encouraging results for some object categories, it struggles to detect semantically meaningful landmarks, especially under large pose variations (Figs. 1, & 4). We believe the key reason is the naive formulation of the bottleneck, comprising of a softargmax layer and a heatmap generation process. The bottleneck assumes that all predicted landmarks are equally meaningful (i.e. have same semantic relevance). It is likely that at least some of the landmark detections will be noisy, particularly in the absence of ground truth supervision. To address this, we introduce a _consistency-guided bottleneck_ formulation that utilizes the landmark consistency towards generating adaptive heatmaps (Fig. 2).
Figure 2: Overall architecture with consistency-guided bottleneck and pseudo-supervision.
### Consistency of a Landmark
The consistency of a landmark is the proximity of its representation to an assigned pseudo-label which is a cluster centroid in our case. As such, it allows us to rank landmarks based on their consistency measures and hence favour relatively consistent ones over inconsistent ones. We obtain pseudo-supervision via establishing correspondence of landmarks across images. The process includes clustering the landmark representations after estimating their respective confidences in a KNN affinity graph. The consistency is then used to modulate the uncertainty of the landmark's 2D gaussian to generate adaptive heatmaps. Consequently, the adaptive heatmaps allow reducing the impact of noisy structural information (e.g., unstable landmarks) while reconstructing the image, which in turn allows the landmark detector to produce semantically meaningful and stable landmarks.
### Obtaining Pseudo-Supervision
We obtain pseudo-supervision through establishing landmark correspondence across images. If two landmarks \(k^{i}\) and \(k^{j}\) in image \(i\) and image \(j\) correspond to the same semantic attribute (e.g. nose-tip), then their corresponding landmark representations \(\mathbf{z}^{i}_{k}\), \(\mathbf{z}^{j}_{k}\) should have the same pseudo-label. We realize this by clustering landmark representations after estimating their respective confidences in a KNN affinity graph. We use the landmark representations to construct a KNN affinity graph \(G=(V,E)\). Where each landmark representation is a vertex belonging to \(V\), and is connected to its \(\mathcal{K}\) nearest neighbors, forming \(\mathcal{K}\) edges belonging to \(E\). The affinity between landmark \(k^{i}\) and landmark \(k^{j}\) is denoted as \(s_{i,j}\), which is the cosine similarity between their representations \(\mathbf{z}^{i}_{k}\) and \(\mathbf{z}^{j}_{k}\).
Using this affinity graph, we intend to perform the clustering of landmark representations by estimating the confidence of each landmark representation. The confidence reflects whether a landmark representation (a vertex in the affinity graph) belongs to a specific semantic attribute. However, due to different variations in face appearance and pose, each landmark representation may have different confidence values even when they belong to the same semantic attribute (e.g., nose). For a landmark representation with high confidence, its neighboring landmark representations tend to belong to the same semantic attribute, while a landmark representation with low confidence is usually adjacent to the representations from the other landmarks. Based on this, it is possible to obtain the confidence \(c_{\mathbf{z}^{i}_{k}}\) for each landmark representation vertex based on the neighboring labeled representations as [53],
\[c_{\mathbf{z}^{i}_{k}}=\frac{1}{|\mathcal{N}_{\mathbf{z}^{i}_{k}}|}\sum_{ \mathbf{z}^{i}_{k}\in\mathcal{N}^{i}_{\mathbf{z}^{i}_{k}}}(\mathbf{1}_{y^{i} =y^{i}}-\mathbf{1}_{y^{i}\neq y^{i}}).s_{i,j}, \tag{1}\]
where \(\mathcal{N}^{i}_{\mathbf{z}^{i}_{k}}\) is the neighborhood of \(\mathbf{z}^{i}_{k}\), \(y^{i}\) is the ground truth label of \(\mathbf{z}^{i}_{k}\) and \(s_{i,j}\) is the affinity between \(\mathbf{z}^{i}_{k}\) and \(\mathbf{z}^{j}_{k}\). However, due to training in unsupervised mode, we cannot use aforementioned expression to compute the confidence for a landmark representation, and instead use a pre-trained graph convolutional network [41] (GCN) to achieve the same.
With a pre-trained GCN, we can categorize the landmark representations based on their estimated confidences, to ultimately compute their cluster centroids. For a landmark representation vertex \(\mathbf{z}^{i}_{k}\), neighbors with confidence larger than \(\tilde{c}_{\mathbf{z}^{i}_{k}}\) show that they are more confident to belong to a certain cluster. Where \(\tilde{c}_{\mathbf{z}^{i}_{k}}\) is the predicted confidence of \(\mathbf{z}^{i}_{k}\). In this way, we assign each landmark representation to a cluster, and then compute the cluster-centroid by
taking the mean of representations assigned to this cluster. We denote the number of cluster centroids by \(T\) and they are much larger than the number of landmarks \(K\) for capturing the intra-class variance in each semantic attribute 2. So, each semantic attribute could occupy more than one cluster.
Footnote 2: Note that, the value of \(T\) is determined by the KNN+GCN clustering itself, and is set to 80 in Kmeans clustering.
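A minimal sketch of this grouping step is given below. It assumes landmark descriptors stacked in a matrix together with per-vertex confidences (e.g. predicted by the pre-trained GCN), and uses one concrete reading of the rule described above: each vertex is linked to its most similar neighbour of higher confidence, and the resulting chains define clusters. All names are illustrative and not part of the released code.

```python
# Sketch: confidence-guided grouping of landmark representations into clusters
# (pseudo-supervision). `feats` are landmark descriptors (N x D) and `conf` are
# per-vertex confidences, e.g. predicted by a pre-trained GCN.
import numpy as np

def build_clusters(feats, conf, k_nn=80):
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T                                   # cosine affinities s_ij
    np.fill_diagonal(sim, -np.inf)
    nbrs = np.argsort(-sim, axis=1)[:, :k_nn]               # K nearest neighbours

    parent = np.arange(len(feats))
    for i in range(len(feats)):
        cand = [j for j in nbrs[i] if conf[j] > conf[i]]    # neighbours with higher confidence
        if cand:
            parent[i] = max(cand, key=lambda j: sim[i, j])  # link to the most similar one

    def root(i):                                            # follow links up to a confidence maximum
        while parent[i] != i:
            i = parent[i]
        return i

    labels = np.array([root(i) for i in range(len(feats))])
    centroids = {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}
    return labels, centroids

# toy usage
rng = np.random.default_rng(0)
labels, centroids = build_clusters(rng.normal(size=(200, 256)), rng.random(200))
print(len(centroids), "clusters")
```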
### Quantifying landmark consistency
We quantify the consistency of a landmark by relating it to each of the cluster centroids. In particular, given a landmark feature representation \(\mathbf{z}_{k}\), we compute its similarity with the representations of \(T\) cluster centroids and take the maximum similarity:
\[d_{\mathbf{z}_{k}}=\max_{t\in T}\langle\mathbf{z}_{k},\mathbf{z}_{t}\rangle, \tag{2}\]
where \(\langle.,.\rangle\) is the cosine similarity operator, \(\mathbf{z}_{t}\) is the feature representation of the \(t^{th}\) cluster centroid, and \(d_{\mathbf{z}_{k}}\) denotes the consistency of the \(k^{th}\) landmark. We assume that, if a landmark representation \(\mathbf{z}_{k}\) has higher similarity to its assigned cluster centroid compared to another landmark representation, then it should be ranked higher in consistency than the other. We empirically observed that our model's learning strives to improve landmark consistencies. Landmark consistency is also related to the performance, so the improvement in landmark consistency is corroborated by the decrease in error.
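In code, this consistency measure reduces to a single matrix operation; a small sketch assuming NumPy, with illustrative names:

```python
# Sketch: consistency of each landmark, Eq. (2): d_k = max_t <z_k, z_t> (cosine similarity).
import numpy as np

def landmark_consistency(z, centroids):
    """z: (K, D) landmark representations; centroids: (T, D) cluster centroids."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return (z @ c.T).max(axis=1)        # d_{z_k} for each of the K landmarks

d = landmark_consistency(np.random.rand(10, 256), np.random.rand(80, 256))
print(d.shape)   # (10,)
```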
### Generating Adaptive Heatmaps
We propose to generate adaptive 2D Gaussian heatmaps, as opposed to fixed ones, as it is likely that at least some proportion of the discovered landmarks will be noisy. In fixed heatmaps, the uncertainties of 2D Gaussians have a same constant value. This is particularly suitable if all landmark positions are semantically relevant, lying very close to the true spatial location of the semantic attribute. It is only possible if those landmarks are either carefully annotated by a human or perhaps, produced by some state-of-the-art fully-supervised landmark detector. However, in unsupervised mode, this is rather unlikely and hence we propose to rank these landmarks via modulating their 2D Gaussian uncertainties, to alleviate the impact of noisy landmarks in heatmap generation process.
Let \(\Omega\) denote the image grid of size \(H\times W\). The landmark detector \(\Psi(\mathbf{y})\) produces \(K\) heatmaps \(S_{u}(\mathbf{y};k)\), \(u\in\Omega\), one for each landmark \(k=1,...,K\), where \(u\) denotes the spatial coordinates on the grid. These heatmaps are generated as the channels of an \(\mathbb{R}^{H\times W\times K}\) tensor. We re-normalize each heatmap to a probability distribution using spatial softmax [D]:
\[u_{k}^{*}(\mathbf{y})=(\sum_{u\in\Omega}ue^{S_{u}(\mathbf{y};k)})/(\sum_{u\in \Omega}e^{S_{u}(\mathbf{y};k)}). \tag{3}\]
In this work, we allow each 2D Gaussian in a heatmap to reflect the landmark's consistency. In particular, we modulate the uncertainty \(\sigma_{k}\) of the 2D Gaussian using the consistency \(d_{\mathbf{z}_{k}}\) described in Eq. (2) as \(\sigma_{k}=1/\exp(d_{\mathbf{z}_{k}})\). Using this modulated uncertainty \(\sigma_{k}\), we create _adaptive heatmaps_ by forming a Gaussian-like function centred at the location of the discovered landmark \(k\), i.e. \(u_{k}\).
\[\Psi_{u}(\mathbf{y};k)=\exp[-1/(2\sigma_{k}^{2})||u-u_{k}^{*}(\mathbf{y})||^{2}] \tag{4}\]
This results in a new set of \(K\) adaptive heatmaps that encode, via 2D Gaussians, the locations of the \(K\) maxima, however with a modulated uncertainty reflecting
landmark consistency. As such, this alleviates the impact of noisy landmark detections, thereby highlighting the consistent ones. These adaptive heatmaps then become input along with the deformed image representation to the reconstructor network \(\Phi\). We observe that these adaptive heatmaps are a more informed encoding of spatial locations for the reconstructor network \(\Phi\). This in turn better facilitates the landmark detector \(\Psi\) in producing semantically meaningful landmarks across poses and object categories.
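A compact sketch of the consistency-guided bottleneck itself, i.e. the spatial softmax of Eq. (3) followed by the consistency-modulated Gaussians of Eq. (4), is given below. It is written against PyTorch; tensor names and shapes are illustrative and not taken from the released code.

```python
# Sketch: softargmax (Eq. (3)) and consistency-modulated Gaussian heatmaps (Eq. (4)).
import torch

def adaptive_heatmaps(raw_maps, d):
    """raw_maps: (B, K, H, W) detector outputs S_u(y;k); d: (B, K) landmark consistencies."""
    B, K, H, W = raw_maps.shape
    prob = torch.softmax(raw_maps.view(B, K, -1), dim=-1).view(B, K, H, W)

    ys = torch.linspace(0, H - 1, H, device=raw_maps.device)
    xs = torch.linspace(0, W - 1, W, device=raw_maps.device)
    gy, gx = torch.meshgrid(ys, xs, indexing='ij')
    mu_y = (prob * gy).sum(dim=(2, 3))            # expected landmark coordinates u_k^*
    mu_x = (prob * gx).sum(dim=(2, 3))

    sigma = 1.0 / torch.exp(d)                    # sigma_k = 1 / exp(d_{z_k})
    dist2 = (gy[None, None] - mu_y[..., None, None])**2 + (gx[None, None] - mu_x[..., None, None])**2
    return torch.exp(-dist2 / (2.0 * sigma[..., None, None]**2))

maps, d = torch.randn(2, 10, 32, 32), torch.rand(2, 10)
print(adaptive_heatmaps(maps, d).shape)   # torch.Size([2, 10, 32, 32])
```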
## 4 Experiments
**Datasets:** We validate our approach on human faces, cat faces and shoes. For human faces, we use CelebA [11] (comprising more than 200k celebrity images), AFLW [11], and the challenging LS3D [12] (containing large poses). For CelebA, we exclude the subset of test images of MAFL [11], which are used to test our trained models. For AFLW, we used the official train and test partitions. For LS3D, we follow the same protocol as in [12] and use 300W-LP [11] for training. For cat faces, we choose the Cats Head dataset [11] (10k images). Following [11], we use 7,500 images for training the landmark detector and the rest for testing. For Shoes, we choose UT-Zappos50k [11], [12] (50k images), and use train/test splits from [11].
**Landmark detector network:** We use the Hourglass architecture [11] as landmark detection network \(\Psi\). To obtain landmark representation, we concatenate the feature maps from the last block of encoder (768-D) and then reduce their dimensions to 256 using 1x1 convolution. The network produces heatmaps of spatial resolution \(32\times 32\), which are converted into \(K\times 2\) tensor with a softargmax layer. We use element-wise multiplication of 256-D feature maps and heatmaps, to get 256-D representations of landmarks. For a fair comparison and following [11], the landmark detector \(\Psi\) is initialised with the checkpoint, pre-trained on MPII. For details on image reconstruction network, we refer to the supplementary material.
**Evaluation metrics:** We use _forward_ error [11], [12], _backward_ error [11], and Normalised Mean-squared Error (NME), normalized by inter-ocular distance to report the performance.
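For completeness, a minimal sketch of the NME computation, assuming NumPy; the eye indices are placeholders that depend on the annotation scheme:

```python
# Sketch: NME(%) normalized by inter-ocular distance.
import numpy as np

def nme(pred, gt, left_eye_idx=0, right_eye_idx=1):
    """pred, gt: (N, K, 2) landmark arrays; eye indices depend on the annotation scheme."""
    inter_ocular = np.linalg.norm(gt[:, left_eye_idx] - gt[:, right_eye_idx], axis=-1)
    per_image = np.linalg.norm(pred - gt, axis=-1).mean(axis=1) / inter_ocular
    return 100.0 * per_image.mean()

print(nme(np.random.rand(4, 5, 2), np.random.rand(4, 5, 2)))
```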
**Training details:** We use \(\mathcal{K}=80\) in KNN affinity graph and use GCN to estimate confidences of the landmark representation vertices. In particular, we use a 1-layer pre-trained GCN on MS-Celeb-1M [12] dataset.We obtain pseudo-supervision after every 5 epochs. Our overall network architecture is trained for 145 epochs, with a learning rate of \(1\times 10^{-4}\), and a mini-batch size of 16 using Adam optimizer.
Figure 3: Cumulative error distribution (CED) curves for forward and backward errors.
Comparison with the state-of-the-art (SOTA):
**MAFL and AFLW:** In the forward error evaluation (Tab. 1), our method outperforms the baseline by a notable margin in both MAFL and AFLW datasets. Furthermore, it provides a significant improvement over the recent top performing methods of [] and [] in both datasets. Our baseline is an in-house implementation of the existing pipeline. In backward error evaluation (Tab. 3), our approach demonstrates the best performance by achieving the lowest NME of 4.26% and 6.39% on MAFL and AFLW, respectively. See Fig. 3 for Cumulative Error Distribution (CED) curves. **LS3D, Cats and Shoes:** In LS3D, our method achieves the best performance in both forward and backward errors (Tab. 2 (left)), and detects semantically meaningful landmarks with improved correspondence (Fig. 4). On Cats Head, our method delivers improved performance compared to others in both forward and backward errors (Tab. 2 (right)), and despite variations (e.g., appearance and expressions) it discovers landmarks displaying improved correspondence across images (Fig. 4).
**Stability Analysis:** The stability of discovered landmarks is evaluated by measuring the error per landmark [] as, \(e_{k}=||\Psi_{k}(A(\mathbf{y}))-A(\Psi_{k}(\mathbf{y}))||\), where \(A\) denotes a random similarity transformation. We report stability error, averaged over K=10 landmarks, in Tab. 4. Our method produces more stable landmarks than the competing approaches on most datasets.
**Ablation Study and Analysis:** See suppl. for a study on method specific hyperparameters.
**On landmark consistency:** We compare landmark consistencies via the consistency measure \(d\) during the training (Fig. 5). Our model learning strives to gradually improve landmark consistencies. In contrast, in baseline, the landmark consistencies remain almost the same during training. The landmark consistency also impacts (forward) error on test set and so in our case the improvement in landmark consistency is reflected by the decrease in the error. Fig. 6 (right) displays consistency-modulated heatmaps during training. Larger blob radius and higher redness indicate lower consistency.
| Method | MAFL | AFLW |
| --- | --- | --- |
| Baseline[] | 4.53 | 8.84 |
| Sanchez[] | 14.74 | 25.85 |
| Mallis[] | 8.23 | - |
| Ours | **4.26** | **6.39** |

Table 3: Backward errors comparison on MAFL and AFLW datasets.
LS3D:

| Method | Forw. Err. | Backw. Err. |
| --- | --- | --- |
| Baseline[] | 5.38 | 7.06 |
| Sanchez[] | 26.41 | 5.44 |
| Mallis[] | 6.53 | 6.57 |
| Ours | **5.21** | **4.69** |

Cats Head:

| Method | Forw. Err. | Backw. Err. |
| --- | --- | --- |
| Baseline[] | 4.53 | 4.06 |
| Sanchez[] | 4.42 | 4.17 |
| Ours | **3.76** | **3.94** |

Table 2: Error comparison on (left) LS3D, (right) Cats Head datasets.
| Method | MAFL | AFLW |
| --- | --- | --- |
| Baseline[] | 4.53 | 8.84 |
| Sanchez[] | 14.74 | 25.85 |
| Mallis[] | 8.23 | - |
| Ours | **4.26** | **6.39** |

Table 1: Performance comparison with the SOTA on MAFL and AFLW in forward errors. \(\dagger\): uses the VGG-16 for perceptual loss, \(\ddagger\): uses a pre-trained network for perceptual loss. Our method outperforms baseline by a notable margin in both datasets.
#### 4.2.3 On landmark detector (\(\Psi\)) trained from scratch:
Tab. 6 reports the performance of the baseline and our method when \(\Psi\) is trained from scratch instead of being initialized from a checkpoint. Our method outperforms baseline by notable margins.
#### 4.2.4 Clustering Landmark Representations:
Fig. 6 (left) visualizes the clustered landmark features using t-SNE. The features are well-separated into different classes, and hence facilitate effective correspondence establishment. We also observe that the clustering quality with KNN+GCN is much better than with K-means alone (see Tab. 5).
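As an illustration of the K-means variant used as an ablation baseline (the full pipeline instead refines a KNN affinity graph with a GCN), a hedged scikit-learn sketch of deriving pseudo-labels and a rough consistency proxy from pooled landmark features might look as follows; the function names and defaults are illustrative, not part of our released code.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_labels_kmeans(features, n_clusters=80, seed=0):
    """Cluster landmark descriptors and use cluster ids as pseudo-labels.

    features: (N, D) array of landmark representations pooled over images.
    Returns (labels, dists); the distance of a feature to its assigned
    cluster center can serve as an inverse consistency score.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(features)
    labels = km.labels_
    dists = np.linalg.norm(features - km.cluster_centers_[labels], axis=1)
    return labels, dists
```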
#### 4.2.5 On pseudo-supervision:
Tab. 7 evaluates the strength of our novel _consistency-guided bottleneck formulation_, by replacing KNN affinity graph and refinement (KNN+GCN) with K-means for achieving pseudo-supervision.
We also plot the evolution of \(T\) during training for different pre-fixed values of \(\mathcal{K}\) in KNN (Fig. 7). We see that, for a given \(\mathcal{K}\), the produced value of \(T\) remains smaller than \(\mathcal{K}\) (by approximately 20) throughout training.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Methods/Datasets} & MAFL & AFLW & Cats Head \\ \cline{2-5} & Fwd Bwd & Fwd Bwd & Fwd Bwd \\ \hline Baseline & 6.27 16.6 & 9.02 26.3 & 14.1 44.4 \\ Ours & 3.92 8.49 & 6.85 11.7 & 4.1 3.41 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance of baseline and our method when the landmark detection network \(\Psi\) is trained from scratch.
Figure 4: Visual comparison of ours with Jakab et al. [1] and Sanchez et al. [2]. Our method discovers more semantically relevant landmarks and recovers improved correspondence.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Methods/Datasets} & MAFL & AFLW & Cats Head & LS3D & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } \\ \cline{2-2} \cline{5-8} & F & B & F & B & F & B \\ \hline Baseline [1] & 3.99 4.53 & 4.53 4.06 & 5.38 7.06 \\ SOTA & 3.99 4.53 & 4.42 4.06 & 5.38 6.57 \\ Ours w/ KMeans & 3.73 **3.29** & 3.95 4.95 & 5.34 4.70 \\ Ours w/ KNN+GCN & **3.50** 4.26 & **3.76** & **3.94** & **5.21** **4.60** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Comparison when either using KNN+GCN or K-means for pseudo-supervision with baseline [1] and SOTA methods [1, 2, 2]. Red: Best, Blue: Second best.
Finally, we report both the forward and backward errors with different values of \(\mathcal{K}\) (see Tab. 8). \(\mathcal{K}\)=80, used in our experiments, shows the best performance.
## 5 Conclusion
In this work, unsupervised landmark detection is improved by introducing a novel consistency-guided bottleneck. The landmark consistency is used for generating adaptive heatmaps. The consistency of a landmark is gauged by the proximity of its representation to the cluster center considered as a pseudo-label. Pseudo-supervision is established via landmark correspondence across multiple images. Extensive experiments on five publicly available datasets and a thorough analysis have demonstrated the effectiveness of the proposed approach. Excellent performance is observed compared to existing SOTA methods.
Figure 5: Comparison of average landmark consistency via \(d\). (a) Baseline (Jakab et al.), (b) Ours, (c) the impact of \(d\) on test forward error.
Figure 6: Left: Clustered features using tSNE with cluster ids. Right: Consistency-modulated heatmaps during training on AFLW. Larger blobs indicate lower consistency.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \(\mathcal{K}\) & 40 & 80 (ours) & 120 \\ \hline Forward (F) / Backward (B) & 6.29/6.71 & **5.91/6.39** & 6.20/7.23 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Performance with different values of \(\mathcal{K}\).
Figure 7: Evolution of \(T\) for three different pre-fixed \(\mathcal{K}\) values. |
2309.07064 | A Comprehensive Analysis of the Role of Artificial Intelligence and
Machine Learning in Modern Digital Forensics and Incident Response | In the dynamic landscape of digital forensics, the integration of Artificial
Intelligence (AI) and Machine Learning (ML) stands as a transformative
technology, poised to amplify the efficiency and precision of digital forensics
investigations. However, the use of ML and AI in digital forensics is still in
its nascent stages. As a result, this paper gives a thorough and in-depth
analysis that goes beyond a simple survey and review. The goal is to look
closely at how AI and ML techniques are used in digital forensics and incident
response. This research explores cutting-edge research initiatives that cross
domains such as data collection and recovery, the intricate reconstruction of
cybercrime timelines, robust big data analysis, pattern recognition,
safeguarding the chain of custody, and orchestrating responsive strategies to
hacking incidents. This endeavour digs far beneath the surface to unearth the
intricate ways AI-driven methodologies are shaping these crucial facets of
digital forensics practice. While the promise of AI in digital forensics is
evident, the challenges arising from increasing database sizes and evolving
criminal tactics necessitate ongoing collaborative research and refinement
within the digital forensics profession. This study examines the contributions,
limitations, and gaps in the existing research, shedding light on the potential
and limitations of AI and ML techniques. By exploring these different research
areas, we highlight the critical need for strategic planning, continual
research, and development to unlock AI's full potential in digital forensics
and incident response. Ultimately, this paper underscores the significance of
AI and ML integration in digital forensics, offering insights into their
benefits, drawbacks, and broader implications for tackling modern cyber
threats. | Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane, Vassil Vassilev | 2023-09-13T16:23:53Z | http://arxiv.org/abs/2309.07064v2 | A Comprehensive Analysis of the Role of Artificial Intelligence and Machine Learning in Modern Digital Forensics and Incident Response
###### Abstract
In the dynamic landscape of digital forensics, the integration of Artificial Intelligence (AI) and Machine Learning (ML) stands as a transformative technology, poised to amplify the efficiency and precision of digital forensics investigations. However, the use of ML and AI in digital forensics is still in its nascent stages. As a result, this paper gives a thorough and in-depth analysis that goes beyond a simple survey and review. The goal is to look closely at how AI and ML techniques are used in digital forensics and incident response. This research explores cutting-edge research initiatives that cross domains such as data collection and recovery, the intricate reconstruction of cybercrime timelines, robust big data analysis, pattern recognition, safeguarding the chain of custody, and orchestrating responsive strategies to hacking incidents. This endeavour digs far beneath the surface to unearth the intricate ways AI-driven methodologies are shaping these crucial facets of digital forensics practice. While the promise of AI in digital forensics is evident, the challenges arising from increasing database sizes and evolving criminal tactics necessitate ongoing collaborative research and refinement within the digital forensics profession. This study examines the contributions, limitations, and gaps in the existing research, shedding light on the potential and limitations of AI and ML techniques. By exploring these different research areas, we highlight the critical need for strategic planning, continual research, and development to unlock AI's full potential in digital forensics and incident response. Ultimately, this paper underscores the significance of AI and ML integration in digital forensics, offering insights into their benefits, drawbacks, and broader implications for tackling modern cyber threats.
## 1 Introduction
In recent years, the field of digital forensics has expanded rapidly, relying on technology to collect and analyse digital evidence during criminal investigations, in accordance with Casey (2011). As the use of digital evidence in criminal investigations continues to rise, there is a greater need for efficient and effective crime investigation strategies. Machine learning (ML) and artificial intelligence (AI) are two potent technologies that have the potential to revolutionise digital forensics by enabling analysts to process vast amounts of data swiftly and precisely, thereby detecting crucial evidence, as stated by Du et al., (2020).
This research paper will begin by providing an overview of the field of digital forensics and the challenges that digital forensic analysts face, including the sheer volume of data, the variety of digital devices, and the dynamic nature of the digital world. The paper will then examine the current use of AI and ML in digital forensics and the obstacles it encounters, such as the lack of standardisation and interpretability issues. Also, this paper will explore several ways in which AI and ML can be utilised to improve the efficiency and accuracy of digital forensic analysis based on image and text analysis, network analysis, and machine-assisted decision-making. Lastly, the challenges and limitations of using AI and ML in digital forensics will be discussed, as well as potential future research directions, discussions, and findings.
The use of digital forensics in criminal investigations has
emerged as a burgeoning area of interest. This new field requires intensive computing to acquire, process, and analyse enormous quantities of data, making the process laborious and time-consuming. To address this challenge, Dunsin et al. (2022) propose a variety of applications and the implementation of artificial intelligence (AI), such as how AI techniques can be applied in the field of digital forensics (DF) and in the context of incident response in a constrained environment. Notably, the use of AI in criminal investigations is essential, especially given the increasing prevalence of technology and cybercrime. Numerous studies have shown that electronic-based cybercrimes constitute the vast majority of offences, highlighting the significance of a digital solution as outlined by Qadir and Noor (2021). Even though databases for storing solved, unsolved, and pending cases are growing in size, it is necessary to maintain this information online for the sake of accessibility and security. For this reason, it is natural to utilise AI and machine learning (ML) applications to train datasets that digital forensics investigators can broadly utilise.
According to Thagard (1990), human experts currently conduct forensics investigations using a variety of tools and script-based applications, which call for time and expertise and are prone to human error. In light of this, the introduction of AI technology has the potential to resolve these obstacles and enhance the investigative process's efficiency. With AI, digital forensics will be quicker, more accurate, and more streamlined, as algorithms will be able to swiftly scan through vast amounts of data, including previously closed cases. This frees up detectives' time so they can focus on other pressing matters. Also, because AI is automated, it can create networks that give criminal investigators access to added resources regardless of location.
In today's society, the use of artificial intelligence (AI) and machine learning (ML) in criminal investigations is becoming increasingly crucial. According to a report from the Identity Theft Resource Center's 2021 Annual Data Breach Report Sets New Record for Number of Compromises - ITRC, (2022), cyberattacks have increased by 68% compared to the previous year. In digital forensics, which entails the acquisition, processing, and analysis of vast quantities of digital data for criminal investigations, AI and ML have proven particularly useful. According to Garfinkel (2010), the acquisition phase of digital forensics' lifecycle employs AI algorithms to analyse complex data sets that would be impossible for a human forensic expert to do manually. As a result, AI and ML have aided the criminal justice system in solving complex digital forensic investigation issues. Cabitza, Campagner, and Basile (2023) state that despite the significant progress made with AI and ML, there are still numerous challenges in this field, such as the incompatibility of existing applications. Moreover, the acquisition and reconstruction of data for the identification of criminal acts may violate privacy laws, posing a moral and legal challenge in this field. In order for digital forensics to keep up with perpetrators, it is crucial to develop more agile and effective tools capable of overcoming these obstacles.
The deployment of artificial intelligence (AI) in digital forensics requires careful consideration of the validity of the data being analysed and processed. However, determining the validity of the data presents a significant challenge because few researchers have shared effective methods for validating the data. According to Quick and Choo (2014), assessing the value of processed data to enable researchers to reduce, compress, or duplicate large datasets during investigation and analysis is a challenge associated with the use of AI in digital forensics. It can be difficult to convey the value of data due to the fact that different cultures use different communication styles. Moreover, given their cultural backgrounds, various communities may place varying degrees of importance on a variety of factors. Moreover, according to Mohammed et al. (2019), forensic studies of digital data have not been sufficiently diversified, and the majority of cybercrime investigators have concentrated on cases involving popular Western culture. Nevertheless, dedicated machine learning of data from various regions and cultures could improve AI's ability to work with diverse groups and datasets. As well, a more diverse group of researchers could play an important role in resolving these issues.
### Research Aims
This research is dedicated to exploring the integration of AI and ML techniques within the context of digital forensics and incident response. It places a strong emphasis on the potential these techniques hold for automating various phases of digital forensics investigations. Concurrently, the research confronts challenges associated with data volume and device diversity, assesses the practical applications of AI and ML, and investigates potential trajectories for future research. Importantly, the research takes on the substantial challenges that arise from applying AI and ML in the field of digital forensics. These challenges encompass critical concerns related to data privacy, security, data quality, and data integrity. They include things like managing data well to stop privacy breaches and intrusions, dealing with the effects of biassed data that can lead to unfair or wrong results, and figuring out how to train and test ML models in the field of digital forensics when data is incomplete, disorderly, or biassed.
To conduct this study, the research will rely on a meticulous literature review, drawing extensively from scholarly and peer-reviewed sources. This approach aims to develop a profound understanding of the subject matter, assess the various factors influencing the research problem, and propose well-informed solutions. Also, the research will explore the broader implications of its findings and identify promising avenues for future research, including the exploration of emerging technologies and evolving methodologies.
### Research Contributions
The research paper provides a valuable set of contributions to the field of digital forensics and AI/ML integration. These contributions encompass various dimensions of this field, furnishing both theoretical and practical insights that can serve as guiding lights for future research and practical endeavours.
First and foremost, the paper delivers a comprehensive overview of the contemporary landscape of AI and ML in digital
forensics. It effectively addresses the formidable challenges that confront forensic analysts operating within the swiftly evolving digital milieu. These challenges encompass the formidable task of managing vast data volumes, contending with a multitude of digital device types, and grappling with the dynamic nature of digital interactions. The paper adeptly unravels the intricacies and multifaceted nature of these challenges, establishing a solid foundation for a nuanced comprehension of the roles played by AI and ML in this domain.
A substantial contribution to this research lies in its meticulous analysis of the practical application of AI and ML techniques across various facets of digital forensics. The paper ventures into the domain of how these technologies can significantly enhance the practice of digital forensic analysis. It showcases their potential for boosting efficiency and accuracy, particularly in domains such as image and text analysis, network analysis, and machine-assisted decision-making. Importantly, this exploration isn't purely theoretical; it is firmly rooted in real-world instances and case studies, providing tangible evidence of the benefits accruing from the integration of AI and ML into digital forensic practices.
Furthermore, the paper confronts the formidable challenges and limitations entailed in the utilisation of AI and ML in digital forensics. It casts a spotlight on critical issues, including concerns related to data privacy, security, data quality, and data integrity. These concerns assume paramount importance in the context of forensic investigations, with far-reaching implications for the credibility and admissibility of digital forensic evidence in legal proceedings. The research methodology adopted, a comprehensive literature review, stands as yet another substantial contribution. By drawing extensively from scholarly and peer-reviewed secondary sources, the paper achieves the synthesis of a wide spectrum of perspectives and findings. This approach not only accentuates the current state of research but also identifies gaps and outlines future trajectories, thus enriching the ongoing discourse within the field.
### Research Motivation
There has been significant interest in the use of artificial intelligence (AI) and machine learning (ML) in digital forensics in recent years. This is due to the fact that current human expertise procedures are time-consuming, error-prone, and incapable of handling the vast quantities of forensic data that modern digital devices generate. According to Stoney and Stoney (2015), one compelling reason for this integration is to enhance the efficacy and overall performance of forensic examination. On top of that, Jarrett and Choo (2021) state that by automating and streamlining various actions involved in reviewing digital evidence, such as data analysis, image and video processing, and pattern recognition, digital forensic investigators can swiftly analyse massive amounts of data, identify pertinent information, and establish connections that may not be discernible using conventional human methods.
Guarino (2013) noted that the incorporation of AI and ML in digital forensics has grown in significance due to their potential to improve investigational precision and consistency. Moreover, Ngejane et al. (2021) reported that by training digital forensic tools to recognise specific patterns or characteristics that indicate certain types of behaviour, ML algorithms can reduce the number of false positives and improve overall accuracy. Moreover, some AI and ML algorithms, for instance, can detect patterns and anomalies that may not be immediately apparent to the human eye, which can be particularly advantageous when identifying concealed or disguised evidence, resulting in more accurate and reliable results.
AI and ML, as previously stated by Hemdan and Manjaiah (2017), can aid in digital forensic analysis by identifying anomalies in network traffic, detecting malware, classifying files based on their content, and recognising objects and people in images and videos. Another crucial application of AI and ML in digital forensics is their ability to enhance investigation consistency and identify new criminal trends. James et al. (2021) stated that machine learning (ML) models can be trained on data sets and use statistical learning to predict new data sets, allowing for the identification of new evidence and cases to investigate and reducing the number of cases that must be manually analysed.
### Research Context and Scope
The purpose of the present study is to assess the current state of research on the application of artificial intelligence (AI) and machine learning (ML) to digital forensics and incident response tasks. Specifically, the investigation will examine the techniques and methods used to employ AI and ML for a variety of tasks, such as data analysis and triage, incident detection and response, forensic investigation and analysis, network security, and cyber security. The comparative analysis will also consider the advantages and disadvantages of deploying AI and ML in various contexts, such as bias, precision, and interpretability. The analysis will incorporate a thorough evaluation of the legal and ethical implications of employing AI and ML in digital forensics and incident response.
### Research Challenges
The application of AI and ML in digital forensics presents a number of significant research challenges that demand scholarly attention. In 2018, Losavio et al., identified data privacy and security as one of the primary challenges that must be carefully managed to prevent intrusions and privacy violations during digital forensics investigations. The quality and integrity of data may be compromised during data collection and analysis, resulting in unreliable and inaccurate outcomes. Moreover, the presence of data bias and discrimination may result in unjust or inaccurate outcomes, highlighting the significance of ensuring unbiased training data. According to Zhang et al., (2018), the availability and quality of data can pose challenges for training and evaluating machine learning models in digital forensics, where incomplete, chaotic, or biassed data can present difficulties.
Brkan and Bonne (2020) stated another challenge involving the interpretability and explainability of ML models, which can be considered "black boxes" and are difficult to explain in court situations where evidence must be presented and justified.
Mohammed et al., (2016) mentioned another challenge that pertains to scalability and performance, where processing massive volumes of data generated in digital forensics investigations is a significant issue that requires the optimisation of AI and ML algorithms. Furthermore, the lack of clear standards and best practices for using AI and ML in digital forensics poses an extra challenge for digital forensics experts, as it can be challenging to determine the most appropriate techniques for a particular investigation. Relatedly, the inability to interpret and explain machine learning models poses a significant challenge for digital forensics experts, as the findings and conclusions of such models may be difficult to articulate.
Due to the diverse array of devices, operating systems, and file format types encountered in digital forensics investigations, generalisation is another significant obstacle. According to Krizhevsky et al. (2017), machine learning models may struggle to generalise across these diverse categories of data. Lipton (2018) noted that it may be difficult to determine whether the model's outputs are accurate and reliable when machine learning techniques are employed, especially in complex and ambiguous situations. As a result, as machine learning becomes increasingly important in digital forensics, it is essential to be aware of potential adversarial attacks, in which an attacker generates inputs intended to confuse machine learning models, as noted by Biggio et al., (2013). Moreover, adversarial attacks are especially worrisome in digital forensics because the stakes are high and the implications of inaccurate or unreliable results could be severe.
### Research Approach
This research paper will use a comprehensive literature review to understand the research problem and potential solutions. It will synthesise and present information, highlighting key issues and assessing the appropriateness of solutions. The paper will also evaluate strengths and weaknesses, identify future research directions, and provide an in-depth analysis of the research strategy, findings, and recommendations. The methodology will be rigorous and systematic, providing a holistic view of the research.
### Research Methodology for Literature Investigation
The research methodology approach for identifying the relevant literature used in this research encompasses several methodical steps to ensure the comprehensiveness and relevance of the gathered materials. This methodology involves a combination of systematic literature search, critical appraisal, and thematic categorization of the selected studies.
The first step involves defining clear objectives and the scope of the literature review. The primary objective is to explore how AI and ML are applied in digital forensics and incident response. This includes understanding the types of applications, their effectiveness, challenges faced, and future prospects. The scope is confined to scholarly articles, conference proceedings, and reputable industry reports published within the last couple of years to ensure the relevance and timeliness of the information.
The literature search is conducted across multiple London Metropolitan University academic databases, such as IEEE Xplore, ACM Digital Library, SpringerLink, and Google Scholar. Keywords and phrases like "Artificial Intelligence in Digital Forensics," "Machine Learning in Incident Response," and "AI/ML Applications in Cybersecurity Forensics" are used. Boolean operators (AND, OR, NOT) are employed to refine the search results. The use of clear inclusion and exclusion criteria is intended to filter the literature. Inclusion criteria involve factors such as the publication date, the relevance of AI/ML in digital forensics, and the academic credibility of the source. Exclusion criteria include non-English publications, redundant studies, and papers not peer-reviewed.
Data extraction involves systematically collecting information from the selected studies. This includes authors, publication year, research objectives, methodologies, findings, and key conclusions. Tools such as Mendeley or Zotero are used for reference management and to organise the literature efficiently. Each selected paper undergoes a quality assessment to evaluate the research design, methodology, data analysis, and validity of the conclusions. This step ensures that the review is based on high-quality, reliable sources. The extracted data is then subjected to thematic analysis. This involves identifying patterns and themes within the literature, such as common methodologies, findings, or gaps in research. This process helps in synthesising the data to provide a comprehensive overview of the current state of AI and ML in digital forensics and incident response.
The final step involves synthesising the findings from the thematic analysis into a cohesive narrative and comparative table. This includes discussing the prevalent trends, potential applications, challenges, and future directions of AI and ML in the field. The review aims to provide a critical evaluation of the literature, identifying areas of consensus, divergence, and unexplored territories in the research landscape. The research approach culminates in a conclusion that not only summarises the findings but also provides recommendations for future research and practical applications in the field. This includes identifying gaps in the current literature and suggesting how future studies can address these gaps. This methodology ensures a comprehensive, systematic, and unbiased review of the literature, providing valuable insights into the role of AI and ML in enhancing digital forensics and incident response capabilities.
## 2 Literature Review and Research Gaps
The digital forensic evidence life cycle is a complex and multifaceted procedure comprised of several interdependent phases. These stages include identification of data sources, collection, preservation, examination, analysis, and presentation. To acquire a comprehensive understanding of this procedure, it is imperative to meticulously investigate each phase individually, as depicted in Figure 1.
This paper presents a systematic literature review (SLR) as depicted in Figure 2, that investigates the potential of AI and ML methodologies for automating digital forensics processes. This paper's SLR provides detailed technical insights into the research gaps, limitations, and strengths of previous studies and suggests ways in which future research can resolve these gaps.
Figure 1: Digital Forensic Evidence Life Cycle
Figure 2: Cyber Forensics Cycle-Inspired Proposed Roadmap for Systematic Literature Review (SLR) of AI and ML Techniques in DFIR
### Big Data Digital Forensic Investigation
According to Song and Li (2020), the widespread adoption of the Internet has led to a significant increase in cybercrime, which poses a grave threat to safety, social and economic development, and critical infrastructure. As depicted in _Figure 3_, the research presents a practical framework for conducting digital forensic investigations utilising big data technologies that manage all aspects of data collection, processing, analysis, and presentation while incorporating the most effective and cost-effective solutions. In the fight against cybercrime, the study contributes considerably to the fields of digital forensics and big data analytics.
However, the research does not address the issue of the validity of the data being processed in the preservation process of investigations involving big data. This is a significant challenge, as people from different parts of the globe present different types of data and obstacles, and the forms may vary across devices and platforms. Besides, distinct cultural backgrounds may influence the significance and meaning of data when compared to Western languages. That being the case, it is essential to analyse and differentiate the data using the same artificial intelligence technology, which the research neglected to mention.
Despite this limitation, Song and Li's (2020) proposed framework model is robust, primarily because it takes into account the potential volume of big data and proposes advanced tools and methods for organising, standardising, and compressing the data in order to reduce the labour and cost of the process. As a result, this expedites the investigation and reduces the financial burden, given the volume of data and the rate at which it is produced. As a bonus, the framework takes into account the investigation and presentation processes, as well as how to ensure the validity, precision, security, and legitimacy of the big data investigation. Hence, this reduces the likelihood of errors, ensures dependability, and increases the likelihood that users will receive the intended results in response to their commands.
To improve the efficacy of forensic data investigations, it is crucial to avoid the inefficient use of time that frequently results from sifting through vast amounts of data without sufficient guidance or an adequate comprehension of the user's goals. Prior to initiating an investigation, it is crucial to ensure that the instructions and objectives are explicitly and exhaustively defined in order to achieve more accurate and meaningful results. Also, using artificial intelligence systems that are tailored to the region, the user's specific goals, and the values of the targeted data sources can make digital forensic investigations much more accurate and efficient than when standard intelligence systems are used alone. In this regard, Song and Li's (2020) research contributes to the advancement of digital forensics science by providing valuable insights into the use of big data technology to support cybercrime investigations, prevention, and online social interactions.
### Volatile Memory Evidence Retrieval
Thantilage and Le Khac (2019) proposed a model for extracting memory dumps from RAM, as depicted in _Figure 4_, in order to acquire forensic evidence, with the primary goal of demonstrating that social media and instant messaging artefacts can serve as evidence for investigators. The authors also sought to elaborate on the nature of memory samples retrieved from RAM and their utility for digital forensics examiners and researchers. The authors refer to the challenge of extracting volatile memory (RAM) data that contains evidence from various social media and messaging platforms. Volatile memory is temporary and stores data as long as the device is powered. The difficulty arises because each social media or instant messaging app has unique ways of handling and storing data in RAM. This diversity makes it challenging to create a universal tool that can effectively extract pertinent data from all these platforms.
Figure 3: Big Data Digital Forensics Framework by Song and Li’s (2020)
Indeed, the layout and organisation of data in RAM depend on the internal workings of each application, which vary not only among different applications but also across different versions of the same application. This variability presents significant challenges in creating filters or tools that can consistently and accurately interpret data from these applications when stored in volatile memory.
Thantilage and Le Khac's (2019) proposed framework structure and functions include nine phases to assure the credibility of the evidence retrieved. The authors emphasised the significance of recovering RAM data as soon as possible and avoiding restarting the computer in order to avoid losing crucial evidence. The study suggested two software programmes, DumpIt for Windows and OSXpmem for Mac OS, to retrieve memory data. DumpIt was selected due to its user-friendliness and rapid memory data acquisition, but as per the paper, DumpIt was originally limited to systems with up to 4 GiB of RAM due to its 32-bit architecture. However, newer versions of DumpIt have overcome this limitation, making it suitable for modern systems with larger RAM capacities.
OSXpmem retrieves data in RAW format, which is required for the proposed framework to produce accurate results. According to Kiley et al. (2008), using this tool requires the creation of separate profiles to support subsequent analysis with the Volatility framework. Despite this, the paper fails to mention that users will be required to download a kernel extension that will operate concurrently with the framework in order for the extracted data to remain uncorrupted. Without the added utility, however, the memory dump may request the necessary access permissions, rendering it ineffective.
According to Yusoff et al. (2017) the framework proposed by Thantilage and Le Khac (2019) included a REGEX-based string search for the memory dump, which supports most programming languages but is not suitable for complex recursive data formats such as XML and HTML. Despite this limitation, the framework was experimentally tested on different social media and messaging platforms and operating systems, successfully retrieving valuable data that examiners could use in an investigation, including usernames and passwords for specific social media accounts. However, Thantilage and Le Khac (2019) should expand their investigation to include mobile devices and other smart home appliances.
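To make the idea of a REGEX-based string search over a RAW memory image concrete, a minimal Python sketch is given below; the patterns are illustrative placeholders rather than the expressions used in the framework, and matches spanning chunk boundaries are ignored for simplicity.

```python
import re

# Illustrative patterns; real investigations would use application-specific expressions.
PATTERNS = {
    "email": rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}",
    "url":   rb"https?://[^\s\"'<>]+",
}

def scan_memory_dump(path, chunk_size=64 * 1024 * 1024):
    """Stream a RAW memory image and yield (pattern_name, offset, match_text)."""
    with open(path, "rb") as fh:
        offset = 0
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            for name, pattern in PATTERNS.items():
                for m in re.finditer(pattern, chunk):
                    yield name, offset + m.start(), m.group().decode(errors="replace")
            offset += len(chunk)   # note: hits straddling two chunks are missed here
```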
### File Type Identification
Mittal et al. (2021) have contributed to the field of data carving and memory forensics by presenting a new identification method for files, as depicted in Figure 5. The research aimed to demonstrate the superiority of their tool, FiFTy, compared to older file-identifying tools. The research emphasises the advantages of FiFTy, including diversified and reliable 75 file-type datasets, faster processing, higher accuracy, and better scalability. The research, however, ignored the application of data-type classification and concentrated solely on the classification of commonly used file types. Although classification of data types would have required more complex combinations, it would have been beneficial to compare FiFTy's performance to that of other data carving tools.
The 75 file-type datasets used in the Mittal et al. (2021) study had dependency issues, which made it difficult for the classifier to generalise and study images embedded in other file types such as PDF, PPT, and DOC. Moreover, the study only considered photographic and graphic data from newer file formats that are common on SD cards used in modern IoT (Internet of Things) devices, and did not examine a broader range of data types in both legacy and current formats. In spite of this, the authors have contributed a robust research model by comparing their methodology to three other strong baselines to obtain a more objective comparison.
The research's strength resides in its exhaustive and detailed comparison of FiFTy to numerous baseline methods, as well as its extensive use of file-type datasets. This study investigated the various techniques utilised by various data carving tools for reassembling and recovering data files. It was discovered that FiFTy is a more efficient and trustworthy tool than others because it can perform multiple functions that were
Figure 4: Proposed Framework by Thantilage and Le Khac (2019)
Figure 5: The Proposed Network Architecture by Mittal et al. (2021)
previously performed by multiple tools. However, the study could have specified the effectiveness of the file-type identification methods used on fragmented versus non-fragmented file structures. According to Sari and Mohamad (2020), file carving tools operate differently on fragmented and non-fragmented file structures, and only a limited number of tools are capable of recovering fragmented files.
According to Carrier (2005), _Foremost_ and _PhotoRec_ are sequential file-carving tools that operate on non-fragmented files and can identify the start and end points of files based on known file signatures, headers, and footers. PhotoRec, for example, can effectively recover various file types (like JPG, PNG, and DOC) by recognising their standard headers and footers. On the other hand, _Scalpel_ is an advanced carving tool with fragment handling that is designed to handle fragmented files more effectively. It uses sophisticated algorithms to identify and reconstruct file fragments, and it can be configured to search for specific file types and use complex rules to reassemble fragmented files.
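A minimal sketch of sequential header/footer carving in the spirit of Foremost or PhotoRec is shown below, assuming contiguous (non-fragmented) JPEG files; the signatures and size cap are illustrative, and fragmented files would require the kind of reassembly logic Scalpel provides.

```python
# Minimal sequential carver: locate JPEG header/footer pairs in a raw disk or memory image.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(raw: bytes, max_size=20 * 1024 * 1024):
    """Return byte ranges (start, end) of candidate JPEGs found in `raw`.

    Assumes files are stored contiguously (non-fragmented), as sequential
    carvers do; fragmented files need smarter reassembly.
    """
    ranges = []
    start = raw.find(JPEG_HEADER)
    while start != -1:
        end = raw.find(JPEG_FOOTER, start + len(JPEG_HEADER), start + max_size)
        if end != -1:
            ranges.append((start, end + len(JPEG_FOOTER)))
        start = raw.find(JPEG_HEADER, start + 1)
    return ranges
```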
Most importantly, Mittal et al. (2021) research provides invaluable insights into the creation of a new instrument, FiFTy, for file identification in data carving and memory forensics. Even though data-type classification wasn't used, the large file-type datasets used and the thorough comparison of FiFTy to many baseline methods are important contributions to the field. In contrast, the limitations of the methodology include the dependency issues of the 75 file-type dataset and the emphasis on modern and current files in the selection of photographic and graphic data. According to Teimouri et al. (2020), comparing FiFTy to other robust baselines provides an unbiased assessment of its efficacy and dependability.
### Neural Network-Based Classification
As illustrated in _Figure 6_, Mohammad's (2018) research focuses on the use of neural networks to analyse and derive conclusions from retrieved data for digital forensics in criminal investigations. The research has contributed to the reconstruction of the events leading up to the crime under investigation and the retrieval of crucial information from data such as cookies, log files, and web browser history. However, one of the limitations of his method is that data must first be transformed by third-party applications, which can be expensive and not scalable for large data volumes. Alternatively, the paper suggested that machine learning can address this issue by explicitly analysing data sets.
The objective of this study is to determine if neural networks are capable of identifying and tracing the history of events to determine if other applications have modified the files. Mohammad's work expands on Palmer's (2001) nine-step framework for digital forensics and proposes a finite-state machine model with ontology to facilitate the reconstruction of historical events based on the gathered data. According to Chabot et al. (2014), one of the limitations of the proposed model is that it treats events as instantaneous occurrences rather than intermittent ones, which may cast doubt on the validity of the acquired data. The research proposes using neural network technology to determine whether or not files have been altered and whether or not the trained datasets can accurately reconstruct past events. During this process, however, criminals may readily manipulate the models used to generate features, leading to inaccurate data retrieval. In light of this limitation, the research produced robust models using the machine learning algorithm, despite the fact that the tool used to manage small datasets may run out of memory when processing large volumes of data.
The experimental results demonstrated that the created feed-forward model produced substantially satisfactory outcomes with an error rate of 10.07 percent across four distinct scenarios. However, using a single algorithm to execute multiple applications may result in system overlap and invalid results. Thus, it is of the utmost importance to develop alternative algorithms that produce accurate results. The research contributes to the advancement of digital forensics by providing valuable insights into the use of neural networks and machine learning while acknowledging the limitations and challenges that must be overcome.
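For illustration, a small feed-forward classifier of the kind discussed could be sketched with scikit-learn as follows, assuming artefact feature vectors have already been extracted from sources such as log files, cookies, and browser history; the architecture and labels are assumptions for demonstration, not Mohammad's (2018) actual model.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def train_feed_forward(X, y, hidden=(32, 16), seed=0):
    """Train a small feed-forward classifier on artefact feature vectors.

    X: (N, D) features derived from, e.g., log files, cookies, and browser
    history; y: labels such as 'modified' vs 'unmodified'.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    scaler = StandardScaler().fit(X_tr)
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=seed)
    clf.fit(scaler.transform(X_tr), y_tr)
    error_rate = 1.0 - clf.score(scaler.transform(X_te), y_te)   # held-out error rate
    return clf, error_rate
```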
Figure 6: Digital Forensic Classification Model by Mohammad (2018)
### AI-Based Incident Response
As depicted in Figure 7, Hasan et al., (2011) propose a computer model that uses artificial intelligence to expedite forensic investigations and reduce the time and resources needed by crime investigators. The strengths of the proposed model include its ability to efficiently analyse crime scene evidence and generate accurate conclusions. Not to mention, the use of a specialised software tool known as "chain of custody" guarantees the security of evidence and vital information, which can be stored in a database and used as a training source for the model.
Nonetheless, the proposed model contains certain flaws. According to Nila et al., (2020), the extensive work required to collect enough data to train the model is inadequately described, and the data collected from various police agencies in the United Kingdom may be vulnerable to malware, anomalies, and malicious code injections. On the other hand, the research concisely emphasises the significance of continuously training the model to ensure accuracy, given that AI systems require substantial data inputs for training. Failure to do so can result in falsified facts and findings, leading to incorrect conclusions, according to the research.
According to Trifonov et al., (2019), Hasan et al., (2011) research method fails to account for the risk of interference from hackers and third-party software, which is a significant challenge. The model should include a method for detecting unauthorised system access, such as Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), Log Analysis, User Behaviour Analytics (UBA), Multi-Factor Authentication (MFA), Audit Trails and Monitoring, Port Scanning and Vulnerability Scanning, Honeypots, Anomaly Detection, and File Integrity Monitoring.
Despite the challenges, the proposed model makes significant contributions to forensic investigations. The distinctiveness of the Hasan et al., (2011) model resides in its capacity to predict any crime and to adapt and learn independently to solve new and future crimes. With sufficient resources, the proposed improvement in contemplating criminals' psychology can be a valuable accumulation for collaboration with behaviour analysts. This can establish a pattern that can be added to grouped data sets to assist with crime prevention and resolution.
### Automated Artefact Relevancy Determination
As depicted in Figure 8, Du et al., (2020) investigated the possibility of utilising previously investigated digital forensic cases to aid in the investigation of new digital forensic cases using automated artificial intelligence systems. Their research was specifically intended to rank the importance of file artefacts required for forensic examination.
By evaluating the automation process with files from three distinct case scenarios to identify similar and unknown files, the study accomplished its objective. The researchers demonstrated the advantages of using trained automated processes to examine file evidence, such as significant time savings and the avoidance of negative psychological effects that human investigators may experience when examining distressing evidence.
The researchers acknowledged the significance of timeline analysis in identifying and ordering events in chronological order. However, the research did not investigate how the applicability of known similar files and novel evidence would be weighed and prioritised in relation to new cases. Likewise, the study did not account for situations in which there are fewer known file artefacts available for machine learning training, which may impact the approach's efficacy.
Although the research approach had some limitations, such as the difficulty in identifying "interesting" files and the possibility of overfitting, the study's model was robust because it was validated using experimental data from multiple case scenarios. The use of multiple scenarios by the researchers is an excellent method for preventing bias in future research. To avoid overfitting, future research should avoid using machine learning models with very few features. Moreover, Du et al.'s
Figure 8: Overview of the Approach by Du et al., (2020)
Figure 7: Proposed Model System by Hasan et al., (2011)
(2020) research could utilise disc images from actual past cases with permission from officials involved or generate experimental data with information similar to real cases to obtain better research outcomes as opposed to fabricated experimental data, which may lead to more false negatives and false positives, as stated by Eykholt et al. (2018).
### Large-Scale E-mail Dataset Analysis
As shown in _Figure 9_, Ozcan et al. (2020) emphasised the importance of email analysis as a primary source of evidence when acquiring forensically pertinent data from disc images. The objective of their research is to develop an end-to-end distributed graph analysis framework for large-scale digital forensic datasets, along with an evaluation of the accuracy of the centrality algorithms and the scalability of the proposed framework in terms of running time performance. The research proposes an algorithm-based framework that can perform the task of analysing email files more efficiently and effectively in response to the challenges posed by traditional methods for managing large volumes of email files.
The research by Ozcan et al. (2020) is robust and exhaustive, employing a controlled and empirical methodology that is critical and verifiable. The researchers developed an edge-transmitted graph methodological approach for coping with large forensic datasets, implemented it with widely adopted open-source technologies, and analysed the algorithmic precision of its nodes. The research paper presented three implementations to demonstrate the efficacy of the proposed framework, as well as experiments on an email dataset to demonstrate its superiority to conventional methods.
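To illustrate the kind of graph-based centrality analysis the framework performs, a minimal (non-distributed) NetworkX sketch over sender–recipient pairs is given below; PageRank is used as an example centrality measure, and the message format is an assumption for demonstration.

```python
import networkx as nx

def email_centrality(messages, top_n=10):
    """Build a directed sender->recipient graph and rank nodes by centrality.

    messages: iterable of (sender, recipient) pairs parsed from e-mail headers.
    """
    g = nx.DiGraph()
    for sender, recipient in messages:
        if g.has_edge(sender, recipient):
            g[sender][recipient]["weight"] += 1
        else:
            g.add_edge(sender, recipient, weight=1)
    # Degree and PageRank are typical centrality measures for e-mail graphs.
    pagerank = nx.pagerank(g, weight="weight")
    return sorted(pagerank.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```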
One limitation of Ozcan et al.'s (2020) research methodology is the framework's treatment of email addresses in the dataset as originating from distinct individuals, which fails to account for the possibility that multiple email addresses belong to the same individual. To increase the accuracy with which prospective offenders are identified during forensic investigations, the pre-processing phase should be modified to permit the matching of email addresses. Notably, the research employs a secure and efficient local testing environment with high-performance computing resources, which increases the tests' credibility.
A future study could be enhanced by utilising multiple email datasets to evaluate the framework and avoid biased results. While Ozcan et al. (2020) utilised the Enron email dataset, which is one of the largest and most exhaustive collections of meaningful emails, it may be incomplete due to the fact that it only contains messages from users who were employees of the same company. Balayn et al. (2021) discourage the use of unique, distinct, and trained datasets in testing experiments because they are likely to be unjust to groups that are not included in the dataset. On the other hand, the use of diverse email datasets would introduce a variety of cases from diverse groups, cultures, and situations that would further demonstrate the framework's dependability.
Lastly, Ozcan et al. (2020) research makes a significant contribution to the field of digital forensics by introducing an effective framework for evaluating the accuracy and scalability of large forensic datasets. Although their methodology has limitations, their diligence and empirical approach guarantee the study's reliability and validity.
### Data Mining Methods
In their study, Tallon-Ballesteros and Riquelme (2014) examined various data mining techniques applicable to digital forensic tasks, focusing on glass identification as an illustration of a data problem. Digital data analysis is a quicker and more accurate method for evaluating large volumes of data than traditional forensic analysis through lab experiments, which can be challenging and expensive. To accomplish their research objective, the researchers employed the "stratified four-fold cross-validation" method, which involved dividing the existing dataset into four equal parts and analysing individual training sets.
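A hedged scikit-learn sketch of stratified four-fold cross-validation on the glass identification problem is shown below; it assumes the OpenML copy of the dataset named "glass" and a single decision-tree classifier, and is illustrative rather than a reproduction of the authors' experimental setup.

```python
from sklearn.datasets import fetch_openml
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Glass identification data (UCI), assuming the OpenML copy named "glass".
X, y = fetch_openml("glass", version=1, return_X_y=True, as_frame=False)

# Stratified folds preserve the class distribution in each of the four splits.
cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(f"mean accuracy over 4 stratified folds: {scores.mean():.3f}")
```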
The study acknowledged that statistical analysis can be used to identify statistically significant differences in the outcomes of stochastic methods. However, non-stochastic algorithms with a single output cannot be subjected to statistical analysis. Tallon-Ballesteros and Riquelme (2014) used a strong research model to study many different types of classifiers and machine learning methods, examining decision trees, Bayes classifiers, artificial neural networks, and rule-based classifiers. This comprehensive strategy yielded more reliable and comprehensive results. However, the research was restricted to only two analysis tests, namely Cohen's Kappa and accuracy measures, in order to evaluate the models developed by the various classifiers. Incorporating supplementary types of analysis experiments could have provided further information for comparison in the machine learning task and identified any new problems with the model or data.
As shown in _Table 1_, Tallon-Ballesteros and Riquelme (2014) obtained comparable results to Silva and Hruschka (2013) using a ten-fold cross-validation procedure on the same dataset. The ten-fold cross-validation procedure, however, lacked a statistical analysis for the stated problem. The research conducted by Tallon-Ballesteros and Riquelme (2014) highlighted the significance of having diverse results for comparison using various parameters. The results of the experiment improved after the parameters were fine-tuned, resulting in algorithmic performance that exceeded the values
Figure 9: Framework with innovative technology by Ozcan et al. (2020)
of the analysis measures of other experimental results with default parameters.
In light of this, future research evaluating data mining approaches should not only concentrate on accuracy but also consider other crucial factors such as dependability and utility. These actions would provide information regarding the experiment and reduce data mining errors.
### Metadata Analysis Using Machine Learning
Toraskar et al. (2019) recommended in their study the use of information acquired from storage devices to analyse and detect alterations, as illustrated by _Figure 10's_ output results. The study explored the potential for unsupervised machine learning classification to aid in forensic analysis. This study adds to the body of research that supports the right use of machine learning and forensic technologies for data analysis, such as SOM viability, in criminal investigations.
This study emphasises the benefits of the EnCase Imager tool, the EnCase Forensic tool, and FTK (Forensic Toolkit), which are known for their speed and efficiency in producing digital reports in CSV format. However, it's important to note that while Autopsy is a robust forensic tool, it is not particularly known for its speed. Also, the CSV (Comma-Separated Values) reports are indeed exports of data, typically tables from databases, in a raw CSV format. These reports are known for their simplicity and ease of use, as they can be opened with various software, including text editors and spreadsheet programmes.
However, the research also acknowledges the limitations of these tools, such as their inability to process non-English languages and their inability to distinguish between false negatives and genuine positives. Beyond that, FTK lacks a user-friendly interface and effective search capabilities. Nonetheless, according to Wehrens (2009), research should present an alternative method for carrying out parallel inquiries and comparing results.
The research of Toraskar et al. (2019) introduces a self-organising map (SOM) as a clustering tool for MATLAB in order to resolve these limitations. The paper presents four cybercrime scenarios to evaluate the dependability of machine learning in forensic analysis. These scenarios include data theft, corporate fraud, hacking, and document fraud. In each case, SOM is used to cluster notable artefacts, demonstrating its effectiveness in different contexts. In the study, the inputs to the SOM model were primarily based on metadata categories from the forensic cases. These categories included EXIF Metadata, Extension Mismatch Detected, Web History, and USB Device Attached Data. The SOM utilised this data to cluster and identify notable files in the four different cybercrime scenarios.
SOM is advantageous because it can readily cluster data, identify common characteristics, and handle various problem classifications. The paper suggests using various cluster sizes to guarantee accurate results.
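A minimal NumPy sketch of a self-organising map, trained on metadata feature vectors and used to assign artefacts to map units, is given below; the grid size, learning-rate schedule, and neighbourhood function are illustrative assumptions rather than the MATLAB configuration used in the study.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organising map: returns a (gx, gy, D) grid of weight vectors."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.random((gx, gy, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood radius
        for x in data[rng.permutation(len(data))]:
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
            h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

def cluster_id(weights, x):
    """Assign a metadata feature vector to its best-matching SOM unit."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```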
According to Toraskar et al. (2019), the SOM mappings generated by MATLAB were clustered, demonstrating the viability of using SOM with enumerated artefacts and metadata in criminal investigations. However, the research cautions that it may be difficult to acquire a perfect mapping if the groupings are unique. Therefore, anomalies may form, resulting in the appearance of two identical clusters in peculiar regions of the map. In spite of this limitation, the research findings are trustworthy, as the selected metadata and cluster sizes lead to accurate results.
The research of Toraskar et al. (2019) contributes significantly to the application of machine learning and forensic tools in digital forensic analysis. This study acknowledges the limitations of existing tools and introduces a novel method for clustering data using SOM. However, the research should have taken into account the difficulty of obtaining flawless mappings when groupings are distinct. Nevertheless, the findings are trustworthy and provide a firm foundation for future studies.
### Chain of Custody
The research conducted by Tanner and Bruno (2019) proposes a valuable tool for visualising and organising data related to the chain of custody process in criminal investigations, as depicted in _Figure 11_. The objective of the research was to develop an instrument that satisfied the three fundamentals of timeline representation: literal, linear, and global timelines. The proposed implementation of the tool included HTML input and output in the form of tables and timelines, enabling examiners to efficiently manage criminal evidence.
Table 1: Accuracy and Cohen’s Kappa Measures for 6-class Training and Test Results by Tallón-Ballesteros and Riquelme, (2014)
Figure 10: SOM Outputs by Toraskar et al., (2019)
Walny et al., (2020) state that one of the strengths of Tanner and Bruno's (2019) research is the use of _taffy.js_ as a database library, which improves the tool's performance and reduces downtime, resulting in a system that runs smoothly. On top of that, the offline nature of the application protects the system from online malware and hackers, while the _.csv_ export feature enables the secure storage of data. The tool is also user-friendly and interactive.
However, the research by Tanner and Bruno (2019) leaves some deficiencies unaddressed, such as the tool's visual appeal. While the _vis.js_ library used to construct the timeline network has built-in behaviours, it may need to be modified to enhance the visual appeal of the user interface. In addition, the system's load time may be sluggish due to the use of nodes instead of clusters, resulting in an annoying _"loading..."_ message for users.
One of the major contributions of Tanner and Bruno's (2019) research is the creation of a tool that satisfies all three fundamentals of timeline representation, making it superior to existing models. The implementation of the tool would assist minor departments in eliminating the manual chain of custody process and storing information securely for an extended period of time without risk of modification. According to Elgohary et al., (2022), the paper acknowledges the need for supplementary enhancements, such as adding a search engine and a data grouping feature, to facilitate access to information and patterns in similar cases. Overall, the research was successful in accomplishing its goal of developing a chain of custody data visualisation tool based on time.
### Memory Forensics Using Machine Learning
Through the extraction of memory images, Mosli et al., (2016) sought to develop a model for automating the detection of malware. Specifically, the study concentrated on three key malware artefacts: imported DLLs (Dynamic Link Library), malware-modified registry keys, and API (Application Programming Interface) functions, with the intention of developing a highly accurate and user-friendly model. Through experimentation, the researchers were able to accomplish their goal, with the model achieving accuracy rates ranging from 85.7% to 96%. The study emphasises the significance of using memory images in malware detection because they permit the extraction and analysis of multiple artefacts, resulting in more precise conclusions.
While there are numerous malware detection techniques on the market, Mosli et al., (2016) contend that the proposed model is preferable due to its resistance to manipulation. The extracted information is uncommon, precise, and diverse and is capable of handling millions of global malware variants. According to Sihwail et al., (2019), it is important to note that the design of the proposed model only enables the detection of already-present malware and does not prevent malware from infiltrating the system.
To develop a method that can address potential vulnerabilities in malware design, Mosli et al. (2016) employed carefully selected feature-extraction techniques, yielding a dependable data acquisition and feature extraction process regardless of the volume of data being analysed. Scikit-learn is recommended as a tool for feature extraction due to its precision and user-friendliness, but it is not appropriate for data visualisation or string processing.
The research of Mosli et al., (2016) utilised seven training models to establish accuracy and generate clear, simple-to-analyse results. Evaluating the models with both accuracy and AUROC produced significant and conclusive results, as shown in Table 2, indicating that the proposed approach can detect malware even across large volumes of data when the right tools are used. The study accomplished its objective and demonstrated that it is possible to detect malware using machine learning with the proper tools and techniques. It suggests that future research should concentrate on identifying further memory artefacts for analysis, broadening the spectrum of data, and developing methods to detect malware before it enters the system, which can be accomplished by collecting and analysing a sufficient quantity of malware data.
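As an illustration of this kind of comparison, the sketch below trains the same seven scikit-learn classifier families listed in Table 2 on synthetic binary artefact features and reports cross-validated accuracy and AUROC. The data are randomly generated stand-ins, not the authors' memory-dump features.

```python
# Sketch: benchmarking several classifiers on binary "artefact present" features,
# reporting both accuracy and AUROC, in the spirit of the Mosli et al. comparison.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 40)).astype(float)   # e.g. imported DLL / API flags
y = (X[:, :6].sum(axis=1) > 3).astype(int)              # toy "malicious" label

models = {
    "SVM": SVC(), "SGD": SGDClassifier(), "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(), "KNN": KNeighborsClassifier(),
    "BernoulliNB": BernoulliNB(), "MultinomialNB": MultinomialNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:15s} accuracy={acc:.3f}  AUROC={auc:.3f}")
```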
### Malware Classification using Feature Engineering
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Classifiers & Registry & DLLs & APIs \\ \hline SVM & 94.4 & 88.7 & 92.3 \\ \hline SGD & 96 & 87.8 & 93 \\ \hline Random Forest & 94.9 & 90.5 & 91.5 \\ \hline Decision Tree & 94.9 & 88.7 & 90.7 \\ \hline KNN & 93.9 & 89 & 90.7 \\ \hline BernoulliNB & 93.4 & 89.6 & 89.2 \\ \hline MultinomialNB & 92.9 & 85.7 & 89.7 \\ \hline \end{tabular}
\end{table}
Table 2: Summary of Accuracy by (Mosli et al., 2016)
Figure 11: Vis.js library components by (Tanner and Bruno, 2019)
As depicted in Figure 12, Lashkari et al., (2021) presented VolMemLyzer, a digital instrument that enables memory analysis of live malware infections to extract feature sets for characterising malware. The research contributes significantly to the field of digital forensics by addressing multiple tasks, including malware extraction, memory dump analysis, feature extraction, feature ranking, and machine learning classification of both benign and malicious samples.
One of the research's strengths is its emphasis on the importance of memory analysis tools in identifying the specific areas affected or compromised by malware, thereby guiding digital forensics analysts as to where to concentrate their examinations. Further to this, the study acknowledges the limitations of classifiers used for analysis and classification, which can memorise training samples and produce incorrect results. To circumvent this restriction, the researchers added 7% noise to the sampled data, and by increasing the memory dump size by 100%, they were able to acquire 1900 samples in the dataset through Weka.
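The noise-injection step can be pictured with the short sketch below, which flips roughly 7% of binary labels in a 1,900-sample set. The paper does not spell out exactly how the noise was applied, so treating it as label flipping is an assumption made purely for illustration.

```python
# Sketch: injecting ~7% noise into a training set to discourage classifiers from
# memorising samples. Interpreting "noise" as random label flips is an assumption.
import numpy as np

def add_label_noise(y, fraction=0.07, seed=0):
    """Flip a random `fraction` of binary labels (0 = benign, 1 = malicious)."""
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    flip = rng.choice(len(y), size=int(round(fraction * len(y))), replace=False)
    y_noisy[flip] = 1 - y_noisy[flip]
    return y_noisy

y = np.array([0, 1] * 950)          # 1,900 samples, matching the reported dataset size
y_noisy = add_label_noise(y)
print((y != y_noisy).mean())        # approximately 0.07
```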
The research further emphasises the significance of employing multiple classifiers to achieve a more precise classification of malware families. Various classifiers, such as random forests, k-nearest neighbour, decision trees, and Adaboost, were used to identify benign samples during binary classification. According to San, Thwin, and Htun, (2019), it was discovered that Random Forest and k-nearest neighbour classifiers were more effective at classifying malware families. This is an important contribution to the discipline because it emphasises the significance of using the most effective classifiers for accurate classification.
Nonetheless, the Lashkari et al., (2021) study has some limitations. The researchers did not account for the prospect of receiving fewer memory dumps for analysis after malware execution, necessitating a 100% increase in samples to obtain 1900 samples in the Weka dataset. Similarly, the research did not address the difficulty posed by contemporary malware, which employs techniques such as process hollowing to avoid detection and analysis. Future research can therefore explore methods to detect and analyse modern malware that employs process hollowing techniques and address these limitations.
Last but not least, the research conducted by Lashkari et al., (2021) is a significant contribution to the field of malware analysis, providing insight into the importance of memory analysis tools, the limitations of classifiers used for analysis and classification, and the necessity of using multiple classifiers to achieve more accurate classification.
### Android Device Forensic Handling Insights
The paper _"An Overview on Handling Anti-Forensic Issues in Android Devices Using Forensic Automator Tool"_ by Bhushan and Florance (2022) critically addresses digital forensic challenges, especially in Android devices, through a forensic automator tool. This review assesses its structure, approach, contributions, and limitations. Structured with a concise abstract, a comprehensive introduction to mobile forensics, and a focus on anti-forensic techniques, the paper effectively establishes its primary objective. It delves deeply into these challenges, culminating in a discussion on the proposed solution and future research.
The methodology showcases a thorough analysis of anti-forensic methods, highlighting mobile forensics evolution and the sophistication of these techniques. The innovative use of a machine learning-based forensic automator tool, particularly the Support Vector Machine (SVM), as depicted in Figure 13, used for file encoding detection, underscores the paper's technical depth.
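To illustrate the general shape of such a detector, the sketch below trains a small SVM to separate plain text from Base64-encoded content using byte-value histograms. The feature choice and the toy corpus are assumptions for demonstration, not the actual design used by Bhushan and Florance (2022).

```python
# Sketch: an SVM that distinguishes plain text from Base64-encoded content using
# byte-value histograms. Features and toy corpus are illustrative assumptions.
import base64
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def byte_histogram(data: bytes) -> np.ndarray:
    """Normalised 256-bin histogram of byte values."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

plain = [b"suspect chat log from 2022", b"meeting notes and contacts", b"plain ascii evidence"]
encoded = [base64.b64encode(p) for p in plain]

X = np.array([byte_histogram(b) for b in plain + encoded])
y = np.array([0] * len(plain) + [1] * len(encoded))   # 0 = plain, 1 = encoded

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.predict([byte_histogram(base64.b64encode(b"new artefact"))]))  # expect [1]
```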
Significantly, the paper identifies various anti-forensic techniques and their impacts, proposing a machine learning application to automate file encoding detection and decoding, which is a major advancement in forensic investigations.
Practically, the tool offers considerable implications for forensic investigators, streamlining the detection and decoding process on Android devices. The paper also provides an in-depth understanding of anti-forensic methods, which is crucial for developing stronger forensic tools. However, the paper could explore the tool's limitations and real-world application challenges more deeply, like the machine learning model's accuracy and reliability. Inclusion of case studies or practical examples would enhance the tool's demonstrated effectiveness.
Finally, future research suggestions include integrating this tool with other forensic methods and expanding its capabilities to address a broader range of anti-forensic techniques, offering promising avenues for further exploration in the field.

Figure 12: Proposed Model by Lashkari et al., (2021)

Figure 13: The SVM Hyperplane by Bhushan and Florance (2022)
### Cybersecurity Tactics: Windows Artifact Analysis
The research paper titled _"Detecting Adversary using Windows Digital Artefacts,"_ authored by Liew and Ikeda (2019), investigates the detection of malicious behaviours linked with Advanced Persistent Threats (APTs) through the analysis of Windows digital artefacts as illustrated in _Figure 14_. This review critically assesses the paper's methodology, results, and overall contribution to the field of cybersecurity, underpinned by relevant academic literature.
The authors advocate a novel technique for identifying APTs, independent of third-party sensors. Central to their approach are Windows operating system artefacts, especially the Application Compatibility Cache (Shimcache). A key innovation is their algorithm designed to estimate the execution times of files, a vital feature considering Shimcache does not inherently provide this data. This method reflects the growing trend in cybersecurity research that emphasises the importance of harnessing system-generated data for threat detection (Berlin et al., 2015; Carvey, 2018).
The paper presents an interval estimation algorithm for determining file execution times, seamlessly integrating this with machine learning strategies to forge the XTEC system. This blended approach, merging rule-based analysis with machine learning, is gaining recognition for its effectiveness within the cybersecurity sphere (Virvilis and Gritzalis, 2013). The employment of Random Forest classifiers, lauded for their robustness in varied settings, lends further weight to their methodology (Breiman, 2001).
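The interval-estimation idea can be sketched as follows: given Shimcache's (approximate) insertion ordering, an entry whose execution time is unknown can be bounded between the nearest entries whose times are known from other artefacts. The simplified Python sketch below illustrates that general idea only; it is not the authors' XTEC algorithm, and the cache contents are invented.

```python
# Illustrative sketch of interval estimation over an ordered cache. Shimcache preserves
# (roughly) insertion order but not execution times; here we bound each unknown entry's
# execution time between the nearest entries whose times are known from other sources.
from datetime import datetime

# (path, known_execution_time or None), ordered from oldest to most recent insertion.
cache = [
    ("C:\\Windows\\calc.exe", datetime(2019, 3, 1, 9, 0)),
    ("C:\\Temp\\dropper.exe", None),
    ("C:\\Temp\\payload.exe", None),
    ("C:\\Windows\\notepad.exe", datetime(2019, 3, 1, 17, 30)),
]

def estimate_intervals(entries):
    intervals = []
    for i, (path, ts) in enumerate(entries):
        if ts is not None:
            intervals.append((path, ts, ts))
            continue
        before = next((t for _, t in reversed(entries[:i]) if t), None)
        after = next((t for _, t in entries[i + 1:] if t), None)
        intervals.append((path, before, after))
    return intervals

for path, lo, hi in estimate_intervals(cache):
    print(f"{path}: executed between {lo} and {hi}")
```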
Included in the paper is a real-world case study, serving to authenticate the efficacy of the XTEC system. Such practical testing is pivotal for demonstrating the system's real-life applicability, resonating with the need for empirical validation in cybersecurity research (Alshamrani et al., 2019). However, the case study's specifics are somewhat constrained due to confidentiality concerns, somewhat limiting the ability to thoroughly gauge the system's effectiveness across varied scenarios.
The development of a cutting-edge algorithm for estimating file execution times is the research's primary contribution, and then comes the creation of a detection system that is independent of outside surveillance tools. This advancement holds significant value in the realms of digital forensics and incident response, bridging existing gaps in these fields (Tankard, 2011; Virvilis and Gritzalis, 2013). The authors further outline prospective avenues, including enhanced data collection methods for improved model performance and advocating for the sharing of public data, underscoring a sustained commitment to propelling the field forward.
Despite the paper's pioneering approach and significant contributions to the field of cybersecurity, several areas are identified where enhancements could be beneficial, such as an expanded evaluation of the algorithm's performance and its limitations in a broader range of scenarios. This would help to confirm its applicability and effectiveness in a variety of contexts. The methodology's reliance on Windows-specific artefacts could potentially limit its effectiveness in environments where multiple operating systems are in use. Broadening its scope to include other systems could enhance its utility. Although the included case study is valuable, it falls short in providing the depth of detail needed for a more compelling validation of the system's effectiveness. Enriching the case study with more comprehensive data would strengthen the evidence for the system's capabilities.
### Machine Learning's Role in Locked Shields Defence
The research paper titled _"Machine Learning-based Detection of C&C Channels with a Focus on the Locked Shields Cyber Defence Exercise,"_ authored by Kanzig et al. (2019) and Ghanem, (2022), introduces a system tailored to identify Command and Control (C&C) channels within network traffic, with particular emphasis on the Locked Shields cyber defence exercise, as seen in _Figure 15_.
The paper tackles a critical issue in cybersecurity: the detection of C&C channels, vital for the operation of botnets. Its distinctiveness stems from its application to the Locked Shields exercise, NATO's largest live-fire cyber defence exercise. This research is not only pertinent but also strives to offer a solution that is both efficient and scalable. By leveraging machine learning, the study aims to enhance the detection of malicious traffic, a key challenge in the field of cybersecurity.
The authors opt for a machine learning approach, specifically utilising a random forest classifier. This method is particularly well-suited given the nature of the data and the overarching need for efficiency and scalability in such systems. The model is trained on data derived from past cyber attacks, particularly from the 2017 and 2018 Locked Shields exercises. While this training approach is practical, there may be concerns about its ability to generalise across various network environments or against novel attack vectors that have not previously been encountered.

Figure 14: The architecture of XTEC by Liew and Ikeda (2019)
The paper excels in its comprehensive data analysis and meticulous feature selection. The authors have compiled a robust dataset, including traffic captures and Red Team logs, which provides a solid base for their machine learning model. The process of feature extraction and selection is detailed, focusing on computational efficiency. However, a deeper discussion of the reasoning behind the selection of specific features could further strengthen the paper.
In terms of results and evaluation, the system demonstrates high accuracy, claiming 99% precision and over 90% recall in near real-time. These impressive results underscore the model's effectiveness in the context of the Locked Shields exercise. However, a more thorough comparison with other existing systems would improve the paper and provide a better contextual understanding of its performance.
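The sketch below mirrors this evaluation setup in spirit: a random forest is trained on flow-level features from one exercise year and scored on the next using precision and recall. The feature names and the synthetic data are placeholders, not the paper's actual feature set.

```python
# Sketch: training a random forest on flow-level features from one exercise year and
# evaluating precision/recall on the next, mirroring the Locked Shields setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(1)

def synthetic_flows(n):
    # [duration_s, bytes_out, bytes_in, packets, dst_port] -- illustrative stand-ins
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.8 * X[:, 1] > 1.0).astype(int)   # toy "C&C channel" rule
    return X, y

X_2017, y_2017 = synthetic_flows(5000)   # training: 2017 exercise traffic
X_2018, y_2018 = synthetic_flows(2000)   # evaluation: 2018 exercise traffic

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_2017, y_2017)
pred = clf.predict(X_2018)
print("precision:", precision_score(y_2018, pred))
print("recall:   ", recall_score(y_2018, pred))
```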
The practical application of this system in the context of Locked Shields is clearly articulated and well-supported. The authors also discuss the potential deployment of the system in similar exercises or real-world scenarios. However, a primary limitation noted is the model's potential inflexibility in adapting to different network environments or to new types of C&C communication methods not represented in the training data. It is recommended that future iterations of the dataset be expanded to include a broader range of attack scenarios and network environments, as doing so would enhance the capabilities of the model. Moreover, a thorough investigation into alternative machine learning approaches, such as deep learning, could potentially unveil substantial insights concerning the complexities of C&C traffic patterns.
### Meta-Heuristic JPEG Reassembly in Forensics
The paper _"A Meta-Heuristic Method for Reassembling Bifragmented Intertwined JPEG Image Files in Digital Forensic Investigation"_ by Ali et al., (2023) presents a novel advancement in digital forensics with the introduction of the Meta-Heuristic Reassemble Images (MHRI) method, as seen in _Figure 16_. This method, designed for the recovery of fragmented JPEG images, is a testament to the innovative approaches evolving in digital forensic investigations. The MHRI method is unique because it uses restart markers, the Coherence of Euclidean Distance metric (CoEDm), and a genetic algorithm with a cost function. This creates a new way to solve the difficult problem of putting back together two broken JPEG images that are intertwined.
The efficacy of the MHRI method is demonstrated through extensive testing on both public and private datasets. Remarkably, it achieved a complete recovery of all bifragmented intertwined JPEG images and a 48.4% recovery rate of all JPEG images in the test sets, which is a significant improvement over existing methods such as RXmK, mK, Revlt, and XmK. This superior performance highlights the method's potential to enhance the accuracy and efficiency of digital forensic investigations. Also, the paper provides a thorough explanation of each component of the MHRI method, offering valuable insights into the complexities and intricacies involved in the process.
However, despite these strengths, the paper by Ali et al., (2023) also reveals certain weaknesses in the MHRI method. One notable concern is the computational complexity of the method. The use of genetic algorithms and multiple metrics might necessitate substantial computational resources, yet this aspect is not adequately addressed in the paper. Furthermore, the MHRI method's scope of applicability appears limited, as it is specifically tailored for bifragmented intertwined JPEG images in linear order. This specialisation may restrict its application in a broader range of forensic scenarios, particularly for different types of fragmented files or formats. The paper also lacks an analysis of the method's application in real-world forensic cases, which is crucial for evaluating its practical effectiveness and reliability. Moreover, the usability and accessibility of the MHRI method for forensic practitioners are not discussed, raising questions about its ease of use given the inherent complexity of the approach.

Figure 16: The flowchart of the genetic algorithm by Ali et al., (2023)

Figure 15: Locked Shields Environment Overview by Kanzig et al., (2019).
Looking ahead, future research should aim to address these limitations. Efforts could be directed towards optimising the computational efficiency and speed of the MHRI method, thereby enhancing its practicality for real-time forensic investigations. Expanding the scope of the method to include a wider range of fragmented files and different file formats would significantly increase its utility in the field of digital forensics. Furthermore, applying the MHRI method to real-world forensic scenarios would provide valuable insights into its effectiveness and highlight areas for improvement in practical settings. Developing a more user-friendly interface and providing comprehensive training materials could also facilitate the adoption of the method by forensic professionals.
### Memetic Algorithms in Digital Forensics
_"Enhancing Digital Forensic Analysis Using Memetic Algorithm Feature Selection Method for Document Clustering"_ by Al-Jadir et al. (2018) is a research paper that goes into great detail about how to make digital forensic analysis better by using new feature selection methods in document clustering. An illustration of the system architecture is displayed in Figure 17. The paper's significance lies in addressing the challenges of managing the ever-increasing volume of digital documents in criminal investigations, where efficient and accurate clustering of crime-related documents is crucial.
The authors propose a Memetic Algorithm Feature Selection (MAFS) approach, combining a Genetic Algorithm-based wrapper feature selection with the Relief-F filter method. This hybrid approach is applied to enhance two clustering algorithms - k-means and Spherical k-means (Spk) - and is tested on crime reports, criminal news, and benchmark text datasets. The performance of these algorithms is evaluated based on the clustering outcomes before and after applying the MAFS method. This approach's effectiveness is demonstrated through significant improvements in the performance of both k-means and Spk algorithms after applying MAFS.
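A highly simplified sketch of the wrapper idea is shown below: candidate feature subsets are scored by the silhouette of the k-means clustering they produce, and the best subset is kept. A plain random search stands in for the memetic (GA plus Relief-F) optimiser, and synthetic blob data replace the text features, so this illustrates only the evaluation loop, not MAFS itself.

```python
# Simplified sketch of wrapper-style feature selection for clustering: feature subsets
# are scored by the silhouette of the resulting k-means partition. Random search is a
# stand-in for the paper's memetic algorithm; data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def subset_score(mask):
    if mask.sum() < 2:
        return -2.0
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, mask])
    return silhouette_score(X[:, mask], labels)

best_mask, best = None, -2.0
for _ in range(50):                              # stand-in for GA generations
    mask = rng.random(X.shape[1]) < 0.5
    score = subset_score(mask)
    if score > best:
        best_mask, best = mask, score

print("selected features:", np.flatnonzero(best_mask), "silhouette:", round(best, 3))
```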
The paper's strengths include a well-structured methodology, a clear explanation of the proposed MAFS method, and thorough experimental results. The use of both crime-related and benchmark datasets ensures robust validation of the proposed method, showing its applicability and efficiency in real-world scenarios (Al-Jadir et al., 2018). However, the research could be strengthened by exploring the scalability of the proposed method, especially given the ever-increasing volume of data in digital forensics (Casey, 2011; Quick and Choo, 2014). At the same time, while the paper focuses on k-means and Spk algorithms, exploring the MAFS method's compatibility with other clustering algorithms could provide a more comprehensive understanding of its utility.
This research contributes significantly to the field of digital forensic analysis, particularly in efficiently clustering large volumes of crime-related documents. The proposed MAFS method's ability to improve clustering accuracy is of great importance for forensic investigators, aiding in quicker and more precise analysis of digital evidence. Future research could focus on the scalability of the MAFS method and its adaptability with other clustering algorithms, potentially broadening its applicability in various domains beyond digital forensics.
### Fronesis: Pioneering Cyber-Attack Early Detection
The research paper titled _"Fronesis: Digital Forensics-Based Early Detection of Ongoing Cyber-Attacks,"_ authored by Dimitriadis et al. (2023), presents a novel approach to detecting cyber-attacks in their early stages. presents a novel approach to detecting cyber-attacks in their early stages. The proposed ontology approach is displayed in Figure 18, and it is significant in the field of cybersecurity, as traditional detection methods relying on known signatures or machine learning often fall short against increasingly sophisticated cyberattacks. The authors rightly identify the gap in early detection of cyber-attacks, a crucial aspect considering that in 2020, only 59% of security incidents were detected by organisations themselves, with an adversary's median dwell time in a compromised system being 24 days (Mandiant, 2020). This context underscores the urgency of methods like Fronesis.
Its methodological strength lies in combining ontological reasoning with the MITRE ATT&CK framework and the Cyber Kill Chain (CKC) model, together with digital evidence collected from monitored systems. The application of rule-based reasoning to the Fronesis ontology for detecting cyber-attacks marks a significant advancement over traditional methods. By focusing on digital artefacts, which include both volatile data such as processes and non-volatile data like emails, Fronesis offers a more comprehensive detection mechanism. Section IV of the paper details how the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL) are used to construct the Fronesis ontology and implement its rule-based reasoning.
Figure 17: System Architecture by Al-Jadir et al. (2018)
The practical applicability of Fronesis is illustrated through an email phishing attack scenario. Phishing attacks, particularly prevalent in business environments, serve as a pertinent example to demonstrate the effectiveness of Fronesis in identifying and responding to real-world cyber threats. This example is instrumental in demonstrating the real-world applicability of Fronesis, emphasising its potential for improving cybersecurity defences.
The novelty of Fronesis lies in its multi-step methodology, combining the CKC model and MITRE ATT&CK, to reconstruct and detect a cyberattack based on digital artifacts. This approach surpasses the limitations of the CKC model by defining techniques for each CKC phase and using a wider array of digital artefacts for detection, leading to better results. Importantly, Fronesis focuses on detecting the cyberattack itself rather than just identifying specific techniques, thereby offering a more holistic and effective approach to cybersecurity.
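To give a flavour of rule-based reasoning over artefacts, the sketch below maps a few observed artefacts to Cyber Kill Chain phases with hand-written rules. Fronesis expresses such knowledge in OWL/SWRL rather than Python, and the artefacts and rules here are invented examples.

```python
# Minimal sketch of rule-based reasoning over observed artefacts, mapping evidence to
# Cyber Kill Chain phases. Plain Python dictionaries are used purely for illustration;
# Fronesis itself uses OWL/SWRL, and these rules are invented examples.
observed = [
    {"type": "email", "has_attachment": True, "sender_external": True},
    {"type": "process", "name": "winword.exe", "spawned": "powershell.exe"},
    {"type": "registry", "key": r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run"},
]

rules = [
    ("Delivery",    lambda a: a["type"] == "email" and a.get("has_attachment")),
    ("Execution",   lambda a: a["type"] == "process" and a.get("spawned") == "powershell.exe"),
    ("Persistence", lambda a: a["type"] == "registry" and a.get("key", "").endswith("\\Run")),
]

matched = {phase for artefact in observed for phase, rule in rules if rule(artefact)}
print("kill-chain phases with supporting evidence:", sorted(matched))
```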
While Fronesis represents a significant stride in cyberattack detection, there are areas for further development. The reliance on digital artefacts, while comprehensive, also introduces the challenge of managing and analysing vast amounts of data. In addition, the paper could benefit from a broader evaluation of Fronesis across diverse attack scenarios beyond email phishing to demonstrate its versatility and robustness.
### CQSS-GA-SVM: A New Era in Audio Forensics
The authors, Su et al., (2023), introduce a method for detecting and locating audio copy-move forgeries, utilising Constant Q Spectral Sketches (CQSS) combined with a custom genetic algorithm (GA) and support vector machine (SVM), as depicted in Figure 19. This method aims to address the challenges in blind audio forensics, particularly in identifying forgeries derived from the same audio recording.
The integration of CQSS and GA-SVM represents a significant advancement in the field of audio forensics. The authors effectively extract CQSS features and then optimise these features using a custom GA combined with SVM. This approach not only enhances the detection accuracy but also automates the feature optimisation process, which is a notable contribution to the domain. The proposed method demonstrates high robustness against various post-processing-based anti-forensics attacks. This capability is crucial for practical applications in audio forensics, where tampered audio is often subjected to various manipulations to conceal the forgery.
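The sketch below illustrates the overall pipeline shape: constant-Q features are extracted with `librosa` and fed to an SVM. The per-bin averaging used here is a simple stand-in for the paper's CQSS features, the GA-based optimisation is omitted, and the toy signals are random noise, so this is only a structural illustration.

```python
# Sketch: constant-Q-based features for audio segments, fed to an SVM. The feature
# reduction (per-bin log-magnitude means) is a stand-in for the paper's CQSS features,
# and the GA-driven optimisation is omitted.
import numpy as np
import librosa
from sklearn.svm import SVC

def cq_sketch(y, sr):
    C = np.abs(librosa.cqt(y, sr=sr))            # constant-Q magnitude spectrogram
    return np.log1p(C).mean(axis=1)              # one value per frequency bin

sr = 22050
rng = np.random.default_rng(0)
originals = [rng.standard_normal(sr * 2) for _ in range(8)]
forged = [np.concatenate([y, y[: sr // 2]]) for y in originals]   # naive copy-move

X = np.array([cq_sketch(y, sr) for y in originals + forged])
labels = np.array([0] * len(originals) + [1] * len(forged))       # 1 = contains forgery

clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```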
The methodology shows adaptability to changes in duplicated segment duration, training set size, recording length, and forgery type. This flexibility is beneficial for forensic experts who deal with a wide range of audio forgery scenarios in real-world cases. The experiments conducted to evaluate the method's performance were thorough. The authors compared their approach against state-of-the-art methods, demonstrating its superiority in terms of accuracy and robustness. The use of real-world datasets for validation adds credibility to the results.
The statistical analysis provided in the paper offers a clear understanding of how CQSS features can capture subtle changes in forged recordings. This aspect of the research is well explained and contributes significantly to the validation of the proposed method. While the authors mention the method's efficiency, there is limited discussion on the computational complexity and actual processing time, which are critical factors in practical applications. Future work could focus on optimising the algorithm for faster processing without compromising accuracy. The paper primarily focuses on English and Chinese datasets. It would be beneficial to test the method's effectiveness across a broader range of languages and accents to ensure its applicability in diverse forensic scenarios.

Figure 19: Framework of the proposed CQSS-GA-SVM for the audio CMFD by Su et al., (2023)

Figure 18: OntoGrid rendering of the proposed ontology by Dimitriadis et al., (2023)
As anti-forensics techniques continue to evolve, it is essential to regularly update and test the proposed method against newer forms of audio tampering. Continuous development in this area will maintain the relevance and effectiveness of the method.
### Applying AI to Image Forensics
The paper titled _"The Application of Intelligent Systems to Digital Image Forensics"_ by Lai and Chen (2009), which was presented at the Eighth International Conference on Machine Learning and Cybernetics, is a deep and thorough look at how genetic algorithms (GA) and support vector machines (SVM) can be used to find the camera that took a digital picture. This innovative research method, highlighted in the Research Flow Chart as seen in _Figure 20_, uses image features to determine the origin of digital images. The author has effectively employed genetic algorithms to automate the search for the most optimal features, along with the use of support vector machines for classifying these features. This approach is particularly significant in today's digital age, where the manipulation of digital images is a common challenge, thus calling into question the reliability of such images.
The results of the study underscore the effectiveness of the genetic algorithm in selecting fewer yet more pertinent features, achieving high rates of accuracy in identifying the source camera. This method marks a notable improvement over traditional techniques that often depend on metadata, which can be easily altered, or are less effective with high-end camera sensors. The findings are of considerable importance, offering a dependable method for digital image forensics, a crucial tool in various legal and security contexts.
The paper is commendable for addressing a relevant and timely issue in the field of digital forensics. The use of GA and SVM for image source identification introduces a novel approach to overcoming existing challenges in this domain. Moreover, the experimental design of the study, which includes the use of multiple cameras and a range of conditions like image resizing and the addition of post-processing graphics, adds a layer of robustness to the research. However, the study could benefit from an expansion of the experimental scope, incorporating a broader array of camera models and more diverse image manipulations.
The practical implications of this research are significant, especially in law enforcement and legal proceedings, where the authenticity of digital images is frequently a point of contention. The ability to trace an image back to its source camera accurately could revolutionise these fields. Nonetheless, the paper would benefit from discussing the limitations of this approach, particularly in terms of computational complexity and its efficacy against advanced image manipulation techniques. Future research could focus on integrating this method with other digital forensic techniques to enhance overall reliability.
### Deep Learning Techniques
The publication _"A Survey on Deep Learning for Big Data"_ by Zhang et al., (2018) covers many deep learning models and their use in big data. The authors divided the study into parts on common deep learning models like Stacked Auto-Encoders (SAE), Deep Belief Networks (DBN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks.
The architecture of the multi-modal deep learning model is depicted in _Figure 21_. The paper also discusses adapting these models to handle big data difficulties such as huge data volumes, heterogeneous data, real-time data processing, and low-quality data analysis. This structure shows how deep learning techniques for massive data management have evolved and adapted. A detailed exploration of deep learning architectures is the paper's strength. Each model is well explained, including its mechanics and applicability for certain big data applications. This degree of information helps readers comprehend how these models can be used in large-scale data situations.
However, the paper might use more case studies or real-world applications to demonstrate these models' practicality. While the theoretical and technical elements are thoroughly covered, practical examples would have helped explain how these models work in real life.
The work is well organised and walks the reader through deep learning with big data. The authors made difficult models understandable, yet some portions may require deep learning knowledge. The limitations and problems of these approaches when applied to big data seem underexplored. The research addresses issues like low-quality data, but a more in-depth analysis of these models' weaknesses would provide a more balanced picture.
### Deep Learning in Digital Forensic File Analysis
The research paper _"Digital Forensic Analysis of Files Using Deep Learning"_ by Mohammed et al., (2020) from Khalifa University offers an innovative approach to
digital forensics. It focuses on the identification and analysis of file types within digital evidence, utilising deep learning to address the limitations of traditional forensic methods. _Figure 22_ shows all the different file types used in the research.
The paper introduces a deep learning-based model for file type identification, marking a significant stride in addressing the challenges inherent in conventional forensic practices. Its innovative approach is particularly effective in accurately predicting corrupted files, a notable advancement over existing methods that often falter with such file types. This innovation and relevance are crucial in a field where accuracy and the ability to handle complex data are paramount.
In terms of methodology and technical rigor, the authors employed Convolutional Neural Networks (CNN), a choice backed by CNN's established efficacy in pattern recognition tasks. The structured methodology, encompassing dataset preparation, model training, and validation, contributes to the robustness of their approach. The inclusion of a diverse range of file types, such as AVI, MP4, JPG, BMP, and PNG, enhances the model's applicability across various digital formats.
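A minimal sketch of this kind of model is shown below: a 1-D convolutional network over raw byte values, built with `tf.keras`. The layer sizes, fragment length, and random stand-in data are assumptions for illustration and do not reproduce the architecture of Mohammed et al. (2020).

```python
# Sketch of a small 1-D CNN over raw file bytes for file-type identification.
# Architecture and hyper-parameters are illustrative, not the paper's exact model.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 5          # e.g. AVI, MP4, JPG, BMP, PNG
FRAGMENT_LEN = 4096      # bytes per file fragment fed to the network

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAGMENT_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(input_dim=256, output_dim=16),   # one embedding per byte value
    tf.keras.layers.Conv1D(64, kernel_size=27, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, kernel_size=9, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Toy stand-in data: real training would use byte fragments carved from labelled files.
X = np.random.randint(0, 256, size=(64, FRAGMENT_LEN))
y = np.random.randint(0, NUM_CLASSES, size=(64,))
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```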
The paper also presents a comprehensive validation of the model, demonstrating high accuracy rates in file type identification. These results underscore the effectiveness of the model. However, a more nuanced discussion regarding the model's limitations and the potential for false positives or negatives would offer a more rounded perspective on its real-world applicability. A thorough review of existing methodologies in digital forensics is also conducted, highlighting the inadequacies of traditional approaches and underscoring the necessity for a deep learning-based solution. This comparative analysis solidifies the argument for the proposed methodology and its potential to revolutionise digital forensic practices.
Lastly, the paper touches on the practical implications of this research for law enforcement and digital forensic experts. The prospect of deploying the model on portable devices for on-site examination opens up exciting possibilities for future work in the field. However, an exploration into the practical challenges of field implementation would have added valuable insights into the potential real-world impact of this research.
### Deep Learning in ImageNet Classification
The paper _"ImageNet Classification with Deep Convolutional Neural Networks,"_ researched by Krizhevsky et al., (2017), has made a profound impact in the domains of computer vision and deep learning. This critical review examines the paper's methodologies, innovations, results, and broader implications in the field. Krizhevsky et al., (2017) research introduces a deep convolutional neural network (CNN) that sets new benchmarks in processing large-scale image data. An illustration of the proposed CNN architecture by Krizhevsky et al., (2017) is depicted in _Figure 23_.
Figure 22: Different File Type used in the Research by Mohammed et al., (2020)
Their network was trained on the ImageNet dataset and has five convolutional layers, followed by three fully connected layers. It was revolutionary at its inception and has since influenced a multitude of studies in deep learning. Rectified Linear Units (ReLUs) were a key innovation in their method because they made training deep networks faster than with traditional neuron models (Nair and Hinton, 2010). Also, using dropout as a regularisation method was a new idea at the time. It worked well to stop overfitting and showed how important it is for training big neural networks (Hinton et al., 2012).
The results of their work were nothing short of remarkable, with the network achieving a top-5 error rate of 15.3% in the ILSVRC-2012 competition, significantly outperforming the second-best entry. This achievement not only underscored the capabilities of deep learning in computer vision but also set a new standard for image classification tasks.
The paper's impact extends beyond its immediate results. It has demonstrated the feasibility of training deep neural networks on large-scale image datasets, inspiring a broad spectrum of research into CNNs and their applications across various domains (LeCun, Bengio and Hinton, 2015). This work established a foundational benchmark in image classification and catalysed further exploration in the field of deep learning.
However, the approach is not without its limitations. The substantial computational resources required for such deep learning models may limit their accessibility and practicality in certain contexts. Moreover, the opaque nature of deep neural networks, as exemplified by the network of Krizhevsky et al., (2017), poses challenges in terms of interpretability and understanding the decision-making processes within these models.
### Impact of Physical Attacks on AI Systems
The research paper _"Robust Physical-World Attacks on Deep Learning Visual Classification"_ by Eykholt et al., (2018), presented at the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, is a pioneering study in the field of AI security. A sample of physical adversarial examples against LISA-CNN and GTSRB-CNN is depicted in Figure 24.
The paper explores the vulnerability of deep neural networks to adversarial examples in real-world settings, focusing on safety-critical applications like road sign classification in autonomous driving. It introduces the Robust Physical Perturbations (RP2) method, which generates adversarial perturbations that can mislead classifiers under various physical conditions, a pioneering approach for autonomous driving and other real-world DNN applications that addresses the gap in understanding adversarial examples in physical environments. This innovation is not only relevant but essential in today's rapidly evolving field of artificial intelligence.
The methodology employed in the paper further strengthens its impact. The authors use a two-stage evaluation, comprising both lab and field tests, to assess the effectiveness of adversarial attacks in real-world scenarios. This comprehensive and rigorous approach to testing underpins the credibility of their findings, demonstrating the robustness of RP2 in a variety of conditions. Such methodological rigour is commendable and adds significant value to the research. The practical implications of this research are profound, particularly for the safety and reliability of autonomous vehicles. The paper not only demonstrates the potential for adversarial attacks to cause misclassification of traffic signs but also highlights the urgent need for more resilient DNNs in safety-critical systems. This aspect of the research is particularly relevant in the context of the increasing integration of AI technologies in everyday life.
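For context, the sketch below shows the core mechanics of a basic digital adversarial attack, the fast gradient sign method (FGSM), in TensorFlow. RP2 goes well beyond this, optimising sticker-like perturbations that remain effective across distances, angles, and lighting, so the code illustrates only the underlying idea of perturbing inputs along the loss gradient.

```python
# Sketch of a basic adversarial perturbation via the fast gradient sign method (FGSM).
# This digital-only example shows the core idea; it is not the RP2 optimisation itself.
import tensorflow as tf

def fgsm_perturb(model, image, label, epsilon=0.03):
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)   # step along the gradient sign
    return tf.clip_by_value(adversarial, 0.0, 1.0)

# Usage (assuming `classifier` is any Keras image classifier and `x` a batch of images
# scaled to [0, 1]):
#   x_adv = fgsm_perturb(classifier, x, true_labels)
```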
However, the research is not without its limitations. The scope of the study is somewhat narrow, focusing primarily on road sign classification. This limited focus presents an opportunity for future research to explore the application of RP2 in other domains where DNNs are used in physical environments, such as facial recognition systems, medical imaging technologies, and digital forensics investigations.
Moreover, the generalizability of the results could be a point of concern. The specific nature of the adversarial perturbations, such as stickers on stop signs, might limit the applicability of the findings to other scenarios. Future studies could investigate a broader range of physical-world attacks to enhance the universality and relevance of these insights in different contexts.

Figure 24: A sample of physical adversarial examples against LISA-CNN and GTSRB-CNN

Figure 23: An illustration of the proposed CNN architecture by Krizhevsky et al., (2017)
## 3 Comparison with Existing Literature Reviews
Goni et al., (2019) present a systematic review focusing on the application of machine learning (ML) algorithms in cybersecurity and cyberforensics. Their work highlights the critical aspects of confidentiality, integrity, and validity of data within cybersecurity. They delve into various cyber threats and the foundational concepts of ML algorithms, conducting a thorough literature review to explore their application in cybersecurity systems. In 2023, Balushi, Shaker, and Kumar emphasise the enhancement of digital forensic investigations through ML in their review paper. They discuss the various challenges encountered by forensic investigators and examine how ML algorithms aid in the analysis of digital evidence. Their paper categorises ML algorithms according to their specific uses in digital forensics and also discusses the limitations inherent in these technologies. Javed et al., in their 2002 survey paper, provide an extensive introduction to different computer forensic domains and tools. They conducted a comparative analysis of forensic toolkits and shed light on the current challenges and future research directions in computer forensics, adding a significant layer of understanding to this evolving field. Rizvi et al., (2022) explore the application of artificial intelligence (AI) in network forensics. Their research paper surveys existing methodologies, identifies the challenges faced, and suggests potential future directions in this niche but crucial area of cybersecurity. Kaur, Gabrielic, and Klobucar (2023) expand the discussion to the broader role of AI in cybersecurity. Their literature review analyses current research trends in AI and projects future pathways, offering a comprehensive view of AI's evolving influence in the cybersecurity landscape.
Our research contrasts with these existing literature reviews by offering a more holistic view that integrates insights from both AI and ML in the context of modern digital forensics and incident response. We provide a detailed analysis of the interplay and complementarity between AI and ML in digital forensics and incident response, an area not extensively explored in the other papers. Our paper addresses the practical applications of AI and ML in real-world incident response scenarios, shedding light on operational challenges and potential solutions. Furthermore, we present a forward-looking perspective, discussing emerging technologies and their potential impact on digital forensics and incident response.
While the existing literature reviews contribute valuable insights into specific aspects of cybersecurity, cyber forensics, and the roles of AI and ML, our research uniquely adds to the body of knowledge by offering a comprehensive, integrated, and practical perspective on the application of AI and ML in modern digital forensics and incident response. This holistic approach not only synthesises existing knowledge but also expands the discussion by highlighting practical applications and future possibilities in the field.
## 4 Comparative Analysis
This section provides a thorough comparison of the various AI and ML applications discussed in the literature review. Numerous investigation-improving applications are now feasible as a result of the incorporation of AI and ML techniques in digital forensics. The investigation of AI and ML applications has gained prominence as digital forensics professionals seek to remain ahead of evolving cyber threats. The comparative analysis reveals the transformative potential of AI and ML applications in digital forensics. Therefore, each application area presents unique contributions and difficulties, but collectively, they pave the way for more effective, precise, and proactive investigation techniques. By leveraging AI and ML, digital forensics professionals can not only keep pace with the ever-changing cyber landscape but also foster a culture of continuous development and innovation within the field.
\begin{tabular}{p{56.9pt} p{85.4pt} p{85.4pt} p{85.4pt} p{85.4pt} p{85.4pt} p{85.4pt}} \hline \hline
**Category** & **Research Work** & **Contribution** & **Benefits** & **Drawbacks** & **Integration \& Impact** \\ \hline
**Philosophy and Machine Learning** & (Thagard, 1990) & Lays crucial groundwork for future research into the philosophical and ethical ramifications of employing machine learning. & Automates various tasks, including data collection, analysis, and reporting, yielding time and resource savings for investigators. & Biases in machine learning & Medium \\ \hline
**Digital Forensics** & (Insurance Information Institute) & Examine the ramifications of identity theft and cybercrime for individuals and organisations. & The report offers valuable insights into the scope and characteristics of identity theft and cybercrime. & & Low \\
**Data Science and Analysis** & (Cabitza, Campagner, and Basile, 2023) & The authors distinguish between 'weak' and 'strong' perspectivist approaches. & Leverages uncertain and fuzzy data to enhance model performance, generalizability, and robustness. & The absence of a uniquely defined ground truth makes validation and evaluation more complex. & Low \\ \hline
**Digital Forensics Research and Challenges** & (Quick and Choo, 2014) & Identified the challenges posed by the increasing volume of digital forensic data and its impact on the cost, complexity, and speed of digital forensic investigations. & A concise overview of the challenges posed by the increasing volume of digital forensic data. & Does not provide a detailed discussion of the technical or methodological aspects of digital forensic data analysis. & Medium \\ \hline
**Digital Forensics and Cybercrime** & (Mohammed et al., 2019) & Identify several future research challenges to address the rising volume of cybercrime in Nigeria. & Provides an overview of the challenges faced by law enforcement officers investigating cybercrime. & Its focus on the Nigerian jurisdiction limits its applicability to other countries. & Low \\
**Forensic Trace** & (Stoney and Stoney, 2015) & Propose a hybrid approach to forensic trace evidence analysis, integrating conventional and unconventional methodologies. & Concisely surveys the challenges in forensic trace evidence analysis. & Advocates a new approach to forensic trace evidence analysis. & Medium \\ \hline
**AI Automation in Digital Forensics** & (Jarrett and Cho, 2021) & Present a concise review of research on the impact of automation and AI on digital forensics. & Artificial intelligence and automation are poised to revolutionise digital forensics. & Ethical and legal considerations must be addressed before the full adoption of automation and AI in digital forensics. & Low \\ \hline
**Big Data in Digital Forensics** & (Guarino, 2013) & Concisely surveys the challenges of digital forensics posed by the increasing volume of digital data. & Suggests some practical solutions to the challenges of digital forensics that could be beneficial to practitioners. & Limited availability and compatibility of forensic tools and techniques for big data analysis are potential limitations. & Medium \\ \hline
**Online Predator Detection in Digital Forensics** & (Neyine et al., 2021) & Conducted a systematic review of machine learning-based approaches to online sexual predatory chat detection. & Offers a comprehensive overview of machine learning in detecting online sexual predatory chats. & Lacks a detailed discussion of the evaluation process used to assess the proposed machine learning algorithm. & Low \\ \hline
**Introduction to Statistical Learning** & (James et al., 2021) & Discussed the practical applications of statistical learning methods across domains. & Illustrates the material with numerous examples and exercises to facilitate readers' comprehension. & Does not specifically focus on digital forensics, readers may need to conduct further research to apply its concepts. & Low \\ \hline
**Legal and Ethical Issues in Digital Forensics** & (Losavio et al., 2018) & Exploration of unique legal issues arising from the deployment of IoT technologies in urban environments. & Highlights potential conflicts and tensions between privacy, security, and DF investigations in smart cities. & Limited legal precedents and frameworks for addressing legal challenges related to IoT, DF, and security in smart cities. & Medium \\ \hline
**Legal and Ethical Issues in Digital Forensics** & (Brkan and Bonnet, 2020) & Discuss the legal and technical challenges of complying with explainable AI requirements. & Valuable resource for those interested in the legal concerns and biases in decision-making in DF. & & Low \\
**Deep Learning and Big Data Analytics** & (Zhang et al., 2018) & Deep learning algorithms and methodologies for handling big data. & State-of-the-art deep learning research for big data, which can help readers stay up-to-date on recent advances. & Does not discuss recent deep learning developments for big data, such as GANs and NLP. & Low \\ \hline \hline \end{tabular}
## 5 Discussions
On the basis of the findings of the systematic literature review, it is strongly recommended that the digital forensics community continue to embrace and research the benefits of artificial intelligence (AI) and machine learning (ML). Significant investments in the development and implementation of sophisticated tools and applications are required to fully leverage these technologies. These resources are necessary to effectively support the growth and expansion of digital forensic technologies, processes, and procedures. Digital forensics experts should investigate the use of artificial intelligence techniques such as pattern recognition, expert systems, and knowledge representation. These techniques have the potential to significantly enhance the efficiency and effectiveness of cybercrime investigation capabilities and processes.
It is necessary to resolve a number of obstacles associated with the adoption of AI techniques in order to ensure their optimal application. Due to the vast quantities and intricacy of data generated by online activities, scalability is a crucial concern. Improving the ability of AI techniques to handle and process such large volumes of data would make them considerably more useful in digital forensics. Equally, the admissibility of AI-collected evidence in court proceedings is contingent on its reliability being established. This can be achieved by instituting standardised procedures and guidelines that facilitate the admissibility of digital evidence in legal settings.
There is a need for numerous, distinct, and extensive studies that can help address the ongoing issues and deficits in forensic examinations. These studies should concentrate on the development of more efficient and effective AI techniques capable of addressing the unique challenges faced by professionals in digital forensics. During the analysis and examination phases of the digital forensics lifecycle, it is also crucial to investigate novel AI and ML application domains. Notably, digital forensics must address two major issues: malware infection investigation and Windows registry forensics. Developing comprehensive AI-based strategies to resolve these issues will enable the detection and analysis of malware infections as well as the extraction and interpretation of pertinent information from Windows registry files.
Furthermore, it is crucial to maintain vigilance when monitoring and assessing the use of AI and ML techniques in digital forensics. This governance is required to ensure that their deployment is ethical, transparent, and respectful of data subjects' rights and privacy. Constant evaluation and assessment aid in identifying potential risks or negative outcomes associated with the use of these technologies, thereby enabling prompt mitigation measures.
The systematic literature review concludes by highlighting the importance of AI and ML in digital forensics and recommending their continued application. For AI techniques to attain their full potential in this field, it is necessary to address scalability and evidence validation concerns, conduct exhaustive research, and monitor their ethical application. Also, when it comes to computer forensics, particularly Windows-based investigations, it is imperative to address not only Windows Registry forensics but also the examination of a variety of other artefacts such as System Resource Usage Monitor (SRUM), Prefetch, AmCache, and others. While the Windows Registry is a critical component containing vital system and application settings, these other artefacts offer a wealth of information about system usage, application execution, and file activities. AI and ML techniques can be particularly advantageous in parsing, analysing, and correlating data from these diverse sources. By extending AI's application beyond Windows Registry forensics to include these artefacts, digital forensic professionals can achieve a more comprehensive understanding of system interactions and user activities, leading to more robust investigations. This holistic approach acknowledges the complex nature of Windows forensics and leverages the full spectrum of data available in Windows environments for more effective and efficient investigations. Nonetheless, robust research and collaboration within the digital forensics community are required to advance the field and overcome these obstacles.
## 6 Conclusions and Future Research Directions
Based on the exhaustive survey conducted in this study, a strong recommendation emerges for the integration of artificial intelligence (AI) and machine learning (ML) methodologies in ongoing and future digital forensics research. These techniques hold significant potential for enhancing investigative precision and efficacy, particularly in addressing the escalating prevalence of cybercrime. Nonetheless, the issue of data validity demands careful attention, especially when dealing with diverse data from various individuals, devices, platforms, and cultural contexts. Achieving higher success rates necessitates the formulation of precise objectives, utilising AI systems and data sources tailored to specific regions.
Recent insights from memory forensics underscore the need for cautious consideration when selecting tools for memory dump retrieval, weighing their strengths and limitations. Expanding the scope of research to encompass mobile devices and intelligent home appliances represents a logical progression to address the evolving landscape of digital evidence sources.
By making AI and ML applications more powerful, refining pre-processing techniques and email address matching, and using data from many different sources, potential offenders can be found much more accurately. The incorporation of various analysis experiments for comparison purposes promises improved data mining techniques and reduced errors during the analytical process. Embracing machine learning-based metadata analysis, the utilisation of multiple cluster sizes, and the adoption of self-organising maps can enhance the precision of results and contribute to the development of innovative methodologies.
A pivotal gap exists in malware artefact detection within current forensic investigations. In response, this study
proposes the application of Reinforcement Learning, modelling it as a Markov decision process (MDP), aligning with RL's strength in capturing intricate agent-environment dynamics. Integrating this into a comprehensive framework offers a path towards automated malware detection. The transition matrix diagram serves as a visual representation, aiding comprehension of potential transitions and their probabilities within the MDP and guiding the construction of robust malware detection models.
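To make the transition-matrix idea concrete, the sketch below builds a small, entirely hypothetical three-state investigation chain (clean, suspicious, infected) in Python; the states, probabilities, rewards, and discount factor are illustrative assumptions, not values drawn from any real forensic dataset or from the framework proposed above.

```python
import numpy as np

# Hypothetical investigation states: 0 = clean, 1 = suspicious, 2 = infected.
# P[s, s'] is an assumed probability of moving from state s to state s'.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.30, 0.50, 0.20],
    [0.05, 0.15, 0.80],
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution

# Illustrative per-state reward for escalating the investigation, and a discount.
R = np.array([-0.1, 0.5, 5.0])
gamma = 0.9

# Long-run value of each state under this fixed transition model, obtained by
# repeatedly applying the Bellman backup V <- R + gamma * P V.
V = np.zeros(3)
for _ in range(200):
    V = R + gamma * P @ V
print(np.round(V, 2))
```

A full MDP formulation would add an action dimension to `P` and `R`; the single-action chain above is only meant to show how transition probabilities and expected values fit together.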
Finally, this study underscores the vitality and practical applicability of AI in the realm of digital forensics. The adoption of AI techniques promises swifter and more effective investigations, facilitated by the identification of data patterns indicative of cybercrime and potential culprits. While AI techniques such as pattern recognition, expert systems, and knowledge representation contribute significantly to combating cyber threats, the evolving nature of data representation necessitates adaptable methods. Recognising the limitations of existing approaches and addressing scalability concerns within a legal framework is paramount. This demands a concerted focus on the development of tools and applications that harness the full potential of AI in digital forensics while ensuring ethical and legally admissible processes.
|
2309.17277 | Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind
Aware GPT-4 | Unlike perfect information games, where all elements are known to every
player, imperfect information games emulate the real-world complexities of
decision-making under uncertain or incomplete information. GPT-4, the recent
breakthrough in large language models (LLMs) trained on massive passive data,
is notable for its knowledge retrieval and reasoning abilities. This paper
delves into the applicability of GPT-4's learned knowledge for imperfect
information games. To achieve this, we introduce \textbf{Suspicion-Agent}, an
innovative agent that leverages GPT-4's capabilities for performing in
imperfect information games. With proper prompt engineering to achieve
different functions, Suspicion-Agent based on GPT-4 demonstrates remarkable
adaptability across a range of imperfect information card games. Importantly,
GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning it
can understand others and intentionally impact others' behavior. Leveraging
this, we design a planning strategy that enables GPT-4 to competently play
against different opponents, adapting its gameplay style as needed, while
requiring only the game rules and descriptions of observations as input. In the
experiments, we qualitatively showcase the capabilities of Suspicion-Agent
across three different imperfect information games and then quantitatively
evaluate it in Leduc Hold'em. The results show that Suspicion-Agent can
potentially outperform traditional algorithms designed for imperfect
information games, without any specialized training or examples. In order to
encourage and foster deeper insights within the community, we make our
game-related data publicly available. | Jiaxian Guo, Bo Yang, Paul Yoo, Bill Yuchen Lin, Yusuke Iwasawa, Yutaka Matsuo | 2023-09-29T14:30:03Z | http://arxiv.org/abs/2309.17277v3 | # Suspicion-Agent : Playing Imperfect Information Games with Theory of Mind Aware GPT4
###### Abstract
Unlike perfect information games, where all elements are known to every player, imperfect information games emulate real-world complexities of decision-making under uncertain or incomplete information. GPT-4, a recent breakthrough in large language models (LLMs) trained on massive passive data, demonstrates remarkable knowledge retrieval and reasoning abilities. This paper explores the applicability of GPT-4's learned knowledge for imperfect information games. To achieve this, we introduce **Suspicion-Agent**, an innovative agent that leverages GPT-4's capabilities to perform in imperfect information games. With proper prompt engineering to achieve different functions, Suspicion-Agent based on GPT-4 displays remarkable adaptability across a range of imperfect information card games. Importantly, GPT-4 exhibits a robust high-order theory of mind (ToM) capacity, meaning it can understand others and deliberately influence their behavior. Leveraging this, we design a planning strategy that enables GPT-4 to competently play against various opponents, adapting its gameplay style as needed, while requiring only game rules and descriptions of observations as input. In the experiments, we qualitatively showcase the capabilities of Suspicion-Agent across three different imperfect information games and then quantitatively evaluate it in Leduc Hold'em. The results indicate that Suspicion-Agent has the potential to outperform traditional algorithms designed for imperfect information games without requiring specialized training or examples. To encourage and foster deeper insights within the community, we make our game-related data publicly available.
## 1 Introduction
Recently, large language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023), which are trained extensively on text corpora and code datasets and aligned with instructions (Ouyang et al., 2022; Wei et al., 2021; Longpre et al., 2023), have demonstrated remarkable knowledge retrieval and reasoning capabilities (Kojima et al., 2022; Wei et al., 2022; Wei et al., 2022; ) on natural language benchmarks and exams (Hendrycks et al., 2020; Cobbe et al., 2021). Given few-shot examples or specific instructions as prompts, these models, especially GPT-4 (OpenAI, 2023), can understand human intentions and make informed decisions in open-ended scenarios, and tackle intricate tasks by gathering observations and utilizing the learned prior knowledge, such as Voyager (Wang et al., 2023), ReAct (Yao et al., 2022) and SwiftSage (Lin et al., 2023).
However, most of these methods typically assume that the agent has access to all relevant information, an assumption that is often unrealistic in real-world settings. Take diplomacy (Team et al., 2022; Gray et al., 2020) as an example: representatives must discern the veiled intentions of other countries based on incomplete information and decide accordingly to maximize benefits for their own nation. This challenge is not unique to diplomacy but extends to other domains as well, such as poker (Moravcik et al., 2017; Brown and Sandholm, 2018) and economic simulations (Holmstrom and Myerson, 1983; Harsanyi, 1968). The inherent unpredictability in these games makes it impractical for a learned agent to adopt a single, optimal strategy for every scenario (Brown et al., 2019). This necessitates predictive capabilities for handling incomplete information, along with a theory of mind (ToM) ability (Frith and Frith, 2005) to
comprehend decisions from others' perspectives. Such complexities, both strategic and cognitive, represent ongoing challenges in the field of AI research.
Furthermore, recent advances in imperfect information games, such as ReBeL (Brown et al., 2020), DeepStack (Moravcik et al., 2017), and Libratus (Brown and Sandholm, 2018), typically start training from scratch, and thus they normally need millions of samples to understand the game rules and learn adequate strategies for each new game. Such a high sample complexity hampers their ability to generalize across different games and poses challenges when applying them to complex and open-ended imperfect information scenarios. By contrast, as alluded to previously, LLMs have undergone training on massive passive datasets. This leads to an intriguing proposition: can we harness pre-trained LLMs' knowledge and reasoning capabilities to navigate imperfect information games without extra training or data?
To achieve this, we propose **Suspicion-Agent**, an innovative autonomous agent based on GPT-4. This agent harnesses its extensive prior knowledge and cognitive adaptability to devise effective strategies against a range of adversaries without any specialized training. Concretely, we first decompose the process of solving such games into multiple sub-modules, such as an observation interpreter and a planning module, to understand the game rules and game states (as Figure 1 shows) so that GPT-4 can make decisions accordingly. Each module employs different prompts to enable GPT-4 to fulfill specific functions. However, unlike perfect information games, planning strategies in imperfect information games can have varying effectiveness depending on the opponent's behavior (Brown et al., 2020; Moravcik et al., 2017; Brown and Sandholm, 2018). To tackle these challenges, we introduce a theory of mind (ToM) aware planning approach that leverages the higher-order ToM capability (Frith and Frith, 2005) present in LLMs. Specifically, the model utilizes its understanding of human cognition to predict opponents' thought processes, susceptibilities, and actions. This aligns with the idea that individuals use their own minds as models to understand and affect others (Montes et al., 2022). For instance, the model might consider, _"If I execute Plan I, how would this influence my opponent's beliefs about my cards, and what actions might they take based on their behavioral patterns?"_
Concretely, given the gameplay history as input, we find that GPT-4 can identify an opponent's strategic tendencies and analyze how our actions influence the opponent's behavior; _e.g._, if Suspicion-Agent identifies a weak hand held by the opponent, coupled with the opponent's cautious strategy, it might strategically raise the bet to encourage the opponent to fold, even when Suspicion-Agent itself holds a similarly weak hand (as illustrated in Figure 9 and Appendix I). Remarkably, using only some simple prompts, GPT-4 can even self-examine its behavior through the lens of the opponent (refer to Appendix E). Leveraging its ToM capabilities, GPT-4 can predict and even influence an opponent's actions effectively (Roska-Hardy, 2008). Integrating these simulated actions into our planning module can mitigate the information asymmetry inherent in imperfect information games and allow us to more accurately assess the effectiveness of various strategies. As a result, our Suspicion-Agent can adjust its strategy to play effectively against a range of opponents, as shown in Section 4.1. In the experiments, we first conduct a qualitative assessment of Suspicion-Agent in three two-player imperfect information games, aiming to showcase the generalization capabilities of our method. Subsequently, we perform a quantitative analysis in Leduc Hold'em (Southey et al., 2012). The results reveal that Suspicion-Agent exhibits varying behaviors when interacting with previous works such as CFR (Zinkevich et al., 2007) and NFSP (Heinrich and Silver, 2016) while outperforming them in terms of overall performance. In summary, our contributions are as follows:
1. We introduce Suspicion-Agent, the first agent framework designed to empower GPT-4 with the theory of mind (ToM) ability to compete in various imperfect information games by understanding game rules and observational data without requiring any specialized training or examples. By incorporating the ToM capability into the planning process, Suspicion-Agent captures the inherent uncertainty of opponent behavior in our strategic deliberations. This enables Suspicion-Agent to adapt its tactics dynamically when facing opponents with differing behavioral patterns.
2. We are the first to demonstrate that an agent based on GPT-4 can potentially outperform traditional algorithms in imperfect-information games, such as Leduc Hold'em (Southey et al., 2012) when compared to established methods like CFR (Zinkevich et al., 2007), which may inspire more subsequent use of LLMs in imperfect-information games.
3. We release all interaction data between Suspicion-Agent and traditional algorithms for imperfect-information games in Leduc Hold'em. This will enable the research community to scrutinize the capabilities of GPT-4-based agents and inspire further work, particularly in fine-tuning smaller language models.
## 2 Background
**Two-Player Imperfect Information Game** In this paper, we propose to employ LLMs to play imperfect information games. As a preliminary exploration, we concentrate primarily on two-player imperfect information games, such as Leduc Hold'em (Southey et al., 2012), which involves two players, denoted by \(\mathcal{N}=\{1,2\}\), who share the same action
space, \(\mathcal{A}\). Let \(a_{1}\in\mathcal{A}\) and \(a_{2}\in\mathcal{A}\) represent the actions chosen by player 1 and player 2, respectively. Each player has access to two types of observations: a **private observation**, denoted as \(S_{\textsc{pri}(i)}\) where \(i\in\mathcal{N}\) is the player index, and a **public observation**, shared among both players, denoted as \(S_{\textsc{pub}}\).
As the game progresses in discrete timesteps indexed by \(j\), each player \(i\) observes a history \(h\) of the game. This history comprises the series of public and private observations, actions, and game results up to timestep \(j-1\), formally given as \(h=(S_{\textsc{pub}}^{0},S_{\textsc{pri}(i)}^{0},a_{i}^{0},a_{\textsc{oi}}^{0},r^{0},\ldots,S_{\textsc{pub}}^{j-1},S_{\textsc{pri}(i)}^{j-1},a_{i}^{j-1},r^{j-1})\). Simultaneously, players receive the current private and public observations, \(S_{\textsc{pri}(i)}^{j}\) and \(S_{\textsc{pub}}^{j}\), and select the next action \(a_{i}^{j}\) according to a policy \(\pi_{i}\). All game histories are collected into a dataset \(D\), denoted as \(D=(h_{1},h_{2},\ldots,h_{M})\), where \(M\) indexes individual games. The goal of each player is to select the next action \(a_{i}^{j}\) under imperfect observation according to the game rules, aiming for victory over many games. Note that the order of players is not fixed and depends on the rules of each game; for example, the role of the small blind rotates among players in Texas Hold'em, dictating the order of play.
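To make the notation above concrete, the following minimal Python sketch mirrors one per-timestep record and a game history; the class and field names are our own illustrative choices, not identifiers from the paper's released code.

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class Step:
    """One timestep j as seen by player i."""
    public_obs: Any                        # S_pub^j, visible to both players
    private_obs: Any                       # S_pri(i)^j, visible only to player i
    action: Optional[str] = None           # a_i^j chosen by player i
    opponent_action: Optional[str] = None  # the opponent's observed action
    result: Optional[float] = None         # r^j, payoff revealed when a game ends

@dataclass
class GameHistory:
    """History h of a single game; a dataset D is simply a list of these."""
    steps: List[Step] = field(default_factory=list)

    def add(self, step: Step) -> None:
        self.steps.append(step)

# D = [GameHistory(...), GameHistory(...), ...] accumulates over M games.
```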
## 3 Method
To enable LLMs to play various imperfect information games without specialized training, we break down the overall task into the several modules shown in Figure 1, such as the observation interpreter, game pattern analysis, and planning modules. In the following sections, we demonstrate how we craft specific prompts to guide LLMs to use their prior knowledge, reasoning ability, and psychological ability in performing these modular functions, and explain how we combine these functions to equip the model with the capability to navigate the intricacies of imperfect information games. All prompts and code will be made public in our codebase (please refer to our supplementary material).
### Game Rule & Observation Understanding
While LLMs excel at processing text data, they can be misled in imperfect information games because these environments normally provide only brief, low-level descriptions. To mitigate this issue, we first develop structured prompts that assist LLMs in comprehending both the game's rules and its current state. For each type of imperfect information game, one can write a structured rule description as follows:
* **General Rules:** A brief game introduction, the number of rounds, and betting rules;
* **Action Descriptions:** {Description of Action 1}, {Description of Action 2},...;
* **Single Win/Loss Rule:** The conditions for winning, losing, or drawing in a single game;
* **Win/Loss Payoff Rule:** The rewards or penalties for winning or losing a single game;
* **Whole Win/Loss Rule:** The number of games and the overall win/loss conditions.
In most imperfect information game environments (Zha et al., 2019), game states are often represented as low-level numerical values, such as one-hot vectors, to facilitate machine learning. Leveraging LLMs, we can convert these low-level game states into natural language text (Wu et al., 2023; Wang et al., 2023; Guo et al., 2022; Lin et al., 2023), thereby aiding the model's understanding. For each game, it is also essential to define an observation conversion rule. Similar to structuring game rules, we organize the observation conversion rule as follows:
Figure 1: **Left Figure. The illustration of the Suspicion-Agent which trains the ego policy by pairing it with the copied partner policy. Right Figure. The illustration about the first-order ToM planning method, where the texts in yellow blocks are outputs, and green blocks are inputs.**
* **Input Explanation:** The type of inputs received, such as dictionaries, lists, or other formats, and describes the number of elements in the game state along with the name of each element;
* **Element Descriptions:** {Description of Element 1}, {Description of Element 2},...;
* **Conversion Tips:** More guidelines for transforming the low-level game states into text.
By leveraging both the game rule and the observation conversion rule, we can efficiently transform low-level game states into readable text, denoted as \(Obs_{r}\). This readable text serves as the input for LLMs. Using the prompts \(Prompt_{obs}\), the conditional distribution for each element \(Obs_{r}[i]\) in the generated text can be modeled as: \(Obs_{r}\sim\prod_{i=1}^{M}F_{\theta}(Obs_{r}[i]|Prompt_{obs},Rule,Rule_{ obs},Obs_{r}[1,\dots,i-1])\). Here, \(F_{\theta}\) represents the language model parameterized by \(\theta\); \(M\) is the length of the generated text \(Obs_{r}\). The concrete definition can be found in Appendix A. We name this module an **Observation Interpreter**. This formulation allows for a more understandable interaction with the model in imperfect information games.
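As a rough sketch of how the Observation Interpreter could be wired up, the function below assembles the game rule, the observation conversion rule, and a low-level state dictionary into a single prompt and delegates the conversion to any text-completion callable; the prompt wording, rule strings, and function names are assumptions for illustration, not the authors' released prompts.

```python
import json
from typing import Callable, Dict

def interpret_observation(
    raw_state: Dict,
    game_rule: str,
    obs_rule: str,
    llm: Callable[[str], str],
) -> str:
    """Convert a low-level game-state dict into readable text Obs_r via an LLM.

    `llm` is any callable mapping a prompt string to a completion string,
    e.g. a thin wrapper around a chat-completion API.
    """
    prompt = (
        "You are converting a low-level game state into plain English.\n\n"
        f"Game rule:\n{game_rule}\n\n"
        f"Observation conversion rule:\n{obs_rule}\n\n"
        f"Low-level game state (JSON):\n{json.dumps(raw_state)}\n\n"
        "Describe your hand, any public cards, the pot, and the legal actions."
    )
    return llm(prompt)

# Example wiring with a stub LLM; replace the stub with a real API call.
if __name__ == "__main__":
    stub = lambda p: "You hold the King of Spades; no public card has been dealt yet."
    print(interpret_observation({"hand": "SK", "pot": 2}, "Leduc Hold'em rules ...",
                                "State keys: hand, pot, legal_actions ...", stub))
```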
### Vanilla Planning Module and Reflexion
After understanding the game rules and converting the game states into a readable format, we can craft prompts to guide LLMs in formulating strategies. Inspired by advancements in LLM agents and prompt engineering (Ganguli et al., 2023; Wang et al., 2023; Liu et al., 2023; Shinn et al., 2023), we introduce a vanilla planning method which features a **Reflexion** module aimed at automatically scrutinizing the game history so that LLMs can learn and improve their planning from past experience, as well as a separate **planning** module dedicated to making decisions accordingly.
**Reflexion** The Reflexion module takes as input the history of games played against the current opponent and outputs a Reflexion. In the \(j\)-th round of the \(i\)-th game, we gather all prior game histories, denoted as \(D^{i}=(h^{1},h^{2},\dots,h^{i-1})\), and prompt LLMs to carry out these analytical functions to get the Reflexion output \(O^{i}_{f}\sim\prod_{i=1}^{M}F_{\theta}(O_{f}[i]|Prompt_{Reflexion},Rule,D^{i},O_{f}[1,\dots,i-1])\), _i.e._, \(O^{i}_{f}\sim F^{Reflexion}_{\theta}\), which covers why we won or lost in specific previous games, and suggests how to improve strategies for future games. Importantly, the Reflexion module empowers Suspicion-Agent to enhance its strategies during gameplay, even without previous examples.
**Planning** After obtaining the Reflexion \(O_{f}\), we proceed to use the game rules, the current game history \(h^{i}\), the current readable observation \(Obs_{r}\), and the set of valid actions \(\{a\}\) in the current game as inputs. We then prompt LLMs to formulate multiple textual plans based on its understanding of the imperfect information, _i.e._, \(O_{plan}\sim\prod_{i=1}^{M}F_{\theta}(O_{plan}[i]|Prompt_{plan},Rule,Obs_{r}, h^{j-1},O_{f},O_{plan}[1,\dots,i-1])\), \(O_{plan}\sim F^{plans}_{\theta}\). Specifically, the vanilla planning method assumes the marginal distribution of the actions of the opponent is uniform, and thus it can be regarded as a special case of planning with the zero-order ToM. In this way, we can further denote \(F^{plans}_{\theta}\) as \(F^{zero-plan}_{\theta}\).
**Evaluator** To assess the likely success of each plan, we introduce an evaluation module. This module takes into account factors such as the game's current state, _i.e._, readable observation \(Obs_{r}\), the Reflexion \(O_{Reflexion}\), the game rule \(Rule\) and estimated plans \(O_{plan}\) as the input, to estimate the win rates for each of the proposed plans and output the next action by prompting LLMs, _i.e._, the next action \(a_{j}=F^{zero-eval}_{\theta}(Obs_{r},O_{Reflexion},Rule,O_{plan})\).
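A compressed sketch of how the three modules might be chained is shown below; the single `llm` callable stands in for GPT-4, the prompt strings paraphrase the descriptions above, and none of the function names should be read as the authors' exact implementation.

```python
from typing import Callable, List

def reflexion(rule: str, past_games: List[str], llm: Callable[[str], str]) -> str:
    """Analyse previous games against this opponent and suggest improvements."""
    prompt = (f"Game rule:\n{rule}\n\nPrevious games:\n" + "\n".join(past_games)
              + "\n\nExplain why each game was won or lost and how to improve.")
    return llm(prompt)

def make_plans(rule: str, obs: str, history: str, reflexion_out: str,
               valid_actions: List[str], llm: Callable[[str], str]) -> str:
    """Propose several candidate plans for the current decision point."""
    prompt = (f"Game rule:\n{rule}\n\nObservation:\n{obs}\n\nCurrent game "
              f"history:\n{history}\n\nReflexion:\n{reflexion_out}\n\n"
              f"Valid actions: {valid_actions}\nPropose several plans to win.")
    return llm(prompt)

def evaluate_plans(rule: str, obs: str, reflexion_out: str, plans: str,
                   valid_actions: List[str], llm: Callable[[str], str]) -> str:
    """Estimate a win rate for each plan and return the chosen next action."""
    prompt = (f"Game rule:\n{rule}\n\nObservation:\n{obs}\n\nReflexion:\n"
              f"{reflexion_out}\n\nCandidate plans:\n{plans}\n\nEstimate each "
              f"plan's win rate and answer with one action from {valid_actions}.")
    return llm(prompt)

def vanilla_step(rule, obs, history, past_games, valid_actions, llm):
    """One decision of the vanilla (zero-order ToM) pipeline."""
    r = reflexion(rule, past_games, llm)
    plans = make_plans(rule, obs, history, r, valid_actions, llm)
    return evaluate_plans(rule, obs, r, plans, valid_actions, llm)
```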
Figure 2: **Left Figure**. The decision-making of the vanilla planning of Suspicion-Agent. **Middle Figure**. The decision-making of the planning with first-order ToM of Suspicion-Agent. **Right Figure**. The decision-making of the planning with second-order ToM of Suspicion-Agent.
### Planning with Theory of Mind (ToM)
However, the vanilla planning method often struggles against the inherent uncertainties that typify imperfect information games, particularly when faced with opponents skilled at exploiting others' strategies. Inspired by this adaptability, we seek to devise a new planning method that capitalizes on LLMs' ToM capabilities (Frith and Frith, 2005; Kosinski, 2023) to understand the opponent's behavior and thus adjust the strategy accordingly. In the following sections, we will detail how we employ LLMs to analyze the behavior patterns of other agents and predict their subsequent actions in response to various plans using different orders of ToM (results are shown in Table 3), thereby facilitating more informed decision-making. Note that all sample outputs are given in Section I and E.
**Planning with First-Order ToM Modelling**: In the first-order ToM modeling approach (as Figure 7 shows), Suspicion-Agent goes a step further by inferring the probable hand of the opponent based on their actions to that point, _e.g._, if the opponent raised, they likely have a strong hand. Consequently, Suspicion-Agent can adapt its strategy to maximize winnings when holding a strong hand and minimize losses with a weak hand. To forecast the opponent's actions, we first introduce a behavior pattern analysis process. In this process, we feed the game history \(D^{i}=(h^{1},h^{2},\ldots,h^{i-1})\) and the game rules into the LLM, prompting it to analyze the opponent's behavioral pattern. The formulation and the prompts can be expressed as: \(O_{bp}\sim\prod_{i=1}^{M}F_{\theta}(O_{bp}[i]|\text{Prompt}_{\text{pattern}},\text{Rule},D^{i},O_{bp}[1,\ldots,i-1])\).
**Sample Prompts for First-Order Behaviour Pattern Analysis (Incomplete) :** _From my perspective, please infer several beliefs about the opponent's game pattern/preference for each round when holding different cards and the public card (if have)._
Through this approach, we can deduce the opponent's behavior pattern. Notably, since the input for behavior pattern analysis is the same as that for the Reflexion module, we have integrated them into a single module to reduce inference time, as shown in Figure 1. After identifying the opponent's behavior pattern, LLMs can be prompted to predict the strength of the opponent's current hand or observations in the game. This is expressed as: \(O_{\text{card\_pred}}\sim\prod_{i=1}^{M}F_{\theta}(O_{\text{card\_pred}}[i]| \text{Prompt}_{\text{card\_pred}},\text{Rule},h^{j-1},O_{bp},\text{Obs}^{j}_{ r},O_{\text{card\_pred}}[1,\ldots,i-1])\).
**Sample Prompts for First-Order Cards Prediction (Incomplete) :** _Understanding the game rule, your observation, progress summarization in the current game, the estimated behaviour pattern of the opponent, and your knowledge about the game, please infer the probabilities about the cards of the opponent (number 100% in total) step by step._
With these predictions, we can further augment the previous **Planning** and **Evaluator** modules with \(O_{\text{card\_pred}}\) as an additional input, allowing us to propose better plans that take the opponent's likely cards into account, estimate the winning rate of each plan, and thus make better decisions. Because the inputs of the **Planning** and **Evaluator** modules overlap heavily and our budget is limited, we combine these two modules to save costs:
**Sample Prompts for Planning and Evaluator (Incomplete):** _Make Reasonable Plans: Please plan several strategies according to actions you can play now to win the whole game step by step. Note that you can say something or keep silent to confuse your opponent._
_Potential opponent's actions and Estimate Winning/Lose/Draw Rate: From the perspective of the opponent, please infer what the action opponent with probability would do when the opponent holds different cards based on his behaviour pattern, and then calculate the winning/lose/draw rates when opponent holds different cards step by step. Output in a tree structure:_
The sample outputs are shown in Figure 8 and E.1.
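The first-order additions can be sketched as two extra LLM calls layered on top of the vanilla pipeline above: one estimating the opponent's behaviour pattern from the game history and one predicting their likely cards, with both outputs fed into the combined planning/evaluation prompt. The function names and prompt wording below are again illustrative assumptions rather than the released prompts.

```python
from typing import Callable, List

def analyse_behaviour(rule: str, past_games: List[str],
                      llm: Callable[[str], str]) -> str:
    """First-order ToM: infer the opponent's pattern when holding different cards."""
    prompt = (f"Game rule:\n{rule}\n\nGame history:\n" + "\n".join(past_games) +
              "\n\nInfer the opponent's game pattern/preference for each round "
              "when holding different cards and the public card (if any).")
    return llm(prompt)

def predict_cards(rule: str, obs: str, history: str, pattern: str,
                  llm: Callable[[str], str]) -> str:
    """Infer a probability distribution over the opponent's hidden cards."""
    prompt = (f"Game rule:\n{rule}\n\nObservation:\n{obs}\n\nCurrent game "
              f"history:\n{history}\n\nOpponent pattern:\n{pattern}\n\n"
              "Infer the probabilities of the opponent's cards (100% in total) "
              "step by step.")
    return llm(prompt)

def first_order_step(rule, obs, history, past_games, valid_actions, llm,
                     plan_and_evaluate):
    """plan_and_evaluate stands for the combined planning/evaluator call above."""
    pattern = analyse_behaviour(rule, past_games, llm)
    card_guess = predict_cards(rule, obs, history, pattern, llm)
    return plan_and_evaluate(rule, obs, history, pattern, card_guess,
                             valid_actions, llm)
```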
**Planning with Second-Order ToM Modelling**: However, elite players in imperfect information games like poker are also adept at dynamically adjusting their strategies, and they may employ "bluffing" as a tactic, feigning a strong hand when they actually hold a weak one to deceive their opponent. Relying solely on first-order ToM in such situations could lead to incorrect assumptions and potentially costly mistakes. Recognizing this, we introduce a planning method that incorporates second-order ToM. In this enhanced model, Suspicion-Agent engages in even more intricate reasoning: it considers not only what the opponent might do (as in first-order ToM) but also what the opponent believes Suspicion-Agent will do, as Figure 7 shows. This level of strategic thinking allows Suspicion-Agent to gain an advantage in situations involving tactics like bluffing.
To implement this, Suspicion-Agent needs not only to consider the current state from its own perspective, but also to be capable of role-switching, reasoning about its own observations and actions from the opponent's viewpoint. Traditional methods (De Weerd et al., 2013; Tatarchenko et al., 2016) need to iteratively call the first-order ToM function to estimate the action of the opponent. However, we surprisingly find that we can simply add prompts like those below and obtain the outputs shown in Section E.2.
**Sample Prompts for Second-Order Behaviour Pattern Analysis (Incomplete):** _From my perspective, please infer under what circumstances is the opponent likely to be influenced by my actions? Additionally, in what situations would the opponent make decisions based solely on their own hand?_
_From the perspective of the opponent (he cannot observe my card but only action), please infer several beliefs about my game pattern/preference when holding different cards._
With this, LLMs are able to automatically generate insights into whether the opponent's behavior is likely to be reactive to Suspicion-Agent's actions, or whether they are likely to act independently. Then, we can directly reuse the prompts of the first-order ToM to predict the opponent's cards based on the behavior pattern estimated from the second-order ToM; sample results are given in Figure 9 and Section E.2. In this way, we can utilize planning with second-order ToM to make decisions and adapt the strategies accordingly. The concrete algorithms are given in Section F. Unless mentioned otherwise, we use the second-order ToM and GPT-4-0613 by default.
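In code, the second-order variant can be as small as appending the role-switching questions to the behaviour-analysis prompt; the sketch below paraphrases the sample prompts quoted above and reuses the hypothetical helpers from the earlier sketches.

```python
SECOND_ORDER_SUFFIX = (
    "From my perspective, infer under what circumstances the opponent is likely "
    "to be influenced by my actions, and in which situations they decide solely "
    "from their own hand. From the opponent's perspective (they observe my "
    "actions but not my card), infer several beliefs about my game pattern when "
    "I hold different cards."
)

def analyse_behaviour_second_order(rule, past_games, llm):
    """Second-order ToM: the first-order analysis plus role-switching questions,
    so the model also reasons about how the opponent models *us*."""
    prompt = (f"Game rule:\n{rule}\n\nGame history:\n" + "\n".join(past_games) +
              "\n\n" + SECOND_ORDER_SUFFIX)
    return llm(prompt)
```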
## 4 Experiments
We conduct experiments to answer the following questions:
* Can Suspicion-Agent achieve comparable performance with traditional imperfect information algorithms without any specialized training? (Section 4.1)
* Can Suspicion-Agent adapt its strategies when playing with different opponents? (Section 4.1)
* Can Suspicion-Agent play different imperfect information games without any specialized training? (Section I.1)
* How different orders of ToM improve the performance of Suspicion-Agent? (Section 4.3 and G)
### Quantitative Evaluation
**Environments** To quantitatively assess the performance of LLMs in imperfect information games, we chose the RLCard environment (Zha et al., 2019). Due to budget limits, our quantitative evaluation focuses on Leduc Hold'em 2, a simplified version of Limit Texas Hold'em. The game rules of Leduc Hold'em can be found in Appendix B. Following (Southey et al., 2012), we add the opponent's observation into the single game history \(h\) after the end of each game, which conforms with real-world experience; we also perform an ablation study on this choice in Section 4.3 and Appendix H.
Footnote 2: [https://rlcard.org/games.html](https://rlcard.org/games.html)
**Competing Methods** We have selected a range of methods commonly used in decision-making, such as NFSP (Heinrich & Silver, 2016), DQN (Mnih et al., 2015), DMC (Deep Monte Carlo Search for imperfect information games) (Zha et al., 2021) and CFR (Zinkevich et al., 2007). Among these, NFSP and DMC are specifically designed for imperfect information games and are based on self-play, while CFR is grounded in game theory. These algorithms typically show different strategies in the imperfect information games, allowing us to evaluate the adaptability of each method. Note that, our Suspicion-Agent does not have any specialized training when compared with these methods.
**Evaluation Methods** To ensure the robustness of our evaluation metrics, we designed a dual-method evaluation framework aimed at mitigating the randomness intrinsic to imperfect information games. **(1) Variable Random Seeds:** Suspicion-Agent plays against different baselines for 100 games, using a varying random seed for each game. This tactic is intended to dampen the stochastic variability introduced by the random seed settings. The results are shown in Table 1. **(2) Same Cards with Exchanged Positions:** We ran a series of 50 games with a fixed random seed, thereby keeping the sequence of cards constant across these games. Suspicion-Agent initially played at position 0 for the first 50 games; we then reran the 50 games with the positions of Suspicion-Agent and the baseline model switched. In this way, Suspicion-Agent and the baseline should have the same card strength over 100 games, and thus we can better evaluate the performance of each. The results of these experiments are presented in Table 2.
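A minimal sketch of the second protocol (same card sequence, exchanged seats) is given below; `play_game` is a placeholder for whatever environment wrapper is used (e.g., one built on RLCard), and the seed handling assumes the random generator is consumed only by the dealer, so none of this mirrors the paper's actual harness.

```python
import random

def evaluate_with_seat_swap(agent_a, agent_b, play_game, n_games=50, seed=42):
    """Net chips won by agent_a over 2 * n_games zero-sum games.

    `play_game(p0, p1, rng)` is assumed to return the chip payoff of the player
    seated in position 0. Re-seeding the generator before each phase keeps the
    dealt card sequence identical while the seats are exchanged.
    """
    net_a = 0.0
    rng = random.Random(seed)            # phase 1: agent_a in position 0
    for _ in range(n_games):
        net_a += play_game(agent_a, agent_b, rng)
    rng = random.Random(seed)            # phase 2: same deals, seats exchanged
    for _ in range(n_games):
        net_a -= play_game(agent_b, agent_a, rng)  # zero-sum: a's payoff = -p0's
    return net_a

# Usage: chips = evaluate_with_seat_swap(suspicion_agent, cfr_agent, play_game)
```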
**Results Analysis** (1) **Suspicion-Agent outperforms all baselines:** As illustrated in Table 1, our GPT-4-based Suspicion-Agent outperforms all other algorithms specifically trained on Leduc Hold'em environments. Notably, it not only defeats these methods but also secures the highest average chip count in the comparisons. Our approach surpasses the second-best method by an impressive margin of approximately 200%. These findings compellingly showcase the advantages of employing large language models in the realm of imperfect information games, and affirm the effectiveness of our proposed framework. (2) **The gap between GPT-3.5 and GPT-4 is large:** While GPT-4 delivers performance that matches or outperforms the other baselines, agents using GPT-3.5 experience a significant drop in performance. Specifically, the winning probability for agents built on GPT-3.5 stands at just 50%, as opposed to 100% for GPT-4-based agents. Additionally, the average chip payoff for GPT-3.5 agents is negative, underlining the stark performance disparity between the two versions of the language model. Further analysis of the reasons can be found in Appendix C. (3) **Suspicion-Agent outperforms baselines in both positions:** Using identical card sequences for both positions, Suspicion-Agent exhibits a consistent winning pattern against various baselines, as evidenced in Table 2. This robust performance substantiates the claim that Suspicion-Agent outperforms the baseline models when card strength is held constant.
**Behaviour Pattern Analysis** We illustrate the action percentages of Suspicion-Agent and the baselines in Figure 3. We observe that (1) **Suspicion-Agent vs CFR:** The CFR algorithm (Zinkevich et al., 2007) demonstrates a conservative strategy and often folds when it holds a weak hand. Suspicion-Agent successfully identifies this pattern and strategically chooses to raise more often, applying pressure on CFR to fold. This enables Suspicion-Agent to accumulate a larger number of chips, even when its hand is weak or equivalent to CFR's. (2) **Suspicion-Agent vs DMC:** The DMC algorithm (Zha et al., 2021), which is based on deep Monte Carlo search, employs a more diversified strategy that includes bluffing. It often raises both when it has the weakest and the strongest hands. In response, Suspicion-Agent adapts by raising less frequently and opting to call or fold more often based on its own hand and the observed behavior of DMC. (3) **Suspicion-Agent vs DQN:** DQN appears to have a more aggressive stance, almost always raising with strong or mid-level hands and never folding. Suspicion-Agent identifies this and, in turn, minimizes its own raises (the lowest percentage among all matchups), opting more often to call or fold based on the public cards and DQN's actions. (4) **Suspicion-Agent vs NFSP:** NFSP exhibits a follow-through strategy, opting to always call and never fold. Suspicion-Agent responds by raising less frequently (compared to matches against CFR) and choosing to call more (compared to matches against CFR) based on the public card and NFSP's observed actions. The analysis clearly shows that Suspicion-Agent is highly adaptable and capable of exploiting the weaknesses in the strategies employed by various other algorithms. This speaks volumes about the large language model's capability to reason and adapt in imperfect information games.
| | NFSP | DQN | DMC | CFR | Ours (GPT-3.5) | Ours (GPT-4) | Avg. | Win |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NFSP | - | -33 | -22 | -45 | -3 | -142 | -61.25 | 0% |
| DQN | +33 | - | -55 | -20 | +200 | -44 | +22.8 | 40% |
| DMC | +22 | +55 | - | +16 | -49 | -24 | +4 | 60% |
| CFR | +45 | +20 | -16 | - | +73 | -37 | +17 | 60% |
| Ours (GPT-3.5) | +3 | -200 | +49 | -73 | - | - | -55 | 50% |
| Ours (GPT-4) | **+142** | +45 | +24 | **+37** | - | - | **+62** | **100%** |

Table 1: The comparison results of Suspicion-Agent when playing with different algorithms trained on Leduc Hold'em environments. Columns denote the opponent model. The results are the win/lose chips after 100 games with different seeds, and the number of win/lose chips ranges from 1 to 14.
| | CFR (pos 0) | CFR (pos 1) | DMC (pos 0) | DMC (pos 1) | Ours (pos 0) | Ours (pos 1) | Avg. | Win Probability |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DMC | -21 | -6 | -10 | +10 | -36 | -4 | -11.17 | 16.7% |
| CFR | +49 | -49 | +6 | +21 | -37 | -17 | -4.53 | 50% |
| Ours | +11 | **+37** | +4 | **+36** | - | - | **+21** | **100%** |

Table 2: The comparison results of Suspicion-Agent when playing with CFR and DMC trained in Leduc Hold'em environments. These results are quantified over 50 games, and pos denotes the position of the opponent model; for example, CFR (pos 0) denotes that the opponent model is in position 0 and our model is in position 1.
### Qualitative Evaluation
In the qualitative evaluation, we assess Suspicion-Agent on three imperfect information games: Coup, Texas Hold'em Limit, and Leduc Hold'em (Southey et al., 2012). For each game, we provide only the rules and observation rules as described in Section 3.1. Importantly, Suspicion-Agent is able to play these games without any additional training or sampling. Qualitative examples from these games are presented in the subsequent sections.
**Leduc Hold'em** We present qualitative samples showcasing Suspicion-Agent's behaviour under different strategies: Vanilla Planning, Planning with First-Order ToM, and Planning with Second-Order ToM in Leduc Hold'em. These samples can be viewed in Figure 6, 7, 8, and 9, respectively, and the concrete analysis is given in Appendix I.1.
**Game Coup and Texas Hold'em Limit** As illustrated in Figures 12 and 11 in the Appendix, when provided solely with the rules and observation guidelines of Texas Hold'em Limit and Coup, and keeping the agent prompts consistent, Suspicion-Agent remains adept at discerning the opponent's game patterns. It analyzes the strength of its hand and subsequently makes informed decisions to accumulate chips. This is strong evidence of the generalization ability of GPT-4-based Suspicion-Agent across different imperfect information games, in contrast to algorithms that need to be re-trained for every new imperfect information game. Specifically, in the game of Coup, without any prior training, **Suspicion-Agent** rapidly discerns which character the opponent lacks. It then strategically bluffs as that character to block the opponent's actions. This ability to bluff successfully gives **Suspicion-Agent** a consistent advantage throughout multiple rounds.
losses. In this way, vanilla planning has the lowest chip gain as Table 3 shows. (2) **Planning with First-Order ToM:** Utilizing First-Order ToM, Suspicion-Agent is capable of making decisions based on its own and estimates of the opponent's card strength. As a result, it will raise more than vanilla planning but it tends to fold more frequently than other strategies, aiming to minimize unnecessary losses. However, this cautious approach can be exploited by savvy opponent models. For example, DMC often raises when holding the weakest hand, and CFR may occasionally raise even with a mid-level hand to exert pressure on Suspicion-Agent. In these instances, Suspicion-Agent's tendency to fold can lead to losses. (3) **Planning with Second-Order ToM:** In contrast, Suspicion-Agent excels at identifying and capitalizing on the behavioural patterns of opponent models. Specifically, when CFR chooses to check--often indicating a weak hand--or when DMC checks--suggesting its hand doesn't align with the public cards--Suspicion-Agent will raise as a bluff to induce folds from the opponents. As a result, Suspicion-Agent exhibits the highest raise rate among the three planning methods evaluated. This aggressive strategy allows Suspicion-Agent to accumulate more chips even when holding a weak hand, thereby maximizing its chip gains.
**Ablation Study on the Effect of Hindsight Observation** Following (Southey et al., 2012), we assume that Suspicion-Agent has access to observations of the opponent after the end of each game, _i.e._, Hindsight Observation. To assess the impact of it, we conduct an ablation study in which hindsight observations are not incorporated into the current game. Without hindsight observations, we augment the **Reflexion** module with additional prompts to enable it to infer the opponent's cards based on game outcomes and Suspicion-Agent's own observations. As demonstrated in Table 5 and 4, Suspicion-Agent retains its performance advantage over the baseline methods without the benefit of hindsight observations. Specifically, we observe that Suspicion-Agent adopts a more conservative strategy under the increased uncertainty that comes without hindsight observations. This leads to reduced bluffing, resulting in fewer gains when playing against CFR. However, it also minimizes the risk of over-bluffing when facing DMC, thus yielding higher chip earnings.
Figure 4: The qualitative sample of planning with second-order ToM Suspicion-Agent about **Strategic Bluffing** on Leduc Hold’em. More samples are given in Appendix I.
**Imperfect Information Game** Imperfect information games, exemplified by poker (Brown & Sandholm, 2018, 2019; Moravcik et al., 2017; Southey et al., 2012), have emerged as captivating subjects of research due to the inherent absence of complete information concerning player states and potential actions (Frank & Basin, 2001). In contrast to perfect information games, these settings allow players to employ strategies that encompass elements of deception and exploration, often leaning towards stochastic approaches over deterministic ones (Montes et al., 2022; Kreps & Wilson, 1982). Previous investigations into imperfect information games have explored a plethora of dimensions, including principles from game theory (Lu et al., 2023b), techniques from reinforcement learning (Ouyang & Zhou, 2023; Roughgarden, 2016), the integration of deep reinforcement learning (Brown et al., 2020a), strategies based on observation (Chatterjee et al., 2007), considerations for limited lookahead (Kroer & Sandholm, 2020), and methodologies for abstracting away the intricacies of imperfect information (Sokota et al., 2023). Nonetheless, these approaches often demand extensive computational training and entail the collection of copious behavioral data during gameplay. Recent studies have also delved into the application of LLMs within the context of imperfect information games (Gupta, 2023). LLMs have not only demonstrated strong performance in decoding and predicting player behaviors (Akata et al., 2023) but have also excelled in simplifying natural language interactions within these gaming environments (Xu et al., 2023). Notably, the zero-shot capabilities of LLMs obviate the necessity for exhaustive pre-training or the accumulation of action data, distinguishing LLM-based methods from the traditional techniques mentioned aforementioned.
**Reasoning and Planning of LLMs** LLMs have recently shown remarkable progress in reasoning and planning across various downstream tasks (Brown et al., 2020b; Chowdhery et al., 2022; Touvron et al., 2023). They exhibit the ability to employ evidence, arguments, and logic to draw conclusions or make informed judgments (Huang & Chang, 2022). The introduction of the Chain-of-Thought (CoT) approach, which prompts LLMs to generate intermediate reasoning steps, has led to enhanced performance in arithmetic, commonsense, and symbolic reasoning tasks (Wei et al., 2022b). Furthermore, the zero-shot capability of LLMs has proven its potency by simply incorporating a straightforward prompt phrase (Kojima et al., 2022). Subsequently, the Tree-of-Thought (ToT) framework was proposed, enabling exploration over coherent units of text that serve as intermediary steps toward problem-solving, thereby generalizing the popular CoT approach to prompt language models (Yao et al., 2023). Additionally, the Algorithm of Thought (AoT) leverages the inherent recurrence dynamics of LLMs, expanding their idea exploration with minimal queries (Sel et al., 2023). Drawing inspiration from recent developments such as Voyager (Wang et al., 2023a), BabyAGI (Nakajima, 2023), ReAct (Yao et al., 2022), SwiftSage (Lin et al., 2023), Auto-GPT (Richards et al., 2023) and Agent-GPT (Reworkd, 2023), we posit that the reasoning and planning capabilities of LLMs could prove invaluable in supporting agents in imperfect information games, utilizing only the game rules and observations for interpretation. More specifically, when provided with well-crafted prompts, LLM-based agents can autonomously generate a wide array of text, which can be harnessed to facilitate reflection and planning in the context of imperfect information games (Schuurmans, 2023; Kim et al., 2023). Nevertheless, our paper focuses on integrating theory-of-mind (ToM) capacities into the planning process, whereas others do not use them.
**Theory of Mind (ToM)** In the domain of imperfect information games, classical methodologies often draw from game theory. However, these approaches recognize that human decision-making follows a "cognitive hierarchy" of strategies rather than strictly adhering to the hyper-rational Nash equilibrium solution concept (Wunder et al., 2011). The concept of Theory of Mind (ToM) is crucial in understanding human cognitive abilities. ToM involves comprehending and predicting the behaviors of oneself and others by attributing internal mental states like beliefs, knowledge, desires, and intentions (Premack & Woodruff, 1978; Frith & Frith, 2005; Davies, 1994; Nichols & Stich, 2003; Hurley, 2008). In the context of imperfect information games, ToM has been employed to anticipate opponents' actions and observations, enhancing decision-making effectiveness (De Weerd et al., 2013). Level-k thinking theory within cognitive hierarchy theory posits that players in strategic games rely on predictions of other players' likely actions, and it categorizes these players based on the depth of their strategic thinking, a dimension potentially intertwined with ToM (Crawford, 2018). Within the framework of simulating ToM, a player adopts the perspective of their opponent to infer their own potential actions in similar situations. Moreover, ToM can employ recursively nested beliefs through higher-order ToM (Frith & Frith, 2005), enabling not only the assessment of the counterpart's thoughts but also self-reflection on observations and how one's actions may influence the counterpart's future actions. Recent studies have also evaluated ToM capabilities in LLMs (Frith & Frith, 2005; Kosinski, 2023). Nevertheless, there remains a notable gap in the existing literature concerning the integration of ToM within LLMs for imperfect information games.
Due to limited capacity, we are unable to include all relevant literature in our work. If you find any missing relevant references, please feel free to let us know; we appreciate your efforts to improve the quality of our paper.
## 5 Limitations
**Robustness of Results** Due to budgetary constraints, our experiments are limited to running 100 games for each comparison with baseline methods. (1) Although this sample size may not be extensive, the superior performance observed over four different baseline algorithms with varying behavioral patterns can still serve as a preliminary demonstration of the cognitive capabilities and potential of large language models like GPT-4 in imperfect information games. (2) Given the same game sequences, Suspicion-Agent can outperform baselines in both positions. This consistency highlights the adaptability and robustness of Suspicion-Agent even when faced with varied strategies under the same card strength. Considering the limited budget and the experimental results we obtained, it is safe to claim that Suspicion-Agent based on GPT-4 can potentially outperform previous methods designed for imperfect information games.
**Hallucination Problem of Large Language Models** The hallucination problem (Zhang et al., 2023; McKenna et al., 2023; Bang et al., 2023)--generating outputs that are nonsensical or unfaithful to the provided source content (Ji et al., 2023)--poses a significant challenge for LLMs. In our experiments, we found that when given only simple instructions, LLMs can produce outputs that are either meaningful and rigorous or less rigorous and even invalid. This variability compromises the reliability of LLMs, particularly when they interact with models trained for specialized tasks. In addition, the outputs of LLMs are very sensitive to the prompts. To mitigate this issue, **we developed multiple output templates to improve the quality of the outputs of LLMs**, the effectiveness of which is empirically demonstrated in our main results (these templates will be made publicly available in our code repository). However, further work is needed to better align LLM-generated outputs with given instruction prompts. Enhancing this alignment is a critical area of research for improving the reliability and real-world applicability of LLMs.
**Long Reasoning Problem** The limitations of LLMs like GPT-4 manifest in two ways when applied to complex tasks in imperfect information games:
1) Long Prompts Problem: To adapt LLMs for different imperfect information games, it is necessary to input both game rules and observation conversion rules. When these are combined with the specialized prompts designed for our Suspicion-Agent, the resulting language model prompts become excessively long. We have observed a rapid decline in the quality of the model's output as the length of these prompts increases. Limited by our budget, we implemented the planning and evaluator modules as a single function, which results in quite long sequence generation and degrades performance to some extent.
2) Complex Reasoning/Calculation Problem: When tasked with conducting intricate calculations--such as computing the average win, lose, or draw rate when an opponent holds different cards--GPT-4 struggles to consistently generate accurate mathematical equations and results.
**Expensive Inference Cost and Slow Inference Time** As demonstrated in Table 1, only GPT-4 is capable of performing well in the game of Leduc Hold'em. Due to the extensive prompts and inference tokens, the cost per game reaches nearly one dollar. Additionally, the large model size of GPT-4 leads to a longer inference time for Suspicion-Agent, requiring several minutes to complete a single game of Leduc Hold'em. These two limitations underscore the importance of developing a specialized local language model for this task, which also serves as our motivation for releasing all associated data.
**Planning Depth** In our paper, we only focus on single-step planning, but note that our planning method is orthogonal to recently proposed approaches that leverage large language models for planning with depth, such as Tree-of-Thoughts (Yao et al., 2023), Graph-of-Thoughts (Besta et al., 2023), and Algorithm-of-Thoughts (Sel et al., 2023). While these approaches offer promising directions for future research, they come with high computational costs, and thus we do not incorporate them into our current methods.
**More Language Model Evaluation** In this paper, the evaluation is confined to the performance of GPT-3.5 and GPT-4 on imperfect information games, which represent only a fraction of the large language models in the contemporary research landscape. For future work, we aim to expand the scope of our evaluation to include other state-of-the-art large language models, such as PaLM2 (Anil et al., 2023), Claude2 (Models, 2023), and LLaMA2 (Touvron et al., 2023), among others. This broader evaluation will not only offer a more comprehensive understanding of the capabilities and limitations of these models in imperfect information games but also facilitate a nuanced comparative analysis. Such an approach is expected to yield richer insights into the adaptability and generalizability of large language models in complex, real-world scenarios, thereby contributing to the field's collective understanding of their potential applications and limitations.
## 6 Future Work
**Tool Use** As outlined in Section 5, Suspicion-Agent suffers from hallucination problems and struggles with long-context reasoning. This can lead to calculation inaccuracies and sometimes produces responses that deviate from factual information or are out of context. Such issues considerably degrade performance in final decision-making. A natural solution is to break down the problem into multiple sub-problems and employ specialized smaller models or tools (Wang et al., 2023; Schick et al., 2023; Wang et al., 2023; Patil et al., 2023; Lu et al., 2023; Patil et al., 2023) for better task completion.
**Multi-Modality** In the present paper, the analytical scope is limited to text-based imperfect information games. However, it is important to recognize that real-world interactions often encompass more than just textual information. For instance, human communication frequently involves a variety of modalities such as facial expressions and vocal tones, which can serve as additional cues for interpreting beliefs or intentions. Given the increasing advancements in multi-modal large language models--e.g., InstructBLIP (Dai et al., 2023), LLaVA (Liu et al., 2023)--we aim to extend our research to incorporate these multi-modal aspects. By doing so, we aspire to develop AI agents capable of navigating imperfect information games that more closely mimic real-world complexities. Integrating multi-modal observations into our model will not only enrich the agents' understanding of the game environment but also broaden the applicability of our methods. This will potentially lead to a more nuanced and comprehensive understanding of the strategic behavior of LLMs in scenarios that more accurately reflect real-world conditions.
**Multi-Player Setting** In the paper, our focus is restricted to two-player imperfect information games. However, it is worth acknowledging that real-world scenarios often involve multi-player settings, which introduce additional complexities and nuances that are not captured in a two-player framework. Recent developments have given rise to novel multi-player game environments, such as AgentVerse (Chen et al., 2023) and Mind Agent (Gong et al., 2023). These environments present more realistic settings for evaluating the applicability and efficacy of large language models in game theory scenarios. Therefore, a natural extension of our research will be to adapt our methods to these multi-player environments.
## 7 Conclusion
In this paper, we introduce Suspicion-Agent, the first prompting system designed to enable large language models to engage in various imperfect information games using only the game rules and observations for interpretation. By incorporating first-order ToM and second-order ToM capabilities, we show that a GPT-4-based Suspicion-Agent can outperform traditional algorithms such as CFR and NFSP, even without specialized training or examples. Additionally, we identify and discuss the current limitations of utilizing LLMs in the context of imperfect information games. We make all our code and interactive data publicly available to the research community. This will help in better understanding the capabilities of large language models, particularly GPT-4, and we hope our data will encourage the development of more efficient models for imperfect information games. In addition, we also present the limitations of Suspicion-Agent in Appendix 5.
|
2309.16169 | Subsurface cosmogenic and radiogenic production of ^{42}Ar | Radioactive decays from ^{42}Ar and its progeny ^{42}K are potential
background sources in large-scale liquid-argon-based neutrino and dark matter
experiments. In the atmosphere, ^{42}Ar is produced primarily by cosmogenic
activation on ^{40}Ar. The use of low radioactivity argon from cosmogenically
shielded underground sources can expand the reach and sensitivity of
liquid-argon-based rare event searches. We estimate ^{42}Ar production
underground by nuclear reactions induced by natural radioactivity and
cosmic-ray muon-induced interactions. At 3,000 mwe, ^{42}Ar production rate is
1.8E-3 atoms per ton of crust per year, 7 orders of magnitude smaller than the
^{39}Ar production rate at a similar depth in the crust. By comparing the
calculated production rate of ^{42}Ar to that of ^{39}Ar for which the
concentration has been measured in an underground gas sample, we estimate the
activity of ^{42}Ar in gas extracted from 3,000 mwe depth to be less than 2
decays per ton of argon per year. | Sagar S. Poudel, Ben Loer, Richard Saldanha, Brianne R. Hackett, Henning O. Back | 2023-09-28T04:50:13Z | http://arxiv.org/abs/2309.16169v1 | # Subsurface cosmogenic and radiogenic production of \({}^{42}\)Ar
###### Abstract
Radioactive decays from \({}^{42}\)Ar and its progeny \({}^{42}\)K are potential background sources in large-scale liquid-argon-based neutrino and dark matter experiments. In the atmosphere, \({}^{42}\)Ar is produced primarily by cosmogenic activation on \({}^{40}\)Ar. The use of low radioactivity argon from cosmogenically shielded underground sources can expand the reach and sensitivity of liquid-argon-based rare event searches. We estimate \({}^{42}\)Ar production underground by nuclear reactions induced by natural radioactivity and cosmic-ray muon-induced interactions. At 3,000 mwe, \({}^{42}\)Ar production rate is \(1.8\times 10^{-3}\) atoms per ton of crust per year, 7 orders of magnitude smaller than the \({}^{39}\)Ar production rate at a similar depth in the crust. By comparing the calculated production rate of \({}^{42}\)Ar to that of \({}^{39}\)Ar for which the concentration has been measured in an underground gas sample, we estimate the activity of \({}^{42}\)Ar in gas extracted from 3,000 mwe depth to be less than 2 decays per ton of argon per year.
## I Introduction
Liquid argon is commonly used as a detection medium for ionizing radiation. It has a high scintillation and ionization yield and allows for the propagation of the scintillation photons and ionization electrons over large distances, making it an ideal target for large neutrino detectors [1; 2; 3; 4; 5], direct-detection dark matter experiments [6; 7; 8; 9], and active scintillation vetoes [8; 10].
Argon is the third-most abundant gas in the Earth's atmosphere, comprising roughly 0.93% of the atmosphere by volume. Atmospheric argon consists primarily of the stable isotopes \({}^{40}\)Ar (99.6 %), \({}^{36}\)Ar, and \({}^{38}\)Ar. However, due to interactions of high-energy particles produced by cosmic-ray interactions, atmospheric argon also contains three long-lived radioactive isotopes: \({}^{37}\)Ar, \({}^{39}\)Ar, and \({}^{42}\)Ar.
\({}^{37}\)Ar decays purely through electron capture and is relatively short-lived (T\({}_{1/2}\sim 35\) days[11]), quickly decaying below measurable levels after the argon is taken underground for use in detectors shielded from cosmic rays. \({}^{39}\)Ar is a pure \(\beta\)-emitter with an endpoint energy of 565 keV and a half-life of 268 years. Atmospheric argon contains \({}^{39}\)Ar at an abundance of \(8.2\times 10^{-16}\), corresponding to a specific activity of 1 Bq/kg\({}_{\rm Ar}\)[12], which is a significant source of background for low-energy experiments such as dark matter detectors. To reduce the background from decay of \({}^{39}\)Ar, the next generation of argon-based dark matter detectors propose to use argon extracted from deep underground. While the concentration of \({}^{39}\)Ar in atmospheric argon is maintained by interactions of cosmogenic neutrons and other high-energy particles [13], production rates underground are significantly reduced [14; 15]. The DarkSide-50 collaboration has demonstrated that the underground argon they used as their dark matter target has an \({}^{39}\)Ar rate of \(7.3\times 10^{-4}\) Bq/kg\({}_{\rm Ar}\)[7], a factor \(\sim 1400\) below atmospheric levels.
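As a rough plausibility check of the numbers quoted above (not part of the analysis in this paper), the specific activity implied by an isotopic abundance follows directly from the decay constant. The short sketch below, using the \({}^{39}\)Ar abundance and half-life quoted in this paragraph, reproduces the \(\sim\)1 Bq/kg\({}_{\rm Ar}\) figure.

```python
import math

# Rough plausibility check (not from the paper's analysis): convert the quoted
# 39Ar isotopic abundance into a specific activity and compare with ~1 Bq/kg_Ar.
N_A = 6.022e23               # Avogadro's number [1/mol]
M_AR = 39.948                # molar mass of natural argon [g/mol]
SECONDS_PER_YEAR = 3.156e7

abundance_ar39 = 8.2e-16     # 39Ar atoms per argon atom (quoted above)
half_life_yr = 268.0         # 39Ar half-life [yr]

atoms_ar_per_kg = 1000.0 / M_AR * N_A                 # argon atoms in 1 kg
atoms_ar39_per_kg = abundance_ar39 * atoms_ar_per_kg
decay_const = math.log(2) / (half_life_yr * SECONDS_PER_YEAR)   # [1/s]

activity = atoms_ar39_per_kg * decay_const            # [Bq per kg of argon]
print(f"39Ar specific activity: {activity:.2f} Bq/kg")  # ~1.0 Bq/kg
```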
\({}^{42}\)Ar is a radioactive isotope of argon that undergoes beta decay with a half-life of 32.9 years and endpoint energy (Q\({}_{\beta}\)) of 599 keV[11]. Despite having a similar endpoint energy to \({}^{39}\)Ar, the decay of \({}^{42}\)Ar is typically not a concern as the specific activity in atmospheric argon is on the order of 100 \(\mu\)Bq/kg\({}_{\rm Ar}\)[16; 17; 18], four orders of magnitude lower than that of \({}^{39}\)Ar. However, \({}^{42}\)Ar decays to \({}^{42}\)K, whose energetic decay can be a concern, especially in liquid argon-based neutrino experiments. \({}^{42}\)K (T\({}_{1/2}\)= 12 h) has two major decay modes: i) direct beta decay (\(Q_{\beta}\)= 3525 keV, BR=81 %); and ii) beta decay (\(Q_{\beta}\)= 2001 keV) to an excited state of \({}^{42}\)Ca followed by a prompt 1524 keV gamma emission from the \({}^{42}\)Ca [11]. GERDA, an experiment searching for the neutrinoless double beta decay of \({}^{76}\)Ge at 2039 keV, used an array of germanium detectors surrounded by a liquid argon veto. The energetic betas and gammas resulting from the \({}^{42}\)K decay in the argon were a critical background, leading the GERDA collaboration to launch an extensive \({}^{42}\)K background measurement and mitigation campaign [19]. Further, in a large-scale liquid argon detector like the DUNE far detector [3], \({}^{42}\)K decay can cause pileup and event reconstruction issues. In addition, \({}^{42}\)K decay would limit the MeV-scale physics reach of DUNE, particularly to solar neutrinos and supernova core-collapse neutrinos [20; 21].
\({}^{42}\)Ar is produced in the atmosphere primarily by cosmogenic activation of \({}^{40}\)Ar. The dominant production channel is through interactions of energetic alpha particles with \({}^{40}\)Ar: \({}^{40}\)Ar(\(\alpha\),2p)\({}^{42}\)Ar [22]. This reaction has an energy threshold of 13.7 MeV [11] and occurs primarily in the upper atmosphere where cosmic-ray interactions can produce a high flux of energetic \(\alpha\)'s. \({}^{42}\)Ar can also be produced by a two-step neutron capture process on \({}^{40}\)Ar, but this process is subdominant due to the short half-life of the intermediate \({}^{41}\)Ar (109 min), which requires very high neutron flux (like that produced in nuclear tests and explosions) for any significant production [22; 23].
As with \({}^{39}\)Ar, next-generation experiments for which \({}^{42}\)Ar is a significant background concern are looking to use argon drawn directly from underground sources, with the assumption that underground argon will have a significantly lower rate of \({}^{42}\)Ar than atmospheric argon [10; 20; 21].
## II Underground production mechanisms
There has been little study on the nuclear reactions that can produce \({}^{42}\)Ar underground and on the \({}^{42}\)Ar content of underground argon [24]. \({}^{42}\)Ar is not produced as the decay product of any naturally occurring primordial isotopes and so we focus our attention on production mechanisms involving particle interactions with nuclei present underground. There are two main sources of energetic particles deep underground: particles produced by cosmic-ray muons as they pass through the upper crust and particles produced by radioactive decay of unstable isotopes in the crust. The production rate of \({}^{42}\)Ar underground depends on the composition of the crustal target, flux of these particles, and the cross-sections for producing \({}^{42}\)Ar.
We estimate the rate of these processes in two ways. For cosmogenic production, we perform a particle transport simulation tracking cosmic-ray muons and the particles produced by the muon interactions through a large volume of crust, and count the number of \({}^{42}\)Ar atoms produced (residual isotope production). We also obtain the secondary particle fluxes produced by the cosmic-ray muon interactions and the radiogenic particle flux in the modeled crust and use cross-sections from TALYS [25; 26] to estimate \({}^{42}\)Ar production. The latter method is also used to estimate the production rate of other short-lived isotopes such as \({}^{41}\)Ar, and thereby calculate \({}^{42}\)Ar production by two-step channels.
\({}^{42}\)Ar can also be produced by reactions on \({}^{40}\)Ar, in a gas-filled void in the rock, or within the rock itself. In solid rock, the \({}^{42}\)Ar must first diffuse out of the rock grain into a gas pocket before it can be extracted. Due to the short half-life, a significant fraction of the \({}^{42}\)Ar decays during this process. The diffusion and bulk transport times are difficult to estimate and depend strongly on details of the rock composition and structure, so we do not attempt to estimate them in this work. Instead, we separately calculate and report rates for production in rock (\({}^{42}\)Ar atoms per ton of rock per year) and directly on \({}^{40}\)Ar in gas pockets (\({}^{42}\)Ar atoms per ton of argon gas per year). To estimate the total \({}^{42}\)Ar decay rate in underground argon, we compare to the \({}^{39}\)Ar rate measured by DarkSide-50. This yields an upper limit since (a) the measured value is likely the result of an atmospheric incursion [27] and (b) much more \({}^{42}\)Ar will decay before escaping the solid rock than \({}^{39}\)Ar due to the shorter half-life.
In Section III we present the crustal composition assumed for this work. In Section IV and V, we discuss the cosmic-ray muon-induced and radiogenic particle flux in the Earth's crust. In Section VI, FLUKA-based particle transport and simulations settings are discussed. In Section VII and VIII, we present our evaluation of cosmogenic and radiogenic \({}^{42}\)Ar production rates respectively. Expected \({}^{42}\)Ar/\({}^{40}\)Ar in the crust and \({}^{42}\)Ar activity in underground argon are discussed in Section IX.
## III Crust composition
We assume a standard continental crust composition with elemental abundances taken from [28] and implemented down to the 10 ppm level in the simulations. The pie chart in Figure 1 shows the distribution of the elemental abundances implemented in the modeled rock. Natural isotopic abundances are considered for all elements. The continental crust density is taken as 2.7 g/cm\({}^{3}\).
## IV Cosmogenic muon flux
Cosmic-ray muons can reach great depths in the Earth's crust. As muons propagate through the crustal material, the total flux falls but the mean energy of muons increases as low-energy muons get removed from the spectrum. The underground muon spectrum spans several orders of magnitude (extending up to thousands of GeV). Cosmic-ray muons can produce secondary particles and isotopes primarily through the following processes[29; 30]:
* Negative muon capture (subdominant beyond 100 meter-water-equivalent (mwe))
Figure 1: Fractional elemental abundances considered for the modeled crust [28].
* Direct muon spallation on nuclei
* Muon-induced electromagnetic and hadronic showers.
Cosmic-ray muons are generated for a given depth by sampling from the muon energy spectra for standard rock given by MUSIC (MUon SImulation Code) [31] and are propagated into our simulated crust. MUSIC is a package for muon transport through material. Muon flux attenuation in a rock primarily depends on the rock thickness, density, and composition.
For this study, the MUSIC muon flux and the muon energy spectra for a standard rock and for depths of 500 mwe and 3,000 mwe were used as the inputs. The choice of 500 mwe depth was done as it was computationally feasible to transport the surface muons (generated by using EXPACS code [32]) to that depth in our modeled crust, and compare the results with the MUSIC results for standard rock. A comparison of the muon flux and energy spectrum at \(\sim 500\) mwe obtained by transport of the EXPACS-generated muons through our modeled crust and the one obtained by propagating MUSIC-given muons at 500 mwe depth (for standard rock) is shown in Appendix B. The choice of 3,000 mwe depth was made so the results could be compared to data that are available at a similar depth.
Using the muon flux associated with standard rock introduces some systematics in the cosmogenic particle flux and the \({}^{42}\)Ar production rates. Major systematics, including that from muon flux normalization, are briefly discussed in Section XI.
## V Radiogenic activity
Alphas and neutrons produced by radioactive decays in the natural uranium and thorium decay chains can produce radioactive isotopes through interactions with elements in the Earth's crust. Looking at Table XI in Appendix A, \(\alpha\)-induced reactions have energy thresholds \(>\) 10 MeV. This is greater than the energy of \(\alpha\)'s originating from the uranium and thorium decay chains (maximum energy of 8.9 MeV from the \({}^{212}\)Po \(\alpha\) in the thorium decay chain) and therefore \({}^{42}\)Ar production from radiogenic \(\alpha\) is not considered. We have considered spontaneous fission and (\(\alpha\),n)-neutrons. The spontaneous fission neutron spectrum was obtained by following the parameterization in [15]. Fission neutrons from \({}^{235}\)U and \({}^{232}\)Th decay chains were not considered since their yield is several orders of magnitude smaller than that from the \({}^{238}\)U decay chain.
We use the NeuCBOT [33; 34] code to obtain (\(\alpha\),n) neutron yield and energy spectrum in the crust from uranium and thorium decay chains. NeuCBOT uses the SRIM-generated alpha stopping power data for elements [35; 36], TALYS (\(\alpha\),n) cross-section data, ENSDF [37]\(\alpha\) decay data, and natural isotopic abundance data. With NeuCBOT, we obtain total neutron yield and the energy spectrum of \((\alpha,n)\) neutrons from \({}^{238}\)U, \({}^{235}\)U, and \({}^{232}\)Th decay chains in the crust. The crustal composition provided as an input to NeuCBOT is the same as the one discussed in Section III, but only includes the most abundant elements that make up 99% (by mass fraction) of the crust.
The energy spectrum of the \((\alpha,n)\) neutrons and the spontaneous fission neutrons are shown in Figure 2. The differential neutron yield (on the y-axis) is expressed per decay of the parent isotope in the respective decay chains, assuming secular equilibrium between the \(\alpha\)-emitting isotopes within individual chains. Total radiogenic neutron yields are listed in Table 1. The radiogenic neutron yield of \((\alpha,n)\) neutrons is greater than that from spontaneous fission neutrons in the crust. This is expected because the Earth's crust has a high abundance of light elements, and energy thresholds for \((\alpha,n)\) reactions for light isotopes are relatively small. The largest neutron yields are from \(\alpha\) interactions on the light and relatively abundant isotopes \({}^{27}\)Al, \({}^{23}\)Na, \({}^{29}\)Si, \({}^{30}\)Si, \({}^{18}\)O, \({}^{26}\)Mg, and \({}^{25}\)Mg.
As a point of comparison, the upper continental crust composition discussed in Sramek et al.'s paper [15] is similar to the crust composition considered in this work. Taking the same uranium and thorium content as reported for the upper continental crust, we find the neutron production rate in our continental crust composition is 20% higher. This is reasonable as the neutron production rate is particularly sensitive to the assumed elemental abundances of the lighter elements, which are slightly different.
## VI Particle transport
The FLUKA particle physics package (INFN-version) [38; 39] is used to simulate particle interaction and transport, and to record particle fluence and isotope production. FLUKA is a multiparticle transport code that can simulate with high accuracy all relevant particle interactions from keV to GeV scale. The physics models in
Figure 2: Radiogenic neutron yield spectra for a continental crust composition.
FLUKA are fully integrated into the code, as discussed in [38; 40]. A user can incorporate material, geometry, and physics models by calling appropriate FLUKA cards in the input file. FLAIR, a graphical interface for FLUKA, was used in this work to build and edit input files, link user routines, and construct appropriate executables.
The FLUKA simulation settings adopted in this work are briefly described here. The simulations were performed with the PRECISIO(n) settings, which activate most of the physics processes (electromagnetic and hadronic processes and low-energy neutron interactions) relevant to our study. Photonuclear interactions were enabled using the PHOTONUC card, and detailed treatment of nuclear excitation was enabled through the EVAPORAT(ion) and COALESCE(nce) cards. Full transport of light and heavy ions is enabled with the IONTRANS card. Further, delayed reactions and decay products were enabled using RADDECAY. Neutron interactions at higher energies are handled by FLUKA nuclear models. The interaction and transport of \(<\) 20 MeV neutrons are handled by FLUKA's dedicated low-energy neutron libraries, which use evaluated neutron data files or measurement data if available. Low-energy neutrons in FLUKA are by default transported all the way down to eV energies and lower. The USRTRACK card recorded the differential fluence (as a function of energy) of the secondary particles produced by the cosmic-ray muon-induced interactions in the modeled crust. Neutron fluence was obtained by recording the neutrons crossing the layers in the simulated rock. The RESNUCLEi card was used to record the residual nuclei produced by the cosmic-ray muon-induced interactions in the simulated crust. A modified user routine usrrnc.f also recorded information about the nuclear reactions leading to isotope production.
## VII Cosmogenic Production
We calculate \({}^{42}\)Ar production rates by recording \({}^{42}\)Ar isotopes produced in the simulated volume of crust using FLUKA's nuclear models and low-energy neutron cross-section libraries. In addition, with FLUKA's residual isotope recording, we include all the cosmic-ray muon-induced interactions on all the relevant isotopes present in the crust, including direct muon spallation and heavy-ion collision.
As a cross-check, we also calculate the \({}^{42}\)Ar production rate by combining the TALYS \({}^{42}\)Ar production cross-sections ( calculated for selected nuclear reactions) and the cosmogenic secondary particle flux obtained by particle transport in FLUKA. For this estimate, only the nuclear reactions from cosmic-ray produced light secondary particles (neutrons, protons, deuterons, tritons, and alphas) are considered.
### Muon Propagation
Secondary particles resulting from muon interactions are mainly produced in particle showers, primarily in hadronic showers [29; 30]. Our simulations show that 3-6 m of crustal thickness is enough to ensure full development of hadronic showers without significant attenuation of the muon flux. The cosmic-ray muon-induced particle flux is approximately constant within that thickness. For 500 mwe runs, we allow muons to propagate through a larger thickness (15 m) to also account for neutrons from negative muon-capture and direct muon spallation, and then apply muon-flux correction. The crust is modeled as a cuboidal solid of 20 m x 20 m x 6 m (15 m) dimensions, with a density 2.7 g/cm\({}^{3}\), composed of a homogeneous material of elemental isotopes with natural isotopic abundance. The muon propagation and particle transport in the modelled crust is done using FLUKA simulations. MUSIC-given muon spectra and the muon flux for a standard rock composition were used as inputs. Muon energies were sampled from the energy distributions and propagated into the simulated crust. Only vertical muons were considered, and the total muon flux is assumed to be entirely that of vertical muons. Separate simulations were run for depths 500 mwe and 3,000 mwe using MUSIC-given muon spectra for respective depths. For each depth, separate simulations were run with positive and negative muons. The flux normalization is done assuming a positive-to-negative muon flux ratio of 1.3:1 [41].
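A minimal sketch of the sampling and normalization steps just described is given below. The tabulated spectral shape and sample size are purely hypothetical placeholders for the MUSIC-given spectra; only the inverse-CDF sampling and the 1.3:1 charge-ratio split reflect the procedure described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a MUSIC-given differential muon spectrum at depth.
energy_grid = np.logspace(0, 3, 200)          # muon kinetic energy [GeV]
dphi_dE = energy_grid ** -2.7                 # placeholder spectral shape

# Inverse-CDF sampling of primary muon energies.
cdf = np.cumsum(dphi_dE)
cdf /= cdf[-1]
n_primaries = 100_000
sampled_E = np.interp(rng.random(n_primaries), cdf, energy_grid)

# Split the sample into positive and negative muons with a 1.3:1 flux ratio.
charge_ratio = 1.3
frac_positive = charge_ratio / (1.0 + charge_ratio)
is_positive = rng.random(n_primaries) < frac_positive

print(f"mean sampled energy: {sampled_E.mean():.1f} GeV")
print(f"mu+ fraction: {is_positive.mean():.3f}")   # ~0.565
```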
### Cosmogenic Particle Flux
The fluence of the cosmic-ray muon-induced particles generated in the muon-induced shower was recorded using FLUKA. The USRTRACK card recorded the particle fluence (counts cm\({}^{-2}\) GeV\({}^{-1}\) per incident primary muon) as a function of particle kinetic energy. Using the muon flux at a given depth, the muon-induced particle flux was then obtained. Only the flux of neutrons, protons, deuterons, tritons, alphas, and \({}^{3}\)He were recorded, since the flux of these particles is highest in the muon-induced showers. USRTRACK cannot be used to estimate the low-energy (keV-scale) neutron fluence unless the energy binning is very fine. Instead, the neutron flux was obtained by recording neutrons on an event-by-event basis across multiple crustal layers using the modified FLUKA
\begin{table}
\begin{tabular}{|l|l|l|} \hline Source & Reaction & Neutron \\ & & yield/decay \\ \hline \({}^{232}\)Th chain & (\(\alpha\),n) & 5.279 \(\times\) 10\({}^{-6}\) \\ \hline \({}^{235}\)U chain & (\(\alpha\),n) & 4.819 \(\times\) 10\({}^{-6}\) \\ \hline \({}^{238}\)U chain & (\(\alpha\),n) & 3.524 \(\times\) 10\({}^{-6}\) \\ \hline \({}^{238}\)U & spont. fission & 1.13 \(\times\) 10\({}^{-6}\) \\ \hline \end{tabular}
\end{table}
Table 1: Total radiogenic neutron yield per equilibrium parent isotope decay from various decay chains in the crust
user routine mgdraw.f. With the input muon spectrum and flux at each depth, we obtained the neutron flux and energy spectrum in the Earth's crust. The total cosmogenic neutron flux at 3,000 mwe in continental crust is calculated to be \(2.02\times 10^{-9}\) n/cm\({}^{2}\)/sec, similar to the one estimated at Gran Sasso (\(2.72\times 10^{-9}\) n/cm\({}^{2}\)/sec) reported in [14]. However, it is important to note that the neutron flux strongly depends on the muon flux and spectrum as well as on the composition of the rock. The total cosmogenic neutron fluxes at depths of 500 mwe and 3,000 mwe in the continental crust are given in Table 2.
The cosmogenic particle flux for all particles considered (as a function of kinetic energy) is shown in Figure 3.
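The normalization from per-primary fluence to an absolute flux can be sketched as follows; the binned fluence values below are hypothetical placeholders, while the muon flux is the 3,000 mwe value from Table 2.

```python
import numpy as np

# Sketch of the USRTRACK normalization described above (hypothetical bins):
# the per-primary differential fluence times the muon flux at the chosen depth
# gives the absolute differential flux; summing over bins gives the total flux.
bin_edges = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])        # energy bin edges [GeV]
fluence_per_primary = np.array([2e-4, 5e-5, 8e-6, 4e-7])   # [n/cm^2/GeV per primary], hypothetical

muon_flux = 3.09e-8                                        # [muons/cm^2/s] at 3,000 mwe (Table 2)
bin_widths = np.diff(bin_edges)

differential_flux = fluence_per_primary * muon_flux        # [n/cm^2/GeV/s]
total_flux = np.sum(differential_flux * bin_widths)        # [n/cm^2/s]
print(f"total flux: {total_flux:.2e} particles/cm^2/s")
```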
### Cosmogenic Production Rates
With FLUKA simulations, \({}^{42}\)Ar isotopes were recorded in a selected volume of the modeled crust. The RESNUCLEi card was used to obtain the \({}^{42}\)Ar production yield (number of \({}^{42}\)Ar isotopes per cubic cm of crust per primary muon). Alternatively, we also investigated the nuclear reactions, at the individual event level, that resulted in \({}^{42}\)Ar production using a customized FLUKA user routine usrrnc.f. The production yield given by RESNUCLEi accounts for isotope production by all nuclear reactions, including those induced by low-energy (\(<20\) MeV in FLUKA) neutron interactions. However, the usrrnc.f output does not account for isotope production by low-energy neutron interactions, as information on low-energy neutron-induced residual isotope production is not available in FLUKA at the event level. The muon flux provides the basis for normalization to obtain the production rates from the yields obtained by both methods.
Cosmogenic \({}^{42}\)Ar production rates, obtained by recording the \({}^{42}\)Ar isotopes produced by cosmic-ray muon-induced showers in a volume of simulated crust, are listed in Table 3 for depths of 500 mwe and 3,000 mwe. In this case, the FLUKA nuclear models are at work and \({}^{42}\)Ar production from all cosmic-ray muon-induced interactions, including muon spallation and heavy-ion collision, are considered.
The production rates obtained from the RESNUCLEi output (given as the number of isotopes produced per cubic cm of the crust per primary muon), followed by normalization with respect to the MUSIC-given muon flux, are shown in Table 3. The \({}^{42}\)Ar production recorded as an output by the user routine
\begin{table}
\begin{tabular}{|l|l|l|} \hline Depth & Muon flux & Cosmogenic neutron flux \\ & (muons/cm\({}^{2}\)/s) & (neutrons/cm\({}^{2}\)/s) in the crust \\ \hline
500 mwe & \(2.07\times 10^{-5}\) & \(7.67\times 10^{-7}\) \\ \hline
3,000 mwe & \(3.09\times 10^{-8}\) & \(2.02\times 10^{-9}\) \\ \hline \end{tabular}
\end{table}
Table 2: Cosmogenic neutron flux at depths 500 mwe and 3,000 mwe in the crust. Muon flux in the standard rock for corresponding depths are taken from [31]
Figure 3: Cosmic-ray muon-induced particle flux at (a) depth 500 mwe and (b) 3,000 mwe for continental crust (obtained using FLUKA USRTRACK). Only particles with KE \(<10\) GeV were recorded. (c) Muon-induced neutron flux at 500 mwe and 3,000 mwe for the same composition (obtained from event-by-event tracking with FLUKA).
usrrnc.f agrees with it at the 90% level. The agreement is expected, as \({}^{42}\)Ar production by \(<\) 20 MeV neutrons, which is not recorded by usrrnc.f, is small given the high energy thresholds of direct neutron-induced \({}^{42}\)Ar production.
At a depth of 3,000 mwe, the cosmogenic \({}^{42}\)Ar production rate is \(1.8\times 10^{-3}\) atoms per ton of crust per year. The primary channels of \({}^{42}\)Ar production and the corresponding rates at 3,000 mwe, as obtained from TALYS and FLUKA, are shown in Table 9. The statistics from the 500 mwe and 3,000 mwe runs are also combined to produce Table 9, which lists the major reactions and their relative contribution to the total cosmogenic \({}^{42}\)Ar production rate. In the table, X represents products of the nuclear reaction other than \({}^{42}\)Ar. The identity of the heavy ions (represented as \(H^{*}\)) was not accessible through the usrrnc.f routine, and no attempt was made to identify them through other FLUKA user routines given the high computational cost of doing so.
From Table 9, one can observe that neutron- and heavy-ion-induced interactions on isotopes of calcium and iron are the dominant contributors to the \({}^{42}\)Ar production in the Earth's crust. The results in the table use all of the \({}^{42}\)Ar atoms produced in both the positive and negative muon runs at 3,000 mwe depth.
### TALYS Cross-check of Selected Nuclear Reactions
We also use the TALYS \({}^{42}\)Ar production cross-sections to estimate \({}^{42}\)Ar production from selected nuclear reactions [25; 26]. For a given particle projectile and target isotope, TALYS can give the residual nuclei production cross-section as a function of kinetic energy of the particle projectile.
The production rate of a particular channel is given by:
\[P_{i,j}=n_{i}\int\frac{d\phi_{j}(E)}{dE}\sigma_{i,j}(E)dE \tag{1}\]
where \(P_{i,j}\) is the contribution to the production rate from source \(j\) on target isotope \(i\), \(n_{i}\) is the number density of target isotope \(i\), \(\frac{d\phi_{j}(E)}{dE}\) is the differential (in kinetic energy) flux of particle \(j\), and \(\sigma_{i,j}\) is the cross-section for the \(i(j,X)^{42}\)Ar reaction. The total production rate is the sum of \(P_{i,j}\) over all sources and target isotopes.
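A minimal numerical sketch of Eq. (1) for a single channel is given below; the flux, cross-section, and number-density values are hypothetical placeholders standing in for the FLUKA-derived particle flux and the TALYS cross-section tables.

```python
import numpy as np

# Sketch of evaluating Eq. (1) for one channel i(j, X)42Ar (all inputs hypothetical).
energy = np.linspace(20.0, 1000.0, 200)              # projectile kinetic energy [MeV]
dphi_dE = 1e-11 * (energy / 100.0) ** -2.0            # differential flux [1/cm^2/s/MeV]
sigma = np.where(energy > 40.0, 0.5e-27, 0.0)         # cross-section [cm^2] above threshold
n_target = 1.0e21                                     # target number density [atoms/cm^3]

# Trapezoidal integration of flux * cross-section over energy.
integrand = dphi_dE * sigma
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(energy))
rate_per_cm3_s = n_target * integral                  # [42Ar atoms/cm^3/s]

# Convert to atoms per ton of crust per year (rock density 2.7 g/cm^3, Sec. III).
SECONDS_PER_YEAR = 3.156e7
cm3_per_ton = 1.0e6 / 2.7
rate_per_ton_yr = rate_per_cm3_s * cm3_per_ton * SECONDS_PER_YEAR
print(f"P_ij = {rate_per_ton_yr:.2e} 42Ar atoms / ton of crust / yr")
```

The total rate is then the sum of such terms over all particle projectiles and target isotopes, as stated above.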
All sources (particle projectiles) considered are described in Table 10 and all the target isotopes considered are presented in Table 11 in Appendix A.
Unlike the residual-nuclei recording with FLUKA, TALYS does not simulate the nuclear reactions induced by muons and heavy ions. Also, TALYS can only simulate nuclear reactions in the energy range of 1 keV to 1 GeV. Energetic cosmic-ray muon interactions can produce secondary particles of energies above 1 GeV. However, the \({}^{42}\)Ar production rate from nuclear reactions induced by particle projectiles with energies \(>\) 1 GeV is expected to be small as the cosmogenic particle flux falls quickly at high energies.
The description of the selected reactions and the full list of the reactions considered are given in Table 12 in Appendix A. For all those reactions, the integral in Eq. 1 was evaluated.
The TALYS-based cosmogenic production rate of \({}^{42}\)Ar was calculated for both the 500 mwe and 3,000 mwe depths in the modeled crust. The summed production of \({}^{42}\)Ar from the selected neutron-, proton-, deuteron-, and triton-induced reactions in the crust at 500 mwe and 3,000 mwe is found to be \(7.97\times 10^{-2}\) and \(2.53\times 10^{-4}\) atoms per ton
\begin{table}
\begin{tabular}{c|c} Origin & Particle Flux \\ \hline \hline Natural Radioactivity & Neutron \\ & Alpha \\ \hline & Neutron \\ & Proton \\ Cosmic rays & Deuteron \\ & Triton \\ & Alpha \\ \hline \end{tabular}
\end{table}
Table 5: The particle projectiles considered for the TALYS-based estimate of \({}^{42}\)Ar production. A list of all the reactions considered in this case is given in Appendix A.
\begin{table}
\begin{tabular}{l|c|c} \hline Depth & Muon flux & Cosmogenic \({}^{42}\)Ar produced \\ (mwe) & (muons/cm\({}^{2}\)/s) & (atoms/ton of crust/yr) \\ \hline
500 & 2.07\(\times\) 10\({}^{-5}\) & 0.73 \\ \hline
3,000 & 3.09 \(\times\) 10\({}^{-8}\) & 1.8 \(\times\) 10\({}^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 3: Cosmogenic \({}^{42}\)Ar production rates in the crust at depths of 500 mwe and 3,000 mwe. The rates are obtained by recording the \({}^{42}\)Ar isotopes in the modeled crust using FLUKA simulations and normalizing the production yield with respect to the muon flux.
\begin{table}
\begin{tabular}{|l|c|} \hline Nuclear reactions & Contribution to \({}^{42}\)Ar production \\ \hline \({}^{44}\)Ca(n,3He)\({}^{42}\)Ar & 9 \% \\ \({}^{44}\)Ca(H\({}^{*}\),X)\({}^{42}\)Ar & 12 \% \\ \({}^{44}\)Ca(\(\gamma\),X)\({}^{42}\)Ar & 6 \% \\ \({}^{48}\)Ca(H\({}^{*}\),X)\({}^{42}\)Ar & 9 \% \\ \hline \({}^{56}\)Fe(H\({}^{*}\),X)\({}^{42}\)Ar & 19 \% \\ \({}^{56}\)Fe(\(\pi^{-}\),X)\({}^{42}\)Ar & 6 \% \\ \hline \end{tabular}
\end{table}
Table 4: Major reactions that produce \({}^{42}\)Ar in the crust. Contribution to the \({}^{42}\)Ar production from various reactions in the modeled crust, based on the 3,000 mwe simulation runs. The results are based on 32 \({}^{42}\)Ar atoms produced for a 1.93 \(\times\) 10\({}^{4}\) ton-year exposure. As reactions from low-energy (\(<\) 20 MeV) neutrons were not available on an event-by-event basis, those reactions and their contribution (\(\approx\) 10 % of the total) are not included in the table. In the table, \(H^{*}\) represents a heavy ion and \(X\) represents products other than \({}^{42}\)Ar.
of crust per year, respectively. Figure 4 shows the nuclear reactions that TALYS identifies as the major contributors to the \({}^{42}\)Ar production in the Earth's crust: Figure 4a shows the cross-sections for those reactions, and Figure 4b gives the list of the reactions and their contributions to the \({}^{42}\)Ar production rate at 3,000 mwe. Looking at the cross-sections in Figure 4a and the list of major reactions in Figure 4b, it can be seen that it is primarily the high abundance of calcium, titanium, and iron in the crust that makes nuclear reactions involving isotopes of these elements dominate. All other reactions not included in Figure 4 contribute less than 1% to the total \({}^{42}\)Ar production rate.
The TALYS-based estimates of the cosmogenic \({}^{42}\)Ar production rates are an order of magnitude lower than the ones obtained by FLUKA's residual isotope recording. A primary reason is that, unlike the TALYS-based estimate, where only a limited set of nuclear reactions was considered, the full-fledged simulation with FLUKA also includes the contribution from additional nuclear interactions, including heavy-ion collisions and direct muon spallation. Looking selectively at the n-, p-, \(\alpha\)-, t-, and d-induced nuclear reactions (shown in Appendix A, Table 9), the production rates in FLUKA's residual nuclei output are within the same order of magnitude as the TALYS-based production rates (FLUKA's estimate from those reactions is \(\sim\) 70% higher), which is not surprising given the differences in the nuclear models used by those tools.
## VIII Radiogenic production
Radiogenic production of \({}^{42}\)Ar in the Earth's crust is expected to be significantly suppressed with respect to production in the atmosphere. Figure 5 shows the isotopes neighboring \({}^{42}\)Ar with respect to mass number. These neighboring isotopes are either short-lived and so are not abundant enough, or are stable so the reactions producing \({}^{42}\)Ar from those isotopes have energy thresholds higher than the energies available to neutrons and \(\alpha\)'s of radiogenic origin.
Radiogenic production of \({}^{42}\)Ar in the Earth's crust, however small it may be, could occur through two-step reactions, with intermediate production of the isotopes \({}^{41}\)Ar [\(\tau_{1/2}\) = 109 min], \({}^{42}\)K [\(\tau_{1/2}\) = 12 hr], and \({}^{45}\)Ca [\(\tau_{1/2}\) = 163 d]. Since these isotopes are radioactive and short-lived, and hence trace in concentration, the data on their abundance in the Earth's crust are scarce. We first estimate the cosmogenic and radiogenic production of \({}^{41}\)Ar, \({}^{42}\)K, and \({}^{45}\)Ca, using the equilibrium concentration of these isotopes in Eq. (1), and then estimate the radiogenic \({}^{42}\)Ar production from neutron-induced reactions on those isotopes.
### Radiogenic Neutron Flux
We obtain the radiogenic neutron flux (shown in Figure 6) by propagating neutrons, with energies drawn from the neutron-yield distributions, through our modeled crust. The neutrons are generated homogeneously and isotropically across a spherical volume of simulated crust of radius 200 m and density 2.7 g/cm\({}^{3}\). The elemental composition of the simulated crust is the same as the one discussed in Section III. Separate FLUKA simulations were run by drawing the neutron energies from the neutron yield distributions shown in Figure 2. The FLUKA user routine mgdraw.f tracked the neutrons on an event-by-event
Figure 4: Major reactions that produce \({}^{42}\)Ar in the crust, and the corresponding production rates obtained using Eq. (1) for selected reactions (listed in Table 9 in Appendix A). The TALYS-given production rate at 3,000 mwe is \(2.53\times 10^{-4}\) atoms per ton of crust per year (Note: The FLUKA-based estimate that includes all reactions is an order of magnitude higher). The sum of contributions from all the other considered reactions (not included in this figure) is \(\sim\) 1%. (a) the TALYS-given \({}^{42}\)Ar production cross-sections for those reactions. (b) The production rates from those reactions in the crust at 3,000 mwe. Only n, p, \(\alpha\), t, and d induced reactions are included here.
basis across multiple rock surfaces. On the surfaces with radii \(>50\) m, the neutron fluence was constant (within statistical uncertainties). Neutron counts and energies were recorded as neutrons exited one of those large rock surfaces. The FLUKA-given neutron fluence (as a function of neutron kinetic energy) was used to obtain the radiogenic neutron flux spectra by assuming uranium (U) and thorium (Th) concentrations of \(2.7\times 10^{-6}\) g/g and \(1.05\times 10^{-5}\) g/g, respectively, taken from Ref. [15] as reported for the upper continental crust composition.
The neutron flux spectra in the crust from spontaneous fission and \((\alpha,n)\) neutrons are shown in the Figure 6. For comparison, the cosmic-ray muon-induced neutron flux spectrum at 500 mwe is also plotted in the same figure. Reported in Table 6 is the total neutron flux in the crust from each of the considered radiogenic neutron sources. The results show that the cosmic-ray muon-induced flux, even at 500 mwe depth in the crust, is over two orders of magnitude smaller than the assumed radiogenic neutron flux. However, it should be noted that the radiogenic neutron flux depends strongly on the uranium and thorium content in the rock, as well as on the composition of the rock.
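To make the normalization explicit, the sketch below combines the per-decay yields of Table 1 with the uranium and thorium concentrations quoted above to give parent decay rates and \((\alpha,n)\) neutron production per ton of rock. The half-lives and isotopic fractions are standard nuclear data rather than values from this paper, and the printed numbers are illustrative only (secular equilibrium within each chain is assumed, as in the text).

```python
import math

N_A = 6.022e23

# Crustal concentrations used above (g per g of rock, from Ref. [15]).
c_U, c_Th = 2.7e-6, 1.05e-5
# Well-known nuclear data (not taken from this paper).
f_U238, f_U235 = 0.9927, 0.0072                                  # isotopic fractions of natural U
t_half = {"U238": 4.468e9, "U235": 7.04e8, "Th232": 1.405e10}    # [yr]
molar = {"U238": 238.05, "U235": 235.04, "Th232": 232.04}        # [g/mol]
grams_per_ton = {"U238": c_U * f_U238 * 1e6,
                 "U235": c_U * f_U235 * 1e6,
                 "Th232": c_Th * 1e6}

# Per-decay (alpha,n) neutron yields from Table 1.
yield_alpha_n = {"U238": 3.524e-6, "U235": 4.819e-6, "Th232": 5.279e-6}

for iso in ("U238", "U235", "Th232"):
    atoms = grams_per_ton[iso] / molar[iso] * N_A
    decays_per_yr = atoms * math.log(2) / t_half[iso]            # parent decays / ton / yr
    neutrons = decays_per_yr * yield_alpha_n[iso]
    print(f"{iso}: {decays_per_yr:.2e} decays/ton/yr -> {neutrons:.2e} (alpha,n) n/ton/yr")
```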
### Radiogenic Production Rates
Using Eq. (1) for selected (n,\(\gamma\)), (n,p) and (n,\(\alpha\)) reactions, we estimate the cosmogenic and radiogenic production rates of isotopes \({}^{41}\)Ar, \({}^{42}\)K, and \({}^{45}\)Ca in the Earth's crust. The TALYS-given cross-sections for the reactions considered are shown in Figure 7a. \(E_{th}\) represents the energy thresholds for the reactions. The production rates of the isotopes \({}^{41}\)Ar, \({}^{42}\)K, and \({}^{45}\)Ca are shown in the Table 7. (n,\(\gamma\)) reactions are found to be the dominant production channels for those isotopes.
Radiogenic \({}^{42}\)Ar production is calculated using Eq. (1)
\begin{table}
\begin{tabular}{|l|c|} \hline Neutron Source & Total neutron flux \\ & (neutrons/cm\({}^{2}\)/sec) \\ \hline
\({}^{238}\)U spont. fission & \(3.21\times 10^{-5}\) \\ \hline
\({}^{238}\)U (\(\alpha\),n) & \(9.92\times 10^{-5}\) \\ \hline
\({}^{235}\)U (\(\alpha\),n) & \(4.56\times 10^{-8}\) \\ \hline
\({}^{232}\)Th (\(\alpha\),n) & \(1.91\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 6: Radiogenic neutron flux from spontaneous fission and \((\alpha,n)\) neutrons from uranium and thorium decay chains in the crust.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Isotopes & Isotope & Cosmogenic & Radiogenic & Dominant \\ & half-life & production & production & production \\ & & rate & rate & channel \\ & & (/ton/yr) & (/ton/yr) & \\ & & at 500 mwe & & \\ \hline \({}^{41}\)Ar & 109 min & 39.6 & \(1.82\times 10^{4}\) & \({}^{40}\)Ar(n,\(\gamma\))\({}^{41}\)Ar \\ \hline \({}^{42}\)K & 12 hr & \(4.89\times 10^{4}\) & \(1.56\times 10^{7}\) & \({}^{41}\)K(n,\(\gamma\))\({}^{42}\)K \\ \hline \({}^{45}\)Ca & 163 d & \(4.62\times 10^{4}\) & \(1.47\times 10^{7}\) & \({}^{44}\)Ca(n,\(\gamma\))\({}^{45}\)Ca \\ \hline \end{tabular}
\end{table}
Table 7: \({}^{41}\)Ar, \({}^{42}\)K, \({}^{45}\)Ca production in the Earth’s crust obtained using TALYS for selected nuclear reactions.
Figure 5: Isotopes directly neighboring \({}^{42}\)Ar in mass number table. The short-lived isotopes (white) directly neighboring \({}^{42}\)Ar (light blue), one long-lived (light green) and several stable isotopes (dark grey) are shown in the table.
Figure 6: Simulated neutron flux and energy spectra from spontaneous fission neutron, (\(\alpha\),n) neutrons, and muon-induced neutrons (at 500 mwe) in the Earth’s crust.
for neutron-induced reactions on the isotopes \({}^{41}\)Ar, \({}^{42}\)K, and \({}^{45}\)Ca. The equilibrium concentrations of these isotopes in the crust were calculated from their production rates (shown in Table 7) and used as input to Eq. (1) to estimate the radiogenic \({}^{42}\)Ar production from nuclear reactions on these isotopes. The TALYS-given cross-sections for the relevant reactions are given in Figure 7b.
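The secular-equilibrium step can be sketched as follows: an intermediate isotope produced at a constant rate \(P\) builds up to an equilibrium inventory \(N_{\rm eq}=P\,t_{1/2}/\ln 2\). The snippet below applies this to the radiogenic production rates of Table 7 and the half-lives quoted in the text; note that it reports inventories per ton of crust, whereas Table 8 quotes concentrations per target atom, so the printed numbers are illustrative rather than a reproduction of the table.

```python
import math

def equilibrium_atoms(production_per_ton_yr, half_life_yr):
    """Equilibrium inventory per ton for a constant production rate (N = P * t_half / ln 2)."""
    return production_per_ton_yr * half_life_yr / math.log(2)

# Radiogenic production rates from Table 7 and half-lives quoted in the text.
intermediates = {
    "41Ar": (1.82e4, 109.0 / (60.0 * 24.0 * 365.25)),   # 109 min, in years
    "42K":  (1.56e7, 12.0 / (24.0 * 365.25)),           # 12 h, in years
    "45Ca": (1.47e7, 163.0 / 365.25),                   # 163 d, in years
}

for iso, (rate, t_half) in intermediates.items():
    n_eq = equilibrium_atoms(rate, t_half)
    print(f"{iso}: ~{n_eq:.1e} atoms per ton of crust at equilibrium")
```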
Radiogenic production rates of \({}^{42}\)Ar are given in Table 8. The production of \({}^{42}\)Ar for the crustal composition is estimated to be \(4.79\times 10^{-18}\) atoms per ton per year. The rate is many orders of magnitude smaller than the cosmogenic production rates obtained for 500 mwe and 3,000 mwe in the Earth's crust. From the table, one can observe that the \({}^{42}\)Ar production rate is highest, by several orders of magnitude, for the reaction \({}^{41}\)Ar(n,\(\gamma\))\({}^{42}\)Ar. The radiogenic production rate, however, is very sensitive to the concentration of \({}^{40}\)Ar assumed in the estimate of \({}^{41}\)Ar production. The assumed concentration of argon is 3 ppm [28]. The results show that \({}^{41}\)Ar, \({}^{42}\)K, and \({}^{45}\)Ca are not produced in sufficient quantities to generate any significant level of \({}^{42}\)Ar radiogenically. A similar calculation for \({}^{42}\)Ar production from neutron-induced interactions on \({}^{43}\)K [\(\tau_{1/2}=22\) hr] showed the \({}^{42}\)Ar production rate from that channel to be over 10 orders of magnitude smaller.
As mentioned before, one-step radiogenic neutron-induced reactions are expected to have extremely low production rates because the radiogenic neutron energies are typically well below the reaction thresholds required to produce \({}^{42}\)Ar. Though not fully explored in this work, it may be possible to extract an upper limit on the \({}^{42}\)Ar production from direct one-step reactions by extrapolating the tail of the radiogenic neutron flux spectrum. One relevant reaction would be \({}^{43}\)Ca(n,p)\({}^{42}\)Ar, for which TALYS gives non-zero cross-sections above 15 MeV. With the sampling of the radiogenic neutron energies in our simulations, at 10 MeV, the radiogenic neutron flux is \(1.6\times 10^{-9}\) neutrons/MeV/cm\({}^{2}\)/sec. However, the radiogenic neutron yield falls rapidly beyond 10 MeV (the (\(\alpha\),n) neutron yield output of NeuCBOT shows the radiogenic neutron yield drops by 30 orders of magnitude between 10 MeV and 15 MeV).
## IX \({}^{42}\)Ar activity in underground argon
In the previous sections we have estimated the underground production rate of \({}^{42}\)Ar per unit mass of crustal rock. For dark matter and neutrino experiments that aim to utilize underground argon, the critical value of interest
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Nuclear & Reaction & Equilibrium & Production \\ reaction & threshold & concentration & rate \\ & (MeV) & of \({}^{42}\)Ar & (/ton/yr) \\ & & (atoms/target & \\ & & atom) & \\ \hline \({}^{41}\)Ar(n,\(\gamma\))\({}^{42}\)Ar & 0 & 4.13 \(\times\) 10\({}^{-17}\) & \(4.75\times 10^{-18}\) \\ \hline \({}^{42}\)K(n,p)\({}^{42}\)Ar & 0 & \(5.00\times 10^{-23}\) & \(3.35\times 10^{-20}\) \\ \hline \({}^{43}\)Ca(n,\(\alpha\))\({}^{42}\)Ar & 0.7 & 1.84 \(\times 10^{-26}\) & 3.68 \(\times 10^{-21}\) \\ \hline \end{tabular}
\end{table}
Table 8: Radiogenic \({}^{42}\)Ar production in the crust. Only two-step reactions passing through the intermediate radioactive isotopes \({}^{41}\)Ar, \({}^{42}\)K, and \({}^{45}\)Ca are considered, as direct production may not be possible for radiogenic particles given the high energy thresholds of the reactions.
Figure 7: (a) \({}^{41}\)Ar, \({}^{42}\)K, and \({}^{45}\)Ca production cross-sections for (n,\(\gamma\)), (n,p) and (n,\(\alpha\)) reactions. (b) TALYS-given \({}^{42}\)Ar production cross-sections for the neutron-induced reactions listed in Table 8.
is the concentration of \({}^{42}\)Ar in argon extracted from deep underground sources such as the one used by DarkSide-50. Estimating the \({}^{42}\)Ar content in underground argon based on the production rate in rock is difficult as it involves estimating the diffusion of argon out of the rock, which can depend on many geological factors and is beyond the scope of this work. However, we expect \({}^{42}\)Ar and \({}^{39}\)Ar to have very similar diffusion constants and so we can use the ratio of the measured \({}^{39}\)Ar activity in underground argon to the calculated underground production rate of \({}^{39}\)Ar by Sramek et al. [15] to estimate the corresponding activity of \({}^{42}\)Ar in underground argon.
Below a few hundred meters-water-equivalent depth in the crust, \({}^{39}\)Ar production is primarily from \({}^{39}\)K(n,p)\({}^{39}\)Ar reactions induced by radiogenic neutrons [14; 15]. Using very similar calculations to the ones presented in this paper, Sramek et al. [15] estimated the \({}^{39}\)Ar production rate in K-Th-U-rich upper continental crust to be \(2.9\times 10^{4}\)\({}^{39}\)Ar atoms per ton of crust per year. We therefore have the following set of measurements and estimates for the production of argon radioisotopes in underground argon:
* Estimated \({}^{39}\)Ar production rates in K-Th-U-rich upper continental crust [15]: \(2.9\times 10^{4}\)\({}^{39}\)Ar atoms per ton of crust per year.
* Measured \({}^{39}\)Ar activity in underground argon [7]: \(7.3\times 10^{-4}\) Bq per kg of argon which corresponds to a steady-state production rate of \(2.3\times 10^{7}\)\({}^{39}\)Ar atoms per ton of argon per year.
* Estimated \({}^{42}\)Ar production rate in the Earth's crust at 3,000 mwe depth (from this work): \(1.8\times 10^{-3}\)\({}^{42}\)Ar atoms per ton of crust per year.
If one assumes the same ratio of the concentration of \({}^{42}\)Ar in the crustal rock to that in underground argon as in the above measurements and estimates for \({}^{39}\)Ar, then one would predict 1.4 \({}^{42}\)Ar atoms per ton of argon per year at 3,000 mwe. Given this production rate, at equilibrium the \({}^{42}\)Ar specific activity in argon is estimated to be 1.4 decays per ton of argon per year. A summary of the \({}^{42}\)Ar and \({}^{39}\)Ar production rates in the crust and activities in argon is shown in Table 10.
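The scaling argument of this section amounts to one line of arithmetic, reproduced below as a sketch; the inputs are the values listed above, and the conversion of the measured \({}^{39}\)Ar activity to a steady-state production rate uses only the length of a year.

```python
# Worked version of the scaling above: assume the rock-to-gas transfer of 42Ar
# matches that of 39Ar, so 42Ar(gas) = 42Ar(rock) * 39Ar(gas) / 39Ar(rock).
SECONDS_PER_YEAR = 3.156e7

ar39_rock = 2.9e4              # 39Ar atoms / ton of crust / yr (Sramek et al. [15])
ar39_gas_bq_per_kg = 7.3e-4    # measured 39Ar activity in underground argon [7]
ar42_rock = 1.8e-3             # 42Ar atoms / ton of crust / yr (3,000 mwe, this work)

# Steady-state 39Ar production implied by the measured activity, per ton of argon.
ar39_gas = ar39_gas_bq_per_kg * SECONDS_PER_YEAR * 1000.0   # decays (= atoms) / ton / yr

ar42_gas = ar42_rock * ar39_gas / ar39_rock
print(f"39Ar in underground argon: {ar39_gas:.1e} atoms/ton/yr")   # ~2.3e7
print(f"42Ar in underground argon: {ar42_gas:.2f} atoms/ton/yr")   # ~1.4
```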
The above estimate is expected to be conservative for a number of reasons:
* The percentage of \({}^{42}\)Ar atoms diffusing out of the rock and getting collected in the underground gas fields is likely smaller, given the shorter half-life of \({}^{42}\)Ar compared to \({}^{39}\)Ar (factor of 8).
* The Doe Canyon gas wells (9,000 feet, 2.7 km depth) [42] from where DarkSide extracted the underground argon are much deeper than 3,000 mwe. Argon in the gas wells likely originated in the mantle where the \({}^{42}\)Ar production is even more suppressed relative to \({}^{39}\)Ar (due to the primary \({}^{42}\)Ar-production mechanism being cosmogenic).
* The true \({}^{39}\)Ar activity in the underground argon could be significantly smaller, with the measured value likely due to an air incursion [27].
## X \({}^{42}\)Ar in isolated argon-containing gas pocket
In addition to considering production of \({}^{42}\)Ar in crustal rock, we also considered production on argon gas that may be trapped deep underground. Assuming the particle flux in the isolated argon gas pocket is the same as in the crust, we calculate the equilibrium ratio of \({}^{42}\)Ar/\({}^{40}\)Ar using Eq. (1) for selected reactions. The relevant reactions for \({}^{42}\)Ar production in argon are i) \({}^{40}\)Ar(n,\(\gamma\))\({}^{41}\)Ar, \({}^{41}\)Ar(n,\(\gamma\))\({}^{42}\)Ar; ii) \({}^{40}\)Ar(\(\alpha\),2p)\({}^{42}\)Ar; and iii) \({}^{40}\)Ar(t,p)\({}^{42}\)Ar. The TALYS cross-sections for the reactions are shown in the Figure 8.
In argon, \({}^{40}\)Ar(t,p)\({}^{42}\)Ar is found to be the dominant production channel, with the contribution from \({}^{40}\)Ar(\(\alpha\),2p)\({}^{42}\)Ar an order of magnitude smaller. This may not appear surprising given the smaller threshold of the \({}^{40}\)Ar(t,p)\({}^{42}\)Ar reaction and a cosmic-ray muon-induced triton flux comparable to the \(>\) 10 MeV \(\alpha\) flux. Radiogenic production through \({}^{41}\)Ar(n,\(\gamma\))\({}^{42}\)Ar is insignificant compared to the other channels mentioned above. For large gas pockets or gas fields, which may even contain other gas species, particle (particularly triton and alpha) production has to be simulated in the rock as well as in the modeled gas fields.
We obtain an \({}^{42}\)Ar/\({}^{40}\)Ar ratio of \(4.64\times 10^{-28}\) (\(8.38\times 10^{-32}\)) at 500 mwe (3,000 mwe) depth in the crust. This is equivalent to 0.147 (1.09 \(\times\) 10\({}^{-5}\)) \({}^{42}\)Ar atoms per ton of argon at 500 mwe (3,000 mwe) depth. This estimated rate of \({}^{42}\)Ar production from \({}^{40}\)Ar is roughly three orders of magnitude smaller than the estimate from production
Figure 8: \({}^{42}\)Ar production cross-sections as a function of kinetic energy.
on crustal rock given in Section IX. However, the above rate does not include the contribution from argon produced in the rock, which can potentially diffuse out and be collected in the argon reservoir.
## XI Uncertainties
The total production rates reported in the paper have a statistical uncertainty of roughly 10%, with systematic uncertainties expected to dominate. Our results show that \({}^{42}\)Ar production is primarily cosmogenic. The \({}^{42}\)Ar production rates are expected to be correct within a factor of 3 to 5. The major uncertainty is expected to come from systematics associated with hadronic processes. Use of the MUSIC-code-generated muon spectra and flux for a standard rock in place of our crustal composition is expected to introduce a systematic uncertainty of \(\sim\) 60-100%, based on the dependence of the total muon flux on rock composition reported in [43] and our comparison of the muon flux for a standard rock versus the continental crust shown in Figure 9. Since all muons were propagated vertically, we have underestimated the particle and isotope production yields. Assuming the cosmic-ray muon-induced secondary particle yield and the isotope production yield depend on the mean muon energy through the simple parameterisation \(\bar{E}_{\mu}^{0.7}\) reported in [44], the yields are underestimated by \(<20\%\). Since particle interactions on isotopes of calcium, mainly \({}^{44}\)Ca and \({}^{48}\)Ca, contribute significantly to \({}^{42}\)Ar production, any significant presence of a calcium-rich mineral like limestone in the assumed rock composition can change the production rates by a factor of a few.
The \({}^{42}\)Ar production rates at a given depth can, to first order, be calculated by scaling the rates obtained in this work in proportion to \(\mu_{flux}\times\bar{E}_{\mu}^{0.7}\), with \(\mu_{flux}\) and \(\bar{E}_{\mu}\) taken for the different depths for a standard rock [31]. However, at larger depths, the cosmic-ray muon energy losses and muon flux attenuation become increasingly composition dependent, the systematics associated with the muon flux get larger, and the production rates obtained with this scaling are expected to be correct only to an order of magnitude.
Our study suggests that the \({}^{42}\)Ar production rate decreases with depth, as its production is primarily cosmogenic. However, at very large crustal/mantle depths, it is likely that the production of \({}^{42}\)Ar is primarily due to the interaction of muons that are generated by neutrinos [45].
## XII Results and Discussion
We have estimated the \({}^{42}\)Ar production rates at 500 mwe and 3,000 mwe depth in the Earth's crust. We find that radiogenic production is insignificant and that cosmogenic production is expected to dominate up to large crustal depths. At a depth of 3,000 mwe, the expected \({}^{42}\)Ar production rate is 1.8 \(\times\) 10\({}^{-3}\) atoms per ton of the crust per year, seven orders of magnitude smaller than the \({}^{39}\)Ar production rate calculated in [14; 15].
The activity of \({}^{42}\)Ar in underground argon from the Doe Canyon wells was estimated using the \({}^{39}\)Ar activity measured by DarkSide and calculations of the \({}^{39}\)Ar production in crustal rock, under the assumption that the diffusion of \({}^{42}\)Ar out of the rock is similar to that of \({}^{39}\)Ar. The \({}^{42}\)Ar activity in the extracted underground argon is estimated to be 1.4 decays per ton of argon per year. For reasons discussed in Section IX, this is expected to be an upper limit.
Based on the estimates reported in this paper, argon extracted from underground sources is expected to be significantly depleted of \({}^{42}\)Ar, with a larger
\begin{table}
\begin{tabular}{|l|c|c|} \hline Isotope & Production rate in crust (atoms/ton (rock)/yr) & Specific radioactivity in argon (decays/ton (argon)/yr) \\ \hline \({}^{39}\)Ar & \(2.9\times 10^{4}\) [15] & \(2.3\times 10^{7}\) [7] \\ \hline \({}^{42}\)Ar & \(1.8\times 10^{-3}\) & 1.4 \\ \hline \end{tabular}
\end{table}
Table 10: Estimated production rates of \({}^{42}\)Ar at 3,000 mwe compared to \({}^{39}\)Ar. The \({}^{42}\)Ar activity in gas from atoms produced in the rock is estimated by scaling the measured \({}^{39}\)Ar activity in gas by the ratio of calculated production rates in rock. See discussion in Section IX.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Reactions & TALYS-based production rate & FLUKA residual-nuclei-based production rate & Major \({}^{42}\)Ar production channels \\ \hline n-, p-, \(\alpha\)-, t-, and d-induced reactions & \(2.5\times 10^{-4}\) & \(4.2\times 10^{-4}\) & \({}^{44}\)Ca(n,\({}^{3}\)He)\({}^{42}\)Ar \\ \hline Heavy-ion collisions & – & \(8.3\times 10^{-4}\) & \({}^{56}\)Fe(H\({}^{*}\),X)\({}^{42}\)Ar, \({}^{44}\)Ca(H\({}^{*}\),X)\({}^{42}\)Ar \\ \hline Photon-induced reactions & – & \(1.6\times 10^{-4}\) & \({}^{48}\)Ca(\(\gamma\),X)\({}^{42}\)Ar \\ \hline Pion-induced reactions & – & \(1.6\times 10^{-4}\) & \({}^{56}\)Fe(\(\pi^{-}\),X)\({}^{42}\)Ar \\ \hline Other cosmic-ray muon-induced reactions & – & \(2.0\times 10^{-4}\) & \({}^{44}\)Ca(\(\mu^{-}\),2p)\({}^{42}\)Ar, \({}^{42}\)Cl \(\beta^{-}\) decay \\ \hline Radiogenic reactions & \(4.8\times 10^{-18}\) & – & \({}^{41}\)Ar(n,\(\gamma\))\({}^{42}\)Ar \\ \hline All reactions (sum) & \(2.5\times 10^{-4}\) & \(1.8\times 10^{-3}\) & – \\ \hline \end{tabular}
\end{table}
Table 11: Breakdown of the \({}^{42}\)Ar production rate in the crust at 3,000 mwe by reaction type (atoms per ton of crust per year), comparing the TALYS-based estimates for selected reactions with the FLUKA residual-nuclei output, together with the major contributing channels.
depletion factor than \({}^{39}\)Ar. Use of underground argon will greatly enhance the physics capabilities of kton-scale argon-based neutrino measurements like the ones proposed in [46; 47; 21]. The LEGEND experiment [10] could also greatly benefit from the use of underground argon. With atmospheric argon, the background rate from \({}^{42}\)Ar/\({}^{42}\)K (before analysis cuts) is expected to be 0.72 cts/kg/yr/keV [10]. With the use of underground argon, as per our estimate, the \({}^{42}\)Ar/\({}^{42}\)K background suppression could be a factor of \(10^{7}\) or higher. More concerning for large-scale argon-based experiments could be a possible infiltration of atmospheric argon and/or cosmogenic production of \({}^{42}\)Ar in the extracted underground argon during storage, transport, or extended cosmic-ray exposure above ground [48].
## XIII Acknowledgements
We would like to extend our thanks to Prof. Vitaly A. Kudryavtsev for the insightful discussions we had on the MUSIC code and muon propagation. Additionally, we extend our appreciation to Prof. Shawn Westerdale for his guidance on the NeuCBOT code and for engaging discussions regarding radiogenic (\(\alpha,n\)) neutron yields. This work was funded by Laboratory Directed Research and Development (LDRD) at Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle Memorial Institute for the U.S. Department of Energy (DOE) under Contract No. DE-AC05-76RL01830. Parts of this study at PNNL were supported by the DOE, USA Office of High Energy Physics Advanced Technology R&D subprogram.
|
2309.11606 | Decycling cubic graphs | A set of vertices of a graph $G$ is said to be decycling if its removal
leaves an acyclic subgraph. The size of a smallest decycling set is the
decycling number of $G$. Generally, at least $\lceil(n+2)/4\rceil$ vertices
have to be removed in order to decycle a cubic graph on $n$ vertices. In 1979,
Payan and Sakarovitch proved that the decycling number of a cyclically
$4$-edge-connected cubic graph of order $n$ equals $\lceil (n+2)/4\rceil$. In
addition, they characterised the structure of minimum decycling sets and their
complements. If $n\equiv 2\pmod4$, then $G$ has a decycling set which is
independent and its complement induces a tree. If $n\equiv 0\pmod4$, then one
of two possibilities occurs: either $G$ has an independent decycling set whose
complement induces a forest of two trees, or the decycling set is
near-independent (which means that it induces a single edge) and its complement
induces a tree. In this paper we strengthen the result of Payan and Sakarovitch
by proving that the latter possibility (a near-independent set and a tree) can
always be guaranteed. Moreover, we relax the assumption of cyclic
$4$-edge-connectivity to a significantly weaker condition expressed through the
canonical decomposition of 3-connected cubic graphs into cyclically
$4$-edge-connected ones. Our methods substantially use a surprising and
seemingly distant relationship between the decycling number and the maximum
genus of a cubic graph. | Roman Nedela, Michaela Seifrtová, Martin Škoviera | 2023-09-20T19:40:49Z | http://arxiv.org/abs/2309.11606v1 | # Decycling cubic graphs1
###### Abstract
A set of vertices of a graph \(G\) is said to be decycling if its removal leaves an acyclic subgraph. The size of a smallest decycling set is the decycling number of \(G\). Generally, at least \(\lceil(n+2)/4\rceil\) vertices have to be removed in order to decycle a cubic graph on \(n\) vertices. In 1979, Payan and Sakarovitch proved that the decycling number of a cyclically 4-edge-connected cubic graph of order \(n\) equals \(\lceil(n+2)/4\rceil\). In addition, they characterised the structure of minimum decycling sets and their complements. If \(n\equiv 2\pmod{4}\), then \(G\) has a decycling set which is independent and its complement induces a tree. If \(n\equiv 0\pmod{4}\), then one of two possibilities occurs: either \(G\) has an independent decycling set whose complement induces a forest of two trees, or the decycling set is near-independent (which means that it induces a single edge) and its complement induces a tree. In this paper we strengthen the result of Payan and Sakarovitch by proving that the latter possibility (a near-independent set and a tree) can always be guaranteed. Moreover, we relax the assumption of cyclic 4-edge-connectivity to a significantly weaker condition expressed through the canonical decomposition of 3-connected cubic graphs into cyclically 4-edge-connected ones. Our methods substantially
use a surprising and seemingly distant relationship between the decycling number and the maximum genus of a cubic graph.
keywords: cubic graph, decycling set, feedback vertex set, cyclic connectivity, maximum genus
## 1 Introduction
Destroying all cycles of a graph by removing as few vertices as possible is a natural and extensively studied problem in graph theory and computer science, which can be traced back at least as far as to Kirchhoff's work on spanning trees [18]. The minimum number of vertices whose deletion eliminates all cycles of a graph \(G\) is its _decycling number_, denoted by \(\phi(G)\), and the corresponding set is a _minimum decycling set_. (Here we follow the terminology of [1; 2]. In the literature, decycling number is also known as _feedback vertex number_ and the corresponding set of vertices is a _feedback vertex set_, see [36] for instance.)
In contrast to a similar problem of eliminating cycles by removing edges, which amounts to determining the cycle rank (or the Betti number) \(\beta(G)\) of a graph \(G\), computing the decycling number is long known to be a difficult problem. In 1972, it was proven by Karp [15; Problem 7] that finding a minimum decycling set, or in other words, establishing the decycling number, is an NP-complete problem. The problem remains NP-complete even when restricted to planar graphs, bipartite graphs, or perfect graphs [5; 6]. On the other hand, it becomes polynomial for a number of families, including permutation graphs (Liang [22]), interval graphs, cocomparability graphs, and also for cubic graphs (Li and Liu [21], Ueno et al. [38]).
Evaluating the decycling number of a cubic graph is, in a sense, well understood. It is not difficult to see that \(\phi(G)\geq\lceil(n+2)/4\rceil\) for every cubic graph \(G\) of order \(n\). In 1988, Speckenmayer [36] derived the equation \(\phi(G)=n/2-z(G)+1\), where the parameter \(z(G)\) stands for the size of a maximum nonseparating independent set. Surprisingly, the parameter \(z(G)\) has an interpretation in topological graph theory: it coincides with the maximum orientable genus \(\gamma_{M}(G)\) of \(G\). The connection between the decycling number and the maximum genus of a cubic graph was revealed in 1997 by Huang and Liu [11] through the formula \(\phi(G)+\gamma_{M}(G)=n/2+1\) and was further developed by Li and Liu [21]. Their results have a number of important consequences. In particular, they imply that the decycling number of a
cubic graph of order \(n\) attains the lower bound \(\lceil(n+2)/4\rceil\) precisely when \(G\) is _upper-embeddable_, that is, when \(G\) has a \(2\)-cell embedding into an orientable surface with at most two faces. The following result follows from results of Huang and Liu [11] and Long and Ren [23]. An independent proof can be found in Section 3 (Theorem 3.10).
**Theorem 1.1**.: (Huang and Liu [11], Long and Ren [23]) _If \(G\) is a connected cubic graph of order \(n\), then \(\phi(G)\geq\lceil(n+2)/4\rceil\), and the equality holds if and only if \(G\) is upper-embeddable._
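As a sanity check on this bound, the decycling number of a small cubic graph can be computed by brute force. The Python sketch below (helper names are ours, not from any library) confirms that \(K_{4}\) and \(K_{3,3}\) attain the bound \(\lceil(n+2)/4\rceil=2\).

```python
from itertools import combinations
from math import ceil

def is_acyclic(vertices, edges):
    # Union-find: a graph is acyclic iff no edge closes a cycle.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def decycling_number(vertices, edges):
    # Smallest k such that deleting some k vertices leaves an acyclic graph.
    for k in range(len(vertices) + 1):
        for J in combinations(vertices, k):
            rest = set(vertices) - set(J)
            kept = [(u, v) for u, v in edges if u in rest and v in rest]
            if is_acyclic(rest, kept):
                return k

K4 = ([0, 1, 2, 3],
      [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
K33 = (list(range(6)),
       [(i, j) for i in range(3) for j in range(3, 6)])
for vertices, edges in (K4, K33):
    n = len(vertices)
    print(decycling_number(vertices, edges), ceil((n + 2) / 4))  # equal here
```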
In 1975, Payan and Sakarovitch [32] proved that the equality \(\phi(G)=\lceil(n+2)/4\rceil\) holds for all cyclically \(4\)-edge-connected graphs. In addition, they determined the structure of the corresponding decycling sets. They showed that every minimum decycling set \(J\) of such a graph is either independent or _near-independent_ (meaning that it induces a subgraph with exactly one edge), and its complement \(A=V(G)-J\) induces a forest with at most two components. Following the terminology established by Payan and Sakarovitch we say that the partition \(\{A,J\}\) of \(V(G)\) is a _stable decycling partition_ of \(G\). If, moreover, \(A\) induces a tree, we say that \(\{A,J\}\) is _coherent_, otherwise it is _incoherent_.
Figure 1 shows three simple examples of stable decycling partitions. The vertices are coloured black or white, with black vertices representing \(A\) and white vertices representing the minimum decycling set \(J\); the edges within \(A\) are solid, those within \(J\) are dashed, and the edges between the sets are dotted. Out of the two partitions of the cube, one is coherent but the other is not.
Figure 1: Examples of stable decycling partitions.

The theorem of Payan and Sakarovitch can be formulated as follows.

**Theorem 1.2**.: (Payan and Sakarovitch [32]) _Every cyclically \(4\)-edge-connected cubic graph has a stable decycling partition. More precisely, if \(G\) has \(n\) vertices, then the following hold:_
1. _If_ \(n\equiv 2\pmod{4}\)_, then_ \(G\) _has a partition_ \(\{A,J\}\) _where_ \(A\) _induces a tree and_ \(J\) _is independent._
2. _If_ \(n\equiv 0\pmod{4}\)_, then_ \(G\) _has a partition_ \(\{A,J\}\) _where either_ 1. \(A\) _induces a tree and_ \(J\) _is near-independent, or_ 2. \(A\) _induces a forest of two trees and_ \(J\) _is independent._
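The three possible shapes of a stable decycling partition are easy to test mechanically. The sketch below (our own helpers) classifies a given partition \(\{A,J\}\) of a cubic graph as coherent, incoherent, or not stable; by the counting argument given later in Section 3, these structural conditions already force \(|J|=\lceil(n+2)/4\rceil\). The partition of the cube \(Q_{3}\) used in the demonstration is one coherent example chosen by us for illustration; it need not be the one drawn in Figure 1.

```python
def induced_edges(S, edges):
    S = set(S)
    return [(u, v) for u, v in edges if u in S and v in S]

def forest_components(S, edges):
    # Number of components of the forest induced by S, or None if the
    # induced subgraph contains a cycle (detected by union-find).
    sub = induced_edges(S, edges)
    parent = {v: v for v in S}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    comps = len(set(S))
    for u, v in sub:
        ru, rv = find(u), find(v)
        if ru == rv:
            return None
        parent[ru] = rv
        comps -= 1
    return comps

def classify(A, J, edges):
    eJ = len(induced_edges(J, edges))
    cA = forest_components(A, edges)
    if cA is None:
        return "not a decycling partition"
    if eJ == 1 and cA == 1:
        return "coherent"
    if eJ == 0 and cA <= 2:
        return "coherent" if cA == 1 else "incoherent"
    return "not stable"

# The cube Q3: vertices are bit strings, edges join strings differing in
# one coordinate.  The partition below is one coherent example.
V = [format(i, "03b") for i in range(8)]
E = [(u, v) for u in V for v in V
     if u < v and sum(a != b for a, b in zip(u, v)) == 1]
J = ["000", "001", "110"]
A = [v for v in V if v not in J]
print(classify(A, J, E))   # -> coherent
```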
It is not difficult to realise that stable decycling partitions exist even beyond the class of cyclically \(4\)-edge-connected cubic graphs. It is therefore natural to ask which cubic graphs admit a stable decycling partition. We answer this question in Theorems 3.5 and 3.6 stated in Section 3 which, put together, yield the following statement.
**Theorem 1.3**.: _A connected cubic graph admits a stable decycling partition if and only if it is upper-embeddable._
In this context it is important to note that the concepts of maximum orientable genus and upper-embeddability of graphs have been extensively studied and are fairly well understood. There exist results (for example in [13, 17, 27, 41]) which provide min-max characterisations of maximum genus in purely combinatorial terms, and there is a polynomial-time algorithm to determine the maximum genus of an arbitrary connected graph [4, 10]. As a consequence, the Payan-Sakarovitch theorem readily follows from Theorem 1.3 (more precisely from the more detailed Theorems 3.5 and 3.6) and the long known fact that all cyclically \(4\)-edge-connected graphs are upper-embeddable [33, 16, 26].
Stable decycling partitions provide a useful insight into the structure of cubic graphs, which is the reason why they naturally emerge in several different contexts beyond decycling of cubic graphs or embedding cubic graphs into orientable surfaces with high genus. For example, Glover and Marusic [9] used Theorem 1.2 to construct Hamilton cycles or Hamilton paths in cubic Cayley graphs for a rich family of quotients of the modular group \(\mathrm{PSL}(2,\mathbb{Z})\). Their technique was subsequently used in a few other papers dealing with the hamiltonicity of cubic Cayley graphs [8, 7, 20]. Stable decycling partitions of cubic graphs play an essential role also in a recent computer-assisted proof of
the Barnette-Goodey conjecture due to Kardos [14], which states that simple planar cubic graphs with faces of size at most \(6\) are hamiltonian.
Proofs of these results share the following idea of topological nature. First, a cubic graph \(G\) in question is cellularly embedded into a closed surface. Next, a suitable tree \(T\) in the dual graph \(G^{*}\) is identified. Under certain conditions, the vertex set of \(T\) extends to a stable decycling partition of a cubic subgraph \(H\subseteq G^{*}\). If this partition is coherent, then, depending on the parity of the Betti number of \(H\), it is used to construct either a Hamilton cycle or a Hamilton path of \(G\), which traverse the boundary of the union of faces of \(G\) corresponding to the vertices of \(T\subseteq G^{*}\) (with possible exception of one edge).
Unfortunately, this idea does not work well if the decycling partition is not coherent. From this point of view, coherent decycling partitions are more valuable than incoherent ones. Recall that for cyclically \(4\)-edge-connected cubic graphs the theorem of Payan and Sakarovitch guarantees the existence of a coherent decycling partition only when \(n\equiv 2\pmod{4}\) (that is, when \(\beta(G)\) is even). If \(n\equiv 0\pmod{4}\), two possibilities can occur, only one of which is a coherent decycling partition (namely the one in Item (ii)-1 of Theorem 1.2). In this situation one has to ask under what conditions it is possible to ensure that a cubic graph admits a coherent decycling partition if its Betti number is odd (that is, if \(n\equiv 0\pmod{4}\)). Nedela and Skoviera [29] answered this question in the affirmative provided that \(G\) is cyclically \(5\)-edge-connected. The main result of this paper replaces the assumption of cyclic \(5\)-connectivity with cyclic \(4\)-connectivity, and thus improves Theorem 1.2 by eliminating the possibility of an incoherent decycling partition stated in Item (ii)-2.
**Theorem 1.4**.: _Every cyclically \(4\)-edge-connected cubic graph \(G\) admits a coherent decycling partition. More precisely, the vertex set of \(G\) has a partition \(\{A,J\}\) such that \(A\) induces a tree and \(J\) is independent or near-independent, \(J\) being independent if and only if the order of \(G\) equals \(2\pmod{4}\)._
What happens when the graph in question is not cyclically \(4\)-edge-connected? First of all, there are many cubic graphs with cyclic connectivity \(3\) (including those with an odd Betti number) that do admit a coherent decycling partition. In order to get a deeper insight into the situation in \(3\)-connected cubic graphs it is helpful to use the fact that every such graph \(G\) has a "canonical" decomposition \(\{G_{1},G_{2},\ldots,G_{r}\}\) into cyclically \(4\)-edge-connected cubic graphs. In Section 6 we prove that if at most one \(G_{i}\) has an odd
Betti number, then \(G\) admits a coherent decycling partition (Theorem 6.2). The family of 3-connected cubic graphs having at most one factor with an odd Betti number includes the so-called odd-cyclically 4-connected graphs. Their characteristic property is that every edge cut whose removal leaves a component with an odd Betti number has size at least 4. It may be interesting to mention that the concept of odd cyclic 4-connectivity comes from the study of maximum orientable genus where it was independently introduced by Khomenko and Glukhov [16] and Nebesky [26] in 1980 and 1983, respectively.
Nevertheless, there exist plenty of cubic graphs that do not admit a coherent decycling partition. Theorem 1.3 tells us that a necessary condition for a connected cubic graph to admit a coherent decycling partition is to be upper-embeddable. This condition is, however, not sufficient. Examples of upper-embeddable cubic graphs of connectivity 1 with no coherent decycling partition are not difficult to find: the example in Figure 2 can easily be turned into an infinite family. Two-connected examples are less apparent. Figure 3 displays two graphs which are upper-embeddable, but none of them admits a coherent decycling partition. An obvious generalisation of these two graphs to an infinite family does not work because it produces non-upper-embeddable graphs. In spite of that, an infinite family does exist and is described in Section 7.
The situation in 3-connected cubic graphs remains open: we have no example of a 3-connected upper-embeddable cubic graph which would not have a coherent decycling partition. Motivated by the difficulty of finding examples of 2-connected upper-embeddable cubic graphs with no coherent
decycling partition we propose the following conjecture.

Figure 2: An upper-embeddable graph with no coherent partition. Dashed lines represent the cotree edges of a Xuong tree (see Section 3 for the definition).
**Conjecture 1.5**.: A \(3\)-connected cubic graph admits a coherent decycling partition if and only if it is upper-embeddable.
There is yet another important aspect of decycling cubic graphs. It concerns random graphs. As is well known [35], a random cubic graph is almost surely hamiltonian (and therefore \(2\)-connected). Bau et al. [1] used this fact to prove that a random cubic graph of order \(n\) has decycling number equal to \(\lceil(n+2)/4\rceil\) almost surely. When combined with Theorem 1.1, this result states that almost all cubic graphs are upper-embeddable. By contrast, there is a positive probability that a random cubic graph contains a triangle, which means that it cannot be cyclically \(4\)-edge-connected (see Theorem 9.5 in [12]). Since all upper-embeddable cubic graphs admit a stable decycling partition, the following question suggests itself.
**Problem 1.6**.: Is it true that almost all cubic graphs admit a coherent decycling partition? Equivalently, is it true that almost all cubic graphs contain an induced tree such that the removal of its vertices leaves a subgraph with at most one edge?
Our paper is organised as follows. In the next section we collect the basic definitions required for understanding this paper. In Section 3 we reexamine the relationship between minimum decycling partitions and maximum orientable genus in cubic graphs. We prove Theorem 1.3 about stable decycling
partitions and derive the result of Payan and Sakarovitch (Theorem 1.2) as a consequence. In Section 4 we focus on coherent decycling partitions. We introduce the concept of ample upper-embeddability and prove that a cubic graph with an odd Betti number admits a coherent decycling partition if and only if it is amply upper-embeddable (Theorem 4.3). In Sections 5 and 6 we prove our main results about coherent decycling partitions in cyclically 4-edge-connected and certain cyclically 3-edge-connected cubic graphs. In Section 7 we present an infinite family of 2-connected cubic graphs in which every stable decycling partition is incoherent. The paper closes with a few remarks and problems.

Figure 3: Two-connected upper-embeddable graphs \(F_{1}\) (left) and \(F_{2}\) (right) with no coherent decycling partition. Dashed lines represent the cotree edges of a Xuong tree.
## 2 Preliminaries
Graphs in this paper will be mostly cubic (3-valent) or their subgraphs (subcubic). Loops and multiple edges are permitted. Note that a 3-connected cubic graph is simple, which means that it has neither parallel edges nor loops. We use the standard notation \(V=V(G)\) for the vertex set and \(E=E(G)\) for the edge set of \(G\). The symbol \(n\) is reserved for the _order_ of \(G\), the number of vertices of \(G\).
If a vertex \(u\) of a cubic graph is incident with a loop, then the other edge incident with \(u\) is a bridge. In this paper, both endvertices of the latter edge are regarded as cutvertices, including \(u\) itself. This useful convention comes from topological graph theory where graphs are defined as finite 1-dimensional CW-complexes and graph connectivity corresponds to the connectivity of the underlying topological space.
For a connected graph \(G\), let \(\beta(G)=|E|-|V|+1\) denote its _Betti number_, that is, the dimension of the cycle space of \(G\) (the _cycle rank_), or in other words, the number of edges of a cotree. Recall that a _cotree_ of a spanning tree \(T\) is the spanning subgraph \(G-E(T)\) of \(G\). A graph whose Betti number is even is said to be _cyclically even_, otherwise it is _cyclically odd_. Observe that if \(G\) is cubic, then \(G\) is cyclically even whenever \(n\equiv 2\pmod{4}\) and \(G\) is cyclically odd whenever \(n\equiv 0\pmod{4}\).
For a subset \(A\subseteq V\), let \(G[A]\) denote the subgraph of \(G\) induced by \(A\). If \(A\) is a proper subset of vertices, then \(\delta_{G}(A)\) will denote the set of edges that join a vertex in \(A\) to a vertex in \(V-A\). In other words, \(\delta_{G}(A)\) is the edge cut that separates \(G[A]\) from \(G[V-A]\). A 3-edge-cut \(\delta_{G}(A)\) where \(A\) or \(V-A\) consists of a single vertex is called _trivial_. If \(H=G[A]\), we usually write \(\delta_{G}(H)\) instead of \(\delta_{G}(A)\).
We say that a set \(B\subseteq E(G)\) is _cycle-separating_ if \(G-B\) is disconnected, and at least two of its components contain cycles. Note that in a cubic graph \(G\) a minimum cycle separating subset of \(E(G)\) is independent (that is a matching), and that an independent edge cut is always cycle separating. A connected graph \(G\) is _cyclically \(k\)-edge-connected_ if no set of fewer than \(k\) edges is cycle-separating in \(G\). It can be shown that deleting any set of \(k\geq\beta(G)\) edges yields either a disconnected graph or a graph without cycles. Thus, if \(G\) contains a cycle-separating set of edges, then it contains one with no more than \(\beta(G)-1\) elements. We therefore define the _cyclic connectivity_ of \(G\), or more precisely, _cyclic edge connectivity_, denoted by \(\zeta(G)\), as the largest integer \(k\leq\beta(G)\) for which \(G\) is cyclically \(k\)-edge-connected. For instance, \(K_{3,3}\) is cyclically \(k\)-edge-connected for every positive integer \(k\), but its cyclic connectivity equals \(4\). In fact, \(\zeta(G)=\beta(G)\) if and only if \(G\) has no cycle-separating edge cut. If \(G\) is cubic, then it happens only for three graphs, namely for \(G\cong K_{4}\), \(G\cong K_{3,3}\), and the dipole \(D_{2}\) (the only bridgeless cubic graph on two vertices).
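For small graphs the definition can be checked by exhaustive search. The following Python sketch (function names are ours, exponential in the number of edges and meant only as an illustration) computes \(\zeta(G)\) directly from the definition; for the Petersen graph it reports the value \(5\), attained by removing the five spokes.

```python
from itertools import combinations

def count_cyclic_components(vertices, edges):
    # Components that contain a cycle, i.e. whose edge count is at least
    # their vertex count.
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, cyclic = set(), 0
    for s in vertices:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x])
        seen |= comp
        m = sum(1 for u, v in edges if u in comp and v in comp)
        if m >= len(comp):
            cyclic += 1
    return cyclic

def cyclic_connectivity(vertices, edges):
    # zeta(G): size of a smallest cycle-separating edge set, capped at beta(G).
    beta = len(edges) - len(vertices) + 1
    for k in range(beta):
        for cut in combinations(range(len(edges)), k):
            kept = [e for i, e in enumerate(edges) if i not in cut]
            if count_cyclic_components(vertices, kept) >= 2:
                return k
    return beta

# Petersen graph: removing the five spokes leaves two disjoint 5-cycles.
outer = [(i, (i + 1) % 5) for i in range(5)]
spokes = [(i, i + 5) for i in range(5)]
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
print(cyclic_connectivity(list(range(10)), outer + spokes + inner))  # -> 5
```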
Cyclic vertex connectivity can be defined in a similar manner; for cubic graphs see [24, 28]. It can be shown that in cubic graphs, for \(k\leq 3\), \(k\)-connectivity, \(k\)-edge-connectivity, cyclic edge-connectivity, and cyclic vertex-connectivity are all equal [24, Theorem 1.2]. In other words, for cubic graphs cyclic connectivity is a natural extension of the classical connectivity invariants.
## 3 Decycling partitions and maximum genus in cubic graphs
The purpose of this section is to discuss a link between the existence of a stable decycling partition of a cubic graph and its maximum orientable genus. Our main aim is to prove that a connected cubic graph has a stable decycling partition if and only if it is upper-embeddable, that is, if it has a cellular embedding on an orientable surface which has at most two faces. The importance of this relationship is underscored by the fact that the required partitions can be efficiently constructed because the maximum orientable genus of every graph is polynomially computable [4, 10]. This fact provides an alternative proof of the result, proved in [21], that the decycling number of a cubic graph can be determined in polynomial time.
We begin our exposition by reviewing basic facts about maximum genus. For a deeper account we refer the reader to [25] and [39] and for a fairly recent survey to [3, Chapter 3].
Throughout this paper, all embeddings will be cellular and surfaces will be closed and orientable. Recall that a _cellular embedding_ of a connected graph \(G\) on a closed surface \(S\) is, roughly speaking, a drawing of \(G\) on \(S\) without edge-crossings such that each component of \(S-G\), a _face_, is homeomorphic to a disk (a 2-cell). If a graph with \(n\) vertices and \(m\) edges has a cellular embedding on an orientable surface of genus \(g\), then the number of faces, denoted by \(r\), must satisfy the Euler-Poincare equation
\[n-m+r=2-2g.\]
The _maximum genus_ of a connected graph \(G\), denoted by \(\gamma_{M}(G)\), is the largest genus of an orientable surface into which \(G\) has a cellular embedding. It is an immediate consequence of the Euler-Poincare formula that \(\gamma_{M}(G)\leq\lfloor\beta(G)/2\rfloor\), where \(\beta(G)\) is the Betti number of \(G\). The quantity \(\xi(G)=\beta(G)-2\gamma_{M}(G)\) measures the distance of \(\gamma_{M}(G)\) from the natural upper bound of \(\beta(G)/2\) and is called the _deficiency_ of \(G\). It is easy to see that \(\xi(G)=r_{\min}(G)-1\) where \(r_{\min}(G)\) is the minimum number of faces among all cellular embeddings of \(G\). The graphs for which \(\gamma_{M}(G)=\lfloor\beta(G)/2\rfloor\) are called _upper-embeddable_. Clearly, a graph is upper-embeddable if and only if \(\xi(G)\leq 1\), or equivalently, if it has a cellular embedding into an orientable surface with one or two faces. In this context it may be worth mentioning that every connected graph admits a cellular embedding with a single face into some closed surface, possibly non-orientable [34, 37].
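As a quick worked illustration of these notions (ours, not from the original text), take \(G=K_{4}\): here \(n=4\), \(m=6\) and \(\beta(K_{4})=3\), so every cellular embedding satisfies \(4-6+r=2-2g\). The planar embedding has \(g=0\) and \(r=4\) faces, while \(K_{4}\) also has a two-face embedding on the torus with \(g=1\) and \(r=2\). Since \(\gamma_{M}(K_{4})\leq\lfloor 3/2\rfloor=1\), this gives \(\gamma_{M}(K_{4})=1\), \(r_{\min}(K_{4})=2\) and \(\xi(K_{4})=3-2\cdot 1=1\); in particular \(K_{4}\) is upper-embeddable.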
Upper-embeddable graphs thus fall into two types: _orientably one-face embeddable_ graphs (whose Betti number is even) and _orientably two-face embeddable_ graphs (whose Betti number is odd); for brevity the adverb "orientably" will be omitted. In cubic graphs these two types of upper-embeddability can easily be distinguished by the number of vertices. A one-face embeddable cubic graph has order 2 (mod 4) whereas a two-face embeddable cubic graph has order 0 (mod 4).
One of the most remarkable facts about maximum genus is that this topological invariant can be characterised in a purely combinatorial manner. As early as in 1973, Khomenko et al. [17] proved that the maximum genus of an arbitrary connected graph \(G\) equals the maximum number of pairs of adjacent edges whose removal leaves a connected spanning subgraph of \(G\). A similar characterisation was later established by Xuong [41]. It states that \(\xi(G)\) equals the minimum number of components with an odd number of edges in a cotree of \(G\). For brevity, such components will be called _odd_. Analogously, those with an even number of edges will be called _even_.
A spanning tree of \(G\) whose cotree has exactly \(\xi(G)\) odd components is called a _Xuong tree_ of \(G\). By a result of Kotzig [19, Theorem 4], every connected graph with an even number of edges can be decomposed into pairs of adjacent edges. Since a component with an odd number of edges can be decomposed into pairs of adjacent edges and one singleton, the cotree of a Xuong tree can be partitioned into \(\gamma_{M}(G)\) pairs of adjacent edges and \(\xi(G)\) singletons.
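For small simple graphs Xuong's characterisation is directly algorithmic: enumerate spanning trees and count odd cotree components. The sketch below (our own helpers) does exactly that by brute force; it reports \(\xi(K_{4})=1\) and \(\xi(K_{3,3})=0\), matching \(\gamma_{M}(K_{4})=1\) and \(\gamma_{M}(K_{3,3})=2\).

```python
from itertools import combinations

def spanning_trees(vertices, edges):
    # Yield all spanning trees of a simple connected graph (brute force).
    n = len(vertices)
    for cand in combinations(edges, n - 1):
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for u, v in cand:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:            # n-1 edges and no cycle => spanning tree
            yield cand

def odd_components(vertices, edges):
    # Number of components with an odd number of edges.
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, odd = set(), 0
    for s in vertices:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x])
        seen |= comp
        m = sum(1 for u, v in edges if u in comp and v in comp)
        odd += m % 2
    return odd

def deficiency(vertices, edges):
    # xi(G): minimum number of odd cotree components over all spanning trees.
    return min(odd_components(vertices, [e for e in edges if e not in tree])
               for tree in spanning_trees(vertices, edges))

K4 = ([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
K33 = (list(range(6)), [(i, j) for i in range(3) for j in range(3, 6)])
print(deficiency(*K4), deficiency(*K33))   # -> 1 0
```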
For the sake of completeness we present a proof of Kotzig's result.
**Lemma 3.1**.: (Kotzig [19]) _The edge set of every connected graph with an even number of edges can be partitioned into pairs of adjacent edges._
Proof.: Let \(G\) be a connected graph with an even number of edges. It suffices to prove that \(G\) can be oriented in such a way that each vertex has an even in-degree. Indeed, for each vertex of \(G\) we simply distribute the incoming edges into disjoint pairs, thereby producing a partition of the entire edge set into pairs of adjacent edges.
Let us start with an arbitrary orientation \(D\) of \(G\). If \(D\) has the required property, we are done. Otherwise, there are at least two vertices with an odd in-degree, since \(|E(G)|\) is even. At one of them, say \(v\), pick an edge directed outward and continue to produce a directed walk \(W\) starting at \(v\) which extends as long as possible. Clearly, \(W\) terminates at another vertex with an odd in-degree, say \(w\). After reversing the orientation of all edges on \(W\) the in-degree of \(v\) and \(w\) becomes even while the other vertices of \(G\) keep the parity of their in-degree. We continue this process until all vertices of \(G\) obtain an even in-degree.
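The proof is constructive and easy to turn into code. The Python sketch below (helper names are ours) implements it for connected loopless graphs, with one small deviation worth flagging: instead of extending a maximal directed walk it reverses the orientation along an arbitrary path joining two vertices of odd in-degree, which likewise flips the in-degree parity exactly at the two ends.

```python
from collections import defaultdict

def kotzig_pairs(vertices, edges):
    """Partition the edges of a connected loopless graph with an even number
    of edges into pairs of adjacent edges (cf. Lemma 3.1)."""
    assert len(edges) % 2 == 0
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    # Step 1: orient every edge (u, v) from u towards v.
    head = {i: v for i, (u, v) in enumerate(edges)}

    def odd_vertices():
        parity = defaultdict(int)
        for i in head:
            parity[head[i]] ^= 1
        return [v for v in vertices if parity[v] == 1]

    def path(s, t):
        # Any s-t path, returned as a list of (parent, child, edge index).
        prev, stack = {s: None}, [s]
        while stack:
            x = stack.pop()
            if x == t:
                break
            for y, i in adj[x]:
                if y not in prev:
                    prev[y] = (x, i)
                    stack.append(y)
        out, x = [], t
        while prev[x] is not None:
            px, i = prev[x]
            out.append((px, x, i))
            x = px
        return out

    # Step 2: while two vertices have odd in-degree, reverse a path between
    # them; this flips the parity exactly at its two endpoints.
    odd = odd_vertices()
    while odd:
        s, t = odd[0], odd[1]
        for px, x, i in path(s, t):
            head[i] = px if head[i] == x else x
        odd = odd_vertices()

    # Step 3: every in-degree is now even; pair up the incoming edges.
    incoming = defaultdict(list)
    for i in head:
        incoming[head[i]].append(i)
    return [(edges[a], edges[b])
            for ids in incoming.values()
            for a, b in zip(ids[0::2], ids[1::2])]

# The six edges of K4 split into three pairs of adjacent edges.
print(kotzig_pairs([0, 1, 2, 3],
                   [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```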
We end this brief introduction to the theory of maximum genus of a graph by stating two, in a sense complementary, combinatorial characterisations of upper-embeddable graphs. The first of them is due to Jungerman [13] and Xuong [41].
**Theorem 3.2**.: (Jungerman [13], Xuong [41]) _A connected graph is one-face embeddable if and only if it has a spanning tree with no odd cotree components; it is two-face embeddable if and only if it has a spanning tree whose cotree has precisely one odd component._
The second characterisation of upper-embeddable graphs is due to Nebesky [27, Theorem 3]. Let \(G\) be a graph and \(X\subseteq E(G)\) an arbitrary set of edges. Let \(\operatorname{ec}(G-X)\) and \(\operatorname{oc}(G-X)\) denote the number of cyclically even and cyclically odd components of \(G-X\), respectively.
**Theorem 3.3**.: (Nebesky [27]) _A connected graph \(G\) is upper-embeddable if and only if for an arbitrary set \(X\subseteq E(G)\) one has_
\[\mathrm{ec}(G-X)+2\mathrm{oc}(G-X)-2\leq|X|. \tag{1}\]
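For small graphs Nebesky's criterion can be verified directly by brute force over all edge subsets. The sketch below (our own helper, exponential in the number of edges and intended only as an illustration) confirms it for \(K_{4}\).

```python
from itertools import combinations

def nebesky_upper_embeddable(vertices, edges):
    """Brute-force check of the criterion of Theorem 3.3:
    ec(G - X) + 2*oc(G - X) - 2 <= |X| for every edge subset X."""
    def betti_parities(kept):
        adj = {v: [] for v in vertices}
        for u, v in kept:
            adj[u].append(v)
            adj[v].append(u)
        seen, parities = set(), []
        for s in vertices:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                x = stack.pop()
                if x in comp:
                    continue
                comp.add(x)
                stack.extend(adj[x])
            seen |= comp
            m = sum(1 for u, v in kept if u in comp and v in comp)
            parities.append((m - len(comp) + 1) % 2)  # parity of the Betti number
        return parities

    for k in range(len(edges) + 1):
        for X in combinations(range(len(edges)), k):
            kept = [e for i, e in enumerate(edges) if i not in X]
            ps = betti_parities(kept)
            ec, oc = ps.count(0), ps.count(1)
            if ec + 2 * oc - 2 > k:
                return False
    return True

# K4 is upper-embeddable, so the inequality holds for all 64 edge subsets.
print(nebesky_upper_embeddable([0, 1, 2, 3],
                               [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```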
In the rest of this section we show that the two types of upper-embeddability of cubic graphs lead to three different types of decycling vertex partitions \(\{A,J\}\), two of which exist when the Betti number is odd, and one exists when the Betti number is even. We will first treat the case of even Betti number, but before stating the result we present a lemma which provides a convenient tool for dealing with cubic upper-embeddable graphs in general.
Let \(G\) be a connected cubic graph and \(T\) a spanning tree of \(G\). Observe that each component of \(G-E(T)\) is a path or a cycle. A component of \(G-E(T)\) will be called _heavy_ if the number of its edges is at least three and has the same parity as the Betti number of \(G\). A component which is not heavy will be called _light_.
**Lemma 3.4**.: _Let \(G\) be a connected loopless cubic graph of order at least \(4\). If \(G\) is upper-embeddable, then it has a Xuong tree with acyclic cotree. If, in addition, \(G\) has a Xuong tree with a heavy cotree component, then it also has a Xuong tree with acyclic cotree that contains a heavy component of the same parity._
Proof.: Take any Xuong tree \(T\) of \(G\). If \(G-E(T)\) is acyclic, then there is nothing to prove. If not, \(G-E(T)\) has a component \(Q\) which is a cycle. Let \(Q=(u_{1}u_{2}\ldots u_{k})\), where \(u_{1},u_{2},\ldots,u_{k}\) are vertices of \(G\) listed in cyclic ordering. Since \(G\) is loopless, we have \(k\geq 2\). Each \(u_{i}\) is adjacent to some vertex \(v_{i}\) such that the edge \(u_{i}v_{i}\) belongs to \(T\). Note, however, that \(v_{i}\) and \(v_{j}\) need not be distinct for \(i\neq j\). Furthermore, the valency of each \(v_{i}\) in \(G-E(T)\) is at most \(1\), for otherwise \(u_{i}v_{i}\) would be the only edge of \(T\) and \(G\) would have only two vertices. In particular, each \(v_{i}\) belongs to an acyclic component of \(G-E(T)\). Set \(T^{\prime}=T+u_{1}u_{2}-u_{1}v_{1}\); clearly, \(T^{\prime}\) is a spanning tree of \(G\). The cycle \(Q\) in \(G-E(T)\) now transforms into a \(u_{2}\)-\(v_{1}\)-path \(P\) in \(G-E(T^{\prime})\) of length \(k\). If \(Q\) is an odd cycle, then \(v_{1}\) must belong to an even component of \(G-E(T)\), because \(T\) is a Xuong tree. Therefore \(P\) becomes part of an acyclic cotree component whose number of edges is odd and at least \(k\). If \(Q\) is even, then \(P\) becomes part of an acyclic component of \(G-E(T^{\prime})\) which has the same parity as the component of \(G-E(T)\) containing \(v_{1}\) and has at least \(k\) edges again. In other words, the transformation of \(T\) into \(T^{\prime}\) turns a
heavy component of \(G-E(T)\) into a heavy acyclic component of \(G-E(T^{\prime})\). By repeating this process as many times as necessary we arrive at a Xuong tree \(T^{\prime\prime}\) of \(G\) which has an acyclic cotree, and has a heavy cotree component whenever the cotree of \(T\) had.
The following two theorems are the main results of this section.
**Theorem 3.5**.: _For a connected cubic graph \(G\) the following statements are equivalent._
* \(G\) _has a cellular embedding on an orientable surface with one face._
* _The vertex set of_ \(G\) _can be partitioned into two sets_ \(A\) _and_ \(J\) _such that_ \(A\) _induces a tree and_ \(J\) _is independent._
Proof.: (i) \(\Rightarrow\) (ii): By the Jungerman-Xuong Theorem, \(G\) has a spanning tree \(T\) with all cotree components even. Lemma 3.1 implies that \(E(G)-E(T)\) has a partition \(\mathcal{P}\) into pairs of adjacent edges. For each pair \(\{e,f\}\) from \(\mathcal{P}\) pick a vertex shared by \(e\) and \(f\), and let \(J\) be the set of all vertices obtained in this way. Set \(A=V(G)-J\). Since \(G\) is cubic, each vertex of \(J\) must be a pendant vertex of \(T\). Observe that the vertices chosen from distinct pairs of \(\mathcal{P}\) must be non-adjacent. So \(J\) is an independent set of vertices and \(T-J\) is a tree with \(V(T-J)=A\). Since every cotree edge with respect to \(T\) is incident with a vertex in \(J\), the tree \(T-J\) is an induced subgraph of \(G\). In other words, \(\{A,J\}\) is the required vertex partition.
(ii) \(\Rightarrow\) (i): Assume that the vertex set of \(G\) has a partition \(\{A,J\}\) where \(A\) induces a tree and \(J\) is an independent set. Let \(S\) be the tree induced by \(A\). For any vertex \(v\in J\) choose an arbitrary edge \(e_{v}\) incident with \(v\) and form a spanning subgraph \(S^{+}\) of \(G\) by adding to \(S\) all the edges \(e_{v}\) with \(v\in J\). Since \(J\) is independent, each edge \(e_{v}\) has its other end on \(S\). Therefore \(S^{+}\) is a connected spanning subgraph of \(G\). In fact, \(S^{+}\) must be acyclic, because each vertex \(v\in J\) is joined to \(S\) only by the edge \(e_{v}\). Thus \(S^{+}\) is a spanning tree of \(G\) in which all vertices of \(J\) are pendant. By the definition of \(S^{+}\), each cotree edge is incident with a single vertex of \(J\), and each vertex of \(J\) is incident with precisely two cotree edges. Hence the set of cotree edges can be partitioned into pairs of adjacent edges and, consequently, each cotree component is even. By Theorem 3.2, \(G\) is one-face embeddable.
If a cubic graph \(G\) has a vertex partition \(\{A,J\}\) such that \(A\) induces a tree and \(J\) is independent, then by Theorem 3.5 its Betti number must be
even, and hence its order is \(2\pmod{4}\). The next theorem shows that if the order of \(G\) is \(0\pmod{4}\), then there exists a similar partition \(\{A,J\}\) of the vertices of \(G\), however, it is either the independence of \(J\) or the connectivity of the subgraph induced by \(A\) that fails.
**Theorem 3.6**.: _Let \(G\) be a connected cubic graph. The following statements are equivalent._
1. \(G\) _has a cellular embedding on an orientable surface with two faces._
2. _The vertex set of_ \(G\) _can be partitioned into two sets_ \(A\) _and_ \(J\) _such that either_ 1. \(A\) _induces a tree and_ \(J\) _is near-independent, or_ 2. \(A\) _induces a forest with two components and_ \(J\) _is independent._
Proof.: (i) \(\Rightarrow\) (ii): First assume that \(G\) contains a loop incident with a vertex \(v\), and let \(u\) be the vertex adjacent to \(v\). Then \(uv\) is a bridge and since deficiency is clearly additive over bridges, \(G-v\) is a one-face embeddable graph. Let \(G^{\prime}\) be the cubic graph homeomorphic to \(G-v\). By Theorem 3.5, the vertex set of \(G^{\prime}\) has a vertex partition \(\{A,J\}\) where \(A\) induces a tree and \(J\) is independent. Then \(\{A\cup\{u\},J\cup\{v\}\}\) is a partition of \(V(G)\) such that \(A\) induces a tree and \(J\) is near-independent.
Now let \(G\) be loopless. Obviously, \(G\) has at least four vertices, so we can employ Lemma 3.4 to conclude that \(G\) has a Xuong tree \(T\) with acyclic cotree. Let \(B\) be the unique odd component of \(E(G)-E(T)\). Choose a pendant edge \(g=w_{1}w_{2}\) of \(B\), where \(w_{2}\) denotes a pendant vertex of \(B\). By applying Lemma 3.1 we decompose the set \(E(G)-(E(T)\cup\{g\})\) into a collection \(\mathcal{P}\) of pairs of adjacent edges. For each pair \(\{e,f\}\) from \(\mathcal{P}\) we pick a vertex shared by \(e\) and \(f\) and denote by \(J^{\prime}\) the resulting set of vertices. Clearly, \(J^{\prime}\) is an independent set. For \(i\in\{1,2\}\) set \(J_{i}=J^{\prime}\cup\{w_{i}\}\) and \(A_{i}=V(G)-J_{i}\). We show that both \(\{A_{1},J_{1}\}\) and \(\{A_{2},J_{2}\}\) are vertex partitions satisfying (ii) of the theorem.
First assume that \(B\) is a heavy odd component of \(E(G)-E(T)\), and let \(h=w_{1}w_{3}\) be the edge of \(B\) incident with \(w_{1}\) and different from \(g\). All the vertices of \(J_{1}\) are now pendant in \(T\), so \(T-J_{1}\) is a tree. Since every cotree edge is incident with a vertex in \(J_{1}\), the tree \(T-J_{1}\) is induced by \(A_{1}\). Observe that \(\{w_{1},w_{3}\}\subseteq J_{1}\) while \(w_{2}\in A_{1}\). As \(J^{\prime}\) is independent, we conclude that \(h=w_{1}w_{3}\) is the only edge joining a pair of vertices of \(J_{1}\). Thus \(J_{1}\) is near-independent and the partition \(\{A_{1},J_{1}\}\) satisfies (ii)-1.
If \(B\) has only one edge, namely \(g=w_{1}w_{2}\), then the vertices of \(J_{1}\) except \(w_{1}\) are pendant in \(T\), and \(w_{1}\) is a 2-valent vertex of \(T\). Hence \(T-J_{1}\) is a forest with two components. Further, each cotree edge with respect to \(T\) joins a vertex in \(J_{1}\) to a vertex in \(A_{1}\), which implies that \(J_{1}\) is independent.
We remark that the partition \(\{A_{2},J_{2}\}\) is always of type (ii)-2, no matter whether the unique odd cotree component is heavy or not. In any case, the partitions \(\{A_{1},J_{1}\}\) and \(\{A_{2},J_{2}\}\) fulfil (ii) of the theorem.
(ii) \(\Rightarrow\) (i): Assume that the vertex set of \(G\) has a partition \(\{A,J\}\) satisfying the properties stated in (ii). We distinguish two cases.
Case 1. _A induces a tree and \(J\) is a near-independent set._ Let \(h\) be the unique edge of the subgraph \(G[J]\), and let \(S=G[A]\). Since \(J\) is near-independent, for each vertex \(v\) of \(J\) there exists at least one edge joining \(v\) to \(S\). Choose one of them and denote it by \(t_{v}\). Form a spanning subgraph \(S^{+}\) of \(G\) by adding to \(S\) all the edges \(t_{v}\) where \(v\in J\). Clearly, \(S^{+}\) is connected and acyclic, so \(S^{+}\) is a spanning tree of \(G\) in which all the vertices of \(J\) are pendant.
If \(h\) is not a loop, then \(h=xy\) where \(x\) and \(y\) are distinct vertices. It follows that each vertex \(v\) of \(J\) is incident with two distinct cotree edges \(e_{v}\) and \(f_{v}\). If \(v\) and \(w\) are not adjacent in \(G\), then the pairs \(\{e_{v},f_{v}\}\) and \(\{e_{w},f_{w}\}\) are disjoint. However, for the ends of \(h\) we have \(\{e_{x},f_{x}\}\cap\{e_{y},f_{y}\}=\{h\}\). Assuming that \(h=e_{x}=e_{y}\) we must conclude that \(f_{x}\neq f_{y}\), because otherwise \(G[J]\) would contain both \(h\) and \(f_{x}=f_{y}\), contrary to the assumption that \(J\) is near-independent. Therefore the pairs \(\{e_{v},f_{v}\}\), for \(v\in J-\{x\}\), and the singleton \(\{f_{x}\}\) form a partition of \(E(G)-E(S^{+})\) which shows that \(S^{+}\) is a Xuong tree of \(G\). In fact, the cotree component containing \(h\) is heavy.
If \(h\) is a loop incident with a vertex \(x\), then each vertex \(v\) of \(J-\{x\}\) is incident with two distinct cotree edges \(e_{v}\) and \(f_{v}\), and the pairs corresponding to distinct vertices of \(J-\{x\}\) are again disjoint. It follows that the pairs \(\{e_{v},f_{v}\}\) for \(v\in J-\{x\}\) and the singleton \(\{h\}\) form a partition of \(E(G)-E(S^{+})\) proving that \(S^{+}\) is a Xuong tree of \(G\).
Case 2. _A induces a forest with two components and \(J\) is independent._ Let \(S_{1}\) and \(S_{2}\) be the components of the forest \(G[A]\). Because \(G\) is connected and \(J\) is independent, there must be a vertex \(x\in J\) with a neighbour \(y_{1}\) in \(S_{1}\) and a neighbour \(y_{2}\) in \(S_{2}\). For each vertex \(v\in J-\{x\}\) choose an arbitrary edge \(t_{v}\) incident with \(v\), and form a spanning subgraph \(T\) of \(G\) extending \(S_{1}\cup S_{2}\) with the edges \(xy_{1}\) and \(xy_{2}\), and with all the edges \(t_{v}\) where \(v\in J-\{x\}\). Clearly, \(T\) is a spanning tree of \(G\). Observe that each vertex \(v\in J-\{x\}\) is incident with two cotree edges \(e_{v}\) and \(f_{v}\) while for \(x\) there is a single cotree
edge \(g\) incident with it. It follows that the pairs \(\{e_{v},f_{v}\}\) for \(v\in J-\{x\}\) together with the singleton \(\{g\}\) form a partition of \(E(G)-E(T)\) proving that \(T\) is a Xuong tree of \(G\).
**Remark 3.7**.: For loopless cubic graphs, the proof of Theorem 3.6 shows that the equivalence (i) \(\Leftrightarrow\) (ii) stated in the theorem holds true even when Item (ii)-1 is removed from the statement. This fact, combined with the theorem of Payan and Sakarovitch (Theorem 1.2), implies that for a cyclically \(4\)-edge-connected cubic graph of order \(n\equiv 0\pmod{4}\) one can always guarantee the existence of a vertex partition \(\{A,J\}\) where \(A\) induces a forest of two trees and \(J\) is independent. If we take into account Theorem 3.5, we can conclude that in a cyclically \(4\)-edge-connected cubic graph \(G\) there always exists a minimum decycling set \(J\) which is independent. In Section 5 we show that \(J\) can also be chosen in such a way that \(G-J\) is a tree (see Theorem 5.4). By Theorem 3.5, both properties can be achieved at the same time only when \(n\equiv 2\pmod{4}\).
As an immediate corollary of the previous two theorems we obtain the following statement which is identical with Theorem 1.3.
**Corollary 3.8**.: _A connected cubic graph admits a stable decycling partition if and only if it is upper-embeddable._
The result of Payan and Sakarovitch [32] stated as Theorem 1.2 now follows from the well known fact [33, 16, 26] that all cyclically \(4\)-edge-connected graphs are upper-embeddable.
**Corollary 3.9**.: _Every cyclically \(4\)-edge-connected cubic graph admits a stable decycling partition._
We finish this section by providing a different proof of the result of Long and Ren [23] stated as Theorem 1.1.
**Theorem 3.10**.: _If \(G\) is a connected cubic graph of order \(n\), then \(\phi(G)\geq\lceil(n+2)/4\rceil\), and the equality holds if and only if \(G\) is upper-embeddable._
Proof.: Let \(J\) be an arbitrary decycling set for \(G\) and let \(A=V(G)-J\) be its complement. Denote by \(e_{A}\) and \(e_{J}\) the numbers of edges of the induced subgraphs \(G[A]\) and \(G[J]\), respectively. Since \(G[A]\) is acyclic, we obtain
\[|\delta_{G}(A)|=3|A|-2e_{A}\geq 3(n-|J|)-2(n-|J|-1)=n-|J|+2.\]
On the other hand, \(|\delta_{G}(A)|=|\delta_{G}(J)|=3|J|-2e_{J}\), so
\[3|J|\geq 3|J|-2e_{J}=|\delta_{G}(A)|\geq n-|J|+2, \tag{2}\]
implying that \(|J|\geq\lceil(n+2)/4\rceil\). Therefore \(\phi(G)\geq\lceil(n+2)/4\rceil\).
We now prove that \(\phi(G)=\lceil(n+2)/4\rceil\) if and only if \(G\) is upper-embeddable. First assume that \(\phi(G)=\lceil(n+2)/4\rceil\) and let \(\{A,J\}\) be a decycling partition where the decycling set \(J\) has \(\lceil(n+2)/4\rceil\) vertices. If we insert \(|J|=\lceil(n+2)/4\rceil\) into (2), we obtain \(e_{J}=0\) if \(n\equiv 2\pmod{4}\), and \(e_{J}\leq 1\), if \(n\equiv 0\pmod{4}\). An easy counting argument now shows that \(G[A]\) has exactly one component either if \(n\equiv 2\pmod{4}\) or if \(e_{J}=1\) and \(n\equiv 0\pmod{4}\). If \(n\equiv 0\pmod{4}\) and \(e_{J}=0\), we derive that \(G[A]\) has exactly two components. Hence, in either case, the decycling partition \(\{A,J\}\) is stable. By Corollary 3.8, \(G\) is upper-embeddable.
Conversely, if \(G\) is upper-embeddable, then, by Corollary 3.8, it admits a stable decycling partition \(\{A,J\}\). Direct counting yields that \(\phi(G)\leq|J|=\lceil(n+2)/4\rceil\) for each of the three types of stable partitions. As shown before, \(\phi(G)\geq\lceil(n+2)/4\rceil\), so \(\phi(G)=\lceil(n+2)/4\rceil\), and the proof is complete.
## 4 Coherent decycling partitions and ample upper-embeddability
Theorem 3.5 tells us that a connected cubic graph on \(2\pmod{4}\) vertices has a coherent decycling partition if and only if it is one-face embeddable. By contrast, two-face-embeddability is not sufficient for a cubic graph of order \(0\pmod{4}\) to have a coherent decycling partition, as confirmed by the graphs in Figures 2 and 3. In this section we show that a cubic graph of order \(0\pmod{4}\) admits a coherent decycling partition exactly when it possesses a stronger form of upper-embeddability, one that allows removing a pair of adjacent vertices without affecting upper-embeddability.
Let \(G\) be an upper-embeddable cubic graph. A pair \(\{x,y\}\) of distinct vertices of \(G\) will be called _removable_ if \(x\) and \(y\) are simply adjacent (that is, not connected by parallel edges), the edge \(xy\) is not a bridge, and \(G-\{x,y\}\) remains upper-embeddable; otherwise a pair of adjacent vertices is said to be _non-removable_. It follows from the definition that \(G-\{x,y\}\) is connected. If \(\{x,y\}\) is a removable pair of vertices, then the Betti numbers of \(G\) and \(G-\{x,y\}\) have different parities: the removal destroys exactly five edges and two vertices, so the Betti number drops by three. Therefore if a pair \(\{x,y\}\) of vertices is removable, one of \(G\) and \(G-\{x,y\}\) is one-face-embeddable and the other one is two-face-embeddable.
We say that a cubic graph \(G\) is _amply upper-embeddable_ if it is upper-embeddable and contains a removable pair of adjacent vertices. An upper-embeddable cubic graph is _tightly upper-embeddable_ if it is not amply upper-embeddable. In other words, in a tightly upper-embeddable cubic graph the removal of every pair of simply adjacent vertices produces either a disconnected graph or a graph with deficiency at least \(2\).
In this section we focus on amply upper-embeddable cubic graphs with an odd Betti number, that is, with order \(0\pmod{4}\). For obvious reasons we refer to them as _amply two-face-embeddable_ cubic graphs. A two-face-embeddable cubic graph that is not amply two-face-embeddable is _tightly two-face-embeddable_.
Our main goal is to prove that a connected cubic graph of order \(0\pmod{4}\) admits a coherent decycling partition if and only if it is amply two-face-embeddable, and to characterise such graphs by providing a Jungerman-Xuong-type theorem for them.
We begin our study of amply upper-embeddable graphs with a lemma showing that such graphs are loopless.
**Lemma 4.1**.: _If an upper-embeddable cubic graph contains a loop, then it is tightly two-face-embeddable._
Proof.: Let \(G\) be an upper-embeddable graph with a loop at a vertex \(u\) and let \(v\) be the neighbour of \(u\). Since the deficiency is additive over bridges, the component \(H\) of \(G-uv\) containing \(v\) is one-face embeddable, and hence the entire \(G\) is two-face-embeddable. Suppose, to the contrary, that \(G\) contains a removable pair of vertices, say \(\{x,y\}\). Taking into account that \(uv\) is a bridge and \(v\) is a cut-vertex of \(G\) we conclude that \(\{x,y\}\cap\{u,v\}=\emptyset\). In particular, both \(x\) and \(y\) are contained in \(H\). We wish to prove that \(G-\{x,y\}\) has deficiency at least \(2\). Since \(G-\{x,y\}\) is connected, so is \(H-\{x,y\}\), given the position of \(x\) and \(y\) within \(G\). Recall that \(\xi(H)=0\). It follows that \(H-\{x,y\}\) has an odd Betti number, and therefore \(\xi(H-\{x,y\})\geq 1\). Using the additivity of \(\xi\) over bridges again we eventually obtain that \(\xi(G-\{x,y\})\geq 2\), which contradicts the assumption that \(\{x,y\}\) is removable. Hence, \(G\) is tightly upper-embeddable.
The next lemma explores the relation of ample upper-embeddability to the existence of heavy components in the cotree of a Xuong tree.
**Lemma 4.2**.: _Let \(G\) be an upper-embeddable cubic graph. If \(G\) contains a Xuong tree whose cotree has a heavy component, then \(G\) is amply upper-embeddable._
Proof.: Let \(T\) be a Xuong tree of \(G\) with a heavy cotree component \(K\). By Lemma 3.4 we may assume that \(K\) is a path. Since the length of \(K\) is at least \(3\), \(K\) contains an edge \(e=uv\) adjacent to a pendant edge of \(K\). Consider the graph \(G^{\prime}=G-\{u,v\}\). The vertices \(u\) and \(v\) are pendant vertices of \(T\), so \(T^{\prime}=T-\{u,v\}\) is a spanning tree of \(G^{\prime}\); in particular, \(G^{\prime}\) is connected. If \(\beta(G)\) is even, then \(K-\{u,v\}\) has a single non-trivial odd component. Hence \(G^{\prime}\) is upper-embeddable and \(G\) is amply upper-embeddable. If \(\beta(G)\) is odd, then \(K\) has an odd number of edges, and \(K-\{u,v\}\) consists of even components. It follows that all components of \(G^{\prime}-E(T^{\prime})\) are even, so \(G^{\prime}\) is upper-embeddable, and therefore \(G\) is again amply upper-embeddable.
Now we are ready for a characterisation of amply upper-embeddable graphs with an odd Betti number.
**Theorem 4.3**.: _The following statements are equivalent for every connected cubic graph \(G\)._
1. \(G\) _is amply two-face embeddable._
2. \(G\) _has a Xuong tree with a single odd cotree component, which is heavy._
3. \(G\) _admits a coherent decycling bipartition._
Proof.: (i) \(\Rightarrow\) (ii): Let \(G\) be an amply two-face-embeddable graph, and let \(u\) and \(v\) be a removable pair of vertices of \(G\). By Lemma 4.1, none of \(u\) and \(v\) is incident with a loop. Since \(u\) and \(v\) are not doubly adjacent, \(u\) has a neighbour different from \(v\), and vice versa. Let \(u_{1}\) and \(u_{2}\) be the other two neighbours of \(u\) (possibly \(u_{1}=u_{2}\)) and let \(v_{1}\) and \(v_{2}\) be the other two neighbours of \(v\) (possibly \(v_{1}=v_{2}\)). Since \(G\) has an odd Betti number, the Betti number of \(G^{\prime}=G-\{u,v\}\) must be even, and therefore \(G^{\prime}\) is one-face embeddable. By Xuong's Theorem, \(G^{\prime}\) has a Xuong tree \(T^{\prime}\) with even cotree components. Extend \(T^{\prime}\) by adding the edges \(u_{1}u\) and \(v_{1}v\) to \(T^{\prime}\). The resulting subgraph \(T\) is clearly a spanning tree of \(G\). Note that both \(u_{2}\) and \(v_{2}\) were contained in even components of \(G^{\prime}-E(T^{\prime})\). In \(G-E(T)\) the vertices \(u_{2}\) and \(v_{2}\) lie in the same component which includes the path \(u_{2}uvv_{2}\). This component is odd and heavy, while the remaining cotree components remain even. Thus \(T\) is the required spanning tree of \(G\).
(ii) \(\Rightarrow\) (iii): Assume that \(G\) has a Xuong tree \(T\) with a single odd cotree component, which is heavy. Clearly, \(G\) must be loopless and of order at least \(4\). By Lemma 3.4 we can assume that \(E(G)-E(T)\) is acyclic, so the only odd component \(B\) of \(G-E(T)\) is a path of length at least three. We now construct a partition \(\mathcal{P}\) of \(E(G)-E(T)\) into pairs of adjacent edges and one singleton in such a way that the singleton is a pendant edge of \(B\). For each pair \(\{e,f\}\) from \(\mathcal{P}\) pick a vertex shared by both edges \(e\) and \(f\), while for the singleton \(\{g\}\) pick the end-vertex which is not a pendant vertex of \(B\). Let \(J\) be the set of vertices obtained from \(\mathcal{P}\) in this way, and set \(A^{\prime}=V(G)-J\). Observe that each vertex in \(J\) is a pendant vertex of \(T\), so the subgraph induced by the set \(A^{\prime}\) is the tree \(T-J\). Furthermore, the subgraph induced by \(J\) is near-independent and its single edge is the edge of \(B\) adjacent to \(g\). This shows that \(\{A^{\prime},J\}\) is the required partition of \(G\).
The implication (iii) \(\Rightarrow\) (ii) was proved in Case 1 of the proof of Theorem 3.6, so it remains to prove only the implication (ii) \(\Rightarrow\) (i). Assume that \(G\) has a Xuong tree with a single odd cotree component, which is heavy. Then \(G\) is upper-embeddable, and by Lemma 4.2 it is amply upper-embeddable.
**Corollary 4.4**.: _Let \(G\) be a connected cubic graph on \(n\) vertices where \(n\equiv 0\pmod{4}\). If \(G\) admits a Xuong tree with a unique odd cotree component which has at least three edges, then \(G\) admits a coherent stable decycling partition._
## 5 Coherent partitions of cyclically 4-edge-connected graphs
In this section we prove that every cyclically 4-edge-connected cubic graph admits a coherent decycling partition. This result strengthens Theorem 1.2 by eliminating the second possibility in its item (ii). The proof uses two important results, proved by Payan and Sakarovitch [32] and by Wormald [40], respectively. The former guarantees the existence of a stable decycling partition \(\{A,J\}\) where the decycling set \(J\) contains an arbitrary preassigned vertex. The latter result enables us to create cyclically 4-edge-connected cubic graphs from smaller ones by adding an edge and increasing the order by \(2\). The corresponding operation is called an _edge extension_ and is executed as follows. In a given cubic graph \(G\) we take two nonadjacent edges, subdivide each of them by a new vertex and join the two resulting 2-valent vertices by a new edge. The reverse operation is an _edge reduction_.
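On an edge-list representation the edge-extension operation takes only a few lines. The sketch below (our own helper, with illustrative labels for the two new vertices) applies one edge extension to \(K_{4}\); the reader can check that the resulting cubic graph on six vertices is \(K_{3,3}\), in line with Theorem 5.2 below.

```python
def edge_extension(edges, e, f, new_labels):
    """Subdivide the nonadjacent edges e and f with two new vertices and
    join these vertices by a new edge (the edge-extension operation)."""
    (a, b), (c, d) = e, f
    u, v = new_labels
    assert len({a, b} & {c, d}) == 0, "the two edges must be nonadjacent"
    remaining = [g for g in edges if g not in (e, f)]
    return remaining + [(a, u), (u, b), (c, v), (v, d), (u, v)]

# One edge extension applied to K4 over the nonadjacent edges (0,1) and (2,3).
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
G = edge_extension(K4, (0, 1), (2, 3), ("u", "v"))
# The result is cubic on 6 vertices with bipartition {0, 1, "v"} / {2, 3, "u"},
# i.e. it is K_{3,3}.
print(G)
```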
**Theorem 5.1**.: (Payan and Sakarovitch [32]) _Let \(G\) be a cyclically \(4\)-edge-connected cubic graph and let \(v\) be an arbitrary vertex of \(G\). Then \(G\) admits
a stable decycling partition \(\{A,J\}\) such that \(v\) belongs to the decycling set \(J\)._
**Theorem 5.2**.: (Wormald [40]) _Every cyclically \(4\)-edge-connected cubic graph other than \(K_{4}\) and \(Q_{3}\) can be obtained from a cyclically \(4\)-edge-connected cubic graph with fewer vertices by an edge-extension._
**Remark 5.3**.: In [32], Payan and Sakarovitch observed that Theorem 5.1 does not hold for cubic graphs with cyclic connectivity \(3\).
We are now ready for the main result of this section.
**Theorem 5.4**.: _Every cyclically \(4\)-edge-connected cubic graph \(G\) has a coherent stable decycling partition. More precisely, if \(G\) has \(n\) vertices, then_
* _for_ \(n\equiv 2\pmod{4}\) _the vertex set of_ \(G\) _has a partition_ \(\{A,J\}\) _where_ \(A\) _induces a tree and_ \(J\) _is independent;_
* _for_ \(n\equiv 0\pmod{4}\) _the vertex set of_ \(G\) _has a partition_ \(\{A,J\}\) _where_ \(A\) _induces a tree and_ \(J\) _is near-independent._
Proof.: From Section 3 we already know that in a cubic graph \(G\) of order \(n\equiv 2\pmod{4}\) every stable decycling partition is coherent. Thus we can assume that \(G\) is a cyclically \(4\)-edge-connected cubic graph of order \(n\equiv 0\pmod{4}\). If \(G\) is either \(K_{4}\) or \(Q_{3}\), then a coherent decycling partition is easily found: one for \(Q_{3}\) is shown in Figure 1, and for \(K_{4}\) any partition of vertices into two \(2\)-element subsets is good. Now, let \(G\) be different from \(K_{4}\) and \(Q_{3}\). By Theorem 5.2, \(G\) arises from a cyclically \(4\)-edge-connected cubic graph \(G^{\prime}\) of order \(n-2\) by an edge extension. Two independent edges \(xy\) and \(wz\) of \(G^{\prime}\) are thus subdivided by vertices \(u\) and \(v\), respectively, and \(G\) is then created by adding the edge \(e=uv\). Since \(G^{\prime}\) is cyclically \(4\)-edge-connected and its order is \(2\pmod{4}\), it has a coherent decycling partition \(\{A^{\prime},J^{\prime}\}\) such that \(A^{\prime}\) induces a tree and \(J^{\prime}\) is independent. Furthermore, according to Theorem 5.1, we can assume that \(x\in J^{\prime}\). Let \(T^{\prime}\) denote the tree induced by \(A^{\prime}\) in \(G^{\prime}\).
We show that \(\{A^{\prime}\cup\{v\},J^{\prime}\cup\{u\}\}\) is a coherent decycling partition of \(G\). Let us look at the distribution of the vertices \(y\), \(w\), and \(z\) with respect to the partition \(\{A^{\prime},J^{\prime}\}\). Since \(x\) is in \(J^{\prime}\), \(xy\) is an edge of \(G^{\prime}\), and \(J^{\prime}\) is an independent set, we conclude that \(y\) belongs to \(A^{\prime}\). As regards the edge \(wz\), either both its end-vertices are contained in \(A^{\prime}\), or they belong to different sets of the partition, say, \(w\in J^{\prime}\) and \(z\in A^{\prime}\). Observe that in either case the
induced subgraph \(G[J^{\prime}\cup\{u\}]\) has a single edge, namely \(xu\), so \(J^{\prime}\cup\{u\}\) is a near-independent set. Furthermore, the subgraph \(T=G[A^{\prime}\cup\{v\}]\) is a tree: in the former case it arises from \(T^{\prime}\) by subdividing its edge \(wz\) with \(v\), and in the latter case \(T=T^{\prime}+vz\). In both cases \(\{A^{\prime}\cup\{v\},J^{\prime}\cup\{u\}\}\) is a coherent decycling partition of \(G\), as claimed.
## 6 Coherent partitions of 3-connected cubic graphs
In this section we strengthen Theorem 5.4 by showing that coherent decycling partitions can be guaranteed in rich classes of cubic graphs whose cyclic connectivity equals 3. The main idea behind it is the existence of a canonical decomposition of 3-connected cubic graphs into cyclically 4-edge-connected factors. Although such a decomposition is likely to belong to the area of mathematical folklore, we need to work out the necessary details before they can be applied to our main result.
Let \(G_{1}\) and \(G_{2}\) be bridgeless cubic graphs. Pick a vertex \(v_{1}\) in \(G_{1}\) and a vertex \(v_{2}\) in \(G_{2}\), remove the two vertices, retain the dangling edges (more precisely, free edge-ends), and identify each dangling edge of \(G_{1}-v_{1}\) with a dangling edge of \(G_{2}-v_{2}\) to obtain a cubic graph \(G_{1}*G_{2}\). We say that \(G_{1}*G_{2}\) is a _\(3\)-sum_ of \(G_{1}\) and \(G_{2}\) with respect to \(v_{1}\) and \(v_{2}\), the two vertices being the _root vertices_ for the 3-sum. The three edges that connect the neigbours of \(v_{1}\) to the neighbours of \(v_{2}\) form a 3-edge-cut which we call the _principal edge cut_ of \(G_{1}*G_{2}\). Observe that if \(G_{1}\) and \(G_{2}\) are 3-connected, so is \(G_{1}*G_{2}\).
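The construction is again easy to express on edge lists. In the sketch below (our own helper, with illustrative vertex labels) a \(3\)-sum of two copies of \(K_{4}\) over one vertex of each produces the triangular prism; conversely, the prism has cyclic connectivity \(3\), and the canonical decomposition introduced next recovers the two copies of \(K_{4}\).

```python
def three_sum(G1, v1, G2, v2, matching):
    """3-sum of two cubic graphs given as edge lists: remove the root
    vertices v1 and v2 and join their neighbourhoods by the three edges in
    `matching` (each pair joins a neighbour of v1 to a neighbour of v2)."""
    assert len(matching) == 3
    kept1 = [e for e in G1 if v1 not in e]
    kept2 = [e for e in G2 if v2 not in e]
    return kept1 + kept2 + list(matching)

K4a = [("a0", "a1"), ("a0", "a2"), ("a0", "a3"),
       ("a1", "a2"), ("a1", "a3"), ("a2", "a3")]
K4b = [("b0", "b1"), ("b0", "b2"), ("b0", "b3"),
       ("b1", "b2"), ("b1", "b3"), ("b2", "b3")]
# Removing a0 and b0 leaves two triangles; the matching edges join them,
# so the 3-sum is the triangular prism and its principal edge cut is the
# set of three matching edges.
prism = three_sum(K4a, "a0", K4b, "b0",
                  [("a1", "b1"), ("a2", "b2"), ("a3", "b3")])
print(prism)
```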
Conversely, let \(G\) be a 3-connected cubic graph containing a 3-edge-cut \(S\) whose removal leaves two nontrivial components \(H_{1}\) and \(H_{2}\); note that \(S\) is necessarily cycle-separating, and vice versa. We can turn each \(H_{i}\) to a cubic graph \(\bar{H}_{i}\) by taking a new vertex \(x_{i}\) and attaching the three dangling edges of \(H_{i}\) to it. Clearly, both \(\bar{H}_{1}\) and \(\bar{H}_{2}\) are 3-connected. Moreover, \(G=\bar{H}_{1}*\bar{H}_{2}\). We have thus decomposed \(G\) into a 3-sum of two smaller 3-connected graphs \(\bar{H}_{1}\) and \(\bar{H}_{2}\). If any of \(\bar{H}_{1}\) and \(\bar{H}_{2}\) contains a cycle-separating 3-edge-cut, we can repeat the process and obtain a set of three cubic graphs such that \(G\) can be reconstructed from them by using 3-sum twice. After a finite number of steps we produce a collection \(\{G_{1},G_{2},\ldots,G_{r}\}\) of 3-connected cubic graphs such that \(G\) can be reconstructed from them by repeated application of 3-sum, but none of the graphs \(G_{i}\) can be expressed as a 3-sum of two smaller cubic graphs. In other words, each \(G_{i}\) is cyclically 4-edge-connected. The collection \(\{G_{1},G_{2},\ldots,G_{r}\}\) is called a _decomposition_ of \(G\) into a 3-sum of cyclically 4-edge-connected graphs, and the graphs \(G_{i}\) are called the _factors
of the decomposition. As we confirm in the following theorem, the collection \(\{G_{1},G_{2},\ldots,G_{r}\}\) is determined uniquely up to isomorphism and permutation of factors. In this sense, it is appropriate to call it a _canonical decomposition_ of \(G\).
**Theorem 6.1**.: _Every \(3\)-connected cubic graph \(G\) has a decomposition \(\{G_{1},\)\(G_{2},\ldots,G_{r}\}\) into cyclically \(4\)-edge-connected graphs such that \(G\) can be reconstructed from them by repeated \(3\)-sums. The decomposition is unique up to isomorphism and order of factors._
Proof.: The fact that every \(3\)-connected cubic graph can be decomposed into cyclically \(4\)-edge-connected factors has been explained prior to the formulation of the theorem. As regards the uniqueness, it is not difficult to see that if \(S_{1}\) and \(S_{2}\) are two distinct cycle-separating \(3\)-edge-cuts, then the result of the decomposition into three \(3\)-connected cubic graphs by using the two cuts does not depend on the order in which they are taken. Indeed, if \(S_{1}\cap S_{2}=\emptyset\), the conclusion is obvious. If \(S_{1}\cap S_{2}\neq\emptyset\), then the intersection consists of a single edge, and again both ways in which the decomposition is performed produce the same set of \(3\)-connected cubic graphs up to isomorphism.
We are now ready for the main result of this section.
**Theorem 6.2**.: _Every \(3\)-connected cubic graph whose canonical decomposition into cyclically \(4\)-edge-connected cubic graphs contains at most one cyclically odd factor admits a coherent decycling partition._
Proof.: Let \(G\) be a \(3\)-connected cubic graph whose canonical decomposition \(\{G_{1},\ldots,G_{r}\}\) into cyclically \(4\)-edge-connected graphs contains at most one cyclically odd factor, say \(G_{1}\). We may assume that \(r\geq 2\), for otherwise the result directly follows from Theorem 5.4. Now, \(G\) can be reconstructed from \(\{G_{1},\ldots,G_{r}\}\) by a repeated use of a \(3\)-sum. Furthermore, for cubic graphs \(H\) and \(K\) one has \(\beta(H*K)=\beta(H)+\beta(K)-2\), so \(G\) is cyclically odd if and only if \(G_{1}\) is cyclically odd. Therefore, if we start the reconstruction of \(G\) from \(G_{1}\), the statement of the theorem will follow immediately from the following claim.
Claim. _Let \(G=H*K\) be a \(3\)-sum of cubic graphs where \(H\) admits a coherent decycling partition and \(K\) is cyclically \(4\)-edge-connected with \(\beta(K)\) even. Then \(G\) also admits a coherent decycling partition._
Proof of Claim. Let \(x\) be the root vertex of \(H\), let \(x_{1}\), \(x_{2}\), and \(x_{3}\) be its neighbours, and let \(y\) be the root vertex of \(K\) with neighbours \(y_{1}\), \(y_{2}\), and \(y_{3}\). We assume that the principal \(3\)-edge-cut of \(H*K\) consists of the edges \(x_{1}y_{1}\), \(x_{2}y_{2}\), and \(x_{3}y_{3}\).
Theorem 3.5 and the fact that every cyclically \(4\)-edge-connected cubic graph is upper-embeddable imply that \(K\) has a coherent decycling partition \(\{A_{K},J_{K}\}\) where \(J_{K}\) is independent. Let \(\{A_{H},J_{H}\}\) be an arbitrary coherent decycling partition for \(H\); recall that the induced subgraph \(H[J_{H}]\) has at most one edge. If such an edge exists, we call it a _surplus edge_. Let \(T_{H}\subseteq H\) and \(T_{K}\subseteq K\) denote the trees induced by the sets \(A_{H}\) and \(A_{K}\), respectively. Note that both trees are non-trivial, because each of \(H\) and \(K\) has at least four vertices and their decycling number is given by the formula \(\lceil(n+2)/4\rceil\), where \(n\) is the number of vertices.
We distinguish two main cases depending on whether \(x\in J_{H}\) or \(x\in A_{H}\).
Case 1. _The root vertex of \(H\) belongs to \(J_{H}\)._
We choose the partition \(\{A_{K},J_{K}\}\) in such a way that \(y\in A_{K}\). This is indeed possible, because Theorem 5.1 tells us that we can pick a neighbour \(y^{\prime}\) of \(y\) and require that \(y^{\prime}\in J_{K}\); as a consequence, \(y\) lies in \(A_{K}\). Now we can define a vertex partition \(\{A,J\}\) of \(G\) by setting \(A=A_{H}\cup(A_{K}-\{y\})\) and \(J=(J_{H}-\{x\})\cup J_{K}\). To show that \(\{A,J\}\) is a coherent decycling partition we consider two subcases.
Subcase 1.1. _The root vertex of \(H\) is incident with the surplus edge._
Without loss of generality we may assume that the surplus edge of \(H\) is \(xx_{3}\). We now choose the partition \(\{A_{K},J_{K}\}\) in such a way that \(y_{3}\in J_{K}\). Since \(T_{K}\) is non-trivial, at least one of the edges incident with \(y\) belongs to \(T_{K}\), say \(yy_{1}\), but possibly also \(yy_{2}\). It follows that \(T_{K}-y\) is either a tree or it consists of two trees, each containing a neighbour of \(y\) different from \(y_{3}\). It is easy to see that the induced subgraph \(T=G[A]\) is a tree: depending on the structure of \(T_{K}-y\) either \(T=T_{H}\cup\{x_{1}y_{1}\}\cup(T_{K}-y)\) or \(T=T_{H}\cup\{x_{1}y_{1},x_{2}y_{2}\}\cup(T_{K}-y)\). In the former case, \(J=(J_{H}-\{x\})\cup J_{K}\), while in the latter case \(J=(J_{H}-\{x\})\cup(J_{K}\cup\{y_{2}\})\). By the assumptions, if \(e\) is an edge joining two vertices of \(J\), then \(e\in\{x_{1}y_{1},x_{2}y_{2},x_{3}y_{3}\}\). Clearly, \(x_{3}y_{3}\) is an edge of \(G[J]\). In the former case, \(x_{1}y_{1}\in E(T)\) and \(x_{2}y_{2}\) joins a vertex of \(T\) to a vertex of \(J\). In the latter case both \(x_{1}y_{1}\) and \(x_{2}y_{2}\) belong to \(T\). In either case, \(G[J]\) has precisely one edge, namely \(x_{3}y_{3}\). Hence, \(\{A,J\}\) is a coherent decycling partition of \(G\).
Subcase 1.2. _The root vertex of \(H\) is not incident with the surplus edge._
As before, we choose a coherent decycling partition \(\{A_{K},J_{K}\}\) with \(y\in A_{K}\), but now we are not restricted in the choice of a suitable neighbour \(y^{\prime}\) of \(y\) that should lie in \(J_{K}\). For simplicity we again choose \(y_{3}\in J_{K}\). We may also assume that either \(yy_{1}\) only, or both \(yy_{1}\) and \(yy_{2}\), belong to \(T_{K}\). As in the previous subcase, the induced subgraph \(G[A]\) is a tree (with the same description as above), and \(G[J]\) has at most one edge. If such an edge exists, then it coincides with the surplus edge of \(H\). We have thus proved that \(\{A,J\}\) is a coherent decycling partition of \(G\).
Case 2. _The root vertex of \(H\) belongs to \(A_{H}\)._
In this case we choose the coherent partition \(\{A_{K},J_{K}\}\) in such a way that \(y\in J_{K}\); the existence of such a partition is guaranteed by Theorem 5.4. Let \(Y\) be the set of all edges of \(G\) having the form \(x_{i}y_{i}\) where \(x_{i}x\) belongs to \(T_{H}\). Since \(T_{H}\) is non-trivial, \(1\leq|Y|\leq 3\). Let us define a vertex partition \(\{A,J\}\) of \(G\) by setting \(A=(A_{H}-\{x\})\cup A_{K}\) and \(J=J_{H}\cup(J_{K}-\{y\})\). Observe that \(T_{H}-x\) is a forest whose number of components equals \(|Y|\). It follows that \(T=G[A]=(T_{H}-x)\cup Y\cup T_{K}\) is a tree. Moreover, \(G[J]\) has at most one edge, which, if it exists, coincides with the surplus edge of \(H\). Hence, \(\{A,J\}\) is a coherent decycling partition of \(G\).
The proof is complete.
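As an aside (not part of the original argument), the two properties that the proof manipulates, a tree part \(A\) and a complement inducing at most one edge, are easy to verify mechanically for small graphs. The following Python/networkx sketch is only an illustration; the function name and the \(K_{4}\) example are ours.

```python
import networkx as nx

def is_coherent_decycling_partition(G, A):
    """For a cubic graph G and a non-empty vertex set A, check the two
    properties used in the proof: G[A] is a tree and the complement
    J = V(G) - A induces at most one edge (the possible 'surplus edge')."""
    A = set(A)
    J = set(G.nodes) - A
    return (len(A) > 0
            and nx.is_tree(G.subgraph(A))
            and G.subgraph(J).number_of_edges() <= 1)

# Toy usage on K_4: A = {0, 1} induces a single edge (a tree) and J = {2, 3}
# induces exactly one edge, so the partition is coherent.
print(is_coherent_decycling_partition(nx.complete_graph(4), {0, 1}))
```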
In the remainder of this section we show that Theorem 6.2 applies to the family of cubic graphs defined by a weaker form of cyclic \(4\)-connectivity known as odd cyclic \(4\)-connectivity. A graph \(G\) is said to be _odd-cyclically \(k\)-connected_, for an integer \(k\geq 2\), if every induced subgraph \(H\subseteq G\) such that \(\delta_{G}(H)\) is cycle-separating and \(\beta(H)\) is odd has \(|\delta_{G}(H)|\geq k\). The concept of odd cyclic \(4\)-connectivity was independently introduced by Khomenko and Glukhov [16] and Nebesky [26] in 1980 and 1983, respectively, for the study of the maximum genus of a graph.
Every cyclically \(k\)-edge-connected cubic graph is clearly odd-cyclically \(k\)-connected, but the converse is false. Figure 4 depicts an example of a cubic graph which is odd-cyclically \(4\)-connected while its standard cyclic connectivity equals only \(3\).
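For small instances the definition can also be tested by brute force. The sketch below (helper names are ours, and the enumeration is exponential, so it is meant for illustration only) checks every induced subgraph against the definition; the `betti` helper computes the cycle rank used throughout.

```python
import networkx as nx
from itertools import combinations

def betti(H):
    """Betti number beta(H) = |E(H)| - |V(H)| + (number of components)."""
    return (H.number_of_edges() - H.number_of_nodes()
            + nx.number_connected_components(H))

def is_odd_cyclically_k_connected(G, k=4):
    """Exhaustive test for a small cubic graph G: every induced subgraph H
    with delta_G(H) cycle-separating and beta(H) odd must have
    |delta_G(H)| >= k."""
    nodes = list(G.nodes)
    for r in range(3, len(nodes) - 2):                  # both sides need a cycle
        for S in combinations(nodes, r):
            S = set(S)
            H, Hc = G.subgraph(S), G.subgraph(set(nodes) - S)
            if betti(H) % 2 == 1 and betti(Hc) >= 1:    # odd side, cut is cycle-separating
                cut_size = sum(1 for u, v in G.edges() if (u in S) != (v in S))
                if cut_size < k:
                    return False
    return True

# K_{3,3} is cyclically 4-edge-connected, hence also odd-cyclically 4-connected:
print(is_odd_cyclically_k_connected(nx.complete_bipartite_graph(3, 3)))
```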
**Theorem 6.3**.: _Let \(\{G_{1},G_{2},\ldots,G_{r}\}\) be the canonical decomposition of an odd-cyclically \(4\)-connected cubic graph with cyclic connectivity \(3\). Then each \(G_{i}\) has an even Betti number, and so has \(G\). In particular, \(G\) has a coherent decycling partition \(\{A,J\}\) where \(A\) induces a tree and \(J\) is an independent set._
Proof.: It is sufficient to prove that if \(H\) is a \(3\)-connected odd-cyclically \(4\)-connected cubic graph which can be expressed as a \(3\)-sum \(K*L\) of cubic graphs \(K\) and \(L\), then both \(K\) and \(L\) are odd-cyclically \(4\)-connected and have an even Betti number. By symmetry, it suffices to prove it for, say, \(K\).
Since \(H=K*L\), there exists a vertex \(u\) in \(K\) such that \(K-u\) is an induced subgraph of \(H\) with \(|\delta_{H}(K-u)|=3\). By the definition of odd-cyclic \(4\)-connectivity, \(\beta(K-u)\) is even. It follows that \(\beta(K)\) is even, similarly \(\beta(L)\) is even, and consequently \(\beta(H)=\beta(K)+\beta(L)-2\) is also even.
Next we prove that \(K\) is odd-cyclically \(4\)-connected. Suppose not. Then \(K\) contains an induced subgraph \(Q\) such that \(\delta_{K}(Q)\) is cycle-separating, \(\beta(Q)\) is odd, and \(|\delta_{K}(Q)|\leq 3\). Since \(K\) is \(3\)-connected, we infer that \(|\delta_{K}(Q)|=3\). Let \(Q^{\prime}\) be the other component of the graph \(K-\delta_{K}(Q)\), that is, \(Q^{\prime}=K-V(Q)\). Further, let \(u\) be the root vertex of \(K\) used for the \(3\)-sum producing \(H\). We can clearly assume that \(u\) belongs to \(Q^{\prime}\). Note that \(\beta(Q^{\prime})\) is odd, too. Now, at most one edge of \(\delta_{K}(Q)\) is incident with \(u\), because \(\delta_{K}(Q)\) is a minimum cycle-separating edge cut. However, irrespective of whether \(\delta_{K}(Q)\) does or does not contain such an edge, it is easy to see that \(Q\) is an induced subgraph of \(H\) with \(\beta(Q)\) odd and \(|\delta_{H}(Q)|=3\), contradicting the assumption that \(H\) is odd-cyclically \(4\)-connected. This completes the proof.
## 7 Cubic graphs with no coherent decycling partitions
The aim of this section is to present an infinite family of \(2\)-connected cubic graphs which have a stable decycling partition but no coherent one. By Theorem 4.3, this amounts to displaying an infinite family \(\mathcal{F}\) of \(2\)-connected tightly upper-embeddable graphs. For convenience, the topological language will be in the foreground throughout.
Figure 4: An odd-cyclically \(4\)-connected cubic graph which is not cyclically \(4\)-edge-connected.
The family \(\mathcal{F}\) will be built from two graphs displayed in Figure 3. An important fact about them is that they are _claw-free_, which means that they do not contain an induced subgraph isomorphic to the complete bipartite graph \(K_{1,3}\). In fact, the entire family \(\mathcal{F}\) consists of claw-free graphs.
Simple \(2\)-connected claw-free cubic graphs were characterised by Palmer et al. [31] in 2002 (see also [30, Proposition 1]). Their characterisation requires two operations. The first of them is the well-known _inflation_ of a vertex to a triangle. It replaces a vertex \(v\) with a triangle \(W_{v}\) and attaches the edges formerly incident with \(v\) to distinct vertices of \(W_{v}\). The second operation is a _diamond insertion_, where a _diamond_ means a graph isomorphic to \(K_{4}\) minus an edge. This operation is performed by replacing an edge \(e=uv\) with a path \(P_{e}=uu^{\prime}v^{\prime}v\) of length \(3\) and substituting the inner edge \(u^{\prime}v^{\prime}\) of \(P_{e}\) with a diamond \(D_{e}\) in such a way that its \(2\)-valent vertices are identified with \(u^{\prime}\) and \(v^{\prime}\), respectively. More specifically, this is an _insertion of a diamond_ into \(e\), see Figure 5.
We can repeat the operation of inserting a diamond into the edge \(e=uv\) by inserting a diamond into the edge \(v^{\prime}v\), and so on, which results in replacing \(e\) with a _string of diamonds_. To be more explicit, we say that \(e\) is replaced with a _string of \(k\) diamonds_, where \(k\geq 0\), if \(e\) is replaced with an alternating sequence \(uu_{1},D_{1},v_{1}u_{2},D_{2},\ldots,v_{k-1}u_{k},D_{k},v_{k}v\) of edges and diamonds such that the \(2\)-valent vertices of the diamond \(D_{i}\) are \(u_{i}\) and \(v_{i}\) for each \(i\in\{1,2,\ldots,k\}\). The number \(k\) is the _length_ of the string. The string collapses to the edge \(e=uv\) if \(k=0\).
To finish the series of definitions needed to characterise simple \(2\)-connected claw-free cubic graphs we define a _ring of diamonds_ to be a graph obtained from an even cycle by substituting every second edge with a diamond.
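The operations just defined are easy to realise concretely. The following Python/networkx sketch (the helper names are ours) implements diamond insertion, strings of diamonds, and rings of diamonds directly from the definitions above.

```python
import networkx as nx
from itertools import count

_fresh = count()                            # supplies fresh labels for new vertices

def insert_diamond(G, u, v):
    """Insert a diamond into the edge uv (cf. Figure 5): uv becomes the path
    u-u'-v'-v and its inner edge carries a diamond whose 2-valent vertices
    are u' and v'.  Returns (u', v')."""
    up, vp, s, t = (f"x{next(_fresh)}" for _ in range(4))
    G.remove_edge(u, v)
    G.add_edges_from([(u, up), (vp, v),                             # outer path
                      (up, s), (up, t), (vp, s), (vp, t), (s, t)])  # the diamond
    return up, vp

def string_of_diamonds(G, u, v, k):
    """Replace the edge uv with a string of k >= 1 diamonds, by repeatedly
    inserting a diamond into the edge currently joining the string to v."""
    a = u
    for _ in range(k):
        _, a = insert_diamond(G, a, v)
    return G

def ring_of_diamonds(k):
    """An even cycle C_{2k} in which every second edge is substituted by a
    diamond (its endpoints playing the role of the 2-valent vertices)."""
    G = nx.cycle_graph(2 * k)
    for i in range(0, 2 * k, 2):
        s, t = f"x{next(_fresh)}", f"x{next(_fresh)}"
        G.remove_edge(i, i + 1)
        G.add_edges_from([(i, s), (i, t), (i + 1, s), (i + 1, t), (s, t)])
    return G

# Both operations preserve 3-regularity:
assert all(d == 3 for _, d in ring_of_diamonds(3).degree())
```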
The characterisation of simple \(2\)-connected claw-free cubic graphs reads as follows.
**Proposition 7.1**.: (Palmer [31]) _Let \(G\) be a simple \(2\)-connected cubic graph. Then \(G\) is a claw-free cubic graph if and only if one of the following holds:_
1. \(G\) _is isomorphic to_ \(K_{4}\)_,_
2. \(G\) _is a ring of diamonds,_
3. \(G\) _is obtained from a_ \(2\)_-edge-connected cubic graph_ \(H\) _by inflating each vertex of_ \(H\) _to a triangle and by replacing certain edges of_ \(H\) _with strings of diamonds._
Figure 5: Insertion of a diamond into an edge.
We continue by defining the family \(\mathcal{F}\).
**Construction.** Take a \(2\)-connected cubic graph \(K\) on four vertices; there are two such graphs: the complete graph \(K_{4}\) on four vertices and the _necklace_\(L_{4}\) obtained from a \(4\)-cycle by replacing every second edge with a pair of parallel edges. In \(K\), inflate each vertex to a triangle; these triangles are called _vertex-triangles_. In the resulting graph \(K^{\prime}\) replace each edge not lying on a vertex-triangle with a string of diamonds with positive length; different edges of \(K^{\prime}\) may accommodate different numbers of diamonds. Include each graph obtained in this way in \(\mathcal{F}\). The smallest graphs in \(\mathcal{F}\) are the graphs \(F_{1}\) and \(F_{2}\) from Figure 3, which are obtained by vertex inflation and diamond insertion from \(K_{4}\) and \(L_{4}\), respectively.
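For concreteness, here is a sketch of the \(K_{4}\)-based branch of the construction, reusing `insert_diamond` and `string_of_diamonds` from the previous snippet. The function names are ours, and the \(L_{4}\)-based members would additionally require multigraph handling for the parallel edges, which is omitted here.

```python
import networkx as nx

def inflate_to_triangles(K):
    """Inflate every vertex of a simple cubic graph K to a vertex-triangle;
    returns the inflated graph together with the list of inter-triangle edges."""
    G = nx.Graph()
    port = {v: 0 for v in K}                 # next unused corner of each triangle
    for v in K:
        G.add_edges_from([((v, 0), (v, 1)), ((v, 1), (v, 2)), ((v, 2), (v, 0))])
    links = []
    for u, v in K.edges():
        e = ((u, port[u]), (v, port[v]))
        G.add_edge(*e)
        links.append(e)
        port[u] += 1
        port[v] += 1
    return G, links

def family_F_member_from_K4(lengths=(1, 1, 1, 1, 1, 1)):
    """Member of F built from K_4: the i-th inter-triangle edge is replaced by
    a string of lengths[i] >= 1 diamonds (string_of_diamonds from above)."""
    G, links = inflate_to_triangles(nx.complete_graph(4))
    for (u, v), k in zip(links, lengths):
        string_of_diamonds(G, u, v, k)
    return G

# With unit lengths this gives a cubic graph on 3*4 + 4*6 = 36 vertices,
# presumably the K_4-based graph F_1 of Figure 3 (up to isomorphism).
F = family_F_member_from_K4()
assert all(d == 3 for _, d in F.degree()) and F.number_of_nodes() == 36
```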
Our aim is to prove that all graphs in \(\mathcal{F}\) are tightly two-face embeddable. In the course of the proof it will be convenient to say that a Xuong tree of a two-face-embeddable cubic graph is _light_ if the unique odd component of the corresponding cotree is light, that is, it is formed by a single edge. It follows from Theorem 4.3 that every Xuong tree of a tightly two-face-embeddable cubic graph is light. A Xuong tree which is not light will be called _heavy_.
We need the following lemma.
**Lemma 7.2**.: _If a cubic graph \(G\) contains a light Xuong tree and \(G^{\prime}\) arises from \(G\) by a diamond insertion, then \(G^{\prime}\) also contains a light Xuong tree._
Proof.: Assume that \(G^{\prime}\) arises from \(G\) by inserting a diamond \(D\) into an edge \(e=uv\) of \(G\). Let \(u^{\prime}\) be the \(2\)-valent vertex of \(D\) adjacent to \(u\) in \(G^{\prime}\) and similarly let \(v^{\prime}\) be the \(2\)-valent vertex of \(D\) adjacent to \(v\). Let \(s\) and \(t\) denote the two \(3\)-valent vertices of \(D\).
Take an arbitrary light Xuong tree \(T\) of \(G\). We show that \(T\) can be modified to a light Xuong tree of \(G^{\prime}\). We distinguish three cases depending on the position of \(e\) with respect to \(T\).
Case 1. _The edge \(e\) lies in \(T\)_. In this case we extend \(T-e\) with the edges \(uu^{\prime}\), \(u^{\prime}s\), \(st\), \(sv^{\prime}\), and \(v^{\prime}v\) to obtain a spanning tree \(T^{\prime}\) of \(G^{\prime}\). It is easy to see that \(T^{\prime}\) is light.
Case 2. _The edge \(e\) lies in an even cotree component with respect to \(T\)_. Recall that every even cotree component with respect to \(T\) can be partitioned into
pairs of adjacent edges. Let \(f\) be the edge forming a pair with \(e\), and let \(Q\) be the component of \(G-E(T)\) containing \(e\) and \(f\). Without loss of generality we may assume that \(v\) is the common endvertex of both \(e\) and \(f\). Now we extend \(T\) with the edges \(uu^{\prime}\), \(u^{\prime}s\), \(st\), and \(sv^{\prime}\) to obtain a spanning tree \(T^{\prime}\) of \(G^{\prime}\). Observe that \(Q\) is now transformed to an even component of \(G^{\prime}-E(T^{\prime})\) containing the path \(u^{\prime}tv^{\prime}v\) and the edge \(f\), while the odd component of \(G-E(T)\) remains intact in \(G^{\prime}-E(T^{\prime})\). Hence, \(T^{\prime}\) is a light Xuong tree of \(G^{\prime}\).
Case 3. _The edge \(e\) coincides with the one that forms the odd component of \(G-E(T)\)._ Now we form \(T^{\prime}\) by adding to \(T\) the path \(uu^{\prime}sv^{\prime}t\) of \(G^{\prime}\). As a consequence, the path \(u^{\prime}ts\) forms an even component of \(G^{\prime}-E(T^{\prime})\) and the edge \(v^{\prime}v\) constitutes the only odd component of \(G^{\prime}-E(T^{\prime})\). Again, \(T^{\prime}\) is a light Xuong tree of \(G^{\prime}\).
We proceed to the main result of this section. It characterises all tightly two-face embeddable graphs within the class of simple \(2\)-connected claw-free cubic graphs.
**Theorem 7.3**.: _The following statements are equivalent for every simple \(2\)-connected claw-free cubic graph \(G\)._
1. \(G\) _is tightly two-face-embeddable._
2. \(G\in\mathcal{F}\)_._
Proof.: (i) \(\Rightarrow\) (ii): Assume that \(G\) is a simple \(2\)-connected claw-free cubic graph which is tightly two-face-embeddable. Then one of the cases (i), (ii) or (iii) of Proposition 7.1 occurs. In cases (i) and (ii) it is easy to find a heavy Xuong tree, so \(G\) belongs to the family of graphs characterised by (iii) of Proposition 7.1. It follows that \(G\) arises from a \(2\)-connected cubic graph \(H\) by inflating every vertex of \(H\) to a triangle and by replacing certain edges of \(H\) with strings of diamonds.
At first we show that \(H\) has four vertices. Let \(n\) denote the order of \(H\). We apply Nebesky's characterisation of upper-embeddable graphs (Theorem 3.3) to \(G\). Let \(X\) be the set of all edges of \(G\) not lying on a triangle. With this choice, each cyclically odd component of \(G-X\) is a vertex-triangle and each cyclically even component is a diamond. Clearly, \(\operatorname{oc}(G-X)=n\). To calculate the number of cyclically even components we count the contribution of each edge of \(H\) to both \(\operatorname{ec}(G-X)\) and \(|X|\). Assume that an edge \(e\) of \(H\) has been replaced with a string of \(k\) diamonds. Since \(k+1\) edges of the string belong
to \(X\), the edge \(e\) contributes \(-1\) to the difference \(\operatorname{ec}(G-X)-|X|\). There are \(3n/2\) edges in \(H\), so \(\operatorname{ec}(G-X)-|X|=-3n/2\). Equation (1) implies that
\[2\geq 2\operatorname{oc}(G-X)+\operatorname{ec}(G-X)-|X|=2n-3n/2,\]
hence \(n\leq 4\). Recall that \(G\) is two-face-embeddable, which means that \(|V(G)|\equiv 0\pmod{4}\). It follows that \(n=|V(H)|\equiv 0\pmod{4}\) as well, and therefore \(n=4\). Thus \(H=K_{4}\) or \(H=L_{4}\).
To finish the proof of the implication (i) \(\Rightarrow\) (ii) we need to show that to obtain \(G\) each edge of \(H\) must be replaced with a string of diamonds which has positive length. Suppose this is not the case, and there are certain edges in \(H\) which are inherited by \(G\) without any diamond insertion. First, let us examine the case where there is only one such edge \(e_{0}\) in \(H\). Up to isomorphism, there are two possibilities to choose \(e_{0}\) in \(L_{4}\) and one in \(K_{4}\). In all three cases \(G\) admits a heavy Xuong tree \(T\) with the odd cotree component containing \(e_{0}\); see Figure 6. Hence, by Theorem 4.3, \(G\) is amply upper-embeddable, contrary to the assumption. To proceed, observe that each diamond \(D\subseteq G\) supports an even component \(Q\) of \(G-E(T)\) such that \(Q\subseteq D\) and both edges of \(\delta_{G}(D)\) are contained in \(T\). If we contract any number of diamonds in \(G\) and suppress the resulting 2-valent vertices, \(T\) will transform to a heavy Xuong tree of the resulting cubic graph, with the original heavy cotree component containing \(e_{0}\) being preserved. It follows that \(G\) is amply upper-embeddable unless each edge of \(H\) has been replaced with a string of diamonds of positive length. Summing up, we have proved that if \(G\) is a simple 2-connected claw-free cubic graph which is tightly two-face-embeddable, then \(G\in\mathcal{F}\).
Figure 6: Heavy Xuong trees in three small claw-free graphs.
(ii) \(\Rightarrow\) (i): Assume that \(G\in\mathcal{F}\). We first show that \(G\) admits a light Xuong tree. As Figure 3 indicates, this is the case for both \(F_{1}\) and \(F_{2}\). Since \(G\) arises from a graph \(F\in\{F_{1},F_{2}\}\) by iterated diamond insertion, a light Xuong tree in \(G\) is guaranteed by Lemma 7.2.
To prove that \(G\) is tightly two-face embeddable we need to show that every Xuong tree of \(G\) is light. We proceed by contradiction and suppose that \(G\) contains a heavy Xuong tree \(T\). By Lemma 4.2, we can assume that the cotree of \(T\) is acyclic. Let \(C\) denote the corresponding cotree. We now examine all possible ways of how \(T\) and \(C\) can intersect a diamond or a vertex-triangle.
Firstly, we analyse the diamonds. It is clear that for each diamond \(D\) the intersection of \(T\) with \(\delta_{G}(D)\) consists of either one or two edges. Accordingly, we distinguish several types of diamonds three of which are of particular interest.
We say that a diamond \(D\) is
* _Type \(1\)_, if exactly one edge of \(\delta_{G}(D)\) belongs to \(T\) and \(C\cap D\) forms a path of length \(2\) connecting the two \(2\)-valent vertices of \(D\);
* _Type \(2\)_, if both edges of \(\delta_{G}(D)\) belong to \(T\) and \(C\cap D\) forms a path connecting the two \(2\)-valent vertices of \(D\); and
* _Type \(3\)_, if both edges of \(\delta_{G}(D)\) belong to \(T\) and \(C\cap D\) forms a path of length \(3\) connecting a \(3\)-valent vertex of \(D\) to a \(2\)-valent vertex of \(D\).
Next we discuss the vertex-triangles. It may be useful to realise that \(T\) contains at least one edge of each vertex-triangle \(W\) and at least one edge of each \(\delta_{G}(W)\). Two types of vertex-triangles are particularly important.
We say that a vertex-triangle \(W\) is
* _Type \(1\)_, if \(C\cap W\) is a path of length \(2\) and exactly one edge of \(\delta_{G}(W)\) belongs to \(C\); and
* _Type \(2\)_, if \(C\cap W\) is a path of length \(2\) and all three edges of \(\delta_{G}(W)\) belong to \(T\).
Note that in Type \(1\) the cotree edge of \(\delta_{G}(W)\) must be incident with the initial or terminal vertex of the path \(C\cap W\), otherwise \(T\) fails to be a spanning subgraph.
The following claim explains the importance of the five types of diamonds and vertex-triangles specified above.
Claim. _In \(G\), there exists a heavy Xuong tree such that each diamond is of Type \(1\), \(2\), or \(3\), and each vertex-triangle is of Type \(1\) or \(2\)._
Proof of Claim. We start with an arbitrary heavy Xuong tree \(T\subseteq G\) whose cotree \(C\) is acyclic. Consider an arbitrary diamond \(D\) of \(G\); let \(u\) and \(v\) be the \(2\)-valent vertices and let \(s\) and \(t\) be the \(3\)-valent vertices of \(D\). If the edge \(st\) belongs to \(T\), then \(D\) is easily seen to be one of Type \(1\), \(2\), or \(3\). To see it, it is sufficient to realise that \(C\) is acyclic and its unique odd component is heavy. If \(st\) belongs to \(C\) then, up to isomorphism, there are three possibilities for the distribution of edges of \(D\) into \(T\) and \(C\); they are illustrated in Figure 7 on the left. In each of these three cases one can find an edge \(x\) of \(D\) different from \(st\) such that the elementary switch \(T^{\prime}=T+st-x\) gives rise to a Xuong tree whose cotree is again acyclic and has exactly one heavy odd component. According to the notation introduced in Figure 7, it is sufficient to take \(x=tv\) in all three cases. Moreover, with respect to \(T^{\prime}\) the diamond \(D\) turns out to be one of Types \(1\), \(2\), or \(3\). By repeating this procedure wherever necessary we eventually obtain a Xuong tree with all diamonds of Type \(1\), \(2\), or \(3\).
Figure 7: Transformation of diamonds to Type \(1\), \(2\), and \(3\).
We now take care of vertex-triangles. By the previous part of the proof we may assume that \(T\) already has the property that each diamond is of Type \(1\), \(2\), or \(3\). Consider an arbitrary vertex-triangle \(W\) of \(G\). At least one edge of \(W\) belongs to \(T\) and at least one edge of \(\delta_{G}(W)\) belongs to \(T\). If exactly one edge of \(W\) belongs to \(T\), then, up to isomorphism, there are two possibilities for the distribution of edges of \(\delta_{G}(W)\) to \(T\) and \(C\), which depend on the size of \(\delta_{G}(W)\cap E(T)\): if \(|\delta_{G}(W)\cap E(T)|=2\), then \(W\) is Type \(1\), and if \(|\delta_{G}(W)\cap E(T)|=3\), then \(W\) is Type \(2\). The situation that \(|\delta_{G}(W)\cap E(T)|=1\) does not occur, as \(T\) would not be connected. Next assume that exactly two edges of \(W\) belong to \(T\). This leads to three possible distributions, two with \(|\delta_{G}(W)\cap E(T)|=1\) and one with \(|\delta_{G}(W)\cap E(T)|=2\), see Figure 8 on the left. The situation that \(|\delta_{G}(W)\cap E(T)|=3\) does not occur, because it would create a light odd cotree component of \(C\).
Observe that if \(W\) is neither Type 1 nor Type 2, then at least one edge of \(\delta_{G}(W)\) lies in \(C\). If \(z\) is such an edge, then \(z\) joins \(W\) to a diamond of Type 1, implying that \(z\) lies in a cotree component with at least three edges. With this in mind, it is easy to perform a suitable elementary switch of the form \(T^{\prime}=T+x-y\) in such a way that the resulting tree \(T^{\prime}\) is again a heavy Xuong tree and, moreover, \(W\) turns into a vertex-triangle of Type 1 or Type 2. The transformations are indicated in Figure 8. After performing these modifications as many times as necessary we produce a Xuong tree, still denoted by \(T\), where each diamond is Type 1, 2, or 3, and each vertex-triangle is Type 1 or 2. The unique odd component of the corresponding cotree remains heavy under all these modifications. This proves the claim.
We are ready to derive a contradiction. For \(i\in\{1,2,3\}\), let \(n_{i}\) denote the number of \(i\)-valent vertices of \(T\). A straightforward inductive argument implies that \(n_{1}=n_{3}+2\). Since the diamonds and the vertex-triangles partition the vertex set of \(G\), we can express \(n_{1}\) and \(n_{3}\) as sums of values ranging through the set \(\mathcal{D}\) of all diamonds and the set \(\mathcal{W}\) of vertex-triangles of \(G\). Let \(d_{i}\) denote the number of diamonds of Type \(i\), and let \(w_{i}\) denote the number of vertex-triangles of Type \(i\). Clearly, \(w_{1}+w_{2}=4\).
Figure 8: Transformation of vertex-triangles to Type 1 and 2.
Note that every vertex-triangle \(W\) of Type 1 is matched to a unique diamond \(D_{W}\) of Type 1 via a cotree edge \(e_{W}\), and together they form the subgraph \(W\cup\{e_{W}\}\cup D_{W}\). There exist \(w_{1}\) such subgraphs in \(G\), and since each of them encloses a cotree component comprising five edges, we conclude that \(w_{1}\leq 1\) and \(w_{2}\geq 3\).
Given an induced subgraph \(K\subseteq G\), set \(\lambda(K)=n_{1}(K)-n_{3}(K)\), where \(n_{i}(K)\) denotes the number of \(i\)-valent vertices of \(T\) contained in \(K\). In particular, for a diamond \(D\) we have \(\lambda(D)=2-1=1\) if \(D\) is Type 1; \(\lambda(D)=1-1=0\) if \(D\) is Type 2; and \(\lambda(D)=2-0=2\) if \(D\) is Type 3. Similarly, for a vertex-triangle \(W\) we have \(\lambda(W)=2-0=2\) if \(W\) is Type 1, and \(\lambda(W)=1-0=1\) if \(W\) is Type 2. Putting this information together we obtain:
\[2=n_{1}-n_{3} =\lambda(G)=\sum_{D\in\mathcal{D}}\lambda(D)+\sum_{W\in\mathcal{ W}}\lambda(W)\] \[=1d_{1}+0d_{2}+2d_{3}+2w_{1}+1w_{2}\geq w_{2}\geq 3,\]
which is a contradiction. Our initial assumption that \(G\in\mathcal{F}\) admits a heavy Xuong tree was therefore false; it follows that \(G\) is tightly upper-embeddable. The proof is complete.
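The counting step in the first part of the proof (with \(X\) the set of edges lying on no triangle) can also be replayed mechanically on members of \(\mathcal{F}\). The sketch below reuses the `betti` and `family_F_member_from_K4` helpers introduced earlier; the function name is ours.

```python
import networkx as nx

def nebesky_count(G):
    """For X = {edges of G lying on no triangle}, return the value
    2*oc(G-X) + ec(G-X) - |X|, where oc / ec count the components of G - X
    with odd / even Betti number (betti() is the helper defined earlier)."""
    X = [(u, v) for u, v in G.edges() if not (set(G[u]) & set(G[v]))]
    H = G.copy()
    H.remove_edges_from(X)
    oc = ec = 0
    for comp in nx.connected_components(H):
        if betti(H.subgraph(comp)) % 2:
            oc += 1
        else:
            ec += 1
    return 2 * oc + ec - len(X)

# For a K_4-based member of F the bound used in the proof is attained:
print(nebesky_count(family_F_member_from_K4()))     # expected value: 2
```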
The collection of known tightly two-face-embeddable graphs can be significantly enlarged by applying a specific form of a well-known operation, known as the \(2\)-sum of cubic graphs. Before defining this operation it is convenient to recall that a cubic graph \(G\) is tightly two-face-embeddable if and only if every Xuong tree of \(G\) is light, that is, the unique odd component of the corresponding cotree is formed by a single edge (Theorem 4.3). An edge \(e\) of a tightly two-face-embeddable cubic graph will be called _odd_ if there exists a Xuong tree \(T\) such that \(e\) constitutes the unique odd cotree component of \(G-E(T)\).
Let \(G_{1}\) and \(G_{2}\) be two tightly two-face-embeddable graphs, and let \(e_{i}\) be an odd edge of \(G_{i}\) for \(i\in\{1,2\}\). We construct a new graph \(G\) by adding to \((G_{1}-e_{1})\cup(G_{2}-e_{2})\) two independent edges \(f_{1}\) and \(f_{2}\), each joining a \(2\)-valent vertex of \(G_{1}-e_{1}\) to a \(2\)-valent vertex of \(G_{2}-e_{2}\), in such a way that \(G\) becomes cubic. We say that \(G\) is an _odd \(2\)-sum_ of \(G_{1}\) and \(G_{2}\) with respect to \(e_{1}\) and \(e_{2}\). It is easy to see that \(G\) is again \(2\)-connected and that the two newly added edges form a \(2\)-edge-cut of \(G\). This cut will be referred to as the _principal_ \(2\)-edge-cut of the \(2\)-sum.
**Theorem 7.4**.: _An odd \(2\)-sum of two \(2\)-connected tightly two-face embeddable cubic graphs is again \(2\)-connected and tightly two-face embeddable._
Proof.: Let \(G_{1}\) and \(G_{2}\) be tightly two-face-embeddable graphs with odd edges \(e_{1}\) and \(e_{2}\), respectively, and let \(G\) be an odd 2-sum of \(G_{1}\) and \(G_{2}\) with respect to \(e_{1}\) and \(e_{2}\). Let \(f_{1}\) and \(f_{2}\) be the edges of the principal 2-edge-cut of \(G\).
Take Xuong trees \(T_{1}\subseteq G_{1}\) and \(T_{2}\subseteq G_{2}\) for which \(e_{1}\) and \(e_{2}\), respectively, form the corresponding odd components. Observe that \((T_{1}\cup T_{2})+f_{1}\) is a spanning tree of \(G\) and that \(f_{2}\) constitutes the unique odd component of the corresponding cotree. Thus \(G\) is two-face-embeddable. To prove that \(G\) is tightly two-face-embeddable it remains to show that every Xuong tree \(T\) of \(G\) is light.
Suppose to the contrary that \(G\) admits a Xuong tree \(T\) such that \(G-E(T)\) has a heavy odd component, which we denote by \(H\). For \(i\in\{1,2\}\) set \(G^{\prime}_{i}=G_{i}-e_{i}\) and \(T_{i}=T\cap G^{\prime}_{i}\). Clearly, \(T\) contains at least one of the edges \(f_{1}\) and \(f_{2}\). Accordingly, we have two cases to consider.
Case 1. _The spanning tree \(T\) contains exactly one edge of the principal edge cut_. Without loss of generality we may assume that \(f_{1}\) is contained in \(T\). It follows that \(T=T_{1}\cup\{f_{1}\}\cup T_{2}\) and both \(T_{1}\) and \(T_{2}\) are spanning trees of \(G_{1}\) and \(G_{2}\), respectively. Now, if \(H\) does not contain \(f_{2}\), then \(H\) is a heavy odd cotree component with respect to either \(T_{1}\) or \(T_{2}\). However, this is impossible because both \(G_{1}\) and \(G_{2}\) are tightly upper-embeddable. Therefore \(H\) contains \(f_{2}\). For \(i\in\{1,2\}\) set \(H_{i}=H\cap G^{\prime}_{i}\). Since \(H\) is odd and contains \(f_{2}\), both \(H_{1}\) and \(H_{2}\) have the same parity and at least one of them is nonempty, say \(H_{1}\). If \(H_{1}\) is even, then \(H_{1}\cup\{e_{1}\}\) is a unique odd component of \(G_{1}-E(T_{1})\), and is heavy, which is a contradiction. It follows that both \(H_{1}\) and \(H_{2}\) are odd and consequently both \(T_{1}\) and \(T_{2}\) have only even cotree components in \(G_{1}\) and \(G_{2}\), respectively. This contradiction excludes Case 1.
Case 2. _The spanning tree \(T\) contains both edges of the principal edge cut_. In this case exactly one of \(T_{1}\) and \(T_{2}\) is connected, say \(T_{1}\). It follows that \(T_{1}\) is a spanning tree of \(G_{1}\) and \(T_{2}+e_{2}\) is a spanning tree of \(G_{2}\). Since \(T\) contains both edges of the principal 2-edge-cut, each cotree component with respect to \(T\) must be contained either in \(G^{\prime}_{1}\) or in \(G^{\prime}_{2}\). If \(H\subseteq G^{\prime}_{2}\), then \(T_{2}+e_{2}\) would be a heavy Xuong tree of \(G_{2}\), which is impossible because \(G_{2}\) is tightly two-face-embeddable. Therefore \(H\subseteq G^{\prime}_{1}\). Now, \(H\) cannot have a common vertex with \(e_{1}\), because \(T_{1}\) would be a Xuong tree of \(G_{1}\) with all cotree components even, one of them being \(H\cup\{e_{1}\}\). But if \(H\) has no common vertex with \(e_{1}\), then it constitutes a heavy odd component of \(G_{1}-E(T_{1})\), which is impossible because \(G_{1}\) is tightly two-face-embeddable. Thus Case 2 cannot happen either, and the statement is proved.
In certain cases, odd edges are not difficult to specify. For instance, both the necklace \(L_{4}\) and the graph of order \(8\) obtained from the dipole \(D_{3}\) by inserting a digon into each edge are easily seen to be tightly two-face-embeddable graphs. It is also easy to see that each edge lying on a digon in any of these two graphs is odd. Consulting Figure 3 again we can conclude that in the two basic graphs \(F_{1}\) and \(F_{2}\) of the family \(\mathcal{F}\) each edge lying on a vertex-triangle is odd. Since vertex-triangles are preserved by a diamond insertion and, by Lemma 7.2, a light Xuong tree of the smaller graph extends to a light Xuong tree of the larger one, we conclude that every edge lying on a vertex-triangle of any graph \(F\in\mathcal{F}\) is also odd.
## 8 Concluding remarks
**Remark 8.1**.: In Theorem 4.3 we have characterised amply two-face embeddable graphs as those which admit a Xuong tree whose single odd cotree component has at least three edges. Finding a similar characterisation for amply one-face embeddable graphs - or even finding the "right" definition of ample one-face embeddability that would be compatible with that for ample two-face embeddability - remains an open problem.
**Remark 8.2**.: If we wish to prove that a given cubic graph \(G\) is amply two-face embeddable it is sufficient to find a heavy Xuong tree in \(G\). By contrast, the proof of Theorem 7.3 suggests that to argue that \(G\) is _not_ amply two-face-embeddable is not such an easy task. The reason is that we do not have a tool similar to Nebesky's theorem, which can be efficiently used to prove that a graph is not upper-embeddable. By Equation (1), a connected graph \(G\) is upper-embeddable if and only if
\[\operatorname{ec}(G-X)+2\operatorname{oc}(G-X)-2\leq|X|\]
for each subset \(X\subseteq E(G)\). In other words, to prove that a connected cubic graph is not upper-embeddable it is sufficient to identify a subset \(Y\subseteq E(G)\) such that \(\operatorname{ec}(G-Y)+2\operatorname{oc}(G-Y)-2>|Y|\). In this context it is a natural question to ask whether there exists a function \(\alpha\colon 2^{E(G)}\to\mathbb{Z}\) "similar" to the Nebesky function \(\nu(X)=\operatorname{ec}(G-X)+2\operatorname{oc}(G-X)-2\) such that a connected cubic graph is amply upper-embeddable (or at least amply two-face-embeddable) if and only if \(\alpha(X)\leq|X|\) for each subset \(X\subseteq E(G)\).
**Remark 8.3**.: Theorem 4.3 raises a natural question of how amply and tightly upper-embeddable graphs are distributed within the class of cubic graphs. In
general, tightly upper-embeddable cubic graphs are not easy to find. Moreover, it does not seem easy to prove that a given cubic graph is tightly upper-embeddable. We have shown that there exist infinite families of tightly upper-embeddable cubic graphs with connectivity \(1\) and \(2\). However, no examples of \(3\)-connected tightly upper-embeddable graphs are known to us. This is why we conjecture that a \(3\)-connected cubic graph admits a coherent decycling partition if and only if it is upper-embeddable (Conjecture 1.5).
As far as cubic graphs with connectivity \(2\) are concerned, tight upper embeddability appears to be a rare event. This indicates that among the upper-embeddable graphs those that are tightly upper-embeddable should constitute a negligible part. Therefore, if we take into account that almost all cubic graphs are upper-embeddable and return to the equivalent language of decycling partitions, it seems likely that Problem 1.6 has a positive answer. In other words, we make the following conjecture.
**Conjecture 8.4**.: Almost all cubic graphs contain an induced tree whose removal leaves a subgraph with at most one edge.
### Acknowledgements
The authors express their gratitude to the anonymous referees for their careful reading and constructive suggestions, and to J. Fiala and J. Karabas for useful comments.
|
2308.16526 | Gravity-induced entanglement between two massive microscopic particles
in curved spacetime: I.The Schwarzschild background | The experiment involving the entanglement of two massive particles through
gravitational fields has been devised to discern the quantum attributes of
gravity. In this paper, we present a scheme to extend this experiment's
applicability to more generalized curved spacetimes, with the objective of
validating universal quantum gravity within broader contexts. Specifically, we
direct our attention towards the quantum gravity induced entanglement of mass
(QGEM) in astrophysical phenomena, such as particles traversing the
interstellar medium. Notably, we ascertain that the gravitational field within
curved spacetime can induce observable entanglement between particle pairs in
both scenarios, even when dealing with particles significantly smaller than
mesoscopic masses. Furthermore, we obtain the characteristic spectra of QGEM
across diverse scenarios, shedding light on potential future experimental
examinations. This approach not only establishes a more pronounced and
extensive manifestation of the quantum influences of gravity compared to the
original scheme but also opens avenues for prospective astronomical
experiments. These experiments, aligned with our postulates, hold immense
advantages and implications for the detection of quantum gravity and can be
envisioned for future design. | Chi Zhang, Fu-Wen Shu | 2023-08-31T08:16:43Z | http://arxiv.org/abs/2308.16526v2 | Gravity-induced entanglement between two massive microscopic particles in curved spacetime: I. The Schwarzschild background
###### Abstract
The experiment involving the entanglement of two massive particles through gravitational fields has been devised to discern the quantum attributes of gravity. In this paper, we present a scheme to extend this experiment's applicability to more generalized curved spacetimes, with the objective of validating universal quantum gravity within broader contexts. Specifically, we direct our attention towards the quantum gravity induced entanglement of mass (QGEM) in astrophysical phenomena, such as particles traversing the interstellar medium. Notably, we ascertain that the gravitational field within curved spacetime can induce observable entanglement between particle pairs in both scenarios, even when dealing with particles significantly smaller than mesoscopic masses. Furthermore, we obtain the characteristic spectra of QGEM across diverse scenarios, shedding light on potential future experimental examinations. This approach not only establishes a more pronounced and extensive manifestation of the quantum influences of gravity compared to the original scheme but also opens avenues for prospective astronomical experiments. These experiments, aligned with our postulates, hold immense advantages and implications for the detection of quantum gravity and can be envisioned for future design.
## I Introduction
One of the significant cornerstones in the history of physics is the establishment of quantum field theory, which has successfully described the interaction of all relativistic fields except gravity. Despite decades of substantial efforts, the quantization of the gravitational field remains a challenging endeavor, leaving the comprehensive theory of quantum gravity still elusive. A significant barrier lies in the absence of experimental evidence supporting the quantum aspects of gravity, despite several proposed experimental designs aimed at detecting quantum gravity phenomenology [1].
Recently, the experiment concerning the quantum gravity induced entanglement of mass (QGEM), a laboratory-based proposal designed to measure quantum gravitational effects, was introduced by Bose et al. [2] and Marletto and Vedral [3]. This experiment involves the gravitational entanglement of two mesoscopic test particles. By observing the growth of entanglement between the particles in the QGEM scenario [see Fig. 1], one can confirm the quantum nature of the gravitational field. Christodoulou and Rovelli [4] further extended this scheme to a generally covariant description, considering the effect as the quantum superposition of two distinct spacetime geometries along a particle's worldline. More recent advancements can be found in [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22].
However, these previous studies assumed that massive particles were placed in an approximately flat spacetime (localized within the lab) and existed for short durations (seconds). In contrast, numerous astrophysical processes generate massive particles that interact over extended periods while traversing the universe, providing a natural setting for detecting the QGEM effect.
This paper proposes an innovative scheme to demonstrate gravity-induced entanglement in more general curved spacetimes, involving smaller-scale particles (microscopic when compared to the mesoscopic particles utilized in the QGEM setting). Specifically, we investigate the generation of entanglement between two particles in a Schwarzschild background to universally and convincingly test the quantum gravity experiment. Assuming that the particle pairs move along geodesic paths in each instance, the separation between each pair of trajectories will change due to geodesic deviation [see Fig. 2]. Consequently, the proper time will vary between each pair of trajectories due to alterations in spacetime geometry. Intuitively, the proper time of the closest particles, which have the shortest spacelike distance between them, will experience the most significant increase. Building upon [4], we compute the phase shift in each superposition state to determine the presence of entanglement. Remarkably, entanglement indeed emerges, illustrating the quantum gravity effect in general curved spacetime. We conduct a comprehensive analysis to explore the factors influencing the phase shift and their quantitative impact on entanglement.
However, a challenge remains in distinguishing whether the observed entanglement is generated by the quantum gravity effect of the particle pairs or by other processes during emission and propagation. To address this concern, we propose that QGEM during geodesic motion will exhibit a distinctive spectrum, as phase shifts occur across a series of geodesics. Analyzing the entangled patterns formed by various geodesics can assist in determining whether the entanglement originates from the gravitational field of nearby particles or from alternative sources.
The paper is structured as follows: In Sec. II, we provide a detailed description of our proposition for generating gravity-induced entanglement of microscopic massive particles in curved spacetime. In Sec. III, as an illustration, we consider a pair of particles moving within a globular clus
ter with a Schwarzschild-like metric and explore the influence of initial conditions on gravity-induced entanglement. We analyze two galactic models with different mass profiles, namely the dual pseudo-isothermal elliptical density (dPIE) profiles and the Navarro-Frenk-White (NFW) profile. In Sec. IV, we examine a more realistic scenario where the particles' geodesic trajectories deviate from the galaxy's center. Additionally, to uncover more features of QGEM, we investigate the entanglement witness as a function of the particles' kinetic energy. The paper concludes in the final section.
Throughout this paper, we adopt the natural units system, \(c=G=1\), to simplify calculations. Physical quantities with units are provided in the SI system of units.
## II Entanglement generation of microscopic particles in curved spacetime
In order to apply the QGEM scheme to astrophysics, its covariant description becomes crucial. The first covariant generalization of the QGEM scheme was undertaken by Christodoulou and Rovelli [4]. They also highlighted that gravity-induced entanglement arises from the superposition of spacetime geometry, leading to distinct changes in proper time across the four branches.
To delve into specifics, the particle pair is initially prepared in a superposition state of spin and spatial position:
\[\left|\Psi_{i}\right\rangle=\frac{1}{2}\Big{(}\lvert LL\rangle+\lvert LR \rangle+\lvert RL\rangle+\lvert RR\rangle\Big{)}\otimes\left|B\right\rangle, \tag{1}\]
where \(\lvert LL\rangle=\left|\Psi_{1}^{L}\right\rangle\otimes\left|\Psi_{2}^{L}\right\rangle\), and so forth. The notation \(\left|B\right\rangle\) represents the quantum state of the background gravitational field.
Subsequently, in accordance with the hypothetical quantum superposition of distinct spacetimes [4], the four branches will exhibit distinct time evolutions. The term \(\lvert RL\rangle\), characterized by the shortest separation, will accrue the maximum phase due to the rapid growth of inherent time, as expressed by
\[\phi=-\frac{m_{0}\tau}{\hbar}\approx-\frac{m_{0}t}{\hbar}\left(1-\frac{m_{0}} {R}-\frac{m_{0}}{d}\right), \tag{2}\]
where \(m_{0}\) signifies the particle's mass. Consequently, the central phase difference responsible for entanglement is represented by
\[\delta\phi=-\frac{m_{0}\delta\tau}{\hbar}=\frac{{m_{0}}^{2}t}{\hbar d}. \tag{3}\]
Upon recombining the two components of the superposition, the final state, accounting for an overall phase factor, can be articulated as
\[\left|\Psi_{f}\right\rangle=\frac{1}{2}\left(\lvert LL\rangle+\lvert LR \rangle+e^{i\delta\phi}\left|RL\rangle+\lvert RR\rangle\right). \tag{4}\]
Evidently, this state embodies entanglement of the spins of the two test masses, signifying that the gravitational field manifests as a quantum phenomenon.
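For orientation, Eq. (3) can be evaluated directly once \(G\) and \(\hbar\) are restored. The following minimal Python sketch uses illustrative inputs of the order adopted in the original laboratory QGEM proposal; they are not values taken from the present analysis.

```python
G_NEWTON = 6.674e-11      # m^3 kg^-1 s^-2
HBAR     = 1.055e-34      # J s

def qgem_phase(m, d, t):
    """Entangling phase of Eq. (3) with units restored:
    delta_phi = G m^2 t / (hbar d)  (the text sets G = c = 1)."""
    return G_NEWTON * m**2 * t / (HBAR * d)

# Illustrative: a mesoscopic mass of ~1e-14 kg held at ~200 micrometres for a
# few seconds accumulates an entangling phase of order one radian.
print(qgem_phase(m=1.0e-14, d=2.0e-4, t=2.5))
```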
However, this covariant description [4] remains confined to a flat space-time background. Furthermore, within the QGEM setting, two masses are situated at the mesoscopic scale and are subjected to a Stern-Gerlach setting, leading to a superposition of two components where each particle occupies different positions. However, this operation becomes challenging for astrophysical sources located far from Earth.
To address these challenges, our approach deviates from the use of mesoscopic particles. In this study, we instead focus on microscopic massive particles, which inherently possess a superposition state due to their quantum nature. These microscopic particles freely fall within a generally curved spacetime, providing an opportunity to investigate gravity-induced entanglement. An overview of our general experimental setup is presented in Fig. 2.
Two identical microscopic particles, labeled A and B, both in the superposition state of two spatial positions as detailed in [2; 3], traverse curved spacetime along their respective geodesic paths toward an Earth-based detector. Our objective is to employ a covariant methodology to comprehend the generation of entanglement throughout their journey.
Figure 1: The QGEM experiment setup: In the initial phase of this experiment, two mesoscopic particles, each initially in a spin superposition state, are placed a short distance apart. Subsequently, the application of an inhomogeneous magnetic field prompts each particle to assume a spatially split state contingent on its spin configuration. This step ensures a spin-dependent spatial separation. Following this, a coherent superposition of states is maintained for a certain duration, while keeping the separation distance fixed within each branch. In the subsequent phase, the magnetic field is deactivated, and the spatially split states are realigned to regain their coherence. Finally, the experiment involves measuring spin correlations and calculating the entanglement witness. This analysis aims to determine if the system is indeed in an entangled state. Successful identification of entanglement would serve as confirmation of the quantum properties of gravity, consistent with the principles of entanglement theory.
In the reference frame of the moving particle, the particle's proper time is given by: \(\tau=\int d\tau\). Let's now consider the scenario where another particle is in close proximity, moving alongside it but at a certain distance apart. The distance \(d\left(\tau\right)\) varies over time due to geodesic deviation. In this context, we assume that the gravitational attraction between the particles is significantly weaker than the tidal force, allowing \(d\left(\tau\right)\) to be primarily influenced by geodesic deviation. According to the equivalence principle in superposition spacetime [23] and the Newtonian limit approximation, the proper time for the two particles in each branch can be expressed as:
\[\tau=\int\left(1-\frac{m}{d\left(\tau\right)}-\frac{m}{R}\right)d\tau, \tag{5}\]
where \(R\sim\lambda_{c}\) (\(\lambda_{c}\) representing the Compton wavelength of the particle) denotes the radius of each body. It should be much smaller than the distance, \(R\ll d(\tau)\), yet significantly larger than the Schwarzschild radius: \(R\gg r_{s}=2m\). The phase difference between each branch is entirely attributed to the term:
\[\delta\tau=-\int\frac{m}{d\left(\tau\right)}d\tau, \tag{6}\]
resulting in a phase change that can now be expressed as:
\[\delta\phi=-\frac{m_{0}\delta\tau}{\hbar}=\frac{{m_{0}}^{2}}{\hbar}\int\frac{1 }{d\left(\tau\right)}d\tau, \tag{7}\]
where \(m_{0}\) denotes the static mass of the particles.
Subsequently, we will primarily focus on calculating the phase change in one typical astrophysical scenario: a spherically symmetric background, such as a globular cluster. It is important to note that, in this experimental design, for this order of analysis, the moving particles should be massive. This is because a relatively stationary observer cannot perceive the gravitational effects of a massless particle, such as a photon.
## III Entanglement generation by globular galaxy
Our example focuses on entanglement induced by the gravitational field within a cluster of galaxies, teeming with neutral massive particles and possessing a size substantial enough to generate a pronounced entanglement effect. To provide a comprehensive exploration, we investigate two distinct galactic models with differing mass profiles. The first model employs the dual pseudo-isothermal elliptical density (dPIE) profiles [24; 25], which have been demonstrated to aptly describe the mass distributions of the brightest cluster galaxies (BCGs) [26]. These profiles find widespread use in lensing studies and deliver accurate fits to observed galaxies. The second model encompasses the Navarro-Frenk-White (NFW) profile [27], derived from N-body simulations, and widely employed for simulating dark matter (DM) halos within the \(\Lambda\)CDM universe.
### dPIE model
In this subsection, let us envision an isolated galaxy cluster within the universe. For the sake of simplicity, we will exclusively consider the brightest cluster galaxy (BCG) component of the cluster. As previously indicated, the spherical dual pseudo-isothermal elliptical (dPIE) profiles prove especially fitting for characterizing the mass distributions of BCGs. These profiles are defined by their 3D-density [24; 25; 26]:
\[\rho_{\text{dPIE}}\left(r\right)=\frac{\rho_{0}}{\left(\frac{r^{2}}{r_{\text{core}}^{2}}+1\right)\left(\frac{r^{2}}{r_{\text{cut}}^{2}}+1\right)}, \tag{8}\]
where \(r\) is the distance from the center of the mass distribution, \(r_{\text{core}}\) is the core radius, and \(r_{\text{cut}}\) is the truncation radius, with \(r_{\text{cut}}>r_{\text{core}}\). Here \(\rho_{0}\) is the central density, which is related to the 1D central velocity dispersion, \(\sigma_{0}\), by [24]
\[\rho_{0}=\frac{\sigma_{0}^{2}}{2\pi G}\frac{r_{\text{cut}}+r_{\text{core}}}{r_{\text{core}}^{2}r_{\text{cut}}}. \tag{9}\]
Integrating Eq. (8), one can obtain the total mass \(m(r)\) enclosed by a sphere of radius \(r\)
\[m\left(r\right) =4\pi\int_{0}^{r}x^{2}\rho_{\text{dPIE}}\left(x\right)\mathrm{d}x \tag{10}\]
when \(r<R\), where \(R\) is the total radius of the BCG. Hence, the total mass of the galaxy is
\[M=m(R), \tag{11}\]
when \(r>R\). The complete set of parameters utilised in the models adopted for this paper is enumerated in Table 1. The parameters of the BCG model are based on the fit to the A383 cluster reported in [26]. The parameter values for the NFW model, which will be employed in the subsequent subsection, also derive from the A383 dataset. However, to enable a meaningful comparison between the two models, suitable adjustments have been applied to ensure that both models possess identical radius and mass values.
Figure 2: Illustration of particle trajectories. The particles labeled as A and B are identical and microscopic in nature. They exist in a superposition state across two distinct spatial positions. The solid line delineates the geodesic trajectory of the particles within the curved spacetime.
\begin{table}
\begin{tabular}{c c c c c}
Galactic Models & Model Parameters (kpc) & Radius \(R\) (kpc) & \(\rho_{0}\) \((M_{\odot}\cdot\text{kpc}^{-3})\) & Mass \((M_{\odot})\) \\
\hline
dPIE & \(r_{\text{core}}=1.2\), \(r_{\text{cut}}=38\) & 2500 & \(2\times 10^{8}\) & \(2.07\times 10^{11}\) \\
NFW & \(r_{s}=260\) & 2500 & \(644.5\) & \(2.07\times 10^{11}\) \\
\end{tabular}
\end{table}
Table 1: Parameter choices of the two galaxy models.
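Equations (8) and (10) are straightforward to evaluate numerically. The following minimal sketch (function names are ours) uses the dPIE parameters of Table 1; integrating up to the BCG radius should reproduce a total mass close to the quoted \(2.07\times 10^{11}\,M_{\odot}\).

```python
import numpy as np
from scipy.integrate import quad

# dPIE (BCG) parameters from Table 1: lengths in kpc, density in Msun / kpc^3.
RHO0, R_CORE, R_CUT, R_TOT = 2.0e8, 1.2, 38.0, 2500.0

def rho_dpie(r):
    """3D dPIE density profile, Eq. (8)."""
    return RHO0 / ((1.0 + (r / R_CORE)**2) * (1.0 + (r / R_CUT)**2))

def enclosed_mass(r):
    """Mass enclosed within radius r, Eq. (10), truncated at the BCG radius R."""
    r = min(r, R_TOT)
    val, _ = quad(lambda x: 4.0 * np.pi * x**2 * rho_dpie(x), 0.0, r, limit=200)
    return val

# Should come out close to the 2.07e11 Msun quoted in Table 1:
print(f"M(R) = {enclosed_mass(R_TOT):.3e} Msun")
```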
With this mass profile in consideration, we are now prepared to present the metric for the background. Particularly, beyond the BCG, i.e., for \(r>R\), the metric can be succinctly described by the Schwarzschild metric, with the mass specified in Equation (11) [28].
\[ds^{2}=\left(1-\frac{2M}{r}\right)dt^{2}-\frac{dr^{2}}{1-\frac{2M}{r}}-r^{2}\left(d\theta^{2}+\sin^{2}\!\theta\, d\varphi^{2}\right). \tag{12}\]
While for the interior of the BCG (i.e., \(r<R\)), the metric is given by [see Appendix A]
\[ds^{2}=e^{2A(r)}dt^{2}-e^{2B(r)}dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\!\theta\, d\varphi^{2}\right), \tag{13}\]
Here \(A(r)\) is obtained by integrating \(A(r)=\int_{0}^{r}\frac{m(x)}{x^{2}}\mathrm{d}x\), with the explicit expression for \(m(x)\) given in Eq. (10), and \(B(r)\) is given by
\[B(r)=-\frac{1}{2}\ln\left(1-\frac{2m(r)}{r}\right). \tag{14}\]
For convenience in the following calculations, let us unify these two cases into one form like Eq. (13), i.e.,
\[A(r)=-B(r)=\frac{1}{2}\ln\left(1-\frac{2M}{r}\right)\ \ \text{for}\ r>R. \tag{15}\]
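The interior metric functions can then be tabulated numerically. The sketch below builds on the `enclosed_mass` helper from the previous snippet; the conversion constant is \(GM_{\odot}/c^{2}\) expressed in kiloparsecs, and the small lower cutoff of the integral is purely a numerical regularisation.

```python
import numpy as np
from scipy.integrate import quad

GEOM_KPC_PER_MSUN = 4.79e-17          # G * Msun / c^2 expressed in kpc

def m_geom(r):
    """Enclosed mass in geometrized units (kpc); enclosed_mass() is the dPIE
    helper from the previous snippet."""
    return GEOM_KPC_PER_MSUN * enclosed_mass(r)

def A_interior(r, r_min=1.0e-6):
    """A(r) = int_0^r m(x)/x**2 dx for r < R; r_min is a numerical cutoff.
    Slow but transparent; in practice one would tabulate m(r) once and
    interpolate instead of nesting quadratures."""
    val, _ = quad(lambda x: m_geom(x) / x**2, r_min, r, limit=200)
    return val

def B_interior(r):
    """Eq. (14): B(r) = -(1/2) ln(1 - 2 m(r)/r)."""
    return -0.5 * np.log(1.0 - 2.0 * m_geom(r) / r)
```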
From now on, let us focus on one specific trajectory as shown in Fig. 3. One of the particles starts at \(r_{0}\), moves toward the center of the cluster, and finally arrives at the observer at \(r_{f}\). The whole trajectory follows a geodesic line through the center of the cluster.
It turns out that the tetrad formalism is particularly convenient for discussing the geodesics. For the present case, we construct the following four unit vectors,
\[{e_{(0)}}^{\mu} = e^{-A(r)}(\partial t)^{\mu},\ \ {e_{(1)}}^{\mu}=e^{-B(r)}(\partial r)^{\mu}, \tag{16}\] \[{e_{(2)}}^{\mu} = \frac{1}{r}(\partial\theta)^{\mu},\ \ {e_{(3)}}^{\mu}=\frac{1}{r\sin \theta}(\partial\varphi)^{\mu}, \tag{17}\]
so that they form an orthogonal base
\[g_{\mu\nu}{e_{(a)}}^{\mu}{e_{(b)}}^{\nu}=\eta_{(a)(b)}, \tag{18}\]
where \((a=0,1,2,3)\) and \(\eta_{(a)(b)}=\text{diag.}(1,-1,-1,-1)\).
Recalling the geodesic equations in terms of four-velocity \(u^{\mu}(\tau)\) is
\[u^{\mu}\circ({e_{(a)}}_{\nu}u^{\nu})+({e^{(b)}}_{\nu}u^{\nu})(de_{(b)})_{\rho \sigma}u^{\rho}{e_{(a)}}^{\sigma}=0. \tag{19}\]
For radial geodesics passing through the center (the origin) as shown in Fig. 3, the geodesic equations, after substituting Eqs. (13) and (16) into Eq. (19), reduce to
\[\begin{split}&\ddot{t}+2A^{\prime}(r)\dot{r}\dot{t}=0,\\ &e^{2B(r)}\ddot{r}+e^{2B(r)}B^{\prime}(r)\dot{r}^{2}+e^{2A(r)}A^{\prime}(r)\dot{t}^{2}=0,\end{split} \tag{20}\]
where the prime denotes differentiation with respect to \(r\), while the dot represents differentiation with respect to \(\tau\). Note that the full trajectory can be split into four stages as shown in Fig. 3. The above equations (20) are only used for the first two stages. However, using the spherically symmetric nature of the spacetime, the third and fourth stages can be viewed as the reverse of stages 2 and 1, respectively. Therefore, equations (20) can be used by redefining
\[\tilde{t}=2t_{2}-t\left(2\tau_{2}-\tau\right),\tilde{r}=r\left(2\tau_{2}-\tau \right),\tilde{\theta}=\theta,\tilde{\varphi}=\varphi+\pi. \tag{21}\]
Our first goal is to solve these equations (20), which are difficult to solve analytically. We therefore find the numerical solutions. Without loss of generality, for later numerical computations, we assume that the particle has static mass \(m_{0}=10^{-25}\)kg, which is a typical mass of a microscopic particle (with the Compton wavelength \(\lambda_{c}\sim 10^{-15}\) m).
Fig. 4 shows a set of particle trajectories. The initial value for radial coordinate \(r_{0}\) in our numerical calculations is taken to be \(4.5\times 10^{16}\) (in natural unit1), while the initial values for \(\dot{r}(\tau=0)\) and \(\dot{t}(\tau=0)\) can be transformed to the initial values of four velocity as
Footnote 1: Translated to the International System of Units (SI system), \(r_{0}=1.35\times 10^{22}\text{km}\sim 4.375\times 10^{8}\text{pc}\).
\[u^{\mu}(\tau=0)=\gamma{e_{(0)}}^{\mu}-\gamma v_{0}{e_{(1)}}^{\mu}, \tag{22}\]
where \(\gamma=\frac{1}{\sqrt{1-v_{0}^{2}}}\) and \(v_{0}\) is the initial velocity of the particle.
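A minimal sketch of how Eqs. (20) and (22) can be integrated with an off-the-shelf ODE solver is given below. For brevity it uses only the exterior forms of \(A\) and \(B\) from Eq. (15); the interior functions would be substituted for \(r<R\). All numerical values (the mass, \(r_{0}\), \(v_{0}\), and the integration span) are illustrative and are not the values adopted in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geometrized units (G = c = 1); all lengths in units of the total mass M.
M = 1.0                                    # illustrative

def A(r):  return 0.5 * np.log(1.0 - 2.0 * M / r)     # Eq. (15), exterior
def B(r):  return -A(r)
def dA(r): return M / (r * (r - 2.0 * M))             # A'(r)
def dB(r): return -dA(r)                              # B'(r)

def geodesic_rhs(tau, y):
    """Radial geodesic equations (20); y = (t, r, tdot, rdot)."""
    t, r, tdot, rdot = y
    tddot = -2.0 * dA(r) * rdot * tdot
    rddot = -dB(r) * rdot**2 - np.exp(2.0 * (A(r) - B(r))) * dA(r) * tdot**2
    return [tdot, rdot, tddot, rddot]

# Initial data from Eq. (22): u = gamma * e_(0) - gamma * v0 * e_(1)
r0, v0 = 1.0e6, 0.5                        # illustrative, not the paper's values
gamma = 1.0 / np.sqrt(1.0 - v0**2)
y0 = [0.0, r0, gamma * np.exp(-A(r0)), -gamma * v0 * np.exp(-B(r0))]

sol = solve_ivp(geodesic_rhs, (0.0, 1.0e6), y0,
                rtol=1e-10, atol=1e-12, dense_output=True)
```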
Figure 3: Sketch of particle trajectory in galaxy cluster. It includes four different stages. The first stage refers to the trajectory from the starting point (\(r=r_{0},\varphi=\varphi_{0}\)) to the boundary of the cluster (\(r=R,\varphi=\varphi_{0}\)). The second stage runs from the boundary \(r=R\) to the center of the cluster (\(r=0\)). The third stage is the reverse of the second one, namely, from (\(r=0\)) to (\(r=R,\varphi=\varphi_{0}+\pi\)). The last stage, which can be regarded as the reverse of the first stage, involves the geodesics from (\(r=R,\varphi=\varphi_{0}+\pi\)) to (\(r=r_{f},\varphi=\varphi_{0}+\pi\)), the observer.
Fig. 4(a) illustrates that the value of \(r\) decreases almost linearly with \(\tau\), reflecting the particle's propagation from the source to the cluster. A steeper slope of the curve corresponds to a larger initial velocity. This correlation is reasonable, since particles with higher \(v_{0}\) outpace those with lower \(v_{0}\). In Fig. 4(b), the curves of \(t\) vs. \(\tau\) largely overlap across the four different \(v_{0}\) values, suggesting insensitivity to the magnitude of the initial velocity.
Taking into account that the spacelike interval between two adjacent branches must exceed the Compton wavelength, we provide distinct initial conditions for the following scenarios.
Now, let us proceed to calculate the geodesic deviation vectors, from which the values of \(d(\tau)\) can be deduced; this quantity plays a pivotal role in evaluating \(\delta\phi\), as shown in Equation (7). These vectors are determined by solving the geodesic deviation equations in the tetrad formalism. Unlike the fixed tetrad used in Equation (16) for the geodesics, the tetrad here must be parallelly transported along the geodesics [29]. In other words, \(\tilde{e}_{(a)}{}^{\mu}{}_{;\nu}v^{\nu}=0\), where \(v^{\nu}\) is the tangent vector of the geodesic. Consequently, the orientations of the axes remain fixed and non-rotating, as ascertained by local dynamical experiments. It turns out that the following choice of tetrad is appropriate
\[\tilde{e}_{(0)}{}^{\mu} = \dot{t}(\partial t)^{\mu}+\dot{r}(\partial r)^{\mu}, \tag{23}\] \[\tilde{e}_{(1)}{}^{\mu} = e^{B-A}\dot{r}(\partial t)^{\mu}+e^{A-B}\dot{t}(\partial r)^{\mu},\] (24) \[\tilde{e}_{(2)}{}^{\mu} = \frac{1}{r}(\partial\theta)^{\mu},\ \ \tilde{e}_{(3)}{}^{\mu}=\frac{1}{r \sin{(\theta)}}(\partial\varphi)^{\mu}. \tag{25}\]
Note that in order for the above tetrad to satisfy the orthonormality condition (18), one extra condition should be imposed
\[e^{2A}\dot{t}^{2}-e^{2B}\dot{r}^{2}=1. \tag{26}\]
The timelike basis vector \(\tilde{e}_{(0)}{}^{\mu}\) is the geodesic four-velocity, and the remaining three are spacelike basis vectors pointing in different directions. Together, the four vectors form an orthonormal, parallelly transported basis, and in this tetrad basis the metric becomes just the Minkowski metric.
Then, according to the geodesic deviation equation in the above frame, we have
\[\frac{d^{2}w^{(a)}}{d\tau^{2}}+k^{(a)}{}_{(b)}w^{(b)}=0, \tag{27}\]
where \(w^{(a)}\) is geodesic deviation vector in tetrad form
\[w^{(a)}=\tilde{e}_{\mu}^{(a)}w^{\mu}, \tag{28}\]
and
\[k^{(a)}{}_{(b)}=-R^{\mu}{}_{\nu\rho\sigma}\tilde{e}_{\mu}^{(a)}v^{\nu}v^{\rho }\tilde{e}_{(b)}^{\sigma}. \tag{29}\]
Substituting the metric (13) and the tetrad (23)-(25), together with the condition (26), into the above definition, one finds that the nonvanishing components of \(k^{(a)}{}_{(b)}\) are
\[k^{(1)}{}_{(1)} = e^{-2B(r)}\left(A^{\prime\prime}\left(r\right)+A^{\prime}(r)^{2} -A^{\prime}\left(r\right)B^{\prime}\left(r\right)\right), \tag{30}\] \[k^{(2)}{}_{(2)} = k^{(3)}{}_{(3)}=\frac{B^{\prime}\left(r\right)\dot{r}^{2}}{r}+ \frac{e^{2A(r)-2B(r)}A^{\prime}\left(r\right)}{r}\dot{t}^{2}. \tag{31}\]
Without loss of generality, we assume that \(w^{\mu}\) is orthogonal to \(\tilde{e}_{(0)}{}^{\mu}\), so that \(w^{(0)}=0\). Due to the spherical symmetry of the background, the second and third components of \(w^{(a)}\) are simply symmetric angular components, and we can choose a suitable frame such that the initial value of \(w^{(3)}\) vanishes. Note that Eq. (27) is homogeneous, hence if \(\hat{w}^{(a)}\) is a solution of this equation, so is \(\kappa\hat{w}^{(a)}\), where \(\kappa\) is an arbitrary nonzero constant. Therefore, once \(w^{(3)}\) and \(\tilde{w}^{(3)}\) vanish initially, they vanish at all times. In the end we are left with two nonzero components, \(w^{(1)}\) and \(w^{(2)}\).
The total spacelike distance \(d(\tau)\) between the pair of particles is
\[\begin{split} d(\tau)&=\sqrt{w^{(1)}(\tau)^{2}+w^{ (2)}(\tau)^{2}}\Delta l,\\ \tilde{d}(\tau)&=\sqrt{\tilde{w}^{(1)}(\tau)^{2}+ \tilde{w}^{(2)}(\tau)^{2}}\Delta l,\end{split} \tag{32}\]
where \(\tilde{\ }\) denotes the corresponding quantity in stages 3 and 4. \(\Delta l\) is a constant and can be absorbed into \(w^{(a)}\) in our numerical computations. Integrating over all four stages, one obtains the change in the proper time
Figure 4: Particle trajectories for different initial particle velocities \(v_{0}\) with fixed \(r_{0}=r_{f}\sim 4.5\times 10^{16}\). (a) Relation between the radial coordinate \(r\) and the proper time \(\tau\). The decrease of \(r\) with \(\tau\) indicates that the particle propagates from the source toward the cluster; negative values of \(r\) indicate that the particle has passed through the center of the cluster. (b) Relation between the temporal coordinate \(t\) and the proper time \(\tau\); the curves are insensitive to the initial velocity.
Once \(\delta\tau\) is obtained, the change in phase can be known through
\[\delta\phi=-\frac{m_{0}\delta\tau}{\hbar}. \tag{34}\]
As a consequence, the key step is to obtain the values of \(w^{(i)}\) and \(\tilde{w}^{(i)}\) (\(i=1,2\)), which can be done by solving the geodesic deviation equations (27), (30) and (31).
In what follows let us turn to numerically solving the geodesic deviation equations (27), (30) and (31) under the metric ansatz Eqs. (13)-(15) and (20), and obtain \(\delta\phi\) for appropriate initial conditions. The initial conditions include: the initial geodesic deviations \(w_{0}^{(1)}\) and \(w_{0}^{(2)}\), the initial geodesic deviation velocities \(\dot{w}_{0}^{(1)}\) and \(\dot{w}_{0}^{(2)}\) measured in the tetrad basis (23), the initial radial coordinate \(r_{0}\), and lastly the initial values of \(\dot{r}(\tau=0)\) and \(\dot{t}(\tau=0)\), which, again, can be obtained from the initial four-velocity via (22) by choosing a suitable initial velocity \(v_{0}\). In total there are six initial conditions to be assigned. However, for simplicity, in the following numerical simulations we assume that \(w_{0}^{(1)}=w_{0}^{(2)}\) and \(\dot{w}_{0}^{(1)}=\dot{w}_{0}^{(2)}\). The full set of initial values adopted in the numerical simulations can be found in Table 2. Note that throughout the paper, \(z\) represents the redshift of the source2
Footnote 2: The redshift \(z\) can be calculated by using
\[r(z)=r_{0}+r_{f}=\int_{z}^{0}\frac{-\mathrm{d}z}{H_{0}\sqrt{\Omega_{\Lambda}+ \Omega_{M}(1+z)^{3}+\Omega_{R}(1+z)^{2}}},\]
where \(\Omega_{\Lambda}\), \(\Omega_{M}\) and \(\Omega_{R}\) are the density parameters for dark energy, matter and radiation, respectively, and \(H_{0}\) is the Hubble constant. According to the Planck 2018 results, \(\Omega_{\Lambda}=0.685\), \(\Omega_{M}=0.315\), \(\Omega_{R}=0\) and \(H_{0}=67.4\ km/s/Mpc\). Without loss of generality, in this paper we take \(r_{f}=4.5\times 10^{16}\) for simplicity.
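As an illustration of how the deviation equations are integrated in practice, the following sketch augments the geodesic system with Eqs. (27), (30) and (31) and evaluates the separation \(d(\tau)\) of Eq. (32) along stage 1. The metric functions and all numerical values are placeholders (the real computation uses the dPIE/NFW profiles and the initial data of Table 2); \(\delta\tau\) and \(\delta\phi\) then follow from Eqs. (33) and (34), which are not reproduced in the code.

```
import numpy as np
from scipy.integrate import solve_ivp

# Toy metric functions (placeholders for the dPIE/NFW profiles).
GM, rs = 1.0e12, 1.0e15
A   = lambda r: -GM / np.sqrt(r**2 + rs**2)
dA  = lambda r:  GM * r / (r**2 + rs**2)**1.5
d2A = lambda r:  GM * (rs**2 - 2.0 * r**2) / (r**2 + rs**2)**2.5
B   = lambda r: -A(r)
dB  = lambda r: -dA(r)

def rhs(tau, y):
    # y = [t, r, t', r', w1, w1', w2, w2']: geodesic plus deviation, Eqs. (27),(30),(31)
    t, r, td, rd, w1, w1d, w2, w2d = y
    tdd = -2.0 * dA(r) * td * rd
    rdd = -dA(r) * np.exp(2.0 * (A(r) - B(r))) * td**2 - dB(r) * rd**2
    k11 = np.exp(-2.0 * B(r)) * (d2A(r) + dA(r)**2 - dA(r) * dB(r))            # Eq. (30)
    k22 = dB(r) * rd**2 / r + np.exp(2.0 * (A(r) - B(r))) * dA(r) * td**2 / r  # Eq. (31)
    return [td, rd, tdd, rdd, w1d, -k11 * w1, w2d, -k22 * w2]

r0, v0  = 4.5e16, 1.0e-3      # initial radius and radial velocity (placeholders)
w0, w0d = 1.0, 0.0            # initial deviation and deviation velocity (placeholders)
gamma = 1.0 / np.sqrt(1.0 - v0**2)
y0 = [0.0, r0, gamma * np.exp(-A(r0)), -gamma * v0 * np.exp(-B(r0)), w0, w0d, w0, w0d]

# Stage 1 only (source -> cluster boundary); the integration is restarted for
# the remaining stages with the interior metric.
sol = solve_ivp(rhs, (0.0, 0.9 * r0 / v0), y0, rtol=1e-9, atol=1e-6)
d_tau = np.sqrt(sol.y[4]**2 + sol.y[6]**2)   # Eq. (32), with Delta l absorbed
# delta_tau is assembled from d(tau) over all four stages via Eq. (33),
# and the phase follows from Eq. (34): delta_phi = -m0 * delta_tau / hbar.
print("d(tau) initial/final:", d_tau[0], d_tau[-1])
```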
Fig. 6 shows how the magnitude of the phase change varies with two of the four initial values, with the remaining two kept fixed.
From the six panels on the left, it is obvious that the phase change is sensitive to the initial geodesic deviation velocity \(\dot{w}_{0}\), regardless of its sign. More specifically, the figures in the leftmost column show that the phase change first increases with the negative deviation velocity \(-\dot{w}_{0}\) and then decreases. This is opposite to the behavior of \(d(\tau)\) shown in Fig. 5a, which is reasonable because the separation distance \(d(\tau)\) appears in the denominator of Eq. (33). The figures in the second column show that \(\delta\phi\) decreases monotonically with the initial deviation velocity \(\dot{w}_{0}\), which is also consistent with the results shown in Fig. 5. In both cases, \(\delta\phi\) increases monotonically with the redshift \(z\), although the increase is relatively mild for the negative initial deviation velocity. In contrast, the phase change decreases monotonically with both the initial particle velocity \(v_{0}\) and the initial deviation distance \(w_{0}\), regardless of the sign of the initial geodesic deviation velocity.
On the other hand, the three figures in the bottom right corner correspond to the cases where the initial deviation velocity is 0, leaving the dependence of \(\delta\phi\) on \(z\), \(v_{0}\) and \(w_{0}\), respectively. Specifically, the third column shows that for fixed \(z\), \(\delta\phi\) decreases monotonically with both \(v_{0}\) and \(w_{0}\). In contrast, it increases with \(z\), most visibly for fixed \(v_{0}\). This is because as \(z\) increases, the proper time of the whole geodesic motion also increases, while the geodesic deviation is basically unaffected. The rightmost figure shows that for fixed \(z\) and \(v_{0}\), the phase change is inversely proportional to the initial geodesic deviation \(w_{0}\). This is because the geodesic deviation equations are a set of homogeneous second-order differential equations, so the spacelike interval at every moment is proportional to its initial value \(w_{0}\).
### NFW model
In this subsection we consider the NFW model, which is useful for observing the effects of DM halos in the \(\Lambda\)CDM universe. The density profile and mass of the NFW model are [26]
\[\begin{split}&\rho_{\text{NFW}}\left(r\right)=\frac{\rho_{0}}{ \frac{r}{r_{s}}\Big{(}1+\frac{r}{r_{s}}\Big{)}^{2}},\\ & m\left(r\right)=4\pi\rho_{0}{r_{s}}^{3}\left(-\frac{r}{r+r_{s} }+\ln\left(1+\frac{r}{r_{s}}\right)\right),\end{split} \tag{35}\]
where \(\rho_{0}\) is the scaling density and \(r_{s}\) is the scaling radius. These two parameters are related to the virial mass of the halo through \(M_{200}\propto\rho_{0}r_{s}^{3}\) and to the concentration parameter through \(c=r_{200}/r_{s}\). In order to compare with the dPIE model, in this paper we set \(\rho_{0}\) such that the total mass of the cluster is the same as that of the dPIE model. The full list of the parameters can be found in Table 1. Following the same procedure, one finds that the metric functions \(A(r)\), \(B(r)\) in (13) are
\[\begin{split}& A(r)=4\pi\rho_{0}{r_{s}}^{3}\left(\frac{1}{r_{s}}+ \frac{\ln\left(\frac{r_{s}}{r+r_{s}}\right)}{r}\right)+c,\\ & B(r)=-\frac{1}{2}\ln\left(1-\frac{2m(r)}{r}\right),\\ & c=\frac{1}{2}\ln\left(1-\frac{8\pi\rho_{0}{r_{s}}^{3}\left(- \frac{R}{R+r_{s}}+\ln\left(\frac{R+r_{s}}{r_{s}}\right)\right)}{R}\right)- \frac{4\pi\rho_{0}{r_{s}}^{2}\left(r_{s}\ln\left(\frac{r_{s}}{R+r_{s}}\right) +R\right)}{R}.\end{split} \tag{36}\]
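A small numerical sketch of this construction is given below. It builds \(B(r)\) from the NFW enclosed mass (35) and obtains \(A(r)\) directly from its defining integral \(A(r)=\int_{0}^{r}m(x)/x^{2}\,\mathrm{d}x+c\), with \(c\) fixed by matching to the exterior at the cluster boundary \(r=R\); evaluating the closed form (36) at a few radii and comparing serves as a consistency check. The parameter values are placeholders, not those of Table 1.

```
import numpy as np
from scipy.integrate import quad

# Placeholder NFW parameters (the actual values come from Table 1); G = c = 1.
rho0, r_s, R = 1.0e-40, 1.0e14, 1.0e15        # R is the cluster boundary radius

def m(r):                                      # enclosed mass, Eq. (35)
    return 4.0 * np.pi * rho0 * r_s**3 * (np.log1p(r / r_s) - r / (r + r_s))

def B(r):                                      # Eq. (36)
    return -0.5 * np.log(1.0 - 2.0 * m(r) / r)

def A(r):                                      # A(r) = int_0^r m(x)/x^2 dx + c
    integral, _ = quad(lambda x: m(x) / x**2, 0.0, r, limit=200)
    boundary, _ = quad(lambda x: m(x) / x**2, 0.0, R, limit=200)
    c = 0.5 * np.log(1.0 - 2.0 * m(R) / R) - boundary   # smooth matching at r = R
    return integral + c

print("A(R/2) =", A(0.5 * R), "  B(R/2) =", B(0.5 * R))
```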
Following what we did in the last subsection, one can obtain phase change \(\delta\phi\) for different initial conditions.
Fig. 7 depicts the phase change \(\delta\phi\) as a function of initial redshift, considering other initial values given in Table 2. Notably, the plot exhibits a nearly linear growth of the phase change with increasing initial \(z\). This behavior is attributed to the increase in the proper time of motion before passing through the center of the galaxy cluster, which serves as the integral variable, while the geodesic deviation after passing through the center remains almost unchanged. Consequently, the curve closely resembles a straight line.
In Fig. 8, we plot the phase change \(\delta\phi\) as a function of the initial radial velocity \(v_{0}\). One can see from this plot that the phase change decreases monotonically with increasing \(v_{0}\). In addition, for small \(v_{0}\) it decreases rapidly as \(v_{0}\) increases, and the decrease gradually becomes milder; finally, it approaches zero for large \(v_{0}\). This can be explained in the following way: the particle's total proper time integrated over the trajectory drops precipitously as the initial radial velocity \(v_{0}\) goes from zero to nonzero. This has already been verified in the dPIE model, as shown in Fig. 5b.
As we can see from Fig. 9, the phase change is just inversely proportional to \(w_{0}\) as expected.
Fig. 10 shows \(\delta\phi\) as a function of the initial geodesic deviation velocity \(\dot{w}_{0}\) of the particles3. The initial values of the other parameters are given in Table 2. From Fig. 10a, we find that
for smaller \(\dot{w}_{0}\), \(\delta\phi\) decreases slowly with increasing \(\dot{w}_{0}\); then at some point (\(\dot{w}_{0}\approx 10^{-37}\)) it decreases quickly and finally tends to be nearly flat. This behavior can be roughly explained as follows: Eq. (27) can be rewritten as \(\frac{d^{2}w^{(a)}}{d\tau^{2}}=-{k^{(a)}}_{(b)}w^{(b)}\), whose right-hand side acts like a "geodesic deviation acceleration" proportional to \({k^{(a)}}_{(b)}\) and \(w^{(b)}\). Meanwhile, compared with other regions, \({k^{(a)}}_{(b)}\) has its dominant effect near the center of the cluster. As a consequence, a particle pair with a larger \(w^{(b)}\) acquires an overwhelming geodesic deviation acceleration as it passes the center of the cluster, and the separation distance \(d(\tau)\) between the particles continues to increase after this process. So when the absolute value of the geodesic deviation velocity reaches a certain value, \(w^{(b)}\) (or \(d(\tau)\)) becomes large enough and the phase change drops sharply, since it is inversely proportional to \(d(\tau)\) as shown in Eqs. (33) and (34).
The case of negative initial geodesic deviation velocity, however, is very different, as shown in Fig. 10b. In this case \(\delta\phi\) increases slowly with increasing \(-\dot{w}_{0}\) at the beginning, then suddenly rises to a maximum at some point (\(\dot{w}_{0}\approx 10^{-37}\)), followed by a quick decrease to a lower value; finally it descends gradually to some finite value. Recall the similar behavior of S3 in Fig. 5a, where \(d(\tau)\) first drops quickly with increasing \(-\dot{w}_{0}\) and then grows
Figure 8: The phase change \(\delta\phi\) as a function of initial radial velocity \(v_{0}\) of the particles. The initial values of the other parameters are given in Table 2.
Figure 7: The phase change \(\delta\phi\) as a function of initial radial position (redshift) of the particles. The initial values of the other parameters are given in Table 2.
Figure 9: The phase change \(\delta\phi\) as a function of initial geodesic deviation \(w_{0}\) of the particles. The initial values of the other parameters are given in Table 2.
up rapidly. We know that this is because, for particle pairs with a negative initial geodesic deviation velocity, the particles first approach each other and then move away from each other. Similarly, in the present case, particles with a negative initial geodesic deviation velocity first approach each other and later move apart, so that \(d(\tau)\) decreases quickly at the beginning and then increases fast, leading to the inverse behavior of \(\delta\phi\), since \(d(\tau)\) appears in the denominator of Eq. (33).
In the previous simulations, we kept the total mass \(M\) of the cluster fixed. In order to see the influence of \(M\) on the entanglement phase, it is useful to calculate the phase change for different masses in the dPIE and NFW models. We first plot the phase change as a function of \(M\) in the dPIE model, as shown in Fig. 11a. We then define \(\Delta\phi=\delta\phi_{dPIE}-\delta\phi_{NFW}\), where \(\delta\phi_{dPIE}\) and \(\delta\phi_{NFW}\) denote the phase change for the dPIE and NFW models, respectively, and plot \(\Delta\phi\) as a function of \(M\) in Fig. 11b. From Fig. 11a we find that the phase change decreases faster and faster as \(M\) increases: the stronger the gravitational field, the shorter the proper time, so the phase change decreases with the cluster's mass. From Fig. 11b we see that the specific model also affects the phase change in a certain way: in the smaller mass range the NFW model is more conducive to the formation of a phase change, while in the larger mass range the dPIE model is better at inducing a phase change.
In summary, the relationship between the phase change and each initial value in the NFW model exhibits characteristics similar to those of the dPIE model. Generally speaking, a larger \(z\), a smaller initial radial velocity \(v_{0}\), and an appropriate negative initial deviation velocity lead to a larger \(\delta\phi\) and (possibly) more significant entanglement effects.
## IV Offset geodesics and characteristic spectrum
In the preceding discussion, we made the assumption that all particles follow geodesics passing through the cluster's center. However, in reality, geodesics may deviate from the cluster's center in various ways. Moreover, from an experimental standpoint, distinguishing whether observed entanglement arises from the quantum gravity effect of particle pairs or from their emission at the source presents a significant challenge. Therefore, it becomes crucial to discern whether gravity induces entanglement or if other physical processes are at play.
To tackle this challenge, we propose exploring the characteristic spectrum of QGEM during geodesic motion, which
Figure 10: The phase change \(\delta\phi\) as a function of initial geodesic deviation velocity \(\dot{w}_{0}\) of the particles. The initial values of the other parameters are given in Table 2.
Figure 11: (a) The phase change \(\delta\phi\) as a function of mass for dPIE model. (b) The difference of the phase change for different models \(\Delta\phi=\delta\phi_{dPIE}-\delta\phi_{NFW}\) as a function of \(M\). Here, \(M_{clu}\) is just the mass given in Table 1. The initial values of the other parameters are given by S1 in Table 2.
manifests as phase changes along a series of geodesics. Through analysis of the entangled patterns formed by different geodesics, we can infer the entanglement's origin--whether it arises from the gravitational field of nearby particles or from alternative sources.
Generally speaking, there are two possible families of geodesics, contingent upon our knowledge of the particles' source location, as illustrated in Fig. 12. In what follows, we discuss the possible characteristic spectrum of the QGEM in these two cases, respectively.
### The particles' source location is given
In the first geodesic case, we consider a globular cluster with a density profile described by the dPIE model [26]. The parameter values of the model are chosen the same as in Table 1, and the particle's mass is assumed, again, to be \(10^{-25}\)kg. For the sake of simplicity, in this subsection, we only consider the set S1 from Table 2 as the initial parameter values.
At the source point, we use the same orthogonal frame as in (16) and (17). The four-velocity is expressed as:
\[u^{\mu}=\gamma e_{(0)}{}^{\mu}-\gamma v_{0}\left(\cos\xi e_{(1)}{}^{\mu}-\sin \xi e_{(2)}{}^{\mu}\right), \tag{37}\]
where \(\gamma=\frac{1}{\sqrt{1-v_{0}^{2}}}\) and \(v_{0}\) is the particle's initial emission velocity. The four-velocity mentioned above is connected to the deflection angle \(\xi\), with \(\xi\) being assigned to values in the range of \(\xi\in[0,\pi)\) for the initial four-velocity. For particles that can reach the Earth, \(\xi\) and \(v_{0}\) are not independent. Namely, for each \(\xi\), there is a unique value of \(v_{0}\). To simplify our analysis, we will focus on the scenario where the entire trajectories lie in the same plane. Under these initial conditions, the geodesics will deviate from the center of the cluster. Notably, when \(\xi=0\), the geodesics that pass through the galaxy's center will be recovered. We also build a set of parallelly transported tetrads and assume the initial geodesic deviation vector is:
\[w^{(a)}=\tilde{e}_{(\parallel)}w^{\parallel}+\tilde{e}_{(\perp)}w^{\perp}, \tag{38}\]
where \(\tilde{e}_{(\parallel)}\) is the basis parallel to the connecting line between the Earth and the galaxy cluster, \(\tilde{e}_{(\perp)}\) is the basis perpendicular to \(\tilde{e}_{(\parallel)}\) and four-velocity. The initial geodesic deviation component \(w^{\parallel}=\frac{1}{3}\times 10^{-16}\) and \(w^{\perp}=0\).
Now let us delve into the QGEM effect in this scenario. As usual, we denote positive helicity by \(\uparrow\) and negative helicity by \(\downarrow\). Therefore, the adjacent propagating particles are in the superposition state \(\frac{1}{\sqrt{2}}\left(\left|\uparrow L\right\rangle+\left|\downarrow R\right\rangle\right)\). As a result, the expected entangled final state matches (4), and the phase change induced by gravity is again described by (7). Following the same procedures given in Section III, one can calculate the phase change under different initial conditions.
In contrast to the initial conditions, the particle's energy is a directly observable quantity. In the observer's frame, each particle emitted at a different angle has a specific speed, and the Earth observer measures a corresponding kinetic energy. In this static, spherically symmetric spacetime, there is a conserved quantity along geodesics, denoted by \(C\) [4], which is given by 4:
Footnote 4: This is because: for a given Killing vector \(\xi^{\mu}\) and a tangent vector \(u^{\mu}=(\partial_{\tau})^{\mu}\) of a geodesic \(\gamma(\tau)\), the identity holds: \(u^{\mu}\nabla_{\mu}(\xi_{\nu}u^{\nu})=0\).
\[C=\xi_{\mu}u^{\mu}=\frac{\sqrt{\xi_{\mu}\xi^{\mu}}}{m}E_{ob}, \tag{39}\]
where \(\xi^{\mu}\) represents the timelike Killing vector \((\partial t)^{\mu}\), \(u^{\mu}\) is the particle's four-velocity (37), and \(E_{ob}\) is the energy measured by a local stationary observer at infinity (e.g., an observer on Earth). We use the kinetic energy of the particles measured by the observer on Earth to label each particle. The observed kinetic energy for particles emitted at different initial angles is calculated as follows:
\[E_{\text{kin}} = E_{ob}-m \tag{40}\] \[= m\gamma e^{A(r_{0})-A(r_{\text{earth}})}-m,\]
where \(r_{\text{earth}}\) represents the coordinate distance between the center of the Earth and the center of the galaxy cluster, and \(r_{0}\) represents the coordinate distance between the location of particle emission and the center of the galaxy cluster.
The result is shown in Fig. 13. As the emission angle \(\xi\) increases, the kinetic energy of the particles reaching Earth decreases and the intrinsic length of the geodesic increases, leading to a progressively larger entanglement phase. The irregular jumps on the curve are caused by numerical instabilities in the simulation.
The phase change is not directly observable. Instead, the entanglement witness \(\mathcal{W}\) is often utilized as an experimental indicator to detect entanglement formation. The entanglement witness is defined as
\[\mathcal{W}=\left|\langle\sigma^{(1)}_{x}\otimes\sigma^{(2)}_{z}\rangle+\langle\sigma^{(1)}_{y}\otimes\sigma^{(2)}_{y}\rangle\right|. \tag{41}\]
When it is greater than 1, we can infer that there is entanglement between the two particles. The variation of \(\mathcal{W}\) with respect to \(E_{kin}\) is plotted in Fig. 14. From this figure, we observe that the entanglement witness oscillates rapidly with energy, but with different oscillation frequencies in different energy segments. Specifically, it oscillates relatively fast in the low-energy region, which is consistent with the fact that the phase increases faster with increasing emission angle.
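The oscillation of \(\mathcal{W}\) with the accumulated phase can be illustrated with a few lines of code. The sketch below evaluates Eq. (41) for a toy two-qubit state in which only one pair of branches acquires the relative phase \(\delta\phi\); the actual final state is the one of Eq. (4), so the toy state is only meant to show how \(\mathcal{W}\) rises above 1 for suitable phases.

```
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def witness(psi):
    # Entanglement witness of Eq. (41) for a two-qubit state psi,
    # basis ordered as |uu>, |ud>, |du>, |dd>.
    psi = psi / np.linalg.norm(psi)
    ev = lambda op: np.real(np.conj(psi) @ (op @ psi))
    return abs(ev(np.kron(sx, sz)) + ev(np.kron(sy, sy)))

# Toy state: only the |du> branch pair acquires the relative phase dphi
# (an assumption made purely for illustration).
for dphi in np.linspace(0.0, 2.0 * np.pi, 5):
    psi = 0.5 * np.array([1.0, 1.0, np.exp(1j * dphi), 1.0])
    print(f"dphi = {dphi:4.2f}, W = {witness(psi):.3f}")
```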
### The particles' source location is unknown
In this case, since we lack information about the location of the particles' source, we cannot use the deflection angle to span the geodesics. Instead, we utilize \(h\), which represents the initial vertical distance from horizontal geodesics, to effectively span the geodesics, as illustrated in Fig. 12b. Each particle exhibits a distinct starting \(h\) value and, when coupled
with an appropriate initial four-velocity, can be successfully detected by a probe on Earth. The four-velocity can be expressed as follows:
\[u^{\mu}=\gamma{e_{(0)}}^{\mu}+\gamma v_{0}{e_{\parallel}}^{\mu}, \tag{42}\]
where \({e_{\parallel}}^{\mu}\) is the unit spacelike basis vector formed by a linear combination of \({e_{(1)}}^{\mu}\) and \({e_{(2)}}^{\mu}\), chosen to be parallel to the line connecting the Earth and the center of the galaxy cluster, namely
\[{e_{\parallel}}^{\mu} = \sqrt{\frac{{r_{0}}^{2}+h^{2}}{{e^{2B({r_{0}}^{2}+h^{2})}}{r_{0}} ^{2}+h^{2}}} \tag{43}\] \[\cdot\left(-\frac{r_{0}}{\sqrt{{r_{0}}^{2}+h^{2}}}{e_{(1)}}^{\mu} +\frac{h}{{r_{0}}^{2}+h^{2}}{e_{(2)}}^{\mu}\right)\]
In this case, the kinetic energy observed by the observer on Earth is determined by
\[E_{\rm kin}=m\gamma{e^{A(\sqrt{{r_{0}}^{2}+h^{2}})-A(r_{\rm earth})}}-m, \tag{44}\]
and each geodesic to the Earth corresponds to a specific value of \(v_{0}\) and \(\gamma\).
The characteristic entangled spectral lines are shown in Figs. 15 and 16. Similarly, the entanglement phase drops rapidly with increasing energy, and the entanglement witness oscillates slowly at high energy and rapidly at low energy. This behavior indicates that at greater emission heights the phase increases faster. The plot of the entanglement witness as a function of the emission height \(h\) shows more details. From Fig. 17, we see that for dPIE models with different center densities, the segment of the curve where the entanglement witness exceeds \(1\) varies. Additionally, the entanglement witness for a greater center density oscillates more slowly. In other words, a higher center density \(\rho_{0}\) of a galaxy impedes the generation of entanglement between particles. A possible explanation is that, under the condition of the same
Figure 14: Variation of entanglement witness \(\mathcal{W}\) as a function of \(E_{kin}\) in dPIE model.
Figure 12: Two series geodesics offset from the center of the galaxy. (a) The particles’ source location is given. (b) The particles’ source location is unknown.
Figure 13: Plot of entangled phase and observed particle energy. The vertical axis represents the entanglement phase \(\delta\phi\), which is formed in the propagation process. On the horizontal axis, we have the kinetic energy \(E_{\rm kin}\) measured by Earth observers after the particles with different emission parameters have reached the Earth.
Figure 15: Entangled phase as a function of observed particle energy. The vertical axis represents the entanglement phase \(\delta\phi\) formed during the propagation process, while the horizontal axis corresponds to the kinetic energy \(E_{kin}\) measured by the Earth observer after the particles with different emission parameters reach the Earth.
emission height, particles take less proper time to reach Earth when passing through more massive galaxies, which hinders entanglement formation.
In summary, in both cases the entanglement phase varies monotonically with the initial geodesic emission parameters \(\xi\) and \(h\), causing the entanglement witness \(\mathcal{W}\) to oscillate with the initial parameters (or the kinetic energy), as shown in Figs. 14, 16 and 17. This characteristic could be a significant index for distinguishing the origin of the observed entanglement. More specifically, if the particles' entanglement is native (formed during particle emission), then the entanglement witness will be randomly distributed with energy and will not exhibit the quasi-periodic oscillation behavior mentioned above. In other words, only entanglement induced by quantum gravity effects can make \(\mathcal{W}\) exhibit this quasi-periodic behavior. Thus, the identification of these entanglement features in experimental observations provides confidence that the entanglement is induced by quantum gravity along the geodesics, rather than being formed at emission.
The above conclusions are also valid for the NFW model. Fig. 18 and Fig. 19 show the entanglement witness as a function of kinetic energy, using the initial NFW parameters given in Tables 1 and 2. These plots explicitly exhibit similar quasi-oscillation behavior as observed in the dPIE model in BCG.
## V Conclusions and discussions
In the realm of quantum gravity, the challenge lies not in the absence of a complete mathematical physical theory, but rather in the scarcity of experimental approaches to connect theory and reality. In light of recent advancements in quantum technologies, endeavors have been made to elucidate the quantum nature of gravity. Among these endeavors, the quantum gravity induced entanglement of masses (QGEM) proposal has garnered significant attention [31, 2, 4, 2]. In this paper, we extend the QGEM experiments to include curved spacetimes. More specifically, we consider the QGEM for a pair of particles traveling along their geodesics in a galaxy with Schwarzschild metric as the spacetime background. We find that particle pairs readily become entangled at larger radial coordinates with appropriate small initial radial velocities.
By investigating the relation between the entanglement witness and the kinetic energy observed by the observer (determined by the initial values of the model parameters), we find that there is a characteristic spectrum of the QGEM. This provides a way to distinguish whether the observed entanglement arises from the quantum gravity effect on particle pairs or from other processes (e.g., the emission stage at the source, as suggested by the Hawking radiation of black holes).
In addition, execution of the proposals outlined in this paper will demonstrate a more comprehensive quantum gravity effect in extensive spacetime, surpassing the limitations of lo
Figure 16: Entanglement witness as a function of kinetic energy.
Figure 19: Entanglement witness as a function of kinetic energy in the NFW model as the location of the particles’ source is not given.
Figure 17: Variation of entanglement witness with particle emission height for dPIE models with different center density parameters. Here, we assume that the value range of \(h\) is 0.05\(R\) to 0.2\(R\). Other galaxy parameters and kinematic parameters remain unchanged.
Figure 18: Entanglement witness as a function of kinetic energy in the NFW model as the location of the particles’ source is given.
cal experiments carried out in labs [2; 3]. Notably, our design scheme accommodates much lighter particle masses and significantly larger particle spacings. These particles may traverse hundreds of millions of years of geodesic movement before detection, an impractical feat on Earth. Additionally, direct detection of particles from the universe simplifies the experiment's preparation process. Microscopic particles may spontaneously exist in a superposition state somewhere in the universe, coinciding with our computational hypothesis mentioned above.
Furthermore, our astronomical observation scheme, based on the hypothesis of the equivalence principle in superposition spacetime [23], will serve as validation for this extended equivalence principle.
Admittedly, some technical details need improvement in the future. The geodesic deviation equation serves as a first-order approximation for describing the variation of the spacelike distance between particles. We solely consider the background spacetime metric while neglecting the backreaction of the nearby particles. The numerical calculations introduce method and truncation errors into the simulation. Additionally, our idealized assumptions may not fully capture the complexities of the real universe.
In future research, we aim to employ more sophisticated calculation methods to account for the gravitational force of nearby particles. Additionally, we will explore more complex geodesic trajectories beyond radial movement to detect as many entangled particles as possible. Furthermore, in cosmic spacetime, two particles may be separated by considerable spacelike distances. As highlighted in [31], we will seek new methods to extract spacelike entanglement and investigate how gravity induces spacelike entanglement in curved spacetime.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China with the Grants Nos. 12375049, 11975116, and Key Program of the Jiangxi Natural Science Foundation under Grant No. 20232ACB201008.
## Appendix A Inner metric of BCG
Inside a static globular galaxy cluster, the metric could be assumed as:
\[ds^{2}=e^{2A(r)}dt^{2}-e^{2B(r)}dr^{2}-r^{2}\left(d\theta^{2}+\text{sin}^{2} \theta d^{2}\varphi\right), \tag{10}\]
where \(A(r)\) is calculated by \(A(r)=\int_{0}^{r}\frac{m(x)}{x^{2}}\mathrm{d}x+c\). The constant \(c\) is added so that \(A(r)\) transitions smoothly across the boundary of the galaxy cluster. Substituting the dPIE mass profile into the above formula, after careful calculation, we find that the full expression for \(A(r)\) is
\[A(r)=\frac{2\pi\rho_{0}r_{\text{core}}^{2}r_{\text{cut}}^{2}\left(r\left(\log \left(\frac{r^{2}}{r_{\text{cut}}^{2}}+1\right)-\log\left(\frac{r^{2}}{r_{ \text{core}}^{2}}+1\right)\right)-2r_{\text{core}}\,\text{arccot}\left(\frac{r_{\text{core}}}{r}\right)+2r_{\text{cut}}\,\text{arccot}\left(\frac{r_{\text{cut}}}{r}\right)\right)}{r\left(r_{\text{core}}^{2}-r_ {\text{cut}}^{2}\right)}+c, \tag{11}\]
where
\[\begin{split} c=&\frac{2\pi\rho_{0}r_{\text{core}}^ {2}r_{\text{cut}}^{2}\left(R\left(\log\left(\frac{R^{2}}{r_{\text{core}}^{2}} +1\right)-\log\left(\frac{R^{2}}{r_{\text{cut}}^{2}}+1\right)\right)+2r_{ \text{core}}\cot^{-1}\left(\frac{r_{\text{core}}}{R}\right)-2r_{\text {cut}}\cot^{-1}\left(\frac{r_{\text{cut}}}{R}\right)\right)}{R\left(r_ {\text{core}}^{2}-r_{\text{cut}}^{2}\right)}\\ &+\frac{1}{2}\log\left(\frac{8\pi\rho_{0}r_{\text{core}}^{2}r_{\text{ cut}}^{2}\left(r_{\text{cut}}\tan^{-1}\left(\frac{R}{r_{\text{cut}}} \right)-r_{\text{core}}\tan^{-1}\left(\frac{R}{r_{\text{core}}}\right) \right)}{R\left(r_{\text{core}}^{2}-r_{\text{cut}}^{2}\right)}+1\right). \end{split} \tag{12}\]
|
2305.19831 | An Empirical Study of Federated Learning on IoT-Edge Devices: Resource
Allocation and Heterogeneity | Nowadays, billions of phones, IoT and edge devices around the world generate
data continuously, enabling many Machine Learning (ML)-based products and
applications. However, due to increasing privacy concerns and regulations,
these data tend to reside on devices (clients) instead of being centralized for
performing traditional ML model training. Federated Learning (FL) is a
distributed approach in which a single server and multiple clients
collaboratively build an ML model without moving data away from clients.
Whereas existing studies on FL have their own experimental evaluations, most
experiments were conducted using a simulation setting or a small-scale testbed.
This might limit the understanding of FL implementation in realistic
environments. In this empirical study, we systematically conduct extensive
experiments on a large network of IoT and edge devices (called IoT-Edge
devices) to present FL real-world characteristics, including learning
performance and operation (computation and communication) costs. Moreover, we
mainly concentrate on heterogeneous scenarios, which is the most challenging
issue of FL. By investigating the feasibility of on-device implementation, our
study provides valuable insights for researchers and practitioners, promoting
the practicality of FL and assisting in improving the current design of real FL
systems. | Kok-Seng Wong, Manh Nguyen-Duc, Khiem Le-Huy, Long Ho-Tuan, Cuong Do-Danh, Danh Le-Phuoc | 2023-05-31T13:16:07Z | http://arxiv.org/abs/2305.19831v1 | # An Empirical Study of Federated Learning on IoT-Edge Devices: Resource Allocation and Heterogeneity
###### Abstract
Nowadays, billions of phones, IoT and edge devices around the world generate data continuously, enabling many Machine Learning (ML)-based products and applications. However, due to increasing privacy concerns and regulations, these data tend to reside on devices (clients) instead of being centralized for performing traditional ML model training. Federated Learning (FL) is a distributed approach in which a single server and multiple clients collaboratively build an ML model without moving data away from clients. Whereas existing studies on FL have their own experimental evaluations, most experiments were conducted using a simulation setting or a small-scale testbed. This might limit the understanding of FL implementation in realistic environments. In this empirical study, we systematically conduct extensive experiments on a large network of IoT and edge devices (called IoT-Edge devices) to present FL real-world characteristics, including learning performance and operation (computation and communication) costs. Moreover, we mainly concentrate on heterogeneous scenarios, which is the most challenging issue of FL. By investigating the feasibility of on-device implementation, our study provides valuable insights for researchers and practitioners, promoting the practicality of FL and assisting in improving the current design of real FL systems.
Federated Learning, IoT-Edge Devices, On-Device Training, Empirical Study.
## I Introduction
By the end of 2018, there were an estimated 22 billion IoT devices in use around the world, and this number is increasing fast. Forecasts suggest that by 2030 the number of IoT devices will increase to around 50 billion [1]. Also, 100 billion ARM CPUs, which currently dominate the IoT market, have been shipped so far [2]. This installed base is a key enabler for many industrial and societal domains, especially Artificial Intelligence (AI) and Machine Learning (ML) powered applications [3]. However, due to increasing privacy concerns and regulations [4], especially in sensitive domains like healthcare or finance, these valuable assets mostly remain inaccessible and cannot be centralized for conducting traditional ML model training.
To address this issue, Federated Learning (FL) [5] was proposed, which allows multiple parties (clients) to train a shared global model collaboratively in a decentralized fashion without sharing any private dataset. In general, a standard FL framework, as illustrated in Fig. 1, consists of two main steps: (1) Client training, in which clients train models on their local data for several epochs and send their trained models to a central server, and (2) Model aggregation, in which the server aggregates those models to establish a global model and distributes this global model back to the clients. This 2-step procedure is repeated for numerous rounds until the global model converges or a target level of accuracy is reached.
Although FL has recently received considerable attention from the research community [6, 7] thanks to several advantages such as scalability and data privacy protection, it still faces many serious challenges that make real-world implementation difficult. Specifically, clients in a federation differ from each other in terms of computational and communication capacity. For instance, the hardware resources (memory, CPU/GPU, or connectivity) of various IoT and edge devices (IoT-Edge devices) like Raspberry Pi devices or NVIDIA Jetson devices are very different. Therefore, considering all clients equally might lead to suboptimal efficiency. Furthermore, the training data owned by each client can be non-independent and identically distributed (Non-IID), with different quality and quantity.
Fig. 1: The standard FL framework.
These challenges make FL impractical and limit the motivation of parties to join the federation for training.
Despite the aforementioned real-world issues, most existing studies on FL heavily rely on simulation settings or small-scale testbeds of devices [8, 9, 10] to examine the behavior of their systems. While simulation settings are useful for controlled testing and development of FL models, they face significant challenges in adequately covering all operational aspects of real-world deployments. Specifically, existing simulators cannot emulate crucial aspects of realistic execution environments, such as resource consumption (e.g., memory, CPU/GPU usage, battery life) and network connectivity (e.g., bandwidth and network congestion). These factors significantly impact the performance of FL systems, as demonstrated in Section IV. Additionally, other realistic environment aspects such as data distribution, underlying software libraries, and executing settings introduce further challenges that can affect FL performance. Therefore, this motivates us to conduct more comprehensive evaluations of such aspects to ensure their effectiveness and scalability.
In Section II, we observe a lack of experimental studies that systematically investigate the implementation of FL on real devices and assess the impact of intrinsic heterogeneity on performance and costs. Although there have been some attempts to implement FL on IoT-Edge devices at small scales with simplistic settings, more reproducible experiments in larger and more realistic settings are desirable. Hence, to the best of our knowledge, our study pushes the experiment scale and complexity to a new level.
### _Objectives, Research Questions and Scope_
To identify potential issues and limitations on real devices that may not be apparent in simulated environments, we focus our study on the impact of resource allocations and heterogeneity independently and their combined effects in realistic environments. To achieve this, we focus on the following research questions (RQ):
* **RQ1: What are the behaviors of FL implementation in realistic environments compared to a simulation setting?** In this RQ, we compare many simulation and on-device deployment aspects. We want to see how simulation results can represent reality because FL experiments conducted in a controlled laboratory setting may not accurately reflect the challenges and complexities of realistic device-based environments.
* **RQ2: How do resource allocation and heterogeneity affect the learning performance and operation costs?** There are several factors that can affect FL deployment. This RQ focuses on the client participation rate, communication bandwidth, device and data heterogeneity. We test each factor independently to learn their impact on the behaviors of FL. Specifically, we want to observe the impact of varying the number and type of devices, bandwidth, and data distribution on the FL process for each factor.
* **RQ3: How do these two factors, resource allocation and heterogeneity, simultaneously affect the learning performance and operation costs?** This RQ is an essential study on understanding the impact of combined factors as specified in RQ2. Additionally, we aim to find the dominant factor towards the behaviors of FL in a real-world deployment.
To answer these questions, we need stable FL systems that can be deployed on our targeted hardware, i.e., Raspberry Pi 3 (Pi3), Raspberry Pi 4 (Pi4), Jetson Nano (Nano) and Jetson TX2 (TX2), and that can support GPUs on edge computing boards. While many algorithms are accompanied by source code, only Federated Averaging (FedAvg) [5] satisfies our requirements, owing to its popularity. FedAvg has been extensively studied and evaluated in the literature, with a large number of works reporting its performance characteristics and limitations in simulations. However, understanding of its behavior on real devices is still limited (cf. Section II). Hence, we focus on FedAvg for our studies in this paper and leave other algorithms for future work. Nevertheless, our experiment design in Section III is general enough to be replicated with other algorithms, given that their implementations are stable enough to run on the targeted devices.
### _Our Key Findings_
Along this light, our extensive set of experiments reported in Section IV reveal the following key findings:
* The on-device settings can achieve similar training accuracy to the simulation counterparts with similar convergence behaviors. But when it comes to operational behaviours related to computation and communication, the on-device ones show much more complicated behavior patterns for realistic IoT-Edge deployments.
* The disparity in computational and networking resources among the participating devices leads to longer model update (local and global) exchange times, because high-computation devices need to wait for the server to receive and aggregate local updates from low-computation devices. This hints that an oversimplified emulation of these aspects in a simulation setting is highly likely to lead to unexpected outcomes of an FL algorithm at the deployment phase.
* Data heterogeneity is the most dominant factor in FL performance, followed by the number of clients. The performance of the global model is affected most by the data distribution (i.e., Non-IID and Extreme Non-IID) of each participating client, especially for challenging learning tasks. Hence, combined with the disparity in computational and networking resources, FL on diverse IoT-Edge devices in realistic deployment settings requires a further understanding of on-device behaviors when all these factors act in tandem.
### _Paper Outline_
The rest of this article is organized as follows. Section II presents preliminaries to our work and discusses some existing surveys and empirical studies on FL. In Section III, we describe our experimental design, followed by our results and findings in Section IV. Finally, we give further discussions in Section V and conclude this empirical study in Section VI.
## II Preliminaries and Related Works
### _Federated Learning_
In the standard FL framework, data for learning tasks is acquired and processed locally at the IoT-Edge nodes, and only the trained model parameters are transmitted to the central server for aggregation. In general, along with an initialization stage, FL involves the following stages:
* _Stage 0 (Initialization)_: The aggregation server \(S\) first initiates the weight \(w_{0}\) of the global model and hyperparameters such as the number of communication rounds \(T\), size of the selected clients for each round \(N\), and local training details.
* _Stage 1 (Client training)_: All selected clients \(C_{1}\), \(C_{2}\), \(C_{3}\),..., \(C_{N}\) receive the current global weight from \(S\). Next, each \(C_{i}\) updates its local model parameters \(w_{i}^{t}\) using its local dataset \(D_{i}\), where \(t\) denotes the current communication round. Upon the completion of the local training, all selected clients send the local weight to \(S\) for model aggregation.
* _Stage 2 (Model Aggregation)_: \(S\) aggregates the received local weights based on a certain mechanism and then sends back the aggregated weights to the clients for the next round of local training.
### _Federated Averaging Algorithm_
Federated Averaging (FedAvg) is the de facto FL algorithm included in most FL systems [5]. As shown in Algorithm 1, FedAvg aggregates the locally trained model parameters by weighted averaging proportional to the amount of local data \(D_{i}\) that each client \(C_{i}\) has (corresponding to Stage 2 above). Note that many advanced FL algorithms have been introduced in the last few years (e.g., FedProx [11] and FedMA [12]), with different purposes [13, 14].
```
1:Aggregation Server executes:
2:initialize: \(w\gets w_{0}\)
3:for each round \(t=1,2,3,\ldots,T\)do
4:for each client \(i=1,2,3,\ldots,N\)in parallel do
5:\(w_{i}^{t}\gets w^{t-1}\)
6:\(w_{i}^{t}\leftarrow\)ClientTraining(\(w_{i}^{t}\), \(D_{i}\))
7:endfor
8: // ModelAggregation
9:\(w^{t+1}\leftarrow\frac{1}{\sum_{i=1}^{N}n_{i}}\sum_{i=1}^{N}n_{i}w_{i}^{t}\)
10:endfor
11: return: \(w^{T}\)
12:
13:ClientTraining(\(w_{i}\), \(D_{i}\)): // Run on client \(C_{i}\)
14:for each epoch \(e=1,2,3,\ldots,E\)do
15:\(w_{i}\gets w_{i}-\eta\nabla l(w_{i};D_{i})\)
16:endfor
17: return: \(w_{i}\)
```
**Algorithm 1** FedAvg Algorithm [5].
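For concreteness, a minimal sketch of the aggregation step (line 9 of Algorithm 1) in PyTorch is shown below; the model state_dicts and sample counts are placeholders for the clients' actual local models and dataset sizes.

```
import torch

def fedavg_aggregate(local_weights, num_samples):
    # Weighted average of client state_dicts, proportional to local data size
    # (line 9 of Algorithm 1).
    total = float(sum(num_samples))
    aggregated = {}
    for key in local_weights[0]:
        aggregated[key] = sum(w[key].float() * (n / total)
                              for w, n in zip(local_weights, num_samples))
    return aggregated

# Toy usage with placeholder state_dicts; in the real system these are the
# CNN3 state_dicts returned by the selected clients after local training.
w1 = {"fc.weight": torch.ones(2, 2)}
w2 = {"fc.weight": torch.zeros(2, 2)}
print(fedavg_aggregate([w1, w2], [3, 1]))   # -> tensor filled with 0.75
```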
### _Related Works_
Several theoretical surveys and simulation-based empirical studies on FL are available in the literature. Dinh et al. [15] explore and analyze the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security. Ahmed et al. [16] discuss the implementation challenges and issues when applying FL to an IoT environment. Zhu et al. [17] provide a detailed analysis of the influence of Non-IID data on different types of ML models in both horizontal and vertical FL. Li et al. [18] conduct extensive experiments to evaluate state-of-the-art FL algorithms on Non-IID data silos and find that Non-IID data do bring significant challenges to the learning accuracy of FL algorithms, and that none of the existing state-of-the-art FL algorithms outperforms the others in all cases. Recently, Matsuda et al. [19] benchmark the performance of existing personalized FL methods through comprehensive experiments to evaluate the characteristics of each method and find that there is no champion method. Caldas et al. [20] propose LEAF, a modular simulation-based benchmarking framework for learning in federated settings. LEAF includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations. To the best of our knowledge, we are the first to conduct an empirical study of FL on IoT-Edge devices.
For real-world FL implementation, Di et al. [8] present FedAdapt, an adaptive offloading FL framework based on reinforcement learning and clustering to identify which layers of the DNN should be offloaded for each device onto a server. Experiments are carried out on a lab-based testbed, including two Pi3s, two Pi4s, and one Jetson Xavier. Sun et al. [9] propose a model selection and adaptation system for FL (FedMSA), which includes a hardware-aware model selection algorithm, then demonstrate the effectiveness of their method on a network of two Pi4s and five Nanos. Mills et al. [10] propose adapting FedAvg to use a distributed form of Adam optimization, then test their method on a small testbed of five Pi2s and five Pi3s. Furthermore, Zhang et al. [21] build the FedIoT platform for on-device anomaly data detection and evaluate their platform on a network of ten Pi4s. However, these attempts are still on a small scale and do not represent real-world environments.
## III Experimental Design
This section describes how we designed our experiments to answer our research questions in Section I-A. Starting with data preparation, we then implement FL on IoT-Edge devices with different settings based on the evaluation factors we defined.
After that, we use a bag of metrics to analyze the impact of these factors individually and their combined effects in different aspects. Fig. 2 illustrates this workflow in detail.
### _Data Preparation and Models_
#### Iii-A1 Datasets
We use two datasets in this work: CIFAR10 [22] and CIFAR100 [22], which are commonly used in previous studies on FL [11, 18]. CIFAR10 consists of 60000 32x32 color images and is the simpler one. The images are labeled with one of 10 exclusive classes. There are 6000 images per class, with 5000 training and 1000 testing images. CIFAR100 also consists of 60000 32x32 color images but is more challenging to train; each image comes with one of 100 fine-grained labels. There are 600 images per class, with 500 training and 100 testing images.
#### Iii-A2 Data Partitioning
Since the CIFAR10 and CIFAR100 datasets are not originally partitioned for FL, we divide these two datasets synthetically. While the test sets are kept at the server for testing the aggregated model, we divide the training set of each dataset into 64 disjoint partitions with an equal number of samples in three different ways, to simulate three heterogeneity scenarios: IID, Non-IID, and Extreme Non-IID (ExNon-IID). The IID strategy adopts an independent and random division; as shown in Fig. 3(a) and 3(b), the data distribution in each client is basically the same. The Non-IID and ExNon-IID strategies use the biased divisions proposed in [5, 23]. Specifically, the whole dataset is sorted according to the labels and divided into different chunks, and these chunks are then randomly assigned to different clients. The number of chunks controls the degree of heterogeneity across clients. As shown in Fig. 3(c)-(f), each client in Non-IID contains approximately four and ten data classes in CIFAR10 and CIFAR100, respectively, while each client in ExNon-IID contains only one and two data classes in CIFAR10 and CIFAR100, respectively, which simulates extreme data heterogeneity across clients.
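A minimal sketch of the shard-based biased division is given below; it is one possible implementation of the procedure described above, with shard counts chosen to roughly reproduce the per-client class counts reported for CIFAR10 (4 shards per client for Non-IID, 1 for ExNon-IID).

```
import numpy as np

def shard_partition(labels, num_clients, shards_per_client, seed=0):
    # Label-sorted shard partitioning: sort by label, cut into equal shards,
    # then assign shards_per_client random shards to each client.
    rng = np.random.default_rng(seed)
    order = np.argsort(labels, kind="stable")
    shards = np.array_split(order, num_clients * shards_per_client)
    shard_ids = rng.permutation(num_clients * shards_per_client)
    return [np.concatenate([shards[s] for s in
                            shard_ids[c * shards_per_client:(c + 1) * shards_per_client]])
            for c in range(num_clients)]

# Stand-in CIFAR10-like labels (10 classes x 5000 samples). With 64 clients,
# shards_per_client = 4 gives roughly 4 classes per client (Non-IID);
# shards_per_client = 1 gives a single class per client (ExNon-IID).
labels = np.repeat(np.arange(10), 5000)
clients = shard_partition(labels, num_clients=64, shards_per_client=4)
print(len(clients), len(clients[0]), np.unique(labels[clients[0]]))
```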
#### Iii-A3 Model Architecture
Following previous works [5, 20], we study a popular CNN model designed for image classification tasks, called CNN3, on the two datasets. The model includes only two 5x5 convolution layers (the first with 32 channels, the second with 64), each followed by a ReLU activation function and 2x2 max pooling. After that, one fully connected layer with 512 units and ReLU activation is added, followed by a softmax layer as the classifier. The number of output units is 10 for CIFAR10 and 100 for CIFAR100. Owing to its simple architecture, the model does not need massive resources for training, making it suitable for deployment on IoT-Edge devices.
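A PyTorch sketch of CNN3 is given below. The padding of the 5x5 convolutions is not stated in the text, so "same" padding (padding=2) is an assumption; with it, the flattened feature size is 64x8x8 for 32x32 inputs.

```
import torch
import torch.nn as nn

class CNN3(nn.Module):
    # CNN3 as described above; padding=2 for the 5x5 kernels is an assumption,
    # since the padding scheme is not stated in the text.
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(),
            nn.Linear(512, num_classes),   # softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

print(CNN3(num_classes=10)(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 10])
```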
### _Hardware and Software Specifications_
In the past few years, many IoT-Edge devices have entered the market with different prices and abilities. In this work, we use the most popular ones such as Pi3, Pi4, Nano, and TX2. Different types of devices with different generations have different resources and processing capabilities. A diverse pool of devices helps us more accurately represent the real world. Our devices are connected to a workstation, which is used as the server, via a network of IoT-Edge devices and switches. Fig. 4 is a snapshot of our infrastructure. In more detail, Table II provides specifications of these devices, and the server machine and simulation machine are also described.
For software specifications, we use the PyTorch [24] framework version 1.13.1 to implement the deep learning components and the Flower [25] framework version 1.11.0 for the FedAvg algorithm. Additionally, we use Docker to create a separate container on each device to perform local training.
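For reference, a skeleton of a Flower NumPyClient of the kind run inside each device's container is sketched below. The model, the local training routine, the evaluation metrics, and the sample count are stand-ins for the CNN3 pipeline described in this section; the commented lines indicate how a client and a server would typically be started with Flower's FedAvg strategy.

```
import flwr as fl
import torch
import torch.nn as nn
from collections import OrderedDict

# Stand-in model and training stub; in our experiments these are CNN3 and the
# 2-epoch SGD loop described in Section III-D (lr = 0.01, batch size 16).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def train_local(net):
    pass   # placeholder for the real local training loop

class Client(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [v.cpu().numpy() for v in model.state_dict().values()]

    def set_parameters(self, parameters):
        keys = list(model.state_dict().keys())
        model.load_state_dict(OrderedDict((k, torch.tensor(v))
                                          for k, v in zip(keys, parameters)))

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        train_local(model)
        return self.get_parameters(config), 781, {}     # 781 ~ samples per client

    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        return 0.0, 781, {"accuracy": 0.0}              # placeholder metrics

# On each device (requires a reachable server):
#   fl.client.start_numpy_client(server_address="SERVER_IP:8080", client=Client())
# On the server machine:
#   fl.server.start_server(config=fl.server.ServerConfig(num_rounds=500),
#                          strategy=fl.server.strategy.FedAvg())
```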
### _Evaluation Metrics_
In this study, we use a comprehensive set of metrics to characterize and quantify the impact of heterogeneity factors on the behaviors of FL implementation in realistic environments. Specifically, test accuracy and convergence speed are used to evaluate the learning performance. Averaged training time, memory, and GPU/CPU utilization are used to measure computational costs. Finally, we use the averaged model update (local and global) exchange time between the clients and the aggregation server to measure the communication cost. Table III provides concise definitions of all our used metrics.
### _Experiments Setup_
#### Iii-D1 Behaviors of On-Device FL Implementation (RQ1)
First of all, we conduct a baseline experiment in simulation. Particularly, we simulate eight clients, each of which holds one of the first eight partitions (12.5% of all partitions) of the CIFAR10 IID dataset. For the training settings, we train the simple CNN3 model described above for 500 communication rounds; at each round, the model is trained for 2 local epochs at the clients, the SGD optimizer is used with a learning rate of 0.01, and the batch size is set to 16. To answer the
Fig. 2: Our Methodology.
RQ1 described in Section I-A, we then turn the simulation environment in the above experiment into realistic environments by sequentially using eight Pi3s, eight Pi4s, and eight Nanos as clients. These devices are connected to a server machine via ethernet connections. For comparison, all training settings are maintained as in the baseline. We use all metrics defined in Table III to describe the behaviors of FL implementation. The results and conclusions are shown in Section IV-A.
#### Iv-A2 Impact of Single Factor (RQ2)
For the RQ2, we consider two critical factors in FL, namely resource allocation and heterogeneity. Resource allocation includes the number of participating clients and the connection's communication bandwidth, and heterogeneity includes device heterogeneity and data heterogeneity (statistical heterogeneity). To explore the impact of these factors, we conduct extensive experiments that are shown in detail in Fig. 5. Training settings are the same as in the baseline experiment in RQ1. By conducting experiments defined in Fig. 5, we can observe what happens when the number of participating clients increases, the communication bandwidth is saturated, and when intrinsic heterogeneity is introduced across clients. The results and conclusions for RQ2 experiments are provided in Section IV-B.
#### Iv-A3 Impact of Combined Factors (RQ3)
After observing the impact of resource allocation and heterogeneity individually by addressing RQ2, we aim to explore more realistic scenarios where these two factors appear simultaneously. First, we vary the number of participating clients and increase the degree of heterogeneity in client devices concurrently. Second, we
Fig. 3: Data distribution of the first 24 clients in the CIFAR10 and CIFAR100 datasets.
still vary the number of participating clients in different data heterogeneity settings (IID, Non-IID, and ExNon-IID) to observe the accuracy and convergence speed. Fig. 6 shows these experiments in detail. Additionally, training settings are the same as in the baseline experiment in RQ1. By conducting these experiments, we expect to gain more valuable insights beyond those gained from the RQ2. Also, we aim to figure out the dominant factors towards the behaviors of FL in real-device deployment. The results and conclusions for RQ3 experiments are provided in Section IV-C.
## IV Experimental Results
### _Behaviors of On-Device FL Implementation (RQ1)_
Table IV provides detailed results of the experiments in RQ1, where we compare real-device FL implementations to the simulation baseline. Details of the experimental setup are described in III-D1. All four experiments use the same eight partitions of the CIFAR10 IID dataset and the same training details, so it is reasonable that the test accuracy and convergence speed in these experiments are consistent. In terms of computational cost, the training time increases exponentially when we change the devices from TX2 and Nano to Pi4, and then Pi3. From the resource utilization, Pi3 devices seem to be overloaded even when training a small model like CNN3, while Nano devices can handle the task more easily thanks to GPU support. Additionally, the update exchange time roughly doubles when we change the devices from Nano to Pi4, and then Pi3. These observations indicate a need for more efficient FL frameworks suitable for low-end devices like the Pi3, and even for weaker, lower-cost IoT devices or sensors, which are being introduced in ever greater numbers and have extremely limited computational capacity.
### _Impact of Single Factor On FL Implementation (RQ2)_
In this set of experiments, we observe the results of experiments in RQ2 and analyze what happens when the number of participating clients increases, the communication bandwidth is constrained, and when intrinsic heterogeneity is introduced across clients.
#### Iv-B1 Impact of the Resource Allocation
**Impact of the Number of Clients**. Fig. 7 and Fig. 8 show the effect of the number of participating clients on learning performance and communication cost. Generally, increasing the number of clients means more data is involved in training the global model, resulting in an improvement in test accuracy. However, it also leads to higher diversity across client model parameters, which can slow down the convergence process. We also observe that _when the number of clients increases from 32 to 64, the improvement in test accuracy is negligible; however, the update exchange time goes up dramatically._ From this observation, we can empirically verify that more participating clients do not guarantee better accuracy but can cause large congestion in communication and increase the update exchange time. In this setting, 32 is the optimal number of participating clients, so we only use 32 clients in the remaining experiments in RQ2.
Fig. 4: IoT-Edge Federated Learning Testbed.
**Impact of the Communication Bandwidth**. Next, we investigate the effect of connection bandwidth on update exchange time. One interesting point obtained from Fig. 9 is that update exchange time increases roughly linearly when we decrease the bandwidth. Specifically, when we halve the bandwidth from 100Mbps to 50Mbps, the update exchange time increases
Fig. 5: Experiments Setup for Studying the Impact of Single Factor (RQ2).
Fig. 6: Experiments Setup for Studying the Impact of Combined Factors (RQ3).
by approximately 4 times. Furthermore, it increases by about 8 times when the bandwidth is reduced by a factor of four, from 100Mbps to 25Mbps. This observation motivates FL algorithms that are suitable for low-bandwidth systems.
#### Iv-A2 Impact of the Heterogeneity
**Impact of the Device Heterogeneity**. Following the experiments in Fig. 5, we investigate the impact of heterogeneity across client devices. From Table V below, we can observe that in a federation of heterogeneous devices, more powerful devices such as Nano or TX2 only need a couple of seconds to finish local training, while weaker devices like Pi3 and Pi4 need much longer. However, in a naive FedAvg framework, the server needs to wait for all clients regardless of their strengths, which is why the update exchange time of more powerful devices is higher than that of weaker devices; this diminishes the benefits that high-end devices bring. This observation suggests _a need for better client selection strategies based on the client's computational power in realistic systems to leverage the presence of high-end devices._
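To illustrate why stragglers dominate the round time under naive synchronous FedAvg, the following sketch aggregates client updates by weighted averaging and paces each round by the slowest client. The per-client timings in the final comment are made-up illustrative numbers, not measurements from Table V.

```python
import numpy as np

def fedavg_aggregate(client_updates, client_sizes):
    """Synchronous FedAvg: size-weighted average of the clients' model weights.
    Each update is a list of numpy arrays (one per layer)."""
    total = float(sum(client_sizes))
    aggregated = [np.zeros_like(layer) for layer in client_updates[0]]
    for update, n in zip(client_updates, client_sizes):
        for i, layer in enumerate(update):
            aggregated[i] += (n / total) * layer
    return aggregated

def round_wall_clock(local_train_times, exchange_times):
    """Naive synchronous rounds are paced by the slowest client,
    since the server waits for every update before aggregating."""
    return max(t + e for t, e in zip(local_train_times, exchange_times))

# Made-up illustrative per-client seconds (Pi3, Pi4, Nano, TX2), not measurements:
# print(round_wall_clock([120, 60, 15, 10], [8, 6, 4, 4]))  # -> 128, dominated by the Pi3
```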
**Impact of the Data Heterogeneity**. Heterogeneous data, or distribution shift, is the most challenging issue in FL. Most existing works on this issue only consider conventional Non-IID data scenarios. As discussed above, in this study we further explore extreme cases of heterogeneity, i.e., ExNon-IID. Figs. 10(a) and 10(b) show the effect of data heterogeneity on FL for the CIFAR10 and CIFAR100 datasets, respectively. As observed from these results, ExNon-IID scenarios degrade the accuracy on test sets significantly compared to IID and Non-IID cases. Additionally, ExNon-IID scenarios tend to cause fluctuation periods during training and slow down the convergence process. This suggests that the development of FL algorithms needs to tackle not only Non-IID cases but also ExNon-IID cases.
In summary, we have figured out that increasing the number of participating clients generally leads to an improvement in accuracy due to the increase in data samples used for training. However, when we substantially increase the number of clients
Fig. 8: Impact of the Number of Clients on Update Exchange Time.
Fig. 7: Impact of the Number of Clients on Test Accuracy.
Fig. 9: Impact of the Bandwidth on Update Exchange Time.
(i.e., from 32 to 64), the improvement is not significant but the update exchange time goes up dramatically. Moreover, data heterogeneity also affects the global model's accuracy significantly, especially in ExNon-IID cases. Besides heterogeneity in the labels of local datasets, other types of data heterogeneity, such as quantity heterogeneity or distribution heterogeneity, are also important and might degrade the model's accuracy much further; however, these types of data heterogeneity are still under-explored. In addition, the update exchange time is linearly affected by communication bandwidth. Also, we show that better client selection strategies are essential when dealing with heterogeneous devices to leverage the presence of high-end devices and reduce the update exchange time. However, _this is quite challenging in a real deployment, where the distributions of computing power and data are not known a priori and cannot be simulated in a controlled setting_.
### _Impact of Combined Factors On FL Implementation (RQ3)_
This part reports the experimental results of RQ3 and draws insights when two factors, resource allocation, and heterogeneity, appear simultaneously. Also, we aim to figure out dominant factors towards the FL behaviors in real-device deployment.
**Combined Impact of the Number of Clients and Device Heterogeneity**. We focus on investigating the effect of the number of clients and device heterogeneity across clients on the update exchange time. Fig. 11 shows the average update exchange time of each type of device used in experiments 3.1.4 to 3.1.6. By comparing these results with the results in Fig. 8 and Table V, we can draw an interesting insight: with the same number of clients, heterogeneity in the federation can help reduce the overall update exchange time, and this gap seems more significant with a smaller number of clients. Unlike in homogeneous scenarios, where clients mostly finish local training and upload their local models to the server simultaneously, which causes considerable congestion, in heterogeneous scenarios clients with more powerful devices complete their work earlier, followed by weaker devices sequentially. This helps reduce the congestion in communication. These observations also suggest that a large number of clients and the resulting congestion have a significantly negative effect on the update exchange time and raise a need for novel FL algorithms capable of handling situations with massive numbers of clients.
**Combined Impact of the Number of Clients and Data Heterogeneity**. We continue by studying the effect of the number of clients and data heterogeneity simultaneously. Fig. 12 shows the test accuracy of the global model in experiments 3.2.1 to 3.2.16. From Fig. 12(a), 12(c), and 12(e), we can see that when increasing the number of clients from 32 to 64, the improvement in the IID case is negligible. However, the improvement is more significant in the Non-IID and ExNon-IID cases, which means that a large number of participating clients is essential in heterogeneous data scenarios. Moreover, the negative effect of ExNon-IID data on the more challenging dataset, CIFAR100, seems more serious. Therefore, we can conclude that _data heterogeneity is the most dominant factor in the model's test accuracy, especially on challenging datasets_.
Fig. 11: Combined Impact of the Number of Clients and Device Heterogeneity on Update Exchange Time.
Fig. 10: Impact of the Data Heterogeneity.
In summary, we have found that the communication congestion caused by a large number of clients has a significant negative effect on the update exchange time. However, increasing the number of clients leads to improvements in accuracy, especially in heterogeneous data scenarios. Also, data heterogeneity is the most dominant factor affecting the model's test accuracy, especially on challenging datasets. Going beyond the fundamental image classification task, data heterogeneity might further hurt the model's performance in other advanced tasks, such as object detection or segmentation, which are under-explored in the current literature. Interestingly, we also observe that nominally identical (homogeneous) devices can behave differently. This may be caused by various implicit factors such as power supply, network conditions, hardware and software variations, or user behavior.
## V Discussions
In this section, we first discuss the practicality of FL on IoT-Edge devices (based on our experimental results) and then discuss other essential factors to consider while designing an FL system for IoT devices.
### _Practicality of FL on IoT-Edge Devices_
FL requires local processing on the device, which can be challenging on lightweight devices with limited processing power. In addition, storing the model updates locally can be challenging due to the limited storage capacity. Another challenge is the unreliable connectivity of IoT devices. Federated learning requires a stable and reliable network connection for devices to communicate with each other and the aggregation server. However, IoT-Edge devices are often deployed in remote locations with limited network connectivity.
In this study, we observed that the practicality of FL on IoT-Edge devices depends on combined effects from various factors such as device availability (number of participating clients), communication constraints (bandwidth availability), and heterogeneity of data (data distribution) and devices (computational capability and hardware configuration). These factors are interdependent and affect each other, and hence, a comprehensive analysis of the practicality of FL on IoT devices should consider all these factors together. For example, the computational capability of devices can affect communication overhead, as devices with lower computational capability may take longer to process and transmit data, resulting in higher communication latency and overhead. Similarly, the heterogeneity of devices can affect the robustness of FL algorithms, as the presence of devices with varying characteristics can introduce heterogeneity in the data and make it challenging to train accurate models.
To address the processing power and storage capacity issues, we need to design models that are optimized for lightweight devices and implement compression or distillation techniques to reduce the size of the updates. There is also a need to implement techniques such as asynchronous updates and checkpointing to ensure that the training process can continue even when devices are disconnected due to network connectivity issues.
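As a hedged illustration of update compression, the sketch below applies top-k sparsification to a flattened model update before transmission; this is a generic technique, not the compression scheme evaluated in this study.

```python
import numpy as np

def topk_sparsify(update, keep_ratio=0.01):
    """Keep only the largest-magnitude entries of a flattened model update.

    Returns (indices, values, shape); the server rebuilds the sparse update
    with `densify`. A generic sketch, not the scheme used in our experiments.
    """
    flat = update.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], update.shape

def densify(idx, values, shape):
    """Reconstruct a dense update from its sparsified form."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)
```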
### _Other Considerable Factors_
Besides the factors studied in this work, when designing FL systems it is essential to consider other factors that can cause IoT devices not to perform well in FL, such as the power supply of devices, the specifications of memory cards, and the performance of the aggregation server.
Fig. 12: Combined Impact of the Number of Clients and Data Heterogeneity.
#### V-C1 Power Supply
The amount of power available to the device can impact its processing capability. If the device has a limited power supply, it may not be able to perform complex computations or transmit large amounts of data efficiently. Furthermore, the quality and reliability of the power supply can affect the device's stability and longevity. Power surges or outages can cause damage to the device's components, leading to reduced performance and potentially even complete failure. As shown in [26], when the battery life of the devices decreased, the accuracy of the global model also decreased significantly. Hence, it is crucial to ensure that devices used in FL have access to a reliable power supply with sufficient capacity to handle the demands of the learning process.
#### V-C2 Memory Card Usage
The speed and capacity of the memory card can indirectly affect the overall performance of the IoT device itself. If the memory card is slow or has limited capacity, it may result in slower data processing and storage, slowing down the overall FL process. Also, the reliability and durability of the memory card can impact FL performance. For instance, if the memory card fails or becomes corrupted, it can result in the loss of data, which can negatively impact the accuracy and effectiveness of the FL model.
#### V-C3 Performance of the Aggregation Server
The performance of the aggregation server is crucial to the success of the FL process and can bring a significant impact on the participating IoT devices. The aggregation server needs to have sufficient computational resources to process the incoming model updates from IoT devices. If the server is overloaded, this can cause delays or even crashes in the system, affecting the IoT devices involved. This can be particularly problematic if the IoT devices have limited resources themselves, as they may not be able to handle the increased workload.
## VI Conclusions and Future Works
The results of our experiment have revealed several important findings: (1) our simulation of FL has shown that it can be a valuable tool for algorithm testing and evaluation, but its effectiveness in accurately representing the reality of IoT-Edge deployment is very limited, (2) the disparity in computational resources among IoT devices can significantly impact the update exchange time, and (3) data heterogeneity is the most dominant factor in the presence of other factors, especially working in tandem with computation and network factors.
Moving forward, several areas could be explored to expand on the findings of this study. Firstly, considering the diversity of devices used in FL, it would be valuable to test the approach on a more comprehensive range of devices with different hardware, operating systems, and network connections to ensure the effectiveness and robustness of the approach. Secondly, the dataset selection process used for training the FL model could be further optimized to increase accuracy and efficiency and ensure that the results represent all potential use cases. Lastly, to expand the scope of the study's findings, exploring FL algorithms beyond the standard FedAvg algorithm could be beneficial; such algorithms may be better suited for specific scenarios or applications and may provide insights into how to improve the performance of FL on IoT-Edge devices. For instance, FedProx [11] is designed to handle heterogeneity in data across devices and can improve the convergence rate of the FL process. It is important to note that these future improvements do not affect the objectives and scope of the current study.
Particularly, we plan to extend our study to a broader range of scenarios by examining the impact of varying network conditions, communication protocols, and resource usage of FL. In addition, we want to conduct a comprehensive analysis to measure the resource consumption of FL, including battery life and network bandwidth usage. We also want to focus on real-world applications of FL on IoT devices, including developing FL-based solutions for specific IoT use cases such as environmental monitoring, predictive maintenance, and evaluating their performance in realistic environments.
|
2309.08431 | Decentralised Finance and Automated Market Making: Predictable Loss and
Optimal Liquidity Provision | Constant product markets with concentrated liquidity (CL) are the most
popular type of automated market makers. In this paper, we characterise the
continuous-time wealth dynamics of strategic LPs who dynamically adjust their
range of liquidity provision in CL pools. Their wealth results from fee income,
the value of their holdings in the pool, and rebalancing costs. Next, we derive
a self-financing and closed-form optimal liquidity provision strategy where the
width of the LP's liquidity range is determined by the profitability of the
pool (provision fees minus gas fees), the predictable losses (PL) of the LP's
position, and concentration risk. Concentration risk refers to the decrease in
fee revenue if the marginal exchange rate (akin to the midprice in a limit
order book) in the pool exits the LP's range of liquidity. When the drift in
the marginal rate is stochastic, we show how to optimally skew the range of
liquidity to increase fee revenue and profit from the expected changes in the
marginal rate. Finally, we use Uniswap v3 data to show that, on average, LPs
have traded at a significant loss, and to show that the out-of-sample
performance of our strategy is superior to the historical performance of LPs in
the pool we consider. | Álvaro Cartea, Fayçal Drissi, Marcello Monga | 2023-09-15T14:32:04Z | http://arxiv.org/abs/2309.08431v3 | # Decentralised Finance and Automated Market Making: Predictable Loss and Optimal Liquidity Provision
###### Abstract
Constant product markets with concentrated liquidity (CL) are the most popular type of automated market makers. In this paper, we characterise the continuous-time wealth dynamics of strategic LPs who dynamically adjust their range of liquidity provision in CL pools. Their wealth results from fee income and the value of their holdings in the pool. Next, we derive a self-financing and closed-form optimal liquidity provision strategy where the width of the LP's liquidity range is determined by the profitability of the pool (provision fees minus gas fees), the predictable losses (PL) of the LP's position, and concentration risk. Concentration risk refers to the decrease in fee revenue if the marginal exchange rate (akin to the midprice in a limit order book) in the pool exits the LP's range of liquidity. When the marginal rate is driven by a stochastic drift, we show how to optimally skew the range of liquidity to increase fee revenue and profit from the expected changes in the marginal rate. Finally, we use Uniswap v3 data to show that, on average, LPs have traded at a significant loss, and to show that the out-of-sample performance of our strategy is superior to the historical performance of LPs in the pool we consider.
keywords: Decentralised finance, automated market making, concentrated liquidity, algorithmic trading, market making, stochastic control, predictable loss, impermanent loss, signals.
## 1 Introduction
Traditional electronic exchanges are organised around limit order books (LOBs) to clear demand and supply of liquidity. In contrast, the takers and providers of liquidity in constant function markets (CFMs) interact in liquidity pools; liquidity providers (LPs) deposit their assets in the liquidity pool and liquidity takers (LTs) exchange assets directly with the pool. At present, constant product markets (CPMs) with concentrated liquidity (CL) are the most popular type of CFM, with Uniswap v3 as a prime example; see Adams et al. (2021). In CPMs with CL, LPs specify the rate intervals (i.e., tick ranges) over which they deposit their assets, and this liquidity is counterparty to trades of LTs when the marginal exchange rate of the pool is within the liquidity range of the LPs. When LPs deposit liquidity, fees paid by LTs accrue and are paid to LPs when they withdraw their assets from the pool. The amount of fees accrued to LPs is proportional to the share of liquidity they hold in the pool.
Existing research characterises the losses of LPs, but does not offer tools for strategic liquidity provision. In this paper, we study strategic liquidity provision in CPMs with CL. We derive the continuous-time dynamics of the wealth of LPs which consists of the position they hold in the pool (position value) and fee income. The width of the range where the assets are deposited affects the value of the LP's position in the pool; specifically, we show that the predictable loss (PL) incurred by LPs increases as the width of the liquidity range decreases. PL measures the unhedgeable losses of LPs stemming from the depreciation of their holdings in the pool and from the opportunity costs from locking their assets in the pool; see Cartea et al. (2023). Also, we show that fee income is subject to a tradeoff between the width of the LP's liquidity range and the volatility of the marginal rate in the pool. More precisely, CL increases fee revenue when the rate is in the range of the LP, but also increases _concentration risk_. Concentration risk refers to the risk that the LP faces when her position is concentrated in narrow ranges; the LP stops collecting fees when the rate exits the range of her position.
We derive an optimal dynamic strategy to provide liquidity in a CPM with CL. In our model, the LP maximises the expected utility of her terminal wealth, which consists of the accumulated trading fees and the gains and losses from the market making strategy. The dynamic strategy controls the width and the skew of liquidity that targets the marginal exchange rate. For the particular case of log-utility, we obtain the strategy in closed-form and show how the solution balances the opposing effects between PL and fee collection. When volatility increases, PL increases, so there is an incentive for the LP to widen the range of liquidity provision to reduce the strategy's exposure to PL. In particular, in the extreme case of very high volatility, the LP must withdraw from the pool because the exposure to PL is too high. Also, when there is an increase in the potential
provision fees that the LP may collect because of higher liquidity taking activity, the strategy balances two opposing forces. One, there is an incentive to increase fee collection by concentrating the liquidity of the LP in a tight range around the exchange rate of the pool. Two, there is a limit to how concentrated the liquidity posted by the LP in the pool can be, because the LP does not collect fees if the exchange rate exits the LP's liquidity range. Finally, when the dynamics of the marginal exchange rate are driven by a stochastic drift (e.g., a predictive signal), the strategy skews the range of liquidity to increase fee revenue by capturing the LT trading flow and to increase the position value by profiting from the expected changes in the marginal rate.
Finally, we use Uniswap v3 data to motivate our model and to test the performance of the strategy we derive. The LP and LT data are from the pool ETH/USDC (Ethereum and USD coin) between the inception of the pool on \(5\) May \(2021\) and \(18\) August \(2022\). To illustrate the performance of the strategy we use in-sample data to estimate model parameters and out-of-sample data to test the strategy. Our analysis of the historical transactions in Uniswap v3 shows that LPs have traded at a significant loss, on average, in the ETH/USDC pool. We show that the out-of-sample performance of our strategy is considerably superior to the average LP performance we observe in the ETH/USDC pool.
Early works on AMMs are in Chiu and Koeppl (2019), Angeris et al. (2021), Lipton and Treccani (2021). Some works in the literature study strategic liquidity provision in CFMs and CPMs with CL. Fan et al. (2023) propose a liquidity provision strategy in pools with CL, Heimbach et al. (2022) discuss the tradeoff between risks and returns that LPs face in Uniswap v3, Cartea et al. (2023) study the predictable losses of LPs in a continuous-time setup, Milionis et al. (2023) study the impact of fees on the profits of arbitrageurs in CFMs, and Fukasawa et al. (2023) study the hedging of the impermanent losses of LPs.
Our work is related to the algorithmic trading and optimal market making literature. Early works on liquidity provision in traditional markets are Ho and Stoll (1983) and Avellaneda and Stoikov (2008), with extensions in many directions; see Cartea et al. (2014, 2017), Gueant (2017), Bergault et al. (2021), Drissi (2022). We refer the reader to Cartea et al. (2015) and Gueant (2016) for a comprehensive review of algorithmic trading models for takers and makers of liquidity in traditional markets. Also, our work is related to those by Cartea et al. (2018), Barger and Lorig (2019), Cartea and Wang (2020), Donnelly and Lorig (2020), Forde et al. (2022), Bergault et al. (2022) who implement market signals in algorithmic trading strategies.
The remainder of the paper proceeds as follows. Section 2 describes CL pools. Section 3 studies the continuous-time dynamics of the wealth of LPs as a result of the position value in Subsection 3.1 and the fee revenue in Subsection 3.2. In particular, we use Uniswap v3 data to
study the fee revenue component of the LP's wealth and our results motivate the assumptions we make in our model. Section 4 introduces our liquidity provision model and uses stochastic control to derive a closed-form optimal strategy. Next, we study how the strategy controls the width and the skew of the liquidity range as a function of the pool's profitability, PL, concentration risk, and the drift in the marginal rate. Finally, Section 5 uses Uniswap v3 data to test the performance of the strategy and showcases its superior performance.
## 2 Concentrated liquidity
Consider a reference asset \(X\) and a risky asset \(Y\) which is valued in units of \(X.\) Assume there is a pool that makes liquidity for the pair of assets \(X\) and \(Y\), and denote by \(Z\) the marginal exchange rate of asset \(Y\) in units of asset \(X\) in the pool. In a traditional CPM such as Uniswap v2, the trading function, which links the state of the pool before and after a trade is executed, is \(f\left(q^{X},q^{Y}\right)=q^{X}\times q^{Y}=\kappa^{2}\) where \(q^{X}\) and \(q^{Y}\) are the quantities of asset \(X\) and \(Y\) that constitute the _reserves_ in the pool, and \(\kappa\) is the depth of the pool. The marginal exchange rate is \(Z=q^{X}/q^{Y},\) and the execution rate for a quantity \(y\) is \(\tilde{Z}\left(y\right)=Z-Z^{3/2}\,y/\kappa.\) In traditional CPMs, liquidity provision operations do not impact the marginal rate of the pool, so when an LP deposits the quantities \(x\) and \(y\) of assets \(X\) and \(Y\), the condition \(q^{X}/q^{Y}=(q^{X}+x)/(q^{Y}+y)\) must be satisfied; see Cartea et al. (2022).
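As a quick numerical illustration of these formulae, the following sketch computes the marginal rate and the execution rate of a constant product pool; the reserve and trade sizes in the comment are arbitrary illustrative values.

```python
def marginal_rate(q_x: float, q_y: float) -> float:
    """Marginal exchange rate Z = q_X / q_Y of a constant product pool."""
    return q_x / q_y

def execution_rate(q_x: float, q_y: float, y: float) -> float:
    """Execution rate Z_tilde(y) = Z - Z^{3/2} y / kappa for a trade of size y in asset Y."""
    z = marginal_rate(q_x, q_y)
    kappa = (q_x * q_y) ** 0.5      # pool depth, from q_X * q_Y = kappa^2
    return z - z ** 1.5 * y / kappa

# Arbitrary reserves: 2,000,000 units of X and 1,000 units of Y, so Z = 2000;
# a 10-unit trade in Y executes at roughly 1980, i.e., worse than the marginal rate.
# print(marginal_rate(2e6, 1e3), execution_rate(2e6, 1e3, 10.0))
```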
Figure 1 illustrates the geometry of a CPM. The level function \(\varphi(q^{Y})=\kappa^{2}/q^{Y}\) indicates the various combinations of quantity \(q^{X}\) and quantity \(q^{Y}\) that lead to the same pool depth. Assume the reserves in the pool are the coordinates of point \(B\) in Figure 1. The marginal rate \(Z\) is the absolute value of the slope of the tangent \(q^{X}/q^{Y}=\kappa^{2}/\left(q^{Y}\right)^{2}\) at point \(B\); equivalently, \(Z\) is the slope of the ray \(0B.\) When LTs trade, the reserves in the pool move along the level curve (e.g., from \(B\) to \(C\) or from \(B\) to \(A\)), and when LPs provide liquidity, the level curve moves up along the ray \((0B)\).
In CPMs with CL, LPs specify a range of rates \((Z^{\ell},Z^{u}]\) in which their assets can be counterparty to liquidity taking trades. Here, \(Z^{\ell}\) and \(Z^{u}\) take values in a finite set \(\{Z^{1},\ldots,Z^{N}\}\), the elements of the set are called ticks, and the range \((Z^{i},Z^{i+1}]\) between two consecutive ticks is a _tick range_ which represents the smallest possible liquidity range; see Drissi (2023) for a description of the mechanics of CL.1
Footnote 1: In LOBs, a tick is the smallest price increment.
The assets that the LP deposits in a range \((Z^{\ell},Z^{u}]\) provide the liquidity that supports marginal rate movements between \(Z^{\ell}\) and \(Z^{u}\). The quantities \(x\) and \(y\) that the LP provides verify the key
formulae
\[\begin{cases}x=0&\text{and}\quad y=\tilde{\kappa}\left(\left(Z^{\ell}\right)^{-1/2 }-\left(Z^{u}\right)^{-1/2}\right)&\text{if}\;\;Z\leq Z^{\ell},\\ x=\tilde{\kappa}\left(Z^{1/2}-\left(Z^{\ell}\right)^{1/2}\right)&\text{and} \quad y=\tilde{\kappa}\left(Z^{-1/2}-\left(Z^{u}\right)^{-1/2}\right)&\text{ if}\;\;Z^{\ell}<Z\leq Z^{u},\\ x=\tilde{\kappa}\left(\left(Z^{u}\right)^{1/2}-\left(Z^{\ell}\right)^{1/2} \right)&\text{and}\quad y=0&\text{if}\;\;Z>Z^{u}\,,\end{cases} \tag{1}\]
where \(\tilde{\kappa}\) is the depth of the LP's liquidity in the pool. The depth \(\tilde{\kappa}\) remains constant unless the LP provides additional liquidity or withdraws her liquidity. When the rate \(Z\) changes, the equations in (1) and the prevailing marginal rate \(Z\) determine the holdings of the LP in the pool, in particular, they determine the quantities of each asset received by the LP when she withdraws her liquidity.
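The three cases in (1) translate directly into code; the sketch below returns the holdings \((x,y)\) of a CL position given its depth, its range, and the prevailing marginal rate, with the numerical inputs in the comment being arbitrary illustrative values.

```python
def cl_holdings(kappa_tilde: float, z: float, z_l: float, z_u: float):
    """Holdings (x, y) of a CL position with depth kappa_tilde over (z_l, z_u],
    as a function of the marginal rate z; a direct transcription of (1)."""
    if z <= z_l:
        x = 0.0
        y = kappa_tilde * (z_l ** -0.5 - z_u ** -0.5)
    elif z <= z_u:
        x = kappa_tilde * (z ** 0.5 - z_l ** 0.5)
        y = kappa_tilde * (z ** -0.5 - z_u ** -0.5)
    else:
        x = kappa_tilde * (z_u ** 0.5 - z_l ** 0.5)
        y = 0.0
    return x, y

# Position value marked-to-market in units of X: alpha = x + y * z.
# x, y = cl_holdings(kappa_tilde=100.0, z=2000.0, z_l=1900.0, z_u=2100.0)
```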
Within each tick range, the constant product formula determines the dynamics of the marginal rate, where the depth \(\kappa\) is the total depth of liquidity in that tick range. To obtain the total depth in a tick range, one sums the depths of the individual liquidity positions in the same tick range. When a liquidity taking trade is large, so the marginal rate crosses the boundary of a tick range, the pool executes two separate trades with potentially different depths for the constant product formula. Figure 2 depicts the geometry of CPMMs with CL for two adjacent tick ranges, each with a different value of the depth \(\kappa\). In CPMMs without CL, the level function is a hyperbola, and in CPMMs with CL, it is a series of adjacent segments of hyperbolas, each corresponding to a given value of the depth.
If an LP's liquidity position with depth \(\tilde{\kappa}\) is in a tick range where the total depth of liquidity is
Figure 1: Geometry of CPMs: level function \(x=\varphi\left(y\right)=\kappa^{2}/y\) for two values of the pool depth \(\kappa\).
\(\kappa\), then for every liquidity taking trade that pays an amount \(p\) of fees, the LP earns the amount
\[\tilde{p}=\frac{\tilde{\kappa}}{\kappa}\,p\,\mathbbm{1}_{Z^{\ell}<Z\leq Z^{u}}\,. \tag{2}\]
Thus, the larger the position depth \(\tilde{\kappa}\), the higher is the proportion of fees that the LP earns; e.g., if the LP is the only provider of liquidity in the range \((Z^{\ell},Z^{u}]\) then \(\kappa=\tilde{\kappa}\), so the LP collects all the fees in that range. The equations in (1) imply that for equal wealth, narrow liquidity ranges increase the value of \(\tilde{\kappa}\). However, LPs that maximise fee revenue in a tick range face concentration risk.
To illustrate concentration risk, consider the following example. Two LPs have the same initial wealth \(\tilde{x}\). One LP provides liquidity over the one-tick range \((Z^{i},Z^{i+1}]\) with depth \(\kappa_{1}\), and the other LP provides liquidity with depth \(\kappa_{2}\) over a range \((Z^{i},Z^{i+2}]\) that consists of two tick ranges. The equations in (1) show that the depth of the position of the LP who concentrates her liquidity over one tick range is approximately twice the depth of the liquidity held by the second LP; see Cartea et al. (2022, 2023) for more details. However, when the rate \(Z\) is in the tick range \((Z^{i+1},Z^{i+2}]\), the first LP's liquidity is inactive and does not earn fees, and only the second LP's liquidity facilitates trades. Figure 3 shows the position depth of an LP who uses the same wealth to provide liquidity over ranges of \(1\), \(3\), and \(5\) ticks.
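The example above can be reproduced numerically by inverting the in-range case of (1) to obtain the depth \(\tilde{\kappa}\) that a given wealth buys over a given range; in the sketch below the rate level and tick widths are chosen purely for illustration and do not correspond to the actual tick grid of the pool.

```python
def position_depth(wealth_x: float, z: float, z_l: float, z_u: float) -> float:
    """Depth kappa_tilde of a CL position funded with `wealth_x` units of X at rate z,
    obtained by inverting the in-range case of (1):
    wealth = kappa_tilde * (2*sqrt(z) - sqrt(z_l) - z / sqrt(z_u))."""
    assert z_l < z <= z_u
    return wealth_x / (2.0 * z ** 0.5 - z_l ** 0.5 - z / z_u ** 0.5)

# Same wealth deployed over a one-tick and a two-tick range around z = 2000
# (tick width chosen for illustration only):
# narrow = position_depth(1e4, 2000.0, 1995.0, 2005.0)
# wide   = position_depth(1e4, 2000.0, 1995.0, 2015.0)
# narrow / wide is approximately 2, as in the example above.
```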
Figure 2: Geometry of CPMMs with CL: two adjacent tick ranges \((Z^{B},Z^{C}]\) and \((Z^{A},Z^{B}]\) with different liquidity depth.
## 3 The wealth of liquidity providers in CL pools
In this section, we consider a strategic LP who dynamically tracks the marginal rate \(Z\). In our model, the LP's position is self-financed, so she neither deposits nor withdraws additional assets throughout an investment window \([0,T].\) Throughout the investment window, the LP repeatedly withdraws her liquidity and collects the accumulated fees, then uses her wealth, i.e., the collected fees and the assets she withdraws, to deposit liquidity in a new range. In the remainder of this work, we work in a filtered probability space \(\big{(}\Omega,\mathcal{F},\mathbb{P};\mathbb{F}=(\mathcal{F}_{t})_{t\in[0,T]} \big{)}\) that satisfies the usual conditions, where \(\mathbb{F}\) is the natural filtration generated by the collection of observable stochastic processes that we define below.
We assume that the marginal exchange rate in the pool \(\left(Z_{t}\right)_{t\in[0,T]}\) is driven by a stochastic drift \(\left(\mu_{t}\right)_{t\in[0,T]}\) and we write
\[\mathrm{d}Z_{t}=\mu_{t}\,Z_{t}\,\mathrm{d}t+\sigma\,Z_{t}\,\mathrm{d}W_{t}\,, \tag{3}\]
where the volatility parameter \(\sigma\) is a nonnegative constant and \((W_{t})_{t\in[0,T]}\) is a standard Brownian motion independent of \(\mu\). We assume that \(\mu\) is cadlag with finite second moment, i.e., \(\mathbb{E}\left[\mu_{s}^{2}\right]<\infty\) for \(t\leq s\leq T\).
Consider an LP with initial wealth \(\tilde{x}_{0},\) in units of \(X,\) and an investment horizon \([0,T],\) with \(T>0.\) At time \(t=0\,,\) she deposits quantities \((x_{0},y_{0})\) in the range \(\left(Z^{\ell},Z^{u}\right]\) so the initial depth of her position is \(\tilde{\kappa}_{0},\) and the value of her initial position, marked-to-market in units of \(X\), is
Figure 3: Position depth for three LP ranges. The first is concentrated over a range of one tick, the second over a range of three ticks, and the last over a range of five ticks.
\(\tilde{x}_{0}=x_{0}+y_{0}\,Z_{0}\). The dynamics of the LP's wealth consist of the position value and fee revenue. We introduce the wealth process \((\tilde{x}_{t}=\alpha_{t}+p_{t})_{t\in[0,T]}\), which we mark-to-market in units of the reference asset \(X\), with \(\tilde{x}_{0}>0\) known and recall that \(\left(\alpha_{t}\right)_{t\in[0,T]}\) is the value of the LP's holdings in the pool and \(\left(p_{t}\right)_{t\in[0,T]}\) is the fee revenue. At any time \(t,\) the LP uses her wealth \(\tilde{x}_{t}\) to provide liquidity. Next, Subsection 3.1 studies the dynamics of the LP's position in the pool and Subsection 3.2 studies the dynamics of the LP's fee revenue.
### Position value
In this section, we focus our analysis on the _position value_\(\alpha\). Throughout the investment window \([0,T]\), the holdings \((x_{t},y_{t})_{t\in[0,T]}\) of the LP change because the marginal rate \(Z\) changes and because she continuously adjusts her liquidity range around \(Z\). More precisely, to make markets optimally, the LP controls the values of \(\left(\delta_{t}^{\ell}\right)_{t\in[0,T]}\) and \(\left(\delta_{t}^{u}\right)_{t\in[0,T]}\) which determine the dynamic liquidity provision boundaries \(\left(Z_{t}^{\ell}\right)_{t\in[0,T]}\) and \(\left(Z_{t}^{u}\right)_{t\in[0,T]}\) as follows:
\[\begin{cases}\left(Z_{t}^{u}\right)^{1/2}=&Z_{t}^{1/2}/\left(1-\delta_{t}^{u} /2\right),\\ \left(Z_{t}^{\ell}\right)^{1/2}=&Z_{t}^{1/2}\left(1-\delta_{t}^{\ell}/2\right),\end{cases} \tag{4}\]
where \(\delta^{\ell}\in(-\infty,2]\), \(\delta^{u}\in[-\infty,2)\), and \(\delta^{\ell}\,\delta^{u}/2<\delta^{\ell}+\delta^{u}\) because \(0\leq Z^{\ell}<Z^{u}<\infty\). Below, our liquidity provision model requires that \(Z_{t}\in(Z_{t}^{\ell},Z_{t}^{u}]\) so \(\delta^{\ell}\in(0,2]\), \(\delta^{u}\in[0,2)\), and \(\delta^{\ell}\,\delta^{u}/2<\delta^{\ell}+\delta^{u}\).
Recall that in practice, \(Z^{\ell}\) and \(Z^{u}\) take values in a finite set of ticks, so \(\delta^{\ell}\) and \(\delta^{u}\) also take values in a finite set. In the liquidity provision problem of Section 4, we use stochastic control techniques to derive an optimal strategy where the controls \(\delta^{\ell}\) and \(\delta^{u}\) are continuous, so we round the values of \(Z^{\ell}\) and \(Z^{u}\) to the nearest ticks in the performance analysis of Section 5.
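For completeness, the sketch below maps a pair \((\delta^{\ell},\delta^{u})\) to the boundaries \((Z^{\ell},Z^{u}]\) through (4) and snaps a rate to a geometric tick grid; the base \(1.0001\) is the Uniswap v3 convention, while ignoring the pool-specific tick spacing is a simplifying assumption.

```python
import math

def liquidity_range(z: float, delta_l: float, delta_u: float):
    """Boundaries (Z_l, Z_u] implied by (4) for spread parameters (delta_l, delta_u)."""
    assert 0.0 < delta_l <= 2.0 and 0.0 <= delta_u < 2.0
    z_u = z / (1.0 - delta_u / 2.0) ** 2
    z_l = z * (1.0 - delta_l / 2.0) ** 2
    return z_l, z_u

def round_to_tick(rate: float, tick_base: float = 1.0001) -> float:
    """Snap a rate to the nearest tick of the geometric grid tick_base**i
    (ignoring the pool-specific tick spacing, which is a simplification)."""
    i = round(math.log(rate) / math.log(tick_base))
    return tick_base ** i

# Example: a roughly 2% symmetric spread around z = 2000
# z_l, z_u = liquidity_range(2000.0, 0.01, 0.01)   # approximately (1980.05, 2020.15]
```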
In the remainder of this paper we define the _spread_\(\delta_{t}\) of the LP's position as
\[\delta_{t}=\delta_{t}^{u}+\delta_{t}^{\ell}\,, \tag{5}\]
and for small position spreads, we use the approximation
\[\left(Z_{t}^{u}-Z_{t}^{\ell}\right)\Big{/}Z_{t}=\left(1-\delta_{t}^{u}/2 \right)^{-2}-\left(1-\delta_{t}^{\ell}/2\right)^{2}\approx\delta_{t}.\]
We assume that the marginal rate process \(\left(Z_{t}\right)_{t\in[0,T]}\) follows the dynamics (3). Cartea et al. (2023) show that the holdings in assets \(X\) and \(Y\) in the pool for an LP who follows an arbitrary
strategy \(\left(Z_{t}^{\ell},Z_{t}^{u}\right)\) are given by
\[x_{t}=\frac{\delta_{t}^{\ell}}{\delta_{t}^{\ell}+\delta_{t}^{u}}\,\alpha_{t}\quad \text{and}\quad y_{t}=\frac{\delta_{t}^{u}}{Z_{t}\left(\delta_{t}^{\ell}+\delta _{t}^{u}\right)}\,\alpha_{t}\,, \tag{6}\]
so the value \(\left(\alpha_{t}\right)_{t\in[0,T]}\) of her position follows the dynamics
\[\mathrm{d}\alpha_{t} =\tilde{x}_{t}\,\left(\frac{1}{\delta_{t}^{\ell}+\delta_{t}^{u}} \right)\left(-\frac{\sigma^{2}}{2}\,\mathrm{d}t+\mu_{t}\,\delta_{t}^{u}\, \mathrm{d}t+\sigma\,\delta_{t}^{u}\,\mathrm{d}W_{t}\right)\] \[=\mathrm{d}\text{PL}_{t}+\tilde{x}_{t}\,\left(\frac{1}{\delta_{t} ^{\ell}+\delta_{t}^{u}}\right)\left(\mu_{t}\,\delta_{t}^{u}\,\mathrm{d}t+ \sigma\,\delta_{t}^{u}\,\mathrm{d}W_{t}\right)\,, \tag{7}\]
where the predictable and negative component \(\text{PL}_{t}=-\frac{\sigma^{2}}{2}\,\int_{0}^{t}\frac{\tilde{x}_{s}}{\delta _{s}}\,\mathrm{d}s\) is the PL of the LP's position which scales with the volatility of the marginal rate. The dynamics in (7) also show that a larger position spread \(\delta\) reduces PL and the overall risk of the LP's position in the pool; see Cartea et al. (2023).
For a fixed value of the spread \(\delta_{t}=\delta_{t}^{\ell}+\delta_{t}^{u}\), the dynamics in (7) show that if \(\mu_{t}\geq 0\), then the LP increases her expected wealth by increasing the value \(\delta^{u}\), i.e., by skewing her range of liquidity to the right. However, note that the quadratic variation of the LP's position value is \(\mathrm{d}\langle\alpha,\alpha\rangle_{t}=\tilde{x}_{t}^{2}\,\sigma^{2}\left( \frac{\delta_{t}^{u}}{\delta_{t}}\right)^{2}\,\mathrm{d}t\,,\) so skewing the range to the right also increases the variance of the LP's position. On the other hand, if \(\mu\leq 0\), then the LP reduces her losses by decreasing the value \(\delta^{u}\) or equivalently increasing the value of \(\delta^{\ell}\), i.e., by skewing her range of liquidity to the left. Thus, the LP uses the expected changes in the marginal rate to skew the range of liquidity and to increase her terminal wealth.
### Fee income
In this section, we first show that fee income is subject to a tradeoff between the spread \(\delta\) of the LP's position and PL, which scales with the volatility of \(Z.\) Next, we show how the LP uses the expected changes in the marginal rate given by the stochastic drift \(\mu\) to skew her liquidity around \(Z.\) Finally, we propose dynamics for the LP's fee income that we use in our model.
#### 3.2.1 Fee income: spread, concentration risk, and pool fee rate
In the three cases of (1), increasing the spread of the LP's position reduces the depth \(\tilde{\kappa}\) of the LP's position in the pool. Recall that the LP fee income is proportional to \(\tilde{\kappa}/\kappa,\) where \(\kappa\) is the pool depth. Thus, decreasing the value of \(\tilde{\kappa}\) potentially reduces LP fee income because the LP holdings
represent a smaller portion of the pool depth around the rate \(Z\). Figure 4 shows the value of \(\tilde{\kappa}\) as a function of the spread \(\delta\).
However, although narrow ranges increase the potential fee income, they also increase concentration risk; a wide spread (i.e., a lower value of the depth \(\tilde{\kappa}\)) decreases fee income per LT transaction but reaps earnings from a larger number of LT transactions because the position is active for longer periods of time (i.e., it takes longer, on average, for \(Z\) to exit the LP's range). Thus, the LP must strike a balance between maximising the depth \(\tilde{\kappa}\) around the rate and minimising the concentration risk, which depends on the volatility of the rate \(Z\).
The dynamics of the fee income in our model of Section 4 uses a fixed depth \(\kappa\) and assumes that the pool generates fee income for all LPs at an instantaneous _pool fee rate_\(\pi\); clearly, these fees are paid by LTs who interact with the pool. The value of \(\pi\) represents the instantaneous profitability of the pool, akin to the size of market orders and their arrival rate in LOBs.
To analyse the dynamics of the pool fee rate \(\pi,\) we use historical LT transactions in Uniswap v3 as a measure of activity and to estimate the total fee income generated by the pool; Appendix A describes the data and Table A.4 provides descriptive statistics. Figure 5 shows the estimated fee rate \(\pi\) in the ETH/USDC pool. For any time \(t,\) we use
\[\pi_{t}=0.05\%\,\frac{V_{t}}{2\,\kappa\,Z_{t}^{1/2}}\,,\]
where \(V_{t}\) is the volume of LT transactions the day before \(t,\)\(2\,\kappa\,Z_{t}^{1/2}\) is the pool value in terms of asset \(X\) at time \(t,\) and \(0.05\%\) is the fixed fee of the pool.2 Figure 5 suggests that the pool fee
Figure 4: Value of the depth \(\tilde{\kappa}\) of the LP’s position in the pool as a function of the spread \(\delta.\) The spread is in percentage of the marginal exchange rate; recall that \(\left(Z_{t}^{u}-Z_{t}^{\ell}\right)/Z_{t}\approx\delta_{t}.\)
rate \(\pi\) generated by liquidity taking activity in the pool is stochastic and mean reverting. Here, we assume that \(\pi\) is independent of the rate \(Z\) over the time scales we consider; see Table 1.
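A sketch of this estimation is given below; it assumes a pandas DataFrame of LT trades with placeholder column names ('volume_x', 'rate') and a DatetimeIndex, aggregates volume by calendar day as a proxy for the trailing one-day volume \(V_{t}\), and applies the \(0.05\%\) pool fee.

```python
import pandas as pd

FEE_TIER = 0.0005  # the 0.05% fee of the ETH/USDC pool

def pool_fee_rate(trades: pd.DataFrame, kappa: float) -> pd.Series:
    """Daily pool fee rate pi_t = 0.05% * V_t / (2 * kappa * Z_t**0.5).

    `trades` is assumed to have a DatetimeIndex and the placeholder columns
    'volume_x' (LT volume in units of X) and 'rate' (marginal rate Z); calendar-day
    aggregation is used here as a proxy for the trailing one-day volume V_t.
    """
    daily_volume = trades["volume_x"].resample("1D").sum()
    z_end_of_day = trades["rate"].resample("1D").last()
    pool_value = 2.0 * kappa * z_end_of_day ** 0.5
    return FEE_TIER * daily_volume / pool_value
```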
In our model, the LP continuously adjusts her position around the current rate \(Z,\) so we write the continuous-time dynamics of (2), conditional on the rate not exiting the LP's range, as
\[\mathrm{d}p_{t}=\underbrace{\left(\tilde{\kappa}_{t}\,/\,\kappa\right)}_{ \text{Position depth}}\underbrace{\pi_{t}}_{\text{Fee rate}}\underbrace{2\,\kappa\,Z_{t}^{1/2}}_{\text{Pool size}}\,\mathrm{d}t\,,\]
where \((\tilde{\kappa}_{t})_{t\in[0,T]}\) models the depth of the LP's position and \(p\) is the LP's fee income for providing liquidity with depth \(\tilde{\kappa}\) in the pool. The fee income is proportional to the pool size, i.e., proportional to \(2\,\kappa\,Z_{t}^{1/2}.\) Next, use the second equation in (1) and equations (6)-(4) to write the dynamics of the LP's position depth \(\tilde{\kappa}_{t}\) as
\[\tilde{\kappa}_{t}=2\,\tilde{x}_{t}\,\left(\frac{1}{\delta_{t}^{\ell}+\delta_ {t}^{u}}\right)\,Z_{t}^{-1/2}\,,\]
\begin{table}
\begin{tabular}{l c c c c} \hline & \(\Delta t=1\) minute & \(\Delta t=5\) minutes & \(\Delta t=1\) hour & \(\Delta t=1\) day \\ \hline Correlation & \(-2.1\%\) & \(-2.4\%\) & \(-2.6\%\) & \(-10.9\%\) \\ \hline \end{tabular}
\end{table}
Table 1: Correlation of the returns of the rate \(Z\) and the fee rate \(\pi,\) i.e., \(\left(Z_{t+\Delta t}-Z_{t}\right)/Z_{t}\) and \(\left(\pi_{t+\Delta t}-\pi_{t}\right)/\pi_{t},\) for \(\Delta t=1\) minute, five minutes, one hour, and one day, using data of the ETH/USDC pool between 5 May 2021 and 18 August 2022.
Figure 5: Estimated pool fee rate from February to August 2022 in the ETH/USDC pool. For any time \(t,\) the pool fee rate is the total fee income, as a percentage of the total pool size, paid by LTs on the period \([t-1\,\text{day},t].\) The pool size at time \(t\) is \(2\,\kappa\,Z_{t}^{1/2}\) in terms of asset \(X\), where \(Z_{t}\) is the active rate in the pool at time \(t.\)
so the fee income dynamics become
\[\mathrm{d}p_{t}=\left(\frac{4}{\delta_{t}^{\ell}+\delta_{t}^{u}}\right)\,\pi_{t} \,\tilde{x}_{t}\,\mathrm{d}t\,. \tag{8}\]
In practice, the LP chooses how often to reposition her liquidity. Thus, the LP faces the risk that the rate exits the spread around \(Z\) in between the times the LP repositions her liquidity. Clearly, the continuous-time dynamics in (8) do not take into account concentration risk. In practice, narrow spreads generate less fee income because the rate \(Z\) may exit the range of the LP's liquidity, especially in volatile markets. Thus, we introduce a concentration cost to reduce the fees collected by the LP. The concentration costs increase (decrease) when the spread narrows (widens), so we modify the dynamics of the fees collected by the LP in (8) to account for the concentration cost and write
\[\mathrm{d}p_{t}=\left(\frac{4}{\delta_{t}^{\ell}+\delta_{t}^{u}}\right)\,\pi_{ t}\,\tilde{x}_{t}\,\mathrm{d}t\,-\gamma\,\left(\frac{1}{\delta_{t}^{\ell}+ \delta_{t}^{u}}\right)^{2}\,\tilde{x}_{t}\,\mathrm{d}t\,, \tag{9}\]
where \(\gamma>0\) is the concentration cost parameter and \(\tilde{x}_{t}\) is the wealth invested by the LP in the pool at time \(t\).
Figure 6 compares the fee income from (8) and from (9), which includes the concentration cost, as a function of the spread of the LP's position. We simulate a driftless rate \(Z\) with \(\sigma=1\%\) and \(\sigma=3\%\) and assume that the LP adjusts her liquidity with a frequency of \(\Delta t=1\) minute. The fee income corresponding to every level of the spread is normalised by the maximum ex-post fee income (the normalised maximum fee revenue is \(1\)). The figure shows that the concentration cost term in (9) captures the loss in fee revenue because of concentration risk. In addition, LPs should adapt the concentration cost parameter \(\gamma\) to the volatility of the rate \(Z\). The LP can also choose the value of \(\gamma\) based on her beliefs about the future realised volatility.
In the remainder of this work, we assume that the pool fee rate process \(\left(\pi_{t}\right)_{t\in[0,T]}\) follows the Cox-Ingersoll-Ross-type dynamics
\[\mathrm{d}(\pi_{t}-\eta_{t})=\Gamma\,\left(\overline{\pi}+\eta_{t}-\pi_{t} \right)\mathrm{d}t+\psi\,\sqrt{\pi_{t}-\eta_{t}}\,\mathrm{d}B_{t}\,. \tag{10}\]
where \(\left(\eta_{t}\right)_{t\in[0,T]}\) is a predictable process that we define below, \(\Gamma>0\) denotes the mean reversion speed, \(\overline{\pi}>0\) is the long-term mean of \(\left(\pi_{t}-\eta_{t}\right)_{t\in[0,T]}\), \(\psi>0\) is a non-negative volatility parameter, \(\left(B_{t}\right)_{t\in[0,T]}\) is a Brownian motion independent of \(\left(W_{t}\right)_{t\in[0,T]},\) and \(\pi_{0}-\eta_{0}>0\) is known. In our
model, we set
\[\eta_{t}=\frac{\sigma^{2}}{8}-\frac{\mu_{t}}{4}\left(\mu_{t}-\frac{\sigma^{2}}{2} \right)+\frac{\varepsilon}{4}\,. \tag{11}\]
From (10) it follows that
\[\pi_{t}-\eta_{t}\geq 0\implies 4\,\pi_{t}-\frac{\sigma^{2}}{2}+\mu_{t}\left( \mu_{t}-\frac{\sigma^{2}}{2}\right)\geq\varepsilon>0\,,\quad\forall t\in[0,T]\,, \tag{12}\]
which is a profitability condition that ensures that the spread \(\delta\) of the optimal strategy derived below in Section 4 is well defined. Financially, the inequality in (12) guarantees that fee income is greater than the PL faced by the LP, adjusted by the drift in the marginal rate. The profitability condition (12) is further discussed in Section 4.3.
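For simulation purposes, the CIR-type dynamics (10) can be discretised with a full-truncation Euler scheme as in the sketch below; the path of \(\eta\) from (11) is assumed to be pre-computed and all parameter values are user-supplied rather than calibrated here.

```python
import numpy as np

def simulate_fee_rate(pi0, eta_path, mean_rev, pi_bar, psi, dt, seed=0):
    """Full-truncation Euler scheme for the CIR-type dynamics (10) of u_t = pi_t - eta_t:
    du = mean_rev * (pi_bar - u) dt + psi * sqrt(u) dB.

    `eta_path` is a pre-computed numpy array of eta_t from (11); mean_rev, pi_bar,
    and psi play the roles of Gamma, pi_bar, and psi in (10) and are user-supplied.
    """
    rng = np.random.default_rng(seed)
    n_steps = len(eta_path) - 1
    u = np.empty(n_steps + 1)
    u[0] = pi0 - eta_path[0]
    for k in range(n_steps):
        u_pos = max(u[k], 0.0)   # full truncation keeps the square root real
        u[k + 1] = u[k] + mean_rev * (pi_bar - u_pos) * dt \
                   + psi * np.sqrt(u_pos * dt) * rng.standard_normal()
    return u + eta_path          # recover the pool fee rate pi_t
```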
#### 3.2.2 Fee income: drift and asymmetry
The stochastic drift \(\mu\) indicates the future expected changes of the marginal exchange rate in the pool. In practice, the LP may use a predictive signal so \(\mu\) represents the belief that the LP holds over the future marginal exchange rate in the pool. For an LP who maximises fee revenue, it is natural to consider asymmetric liquidity positions that capture the liquidity taking flow. We define the _asymmetry_ of a position as
\[\rho_{t}=\delta_{t}^{u}/\left(\delta_{t}^{u}+\delta_{t}^{\ell}\right)=\delta_ {t}^{u}/\delta_{t}\,, \tag{13}\]
Figure 6: Fee income based on the dynamics in (8), without concentration cost (i.e., \(\gamma=0\)), and in (9), with concentration cost \(\gamma>0\). We use simulations of the rate \(Z\) in (3) and assume \(\pi\) follows the mean-reverting process \(\mathrm{d}\pi_{t}=a\left(b-\pi_{t}\right)\mathrm{d}t+c\,\mathrm{d}B_{t}\), where \(B_{t}\) is a standard Brownian motion independent of \(W\). For the rate \(Z,\) we use \(\mu=0,\) and \(\sigma=1\,\%\) and \(3\,\%\). The parameters of the pool fee rate \(\pi,\) are obtained with maximum likelihood estimation from the fee rate of the ETH/USDC pool in Figure 5. We obtain \(a=2.71,\)\(b=0.22\%,\) and \(c=0.18\). For every spread value, we use (2) to obtain individual fee incomes and use (8) to obtain the expected terminal fee income.
where \(\delta_{t}^{u}\) and \(\delta_{t}^{\ell}\) are defined in (4). In one extreme, when the asymmetry \(\rho\to 0,\) then \(Z^{u}\to Z\) and the position consists of only asset \(X,\) and in the other extreme, when \(\rho\to 1,\) then \(Z^{\ell}\to Z\) and the position consists of only asset \(Y.\)
In this section, we use Uniswap v3 data to study how the asymmetry and the width of the LP's range of liquidity relate to fee revenue. First, we estimate the realised drift \(\mu\) in the pool ETH/USDC over a rolling window of \(T=5\) minutes.3 Next, for any time \(t\), the fee income for different positions of the LP's liquidity range is computed for various values of the spread \(\delta\) and for various values of the asymmetry \(\rho.\) For each value of the realised drift \(\mu\) during the investment horizon, and for each fixed value of the spread \(\delta,\) we record the asymmetry that maximises fee income. Figure 7 shows the optimal (on average) asymmetry \(\rho\) as a function of the spread \(\delta\) of the position for multiple values of the realised drift \(\mu.\)
Footnote 3: The values of the drift in this section are normalised to reflect daily estimates. In particular, we use \(\mu=\tilde{\mu}\,/\,\Delta t\) where \(\tilde{\mu}\) is the average of the observed log returns and \(\Delta t\) is the observed average LT trading frequency.
Figure 7 suggests that there exists a preferred asymmetry of the position for a given value of the spread \(\delta\) and a given value of the drift \(\mu.\) First, for all values of the spread \(\delta\), the LP skews her position to the right when the drift is positive (\(\rho^{\star}>0.5\)) and she skews her position to the left when the drift is negative (\(\rho^{\star}<0.5\)). Second, for narrow spreads, the liquidity position requires more asymmetry than for large spreads when the drift is not zero. Clearly, an optimal liquidity provision model should take into account the drift \(\mu\) and the spread of the position to adjust the asymmetry \(\rho.\)
In our liquidity provision model of Section 4, the LP holds a belief over the future exchange rate throughout the investment window and controls the spread \(\delta=\delta^{u}+\delta^{\ell}\) of her position. Thus,
Figure 7: Optimal position asymmetry \(\rho^{\star}\) in (13) as a function of the spread \(\delta\) of the position, for multiple values of the drift \(\mu.\) The asymmetry \(\rho^{\star}\) is the value of \(\rho\) that maximises fee income for observed values of \((\delta,\mu)\,.\)
she strategically chooses the asymmetry of her position as a function of \(\delta\) and \(\mu\). We approximate the relationship exhibited in Figure 7 with the asymmetry function
\[\rho_{t}=\rho\left(\delta_{t},\mu_{t}\right)=\frac{1}{2}+\frac{\mu_{t}}{\delta_{ t}}=\frac{1}{2}+\frac{\mu_{t}}{\delta_{t}^{u}+\delta_{t}^{\ell}}\,,\quad \forall t\in\left[0,T\right]. \tag{14}\]
In the next section, we derive an optimal liquidity provision strategy, and prove that the profitability of liquidity provision is subject to a tradeoff between fee revenue, PL, and concentration risk.
## 4 Optimal liquidity provision in CL pools
### The problem
Consider an LP who wants to provide liquidity in a CPM with CL throughout the investment window \([0,T]\). The LP implements a self-financing strategy that constantly repositions liquidity in a range \((Z_{t}^{\ell},Z_{t}^{u}]\) around the active rate \(Z_{t}\), which requires depositing and withdrawing liquidity continuously in the pool. The two ends of the spread, \((\delta_{t}^{\ell})_{t\in[0,T]}\) and \((\delta_{t}^{u})_{t\in[0,T]}\), are given in (4). The LP marks-to-market her holdings in terms of asset \(X.\) Throughout the trading window, the LP's wealth changes when the value of her holdings in the pool changes. Her positions in the pool accumulate earnings from fees that she collects every time she withdraws her liquidity throughout the trading horizon.
We work on the filtered probability space \(\left(\Omega,\mathcal{F},\mathbb{P};\mathbb{F}=(\mathcal{F}_{t})_{t\in[0,T]}\right)\) where \(\mathcal{F}_{t}\) is the natural filtration generated by the collection \((Z,\mu,\pi)\) and \([0,T]\) is the investment window. From the dynamics in (3) and (10), the LP also observes \(W\) and \(B\), and \(\eta\) is determined by \(\mu\), so the LP observes all the stochastic processes of this problem.
The dynamics of the LP's wealth consist of the fees earned and the position value, i.e., the value of the LP's holdings in the pool. Similar to Section 3.1, we denote the wealth process of the LP by \((\tilde{x}_{t}=\alpha_{t}+p_{t})_{t\in[0,T]},\) with \(\tilde{x}_{0}>0\) known, and recall that \(\left(\alpha_{t}\right)_{t\in[0,T]}\) is the value of the LP's position and \(\left(p_{t}\right)_{t\in[0,T]}\) is the fee revenue. At any time \(t,\) the LP uses her wealth \(\tilde{x}_{t}\) to provide liquidity, so the dynamics of the LP's position value are
\[\mathrm{d}\alpha_{t}=\tilde{x}_{t}\left(\frac{1}{\delta_{t}^{\ell}+\delta_{t} ^{u}}\right)\left(-\frac{\sigma^{2}}{2}\,\mathrm{d}t+\mu_{t}\,\delta_{t}^{u} \,\mathrm{d}t+\sigma\,\delta_{t}^{u}\,\mathrm{d}W_{t}\right)\,. \tag{15}\]
Throughout the investment window, we consider that the depth \(\kappa\) of the pool is constant and that liquidity taking activity in the pool generates fee income that is proportional to the pool size,
i.e., proportional to \(2\,\kappa\,Z_{t}^{1/2}\). The proportion of the pool that is generated as fees, which we denote as the pool fee rate, is given by the process \(\left(\pi_{t}\right)_{t\in[0,T]}\) ; see (9). Thus, the pool generates a quantity \(2\,\kappa\,Z_{t}^{1/2}\,\pi_{t}\,\mathrm{d}t\) in terms of asset \(X\) over an infinitesimal time step, which is distributed among LPs proportionally to the liquidity \(\tilde{\kappa}_{t}\) they hold in the pool. The fee income of the agent evolves as
\[\mathrm{d}p_{t}=\,\left(\frac{4}{\delta_{t}^{\ell}+\delta_{t}^{u}}\right)\, \pi_{t}\,\tilde{x}_{t}\,\mathrm{d}t-\gamma\,\left(\frac{1}{\delta^{\ell}+ \delta^{u}}\right)^{2}\,\tilde{x}_{t}\,\mathrm{d}t\,,\]
and recall that \(\gamma\geq 0\) is the concentration cost parameter; see Subsection 3.2. Thus, the dynamics of the LP's wealth \(\tilde{x}=\alpha+p\) are given by
\[\mathrm{d}\tilde{x}_{t}=\tilde{x}_{t}\,\left(\frac{1}{\delta_{t}^{\ell}+ \delta_{t}^{u}}\right)\,\left[-\frac{\sigma^{2}}{2}\,\mathrm{d}t+\mu_{t}\, \delta_{t}^{u}\,\mathrm{d}t+4\,\pi_{t}\,\mathrm{d}t+\sigma\,\delta_{t}^{u}\, \mathrm{d}W_{t}\right]-\gamma\,\left(\frac{1}{\delta_{t}^{\ell}+\delta_{t}^{u }}\right)^{2}\,\tilde{x}\,\mathrm{d}t\,, \tag{16}\]
where \(\pi\) follows the dynamics in (10).
As discussed in Section 3.2, the LP uses the stochastic drift \(\mu\) to improve trading performance so she adjusts the asymmetry \(\delta_{t}^{u}\,/\,\delta_{t}\) as a function of \(\mu_{t}\) and the spread \(\delta_{t}\) of her position, and we write \(\rho_{t}=\rho\left(\mu_{t},\delta_{t}\right)\) where \(\rho\in\mathcal{C}^{1}\left(\mathbb{R}\times\mathbb{R},\mathbb{R}\right)\) is a deterministic function; see Subsection 3.2. Next, use \(\delta_{t}=\delta_{t}^{u}+\delta_{t}^{\ell}\) and \(\delta_{t}^{u}\,/\,\delta_{t}=\rho\left(\mu_{t},\delta_{t}\right)\) in (16) to write the dynamics of the LP's wealth as
\[\mathrm{d}\tilde{x}_{t}=\frac{1}{\delta_{t}}\,\left(4\,\pi_{t}-\frac{\sigma^ {2}}{2}\right)\,\tilde{x}_{t}\,\mathrm{d}t+\mu_{t}\,\rho\left(\delta_{t},\mu_ {t}\right)\,\tilde{x}_{t}\,\mathrm{d}t+\sigma\,\rho\left(\delta_{t},\mu_{t} \right)\,\tilde{x}_{t}\,\mathrm{d}W_{t}-\frac{\gamma}{\delta_{t}^{2}}\,\tilde{ x}_{t}\,\mathrm{d}t\,.\]
### The optimal strategy
The LP controls the spread \(\delta\) of her position to maximise the expected utility of her terminal wealth in units of \(X\), and the set of admissible strategies is
\[\mathcal{A}_{t}=\left\{(\delta_{s})_{s\in[t,T]},\ \mathbb{R}\text{-valued},\ \mathbb{F}\text{-adapted, and }\int_{t}^{T}|\delta_{s}|^{2}\,\mathrm{d}s<+\infty\ \mathbb{P}\text{-a.s.}\right\}\]
with \(\mathcal{A}:=\mathcal{A}_{0}\).
Let \(\delta\in\mathcal{A}\). The performance criterion of the LP is a function \(u^{\delta}\colon[0,T]\times\mathbb{R}^{4}\to\mathbb{R}\) given by
\[u^{\delta}(t,\tilde{x},z,\pi,\mu)=\mathbb{E}_{t,\tilde{x},z,\pi,\mu}\left[U \left(\tilde{x}_{T}^{\delta}\right)\right]\,,\]
where \(U\) is a concave utility function, and the value function \(u:[0,T]\times\mathbb{R}^{4}\to\mathbb{R}\) of the LP is
\[u(t,\tilde{x},z,\pi,\mu)=\sup_{\delta\in\mathcal{A}}u^{\delta}(t,\tilde{x},z, \pi,\mu)\,. \tag{17}\]
The following results solve the optimal liquidity provision model when the LP assumes a general stochastic drift \(\mu\) and adopts a logarithmic utility.
**Proposition 1**: _Assume the asymmetry function \(\rho\) is as in (14) and that \(U(x)=\log(x)\). Then the function \(w\) given by_
\[w\left(t,\tilde{x},z,\pi,\mu\right)= \log\left(\tilde{x}\right)+\left(\pi-\eta\right)^{2}\int_{t}^{T} \mathbb{E}_{t,\mu}\left[\frac{8}{2\,\gamma+\mu_{s}^{2}\,\sigma^{2}}\right] \exp\left(-2\,\Gamma\left(s-t\right)\right)\,\mathrm{d}s \tag{18}\] \[+\left(\pi-\eta\right)\left(2\,\Gamma\,\overline{\pi}+\psi^{2} \right)\int_{t}^{T}\mathbb{E}_{t,\mu}\left[C\left(s,\mu_{s}\right)\right]\exp \left(-\Gamma\left(s-t\right)\right)\,\mathrm{d}s\] \[-\left(\pi-\eta\right)\int_{t}^{T}\mathbb{E}_{t,\mu}\left[\frac{4 \,\varepsilon}{2\,\gamma+\sigma^{2}\,\mu_{s}^{2}}\right]\exp\left(-\Gamma \left(s-t\right)\right)\,\mathrm{d}s\] \[+\int_{t}^{T}\left(\Gamma\,\overline{\pi}\,\mathbb{E}_{t,\mu} \left[E\left(s,\mu_{s}\right)\right]+\psi^{2}\,\mathbb{E}_{t,\mu}\left[\eta_{s }\,C\left(s,\mu_{s}\right)\right]\right)\mathrm{d}s\] \[-\int_{t}^{T}\left(\mathbb{E}_{t,\mu}\left[\frac{1}{2}\frac{ \varepsilon^{2}}{2\,\gamma+\sigma^{2}\,\mu_{s}^{2}}+\frac{\mu_{s}}{2}\right] \right)\mathrm{d}s-\pi\frac{\sigma^{2}}{8}\left(T-t\right)\,,\]
_where \(\eta_{s}=\frac{\sigma^{2}}{8}-\frac{\mu_{s}}{4}\left(\mu_{s}-\frac{\sigma^{2} }{2}\right)+\frac{\varepsilon}{4}\,,\,\eta=\frac{\sigma^{2}}{8}-\frac{\mu}{4} \left(\mu-\frac{\sigma^{2}}{2}\right)+\frac{\varepsilon}{4}\,,\,\mathbb{E}_{t,\mu}\) represents expectation conditioned on \(\mu_{t}=\mu\) (or \(\eta_{t}=\eta\)), and_
\[C\left(t,\mu\right)=\mathbb{E}_{t,\mu}\left[\,\int_{t}^{T}\frac{8}{2\,\gamma+ \mu_{s}^{2}\,\sigma^{2}}\exp\left(-2\,\Gamma\left(s-t\right)\right)\,\mathrm{d }s\right]\,,\]
_and_
\[E\left(t,\mu\right)=\mathbb{E}_{t,\mu}\left[\,\int_{t}^{T}\left(\left(2\, \Gamma\,\overline{\pi}+\psi^{2}\right)C\left(s,\mu\right)+\frac{4\, \varepsilon}{2\,\gamma+\sigma^{2}\,\mu_{s}^{2}}\right)\exp\left(-\Gamma\left( s-t\right)\right)\,\mathrm{d}s\right]\,,\]
_solves the HJB equation associated with problem (17)._
For a proof, see B.1.
**Theorem 1**: _Assume the asymmetry function \(\rho\) is as in (14) and that \(U(x)=\log(x)\). Then, the solution in Proposition 1 is the unique solution to the optimal control problem (17), and the optimal spread \(\left(\delta_{s}\right)_{s\in[t,T]}\in\mathcal{A}_{t}\) is given by_
\[\delta_{s}^{\star}=\frac{2\,\gamma+\mu_{s}^{2}\,\sigma^{2}}{4\,\pi_{s}-\frac{ \sigma^{2}}{2}+\mu_{s}\left(\mu_{s}-\frac{\sigma^{2}}{2}\right)}=\frac{2\, \gamma+\mu_{s}^{2}\,\sigma^{2}}{4\left(\pi_{s}-\eta_{s}\right)+\varepsilon}\,, \tag{19}\]
_where \(\eta_{s}=\frac{\sigma^{2}}{8}-\frac{\mu_{s}}{4}\left(\mu_{s}-\frac{\sigma^{2}} {2}\right)+\frac{\varepsilon}{4}\,.\)_
For a proof, see B.2.
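As an illustration of the closed form, the short sketch below evaluates the optimal spread in (19); the function name and the parameter values are ours and purely illustrative, not calibrated to the data used later.

```python
# Sketch: evaluate the optimal spread (19); parameter values are illustrative.
def optimal_spread(pi, mu, sigma, gamma, eps):
    """delta* = (2*gamma + mu^2 * sigma^2) / (4*(pi - eta) + eps), cf. (19)."""
    eta = sigma**2 / 8 - (mu / 4) * (mu - sigma**2 / 2) + eps / 4
    denom = 4 * (pi - eta) + eps   # equals 4*pi - sigma^2/2 + mu*(mu - sigma^2/2)
    if denom <= 0:
        raise ValueError("fee income too low: the spread in (19) would explode")
    return (2 * gamma + mu**2 * sigma**2) / denom

# With mu = 0 the expression reduces to 4*gamma / (8*pi - sigma^2), cf. (20) below.
pi, sigma, gamma, eps = 0.02, 0.02, 5e-7, 1e-4
print(optimal_spread(pi, 0.0, sigma, gamma, eps), 4 * gamma / (8 * pi - sigma**2))
```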
### Discussion: profitability, PL, and concentration risk
In this section, we study how the strategy depends on various model parameters (volatility \(\sigma\) of the pool's marginal rate, fees paid to the LP, and terminal date \(T\)) when \(\mu\equiv 0\). When the LP does not assume a stochastic drift, the position is symmetric so \(\rho=1/2\) and \(\delta_{t}^{u}=\delta_{t}^{\ell}=\delta_{t}/2\),4 so the optimal spread (19) becomes
Footnote 4: The position range is approximately symmetric around the position rate \(Z\) because \(\delta_{t}^{u}=\delta_{t}^{\ell}\) does not imply that \(Z-Z^{\ell}=Z^{u}-Z\); see (4). However, for small values of \(\delta^{\ell}\) and \(\delta^{u},\) one can write the approximation \(Z-Z^{\ell}\approx Z^{u}-Z\), in which case the position is symmetric around the rate \(Z\).
\[\delta_{t}^{\ell\,\star}=\delta_{t}^{u\,\star}=\frac{2\,\gamma}{8\,\pi_{t}-\sigma^{2}}\qquad\Longrightarrow\quad\delta_{t}^{\star}=\frac{4\,\gamma}{8\,\pi_{t}-\sigma^{2}} \tag{20}\]
and the inequality in (12) becomes
\[4\,\pi_{t}-\frac{\sigma^{2}}{2}\geq\varepsilon>0\,,\quad\forall t\in[0,T]\,. \tag{21}\]
The inequality in (21) guarantees that the optimal control (20) does not explode, and ensures that fee income is large enough for LP activity to be profitable. In particular, it ensures that \(\pi_{t}\geq\sigma^{2}/8+\varepsilon/4.\) Note that when \(\varepsilon\to 0\), i.e., when \(\pi_{t}\rightarrow\sigma^{2}/8,\) the spread \(\delta\rightarrow+\infty\). However, we require that the spread \(\delta=\delta^{u}+\delta^{\ell}\leq 4.\) Note that \(\delta=4\) when \(\delta^{\ell}=\delta^{u}=2,\) so the conditions \(\delta^{\ell}\leq 2\) and \(\delta^{u}\leq 2\) become
\[\frac{\gamma}{4\,\pi-\frac{\sigma^{2}}{2}}\leq 2\implies\pi-\frac{\gamma}{8} \geq\frac{\sigma^{2}}{8}\,. \tag{22}\]
When \(\delta^{\ell}=\delta^{u}=2,\) the LP provides liquidity in the maximum range \((0\,,+\infty),\) so the depth of her liquidity position \(\tilde{\kappa}\) is minimal, the PL is minimal, and the LP's position is equivalent to providing liquidity in CPMs without CL; see Cartea et al. (2023) for more details. In that case, the dynamics of PL in (7) are
\[\mathrm{dPL}_{t}=-\frac{\sigma^{2}}{8}\,\alpha_{t}\,\mathrm{d}t\,,\]
so \(\sigma^{2}/8\) is the lowest rate at which the LP's assets can depreciate due to PL.
On the other hand, when \(\delta<4\), the depreciation rate of the LP's position value in (7) is higher. In particular, if \(\delta=\delta^{\text{tick}}\), where \(\delta^{\text{tick}}\) is the spread of a liquidity position concentrated within a single tick range, then the depth of the LP's liquidity position \(\tilde{\kappa}\) is maximal and the PL is maximal.
In that case, the dynamics of PL in (7) are
\[\mathrm{dPL}_{t}=-\frac{\sigma^{2}}{2\,\delta^{\text{tick}}}\,\alpha_{t}\,\mathrm{ d}t\,,\]
so \(\sigma^{2}/(2\,\delta^{\text{tick}})\) is the highest rate at which the LP's assets can depreciate due to PL.
LPs should track the profitability of the pools they consider and check whether the expected fee revenue covers PL before depositing their assets in the pool. When \(\mu=0,\) we propose that LPs use \(\sigma^{2}/8\) as a rule-of-thumb threshold for the pool's rate of profitability because \(\sigma^{2}/8\) is the lowest rate of depreciation of their wealth in the pool.
The condition in (22) ensures that the profitability \(\pi-\gamma/8,\) which is the pool fee rate adjusted by the concentration cost, is higher than the depreciation rate of the LP's assets in the pool. Thus, the condition imposes a minimum profitability level of the pool, so LP activity is viable. An optimal control \(\delta^{\star}>4\) indicates non-viable LP activity because fees are not enough to compensate for the PL borne by the LP. Figure 8 shows the estimated pool fee rate and the estimated depreciation rate in the ETH/USDC pool (from January to August 2022). In particular, the CIR model captures the dynamics of \(\pi_{t}-\sigma^{2}/8.\)
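A minimal sketch of this viability check, assuming the \(\mu=0\) case and illustrative parameter values (the helper below is ours, not part of the estimation of Section 5), is:

```python
# Sketch: mu = 0 rule-of-thumb viability check for LP activity (illustrative values).
def lp_viability(pi, sigma, gamma):
    """Check (22): the fee rate net of the concentration cost, pi - gamma/8, must cover
    sigma^2/8 (the lowest PL depreciation rate), and the spread (20) must not exceed 4."""
    depreciation_floor = sigma**2 / 8
    net_fee_rate = pi - gamma / 8
    spread = 4 * gamma / (8 * pi - sigma**2) if 8 * pi - sigma**2 > 0 else float("inf")
    return {"viable": net_fee_rate >= depreciation_floor and spread <= 4,
            "net_fee_rate": net_fee_rate,
            "depreciation_floor": depreciation_floor,
            "optimal_spread": spread}

print(lp_viability(pi=0.02, sigma=0.02, gamma=5e-7))
```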
Next, we study the dependence of the optimal spread on the value of the concentration cost coefficient \(\gamma,\) the fee rate \(\pi,\) and the volatility \(\sigma\). The concentration cost coefficient \(\gamma\) scales the spread linearly in (20). Recall that the cost term penalises small spreads because there is a risk that the rate will exit the LP's range. Thus, large values of \(\gamma\) generate large values of the spread. Figure 9 shows the optimal spread as a function of the pool fee rate \(\pi\). Large potential fee income pushes the strategy towards targeting more closely the marginal rate \(Z\) to profit from fees. Finally,
Figure 8: Estimated pool fee rate from February to August 2022 in the ETH/USDC pool. For any time \(t\), the pool fee rate is the total fee income, as a percentage of the total pool size, paid by LTs in the period \([t-1\text{ day},t].\) The pool size at time \(t\) is \(2\,\kappa\,Z_{t}^{1/2}\) where \(Z_{t}\) is the active rate in the pool at time \(t\).
Figure 10 shows that the optimal spread increases as the volatility of the rate \(Z\) increases.
Finally, the optimal spread does not depend on time or the terminal date \(T.\) Note that the LP marks-to-market her wealth in units of \(X,\) but does not penalise her holdings in asset \(Y\). In particular, the LP's performance criterion does not include a running penalty or a final liquidation penalty (to turn assets into cash or into the reference asset). For example, if at the end of the trading window the holdings in asset \(Y\) must be exchanged for \(X\), then the optimal strategy would skew, throughout the trading horizon, the liquidity range to convert holdings in \(Y\) into \(X\) through LT activity.5
Footnote 5: In LOBs, one usually assumes that final inventory is liquidated with adverse price impact and that there is a running inventory penalty, thus market making strategies in LOBs depend on the terminal date \(T.\)
Figure 10: Optimal LP position range \(\left(Z^{\ell},Z^{u}\right]\) as a function of the volatility \(\sigma\) for different values of the concentration cost parameter \(\gamma,\) when \(Z=100,\)\(\pi=0.02,\) and \(\mu=0.\)
### Discussion: drift and position skew
In this section, we study how the strategy depends on the stochastic drift \(\mu\). Use \(\delta_{t}=\delta_{t}^{\ell}+\delta_{t}^{u}\) and \(\rho\left(\delta_{t},\mu_{t}\right)=\delta_{t}^{u}/\delta_{t}\) to write the two ends of the optimal spread as
\[\delta_{t}^{u\,\star}=\frac{2\,\gamma+\mu_{t}^{2}\,\sigma^{2}}{8\,\pi_{t}-\sigma^{2}+2\,\mu_{t}\left(\mu_{t}-\frac{\sigma^{2}}{2}\right)}+\mu_{t}\quad\text{ and}\quad\delta_{t}^{\ell\,\star}=\frac{2\,\gamma+\mu_{t}^{2}\,\sigma^{2}}{8\,\pi_{t}-\sigma^{2}+2\,\mu_{t}\left(\mu_{t}-\frac{\sigma^{2}}{2}\right)}-\mu_{t}\,. \tag{23}\]
The inequality in (12) guarantees that the optimal control in (23) does not explode and ensures that fee income is large enough for LP activity to be profitable. The profitability condition in (22) becomes
\[\pi_{t}-\frac{\gamma}{8}\geq\frac{\sigma^{2}}{8}\left(\frac{\mu_{t}^{2}}{2}+1 \right)-\frac{\mu_{t}}{4}\left(\mu_{t}-\frac{\sigma^{2}}{2}\right)\,,\]
so LPs that assume a stochastic drift in the dynamics of the exchange rate \(Z\) should use this simplified measure of the depreciation rate due to PL as a rule-of-thumb before depositing their assets in the pool.
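The sketch below evaluates the two ends of the optimal spread in (23) for a few values of the drift; the parameter values are illustrative and \(\rho(\delta,\mu)=1/2+\mu/\delta\) as in (14).

```python
# Sketch: the two ends of the optimal spread (23) when the LP assumes a drift mu.
def optimal_spread_ends(pi, mu, sigma, gamma):
    denom = 8 * pi - sigma**2 + 2 * mu * (mu - sigma**2 / 2)
    half_spread = (2 * gamma + mu**2 * sigma**2) / denom
    return half_spread + mu, half_spread - mu   # (delta_u*, delta_l*): skewed towards the drift

for mu in (-0.001, 0.0, 0.001):
    du, dl = optimal_spread_ends(pi=0.02, mu=mu, sigma=0.02, gamma=5e-7)
    print(f"mu={mu:+.3f}: delta_u*={du:.6f}, delta_l*={dl:.6f}")
```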
Next, we study the dependence of the optimal spread on the value of the drift \(\mu\). First, recall that the controls in (23) must obey the inequalities6
Footnote 6: The admissible set of controls is not restricted to these ranges. However, values outside these range cannot be implemented in practice.
\[0<\delta_{t}^{\ell}\leq 2\quad\text{and}\quad 0\leq\delta_{t}^{u}<2\,,\]
because \(0\leq Z^{\ell}<Z^{u}<\infty\) and \(Z_{t}\in(Z_{t}^{\ell},Z_{t}^{u}]\), which together with (5) implies \(0\leq\delta_{t}\leq 4\). Next, the asymmetry function satisfies
\[0<\rho\left(\delta_{t},\mu\right)=\frac{\delta_{t}^{u}}{\delta_{t}}<1\,, \tag{24}\]
which implies
\[0\leq\rho\left(\delta_{t},\mu\right)\,\delta_{t}<2\quad\text{and}\quad 0\leq \left(1-\rho\left(\delta_{t},\mu\right)\right)\,\delta_{t}<2\,. \tag{25}\]
Now, use (14) and (25) to write
\[0\leq\left(\frac{1}{2}+\frac{\mu}{\delta_{t}}\right)\,\delta_{t}<2\quad\text{ and}\quad 0\leq\left(\frac{1}{2}-\frac{\mu}{\delta_{t}}\right)\,\delta_{t}<2\,. \tag{26}\]
Finally, use (24) and (26) to obtain the inequalities
\[2\,\left|\mu\right|\leq\delta_{t}\leq 4-2\,\left|\mu\right|\,, \tag{27}\]
so \(\mu\) must be in the range \([-1,1]\) for the LP to provide liquidity. If \(\mu\) is outside this range, concentration risk is too high so the LP must withdraw her holdings from the pool. Recall that the dynamics of \(Z\) are geometric and \(\mu\) is a percentage drift, so values of \(\mu\) outside the range \([-1,1]\) are unlikely. Moreover, when \(\mu=-1,\) the drift of the exchange rate \(Z\) is large and negative so the optimal range is \((0,Z],\) i.e., the largest possible range to the left of \(Z.\) When \(\mu=1,\) the drift of the exchange rate \(Z\) is large and positive so the optimal range is \((Z,+\infty),\) which is the largest possible range to the right of \(Z.\) Condition (27) is always verified when we study the performance of the strategy in the ETH/USDC pool. Figure 11 shows how the optimal spread adjusts to the value of the drift \(\mu.\) Finally, note that
\[\frac{\partial\delta^{u\,\star}}{\partial\sigma}=\frac{\partial\delta^{\ell \,\star}}{\partial\sigma}=\frac{2\,\mu^{2}\,\sigma\left(4\,\pi-4\,\eta+ \varepsilon\right)+4\,\sigma\left(1+\mu\right)\left(2\,\gamma+\mu^{2}\, \sigma^{2}\right)}{\left(4\,\pi-4\,\eta+\varepsilon\right)^{2}}>0\,,\qquad \forall\mu\in[-1,1]\,,\]
shows that the optimal range is strictly increasing in the volatility \(\sigma\) of the rate \(Z,\) which one expects because higher volatility exposes the position value to more PL and increases the concentration risk.
## 5 Performance of strategy
### Methodology
In this section, we use Uniswap v3 data between 1 January and 18 August 2022 (see data description in A) to study the performance of the strategy of Section 4. We consider execution costs and discuss how gas fees and liquidity taking activity in the pool affect the performance of
Figure 11: Optimal LP position range \(\left(Z^{\ell},Z^{u}\right]\) as a function of the drift \(\mu\) for different values of the concentration cost parameter \(\gamma,\) when \(Z=100,\)\(\pi=0.02,\) and \(\sigma=0.02.\)
the strategy.7
Footnote 7: In practice, LPs pay gas fees when using the Ethereum network to deposit liquidity, withdraw liquidity, and adjust their holdings. Gas is paid in Ether, the native currency of the Ethereum network, and measures the computational effort of the LP operation; see Cartea et al. (2022).
The strategy in Section 4 is derived in continuous time. In our performance study, we discretise the trading window into one-minute periods and fix the optimal spread at the beginning of each time-step. That is, let \(t_{i}\) be the times where the LP interacts with the pool, where \(i\in\{1,\ldots,N\}\) and \(t_{i+1}-t_{i}=1\) minute. For each time \(t_{i},\) the LP uses the optimal strategy in (23) based on information available at time \(t_{i},\) and she fixes the optimal spread of her position throughout the period \([t_{i},t_{i+1});\) recall that the optimal spread is not a function of time.
To determine the optimal spread (23) of the LP's position at time \(t_{i},\) we use in-sample data \([t_{i}-1\text{ day},t_{i}]\) to estimate the parameters. The volatility \(\sigma\) of the rate \(Z\) is given by the standard deviation of one-minute log returns of the rate \(Z,\) which is multiplied by \(\sqrt{1440}\) to obtain a daily estimate. The pool fee rate \(\pi_{t}\) is given by the total fee income generated by the pool during the in-sample period, divided by the pool size \(2\,\kappa\,Z_{t}^{1/2}\) observed at time \(t,\) where \(\kappa\) is the observed active depth at time \(t.\) The concentration cost is \(\gamma=5\times 10^{-7}\) and it penalises small spreads as shown in Figure 12. Finally, prediction of the future marginal rate \(Z\) is out of the scope of this work, thus we set \(\mu=0\) and \(\rho=0.5.\)
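The sketch below reproduces this in-sample estimation step; the pandas layout (one row per minute with the rate \(Z\), the fee income, and the active depth \(\kappa\)) and the column names are assumptions made for illustration, not the exact data schema we use.

```python
# Sketch of the in-sample estimation step; column names and the toy data are assumptions.
import numpy as np
import pandas as pd

def estimate_parameters(in_sample: pd.DataFrame, gamma: float = 5e-7):
    """Estimate (sigma, pi) from one day of one-minute data and return the spread (20),
    with mu = 0 and rho = 0.5 as in the backtest."""
    log_returns = np.diff(np.log(in_sample["Z"].to_numpy()))
    sigma = log_returns.std() * np.sqrt(1440)                 # daily volatility estimate
    pool_size = 2 * in_sample["kappa"].iloc[-1] * np.sqrt(in_sample["Z"].iloc[-1])
    pi = in_sample["fees"].sum() / pool_size                  # in-sample pool fee rate
    return sigma, pi, 4 * gamma / (8 * pi - sigma**2)

# Toy in-sample day: 1440 one-minute observations with a geometric random walk for Z.
rng = np.random.default_rng(0)
Z = 1500 * np.exp(np.cumsum(0.0005 * rng.standard_normal(1440)))
day = pd.DataFrame({"Z": Z, "fees": 1e3 * rng.random(1440), "kappa": 2e7})
print(estimate_parameters(day))
```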
To compute the LP's performance as a result of changes in the value of her holdings in the pool (position value), and as a result of fee income, we use out-of-sample data \([t_{i},t_{i+1}].\) For the position value, we use equation (15) evaluated over the one-minute out-of-sample period. For fee income, we use LT transactions in the pool at rates included in the range \(\left(Z_{t}^{\ell},Z_{t}^{u}\right]\) and equation (2). The income from fees accumulates in a separate account in units of \(X\) with zero risk-free
rate.8 At the end of the out-of-sample window, the LP withdraws her liquidity and earns the accumulated fees, and we repeat the in-sample estimation and out-of-sample liquidity provision described above. Thus, at times \(t_{i},\) where \(i\in\{1,\ldots,N-1\},\) the LP consecutively withdraws and deposits liquidity in different ranges. Between two consecutive operations (i.e., when repositioning her liquidity), the LP may need to take liquidity in the pool to adjust her holdings in assets \(X\) and \(Y.\) In that case, we use results in Cartea et al. (2022) to compute execution costs.9 In particular, we consider execution costs when the LP trades asset \(Y\) in the pool to adjust her holdings between two consecutive operations. More precisely, we consider that for every quantity \(y\) of asset \(Y\) bought or sold in the pool, a transaction cost \(y\,Z_{t}^{3/2}/\kappa\) is incurred. We assume that the LP's taking activity does not impact the dynamics of the pool. Finally, we obtain 331,858 individual LP operations from 1 January to 18 August 2022.
Footnote 8: In practice, fees accumulate in both assets \(X\) and \(Y.\)
Footnote 9: The authors in Cartea et al. (2022) show that execution costs in the pool are a closed-form function of the rate \(Z,\) the pool depth \(\kappa,\) and the transaction size.
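A minimal sketch of the repositioning cost used between two consecutive operations is given below; the cost \(y\,Z_{t}^{3/2}/\kappa\) is the rule quoted above and the function names are ours.

```python
# Sketch: execution cost of adjusting the holdings of Y between two LP operations.
def execution_cost(y: float, Z: float, kappa: float) -> float:
    """Cost, in units of X, of buying or selling a quantity y of asset Y in the pool."""
    return abs(y) * Z**1.5 / kappa

def rebalancing_cost(y_held: float, y_target: float, Z: float, kappa: float) -> float:
    """Cost of moving the holdings of Y from y_held to y_target before re-depositing."""
    return execution_cost(y_target - y_held, Z, kappa)

print(rebalancing_cost(y_held=10.0, y_target=12.5, Z=1500.0, kappa=2e7))
```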
### Benchmark
We compare the performance of our strategy with the performance of LPs in the pool we consider. We select operation pairs that consist of first providing and then withdrawing the same depth of liquidity \(\tilde{\kappa}\) at two different points in time by the same LP.10 The operations that we select represent approximately \(66\%\) of all LP operations. Figure 13 shows the distribution of key variables that describe how LPs provide liquidity. The figure shows the distribution of the number of operations per LP, the changes in the position value, the length of time the position is held in the pool, and the position spread.
Footnote 10: In blockchain data, every transaction is associated to a unique wallet address.
Finally, Table 2 shows the average and standard deviation of the distributions in Figure 13. Notice that the bulk of liquidity is deposited in small ranges, and positions are held for short
Figure 13: From left to right: distribution of the number of operations per LP, changes in the holdings value as a percentage of initial wealth, position hold time, and position spread. ETH/USDC pool with selected operations from 5,156 LPs between 5 May 2021 and 18 August 2022.
periods of time; \(20\%\) of LP positions are held for less than five minutes and \(30\%\) for less than one hour. Table 2 also shows that, on average, the performance of the LP operations in the pool and the period we consider is \(-1.49\,\%\) per operation.
### Performance results
This subsection focuses on the performance of our strategy when gas fees are zero; at the end of the section we discuss the profitability of the strategy when gas fees are included. Figure 14 shows the distribution of the optimal spread (20) posted by the LP. The bulk of liquidity is deposited in ranges with a spread \(\delta\) below \(1\%\). Table 3 compares the average performance of the components of the optimal strategy with the performance of LP operations observed in the ETH/USDC pool.11 Table 3 suggests that the position of the LP loses value in the pool (on average) because of PL; however, the fee income would cover the loss, on average, if one assumes that gas fees are zero. Finally, the results show that our optimal strategy significantly improves the PnL of LP activity in the pool and the performance of the assets themselves.
Footnote 11: In particular, performance is given for the selected operations shown in Figure 13.
The results in Table 3 do not consider gas fees. Gas cost is a flat fee, so it does not depend on the position spread or the size of the transaction. If the activity of the LP does not affect the pool and if the fees collected scale with the wealth that the LP deposits in the pool, then the LP should consider an initial amount of \(X\) and \(Y\) that would yield fees enough to cover the flat gas fees. An estimate of the average gas cost gives an estimate of the minimum amount of initial wealth for a self-financing strategy to be profitable. Recall that, at any point in time \(t\), the LP withdraws her
\begin{table}
\begin{tabular}{r||c c} \hline \hline & Average & Standard deviation \\ \hline Number of & & \\ transactions per LP & \(11.5\) & \(40.2\) \\ Position value performance & & \\ (\(\alpha_{T}/\tilde{x}_{0}-1\)) & \(-1.64\%\) & \(7.5\%\) \\ Fee income & & \\ (\(p_{T}/\tilde{x}_{0}-1\)) & \(0.155\%\) & \(0.274\%\) \\ Hold time & \(6.1\) days & \(22.4\) days \\ Spread & \(18.7\%\) & \(43.2\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: LP operations statistics in the ETH/USDC pool using operation data of 5,156 different LPs between 5 May 2021 and 18 August 2022. Performance includes transaction fees and excludes gas fees. The position value performance and the fee income are not normalised by the hold time.
liquidity, adjusts her holdings, and then deposits new liquidity. In the data we consider, the average gas fee is \(30.7\,\) USD to provide liquidity, \(24.5\,\) USD to withdraw liquidity, and \(29.6\,\) USD to take liquidity. Average gas costs are obtained from blockchain data which record the gas used for every transaction, and from historical gas prices. The LP pays a flat fee of \(84.8\,\)USD per operation when implementing the strategy in the pool we consider, so the LP strategy is profitable, on average, if the initial wealth deposited in the pool is greater than \(1.8\times 10^{6}\,\)USD.
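The break-even figure above follows from a simple calculation, sketched below with the gas fees and the per-operation performance reported in this section.

```python
# Sketch: minimum initial wealth for the self-financing strategy to cover flat gas fees.
def breakeven_wealth(gas_fee_usd: float, net_return_per_operation: float) -> float:
    """Wealth at which the per-operation net return equals the flat gas fee."""
    return gas_fee_usd / net_return_per_operation

gas_per_reposition = 30.7 + 24.5 + 29.6        # provide + withdraw + take, in USD
print(breakeven_wealth(gas_per_reposition, 0.0047 / 100))   # approx 1.8e6 USD
```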
Fee income of the LP strategy is limited by the volume of liquidity taking activity in the pool, so one should not only consider increasing the initial wealth to make the strategy more profitable. There are \(4.7\,\)LT transactions per minute and the average volume of LT transactions is \(477,275\,\)USD per minute, so if the LP were to own \(100\%\) of the available liquidity, the average fee income per operation would be 1,431 USD.
Finally, our analysis did not take into account the impact of liquidity provision on liquidity taking activity; however, we expect liquidity provision in CPMs with CL to be profitable in pools where the volatility of the marginal rate is low, where liquidity taking activity is high, and where the gas fee cost to interact with the liquidity pools is low. These conditions ensure that the fees paid to LPs in the pool, adjusted by gas fees and concentration cost, exceed PL, so liquidity provision is viable.
## 6 Conclusions
We studied the dynamics of the wealth of an LP in a CPM with CL who implements a self-financing strategy that dynamically adjusts the range of liquidity. The wealth of the LP consists of the position value and fee revenue. We showed that the position value depreciates due to PL and the LP widens her liquidity range to minimise her exposure to PL. On the other hand, the fee
Figure 14: Distribution of the position spread \(\delta\).
revenue is higher for narrow ranges, but narrow ranges also increase concentration risk.
We derived the optimal strategy to provide liquidity in a CPM with CL when the LP maximises expected utility of terminal wealth. This strategy is found in closed-form for log-utility of wealth, and it shows that liquidity provision is subject to a profitability condition. In particular, the potential gains from fees, net of gas fees and concentration costs, must exceed PL. Our model shows that the LP strategically adjusts the spread of her position around the reference exchange rate; the spread depends on various market features including the volatility of the rate, the liquidity taking activity in the pool, and the drift of the rate.
\begin{table}
\begin{tabular}{c||c c c} \hline \hline & Position value performance & Fee income & Total performance \\ & per operation & per operation & per operation \\ & & & (with transaction costs, \\ & & & without gas fees) \\ \hline Optimal strategy & \(-0.015\%\) & \(0.0197\%\) & \(0.0047\%\) \\ & \((0.0951\%)\) & \((0.005\%)\) & \((0.02\%)\) \\ Market & \(-0.0024\%\) & \(0.0017\%\) & \(-0.00067\%\) \\ & \((0.02\%)\) & \((0.005\%)\) & \((0.02\%)\) \\ Hold & n.a. & n.a. & \(-0.00016\%\) \\ & & & \((0.08\%)\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Optimal strategy**: Mean and standard deviation of the one-minute performance of the LP strategy (20) and its components. **Market**: Mean and standard deviation of one-minute performance of LP activity in the ETH/USDC pool using data between 1 January and 18 August 2022. **Hold**: Mean and standard deviation of the one-minute performance of holding the assets. In all cases, the performance includes transaction costs (pool fee and execution cost), but does not include gas fees.
## Appendix A Uniswap v3 ETH/USDC pool data statistics
ETH represents _Ether_, the Ethereum blockchain native currency. USDC represents _USD coin_, a currency fully backed by U.S. Dollars (USD). The fee paid by LTs is \(0.05\%\) of the trade size; the fee is deducted from the quantity paid into the pool by the LT and distributed among LPs; see equation (2).
Uniswap v3 pools can be created with different values of the LT trading fee, e.g., \(0.01\%,\)\(0.05\%,\)\(0.30\%,\) or \(1\%,\) called fee tiers. Additionally, different pools with the same asset pair may coexist if they have different fee tiers. Once a pool is created, its fee tier does not change.
## Appendix B Proofs
### Proof of Proposition 1
To solve the problem (17), we introduce an equivalent control problem. First, define the process \(\left(\tilde{\pi}_{t}\right)_{t\in[0,T]}=\left(\pi_{t}-\eta_{t}\right)_{t\in[0,T]}\) with dynamics
\[d\tilde{\pi}_{t}=\Gamma\,\left(\overline{\pi}-\tilde{\pi}_{t}\right)\mathrm{d }t+\psi\,\sqrt{\tilde{\pi}_{t}}\,\mathrm{d}B_{t}\,,\]
where \(\tilde{\pi}_{0}=\pi_{0}-\eta_{0}\) and \(\eta\) is in (11).
We introduce the performance criterion \(\tilde{u}^{\delta}\colon[0,T]\times\mathbb{R}^{4}\to\mathbb{R}\) given by
\[\tilde{u}^{\delta}(t,\tilde{x},z,\tilde{\pi},\mu)=\mathbb{E}_{t,\tilde{x},z,\tilde{\pi},\mu}\left[\log\left(\tilde{x}_{T}^{\delta}\right)\right]\,,\]
\begin{table}
\begin{tabular}{c||c c} \hline & LT & LP \\ \hline Number of instructions & 2,654,347 & 68,434 \\ Average daily number & & \\ of instructions & 4,720 & 471 \\ \hline Total USD volume & \(\approx\) \$ 262\(\times 10^{9}\) & \(\approx\) \$ 232 \(\times 10^{9}\) \\ Average daily USD volume & \$ 554,624,500 & \$ 863,285 \\ \hline Average LT transaction & & \\ or LP operation size & \$ 98,624 & \$ 3,611,197 \\ Average interaction frequency & 13 seconds & 590 seconds \\ \hline \hline \end{tabular}
\end{table}
Table 4: LT and LP activity in the ETH/USDC pool between 5 May 2021 and 18 August 2022: Total and average daily count of LT transactions and LP operations in the pool, total and average daily size of LT transactions and LP operations in the pool in USD, average LT transaction size and average LP operation size in USD dollars, and average liquidity taking and provision frequency.
and the value function \(\tilde{u}:[0,T]\times\mathbb{R}^{4}\to\mathbb{R}\) given by
\[\tilde{u}(t,\tilde{x},z,\tilde{\pi},\mu)=\sup_{\delta\in\mathcal{A}}\tilde{u}^{\delta}(t,\tilde{x},z,\tilde{\pi},\mu)\,.\] (B.1)
Clearly, the problems (B.1) and (17) are equivalent, and the value functions satisfy \(u(t,\tilde{x},z,\pi,\mu)=\tilde{u}(t,\tilde{x},z,\tilde{\pi},\mu)\) for all \((t,\tilde{x},z,\pi,\mu)\in[0,T]\times\mathbb{R}^{4}\) and for all \(\tilde{\pi}=\pi-\eta\in\mathbb{R}\), where \(\eta=\frac{\sigma^{2}}{8}-\frac{\mu}{4}\left(\mu-\frac{\sigma^{2}}{2}\right)+\frac{\varepsilon}{4}\,.\)
The value function in (B.1) admits the dynamic programming principle, so it satisfies the HJB equation
\[0= \,\partial_{t}w+\frac{1}{2}\sigma^{2}\,z^{2}\,\partial_{zz}w+\mu\,z\,\partial_{z}w+\Gamma\left(\overline{\pi}-\tilde{\pi}\right)\partial_{\tilde{\pi}}w+\frac{1}{2}\,\psi^{2}\tilde{\pi}\partial_{\tilde{\pi}\tilde{\pi}}w+\mathcal{L}^{\mu}w\] (B.2) \[\,+\sup_{\delta\in\mathbb{R}^{+}}\!\!\left(\frac{1}{\delta}\left(4\,\tilde{\pi}+4\,\eta-\frac{\sigma^{2}}{2}\right)\tilde{x}\,\partial_{\tilde{x}}w+\mu\,\rho\left(\delta,\mu\right)\,\tilde{x}\,\partial_{\tilde{x}}w+\frac{1}{2}\,\sigma^{2}\,\rho\left(\delta,\mu\right)^{2}\,\tilde{x}^{2}\,\partial_{\tilde{x}\tilde{x}}w\right.\] \[\qquad\qquad\qquad\left.-\,\frac{\gamma}{\delta^{2}}\,\tilde{x}\,\partial_{\tilde{x}}w+\sigma^{2}\,\rho\left(\delta,\mu\right)\,\tilde{x}\,z\,\partial_{\tilde{x}z}w\right),\]
with terminal condition
\[w(T,\tilde{x},z,\tilde{\pi},\mu)=\log\left(\tilde{x}\right),\quad\forall\left( \tilde{x},z,\tilde{\pi},\mu\right)\in\mathbb{R}^{4}\,,\]
where \(\mathcal{L}^{\mu}\) is the infinitesimal generator of \(\mu\).
To study the HJB in (B.2), use the ansatz
\[w\left(t,\tilde{x},z,\tilde{\pi},\mu\right)=\log\left(\tilde{x}\right)+\theta \left(t,z,\tilde{\pi},\mu\right)\,,\]
to obtain the HJB
\[0= \,\partial_{t}\theta+\frac{1}{2}\sigma^{2}\,z^{2}\,\partial_{zz}\theta+\mu\,z\,\partial_{z}\theta+\Gamma\left(\overline{\pi}-\tilde{\pi}\right)\partial_{\tilde{\pi}}\theta+\frac{1}{2}\,\psi^{2}\tilde{\pi}\partial_{\tilde{\pi}\tilde{\pi}}\theta+\frac{\mu}{2}-\frac{1}{8}\,\sigma^{2}\] (B.3) \[\,+\mathcal{L}^{\mu}\theta+\sup_{\delta\in\mathbb{R}^{+}}\!\left(\frac{1}{\delta}\left(4\,\tilde{\pi}+4\,\eta-\frac{\sigma^{2}}{2}\right)+\frac{\mu^{2}}{\delta}-\frac{1}{2}\,\sigma^{2}\,\left(\frac{\mu^{2}}{\delta^{2}}+\frac{\mu}{\delta}\right)-\frac{\gamma}{\delta^{2}}\right),\]
with terminal condition
\[\theta\left(T,z,\tilde{\pi},\mu\right)=0\,,\quad\forall(z,\tilde{\pi},\mu)\in \mathbb{R}^{3}\,.\]
The supremum in the HJB (B.3) is attained at
\[\delta^{\star}=\frac{2\,\gamma+\mu^{2}\,\sigma^{2}}{4\,\tilde{\pi}+4\,\eta-\frac{\sigma^{2}}{2}+\mu\left(\mu-\frac{\sigma^{2}}{2}\right)}=\frac{2\,\gamma+\mu^{2}\,\sigma^{2}}{4\,\tilde{\pi}+\varepsilon}\,.\]
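The first-order condition leading to this maximiser can be checked symbolically; the sketch below is an illustrative verification with sympy, not part of the proof.

```python
# Sketch: sympy check that the supremum in (B.3) is attained at the delta* quoted above.
import sympy as sp

d, pit, eta, mu, sigma, gamma = sp.symbols("delta tilde_pi eta mu sigma gamma", positive=True)
f = ((1 / d) * (4 * pit + 4 * eta - sigma**2 / 2) + mu**2 / d
     - sp.Rational(1, 2) * sigma**2 * (mu**2 / d**2 + mu / d) - gamma / d**2)
crit = sp.solve(sp.Eq(sp.diff(f, d), 0), d)[0]
expected = (2 * gamma + mu**2 * sigma**2) / (4 * pit + 4 * eta - sigma**2 / 2 + mu * (mu - sigma**2 / 2))
print(sp.simplify(crit - expected))   # prints 0
```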
Thus, (B.3) becomes
\[0=\partial_{t}\theta+\frac{1}{2}\sigma^{2}\,z^{2}\,\partial_{zz}\theta+\mu\,z\,\partial_{z}\theta+\Gamma\left(\overline{\pi}-\tilde{\pi}\right)\partial_{\tilde{\pi}}\theta+\frac{1}{2}\,\psi^{2}\tilde{\pi}\partial_{\tilde{\pi}\tilde{\pi}}\theta+\frac{\mu}{2}-\frac{\sigma^{2}}{8}+\mathcal{L}^{\mu}\theta+\frac{1}{2}\frac{\left(4\,\tilde{\pi}+\varepsilon\right)^{2}}{2\,\gamma+\mu^{2}\,\sigma^{2}}\,.\]
Next, substitute the ansatz
\[\theta\left(t,z,\tilde{\pi},\mu\right)= \,A\left(t,\mu\right)z^{2}+B\left(t,\mu\right)\tilde{\pi}\,z+C \left(t,\mu\right)\tilde{\pi}^{2}\] \[+D\left(t,\mu\right)z+E\left(t,\mu\right)\tilde{\pi}+F\left(t,\mu \right)\,,\]
in (B.3), collect the terms in \(Z\) and \(\tilde{\pi}\), and write the following system of PDEs:
\[\begin{cases}\left(\partial_{t}+\mathcal{L}^{\mu}\right)A\left(t,\mu\right)=& -\sigma^{2}\,A\left(t,\mu\right)-2\,\mu\,A\left(t,\mu\right)\,,\\ \left(\partial_{t}+\mathcal{L}^{\mu}\right)B\left(t,\mu\right)=&-\mu\,B\left( t,\mu\right)+\Gamma B\left(t,\mu\right)\,,\\ \left(\partial_{t}+\mathcal{L}^{\mu}\right)C\left(t,\mu\right)=&2\,C\left(t, \mu\right)\Gamma-\frac{8}{2\,\gamma+\mu^{2}\,\sigma^{2}}\,,\\ \left(\partial_{t}+\mathcal{L}^{\mu}\right)D\left(t,\mu\right)=&-\mu\,D\left( t,\mu\right)-\Gamma\,\overline{\pi}\,B\left(t,\mu\right)\,,\\ \left(\partial_{t}+\mathcal{L}^{\mu}\right)E\left(t,\mu\right)=&-2\,\Gamma \,\overline{\pi}\,C\left(t,\mu\right)-\psi^{2}\,C\left(t,\mu\right)+\Gamma \,E\left(t,\mu\right)-\frac{4\,\varepsilon}{2\,\gamma+\sigma^{2}\,\mu^{2}}\,, \\ \left(\partial_{t}+\mathcal{L}^{\mu}\right)F\left(t,\mu\right)=&-\Gamma\, \overline{\pi}\,E\left(t,\mu\right)+\psi^{2}\,\eta\,C\left(t,\mu\right)-\frac {1}{2}\frac{\varepsilon^{2}}{2\,\gamma+\sigma^{2}\,\mu^{2}}-\frac{\mu}{2}+ \frac{\sigma^{2}}{8}\,,\end{cases}\]
with terminal conditions \(A(T,\mu)=B(T,\mu)=C(T,\mu)=D(T,\mu)=E(T,\mu)=F(T,\mu)=0\) for all \(\mu\in\mathbb{R}\,.\)
First, note that the PDEs in \(A\), \(B\), and \(D\) admit the unique solutions \(A=B=D=0\,.\) Next, we solve the PDE in \(C\,.\) Use Ito's lemma to write
\[C\left(T,\mu_{T}\right)=C\left(t,\mu_{t}\right)+\int_{t}^{T}\left(\partial_{t }+\mathcal{L}^{\mu}\right)C\left(s,\mu_{s}\right)\mathrm{d}s\,.\]
Next, replace \(\left(\partial_{t}+\mathcal{L}^{\mu}\right)C\left(s,\mu_{s}\right)\) with \(2\,C\left(s,\mu_{s}\right)\Gamma-\frac{8}{2\,\gamma+\mu_{s}^{2}\,\sigma^{2}}\) to obtain
\[C\left(T,\mu_{T}\right)=C\left(t,\mu_{t}\right)+2\,\Gamma\int_{t}^{T}C\left(s,\mu_{s}\right)\mathrm{d}s-\int_{t}^{T}\frac{8}{2\,\gamma+\mu_{s}^{2}\,\sigma^ {2}}\,\mathrm{d}s\,.\]
Take expectations to get the equation
\[C\left(t,\mu_{t}\right)=\mathbb{E}_{t,\mu}\left[-2\,\Gamma\int_{t}^{T}C\left(s, \mu_{s}\right)\mathrm{d}s+\int_{t}^{T}\frac{8}{2\,\gamma+\mu_{s}^{2}\,\sigma^{2 }}\,\mathrm{d}s\right]\,.\]
Now consider the candidate solution function
\[\hat{C}\left(t,\mu_{t}\right)=\mathbb{E}_{t,\mu}\left[\,\int_{t}^{T}\frac{8}{2 \,\gamma+\mu_{s}^{2}\,\sigma^{2}}\exp\left(-2\,\Gamma\left(s-t\right)\right) \,\mathrm{d}s\right]\]
and write
\[\mathbb{E}_{t,\mu}\left[-2\,\Gamma\int_{t}^{T}\hat{C}\left(s,\mu_{s}\right)\mathrm{d}s+\int_{t}^{T}\frac{8}{2\,\gamma+\mu_{s}^{2}\,\sigma^{2}}\,\mathrm{d}s\right]\] \[= \mathbb{E}_{t,\mu}\left[-2\,\Gamma\int_{t}^{T}\mathbb{E}_{s,\mu_{s}}\left[\,\int_{s}^{T}\frac{8}{2\,\gamma+\mu_{u}^{2}\,\sigma^{2}}\exp\left(-2\,\Gamma\left(u-s\right)\right)\,\mathrm{d}u\right]\mathrm{d}s+\int_{t}^{T}\frac{8}{2\,\gamma+\mu_{s}^{2}\,\sigma^{2}}\,\mathrm{d}s\right]\] \[= \mathbb{E}_{t,\mu}\left[\,\int_{t}^{T}\frac{8}{2\,\gamma+\mu_{s}^{2}\,\sigma^{2}}\exp\left(-2\,\Gamma\left(s-t\right)\right)\,\mathrm{d}s\right]\,.\]
Thus \(\hat{C}\) is a solution to the equation in \(C\) and by uniqueness of solutions, we conclude that \(C=\hat{C}\,.\)
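For completeness, the representation of \(C\) can be evaluated numerically; the sketch below uses Monte Carlo simulation and assumes, purely for illustration, Ornstein-Uhlenbeck dynamics for the drift \(\mu\) (the generator \(\mathcal{L}^{\mu}\) is left general in the proof).

```python
# Sketch: Monte Carlo evaluation of C(t, mu); the OU dynamics for mu are an assumption.
import numpy as np

def C_monte_carlo(mu0, T, gamma, sigma, Gamma, n_paths=20_000, n_steps=200, seed=1,
                  kappa_mu=1.0, theta_mu=0.0, eta_mu=0.1):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    mu = np.full(n_paths, mu0, dtype=float)
    acc = np.zeros(n_paths)
    for k in range(n_steps):
        s = k * dt
        acc += 8.0 / (2 * gamma + mu**2 * sigma**2) * np.exp(-2 * Gamma * s) * dt
        # assumed OU dynamics for mu, used only to make the sketch self-contained
        mu += kappa_mu * (theta_mu - mu) * dt + eta_mu * np.sqrt(dt) * rng.standard_normal(n_paths)
    return acc.mean()

print(C_monte_carlo(mu0=0.0, T=1.0, gamma=5e-7, sigma=0.02, Gamma=2.0))
```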
Follow the same steps as above to obtain the solution
\[E\left(t,\mu\right)=\mathbb{E}_{t,\mu}\left[\,\int_{t}^{T}\left(\left(2\, \Gamma\,\overline{\pi}+\psi^{2}\right)C\left(s,\mu\right)+\frac{4\,\varepsilon }{2\,\gamma+\sigma^{2}\,\mu_{s}^{2}}\right)\exp\left(-\Gamma\left(s-t\right) \right)\,\mathrm{d}s\right]\]
to the PDE in \(E\), and the solution
\[F\left(t,\mu\right)=\mathbb{E}_{t,\mu}\left[\,\int_{t}^{T}\left(\Gamma\, \overline{\pi}\,E\left(s,\mu_{s}\right)+\psi^{2}\,\eta_{s}\,C\left(s,\mu_{s} \right)-\frac{1}{2}\frac{\varepsilon^{2}}{2\,\gamma+\sigma^{2}\,\mu_{s}^{2}} -\frac{\mu_{s}}{2}+\frac{\sigma^{2}}{8}\right)\mathrm{d}s\right]\]
to the PDE in \(F\,,\) where \(\eta_{s}=\frac{\sigma^{2}}{8}-\frac{\mu_{s}}{4}\left(\mu_{s}-\frac{\sigma^{2} }{2}\right)+\frac{\varepsilon}{4}\,,\) which proves the result. \(\square\)
### Proof of Theorem 1
Proposition 1 provides a classical solution to (B.2). Therefore, standard results apply and showing that (19) is an admissible control is enough to prove that (18) is the value function (17). Specifically, use the form of the optimal control \(\delta^{\star}\) in (19) and the dynamics for \(\pi\) in (10) to obtain
\[0<\delta^{\star}_{s}\leq\frac{\sigma^{2}\mu_{s}^{2}+2\,\gamma}{\varepsilon}\,, \qquad\forall s\in\left[t,T\right],\]
thus \(\delta^{\star}\) is an admissible control.
|
2306.17394 | Comparing spin supplementary conditions for particle motion around
traversable wormholes | The Mathisson-Papapetrou-Dixon (MPD) equations describe the motion of
spinning test particles. It is well-known that these equations, which couple
the Riemann curvature tensor with the antisymmetric spin tensor S, together
with the normalization condition for the four-velocity, is a system of eleven
equations relating fourteen unknowns. To ``close'' the system, it is necessary
to introduce a constraint of the form V_\mu S^{\mu \nu} = 0, usually known as
the spin supplementary condition (SSC), where V_\mu is a future-oriented
reference vector satisfying the normalization condition V_\alpha V^\alpha = -1.
There are several SSCs in the literature. In particular, the Tulzcyjew-Dixon,
Mathisson-Pirani, and Ohashi-Kyrian-Semer\'ak are the most used by the
community. From the physical point of view, choosing a different SSC (a
different reference vector $V^\mu$) is equivalent to fixing the centroid of the
test particle. In this manuscript, we compare different SSCs for spinning test
particles moving around a Morris-Thorne traversable wormhole. To do so, we
first obtain the orbital frequency and expand it up to third-order in the
particle's spin; as expected, the zero-order coincides with the Keplerian
frequency, the same in all SSCs; nevertheless, we found that differences appear
in the second order of the expansion, similar to the Schwarzschild and Kerr
black holes. We also compare the behavior of the innermost stable circular
orbit (ISCO). Since each SSC is associated with a different centroid of the
test particle, we analyze (separately) the radial and spin corrections for each
SSC. We found that the radial corrections improve the convergence, especially
between Tulzcyjew-Dixon and Mathisson-Pirani SSCs. In the case of
Ohashi-Kyrian-Semer\'ak, we found that the spin corrections remove the
divergence for the ISCO and extend its existence for higher values of the
particle's spin. | Carlos A. Benavides-Gallego, Jose Miguel Ladino, Eduard Larrañaga | 2023-06-30T04:17:54Z | http://arxiv.org/abs/2306.17394v1 | # Comparing spin supplementary conditions for particle motion around traversable wormholes
###### Abstract
The Mathisson-Papapetrou-Dixon (MPD) equations describe the motion of spinning test particles in the pole-dipole approximation. It is well-known that these equations, which couple the Riemann curvature tensor with the antisymmetric spin tensor \(S^{\alpha\beta}\), together with the normalization condition for the four-velocity, is a system of eleven equations relating fourteen unknowns. To "close" the system, it is necessary to introduce a constraint of the form \(V_{\mu}S^{\mu\nu}=0\), usually known as the spin supplementary condition (SSC), where \(V_{\mu}\) is a future-oriented reference vector satisfying the normalization condition \(V_{\alpha}V^{\alpha}=-1\). There are several SSCs in the literature. In particular, the Tulzcyjew-Dixon, Mathisson-Pirani, and Ohashi-Kyrian-Semerak are the most used by the community. From the physical point of view, choosing a different SSC (a different reference vector \(V^{\mu}\)) is equivalent to fixing the centroid of the test particle. In this manuscript, we compare different SSCs for spinning test particles moving around a Morris-Thorne traversable wormhole. To do so, we first obtain the orbital frequency and expand it up to third-order in the particle's spin; as expected, the zero-order coincides with the Keplerian frequency, the same in all SSCs; nevertheless, we found that differences appear in the second order of the expansion, similar to the Schwarzschild and Kerr black holes. We also compare the behavior of the innermost stable circular orbit (ISCO). Since each SSC is associated with a different centroid of the test particle, we analyze (separately) the radial and spin corrections for each SSC. We found that the radial corrections improve the convergence, especially between Tulzcyjew-Dixon and Mathisson-Pirani SSCs. In the case of Ohashi-Kyrian-Semerak, we found that the spin corrections remove the divergence for the ISCO and extend its existence for higher values of the particle's spin
## I Introduction
The dynamics of extended bodies is a crucial problem in any theory of gravity [1; 2]. In the particular case of general relativity (GR), the problem has been investigated since Einstein's theory was published, and two approaches have been developed throughout history [3]. In the first approach, based on Einstein's philosophy, bodies were considered as a set of elementary particles [4; 5; 6]. In the second approach, on the other hand, bodies were treated using multipole moments and assuming them small so that they do not affect the spacetime background. This approach corresponds to the work of Mathisson, Papapetrou, Tulczyjew, and Dixon [3; 7; 8; 9; 10].
In contrast to particles without internal structure, extended bodies do not follow a geodesic1. Therefore, obtaining the equations of motion is more challenging mainly because GR describes physical bodies using a four-dimensional region of spacetime, which is not independent of the relationship between gravity and geometry. In this sense, it is troublesome to establish the laws of motion of the body since its trajectory (world-line) is affected by itself, the gravitational field, and the geometry, which depends on the momentum-energy distribution [1].
Footnote 1: Initially, in the case of point particles, this was a postulate of the theory, but later, Einstein showed it was a consequence of his field equations [11; 12].
In particular, if one considers extended bodies as a set of elementary particles (Einstein's point of view), it is clear that this approach generates problems when facing the question of the motion of celestial bodies because it would be necessary treating them as an assembly of elementary particles. According to Dixon, this requires "_a general relativistic version of statistical mechanics, a formidable task_" [13]. Therefore, from the astronomical point of view, a more suitable approach is to assume sufficiently small individual bodies so that one can treat them as particles [14]. That, however, required answering two important questions [13]. First, what point to choose to describe the position of the body? Second, how to describe its structure? Mathisson was the first to address these questions in 1937. To solve them, he started by selecting an arbitrary world-line within the body to
characterize its motion and position and considering the energy-momentum tensor as an infinite set of multipole moments. In this way, the covariant conservation of the energy-momentum tensor becomes a set of equations representing the evolution of the multipole moments. All these considerations are contained in what Mathisson named "_the gravitational skeleton of a body_" Mathisson (1951). Hence, chronologically speaking, he was the first to introduce the concepts of multipole moments and multipole particles in GR (1951).
In 1951, Papapetrou also considered an extended test body as described by an energy-momentum tensor, \(T^{\mu\nu}\), and he defined in a non-covariant way its multipole moments (1952). Then, using the conservation equation \(\nabla_{\beta}T^{\alpha\beta}=0\), he obtained the equations for a line inside the world-tube of the body under the assumption that all the moments higher than the dipole moment can be neglected. Furthermore, similarly to Mathisson, Papapetrou imposed the supplementary condition \(V_{\alpha}S^{\alpha\beta}=0\) to make the equations fully determinate; however, in contrast to Mathisson, where \(V_{\alpha}\) was the four-velocity, \(u_{\alpha}\), in Papapetrou's development, \(V_{\alpha}\) corresponds to a vector field determined by the metric and not related to the body under consideration. Back then, the advantage of considering such a vector lay in avoiding the nonphysical helical motions allowed by Mathisson's equations. Today, thanks to the work of Costa et al., we know that helical motions are perfectly valid and physically equivalent to the dynamics of a spinning body; the only difference is the choice of the representative point of the particle, a gauge choice (1952).
In 1959, Tulczyjew simplified Mathisson's theory by describing the particle not by a singularity in a linearized disturbance but by a singular energy-momentum tensor (1951). He obtained the same equations as Mathisson without imposing the supplementary condition \(u_{\alpha}S^{\beta\alpha}=0\), and showed that the momentum vector of the particle, \(p^{\alpha}\), and its four-velocity, \(u^{\alpha}\), are not parallel (1953). In a subsequent work (1953), Tulczyjew considered Papapetrou's approach in a more covariant form by introducing the world-line of the center of mass, related to the condition \(p_{\beta}S^{\alpha\beta}=0\) (1951, 1952). Finally, in 1964, Dixon proposed a new treatment of the problem of extended bodies in GR, in which he included the effect of the electromagnetic field and a covariant definition of the center of mass. This allowed him to obtain the equations of motion in the pole-dipole approximation (1964, 1965). According to Dixon, these equations are applicable to macroscopic bodies such as a planet orbiting the Sun but not to bodies of atomic scales since GR breaks down when quantum phenomena become important.
Nowadays, the motion of extended bodies in the pole-dipole approximation is described by the Mathisson-Papapetrou-Dixon (MPD) equations (1951, 1952, 1952, 1952),
\[\frac{Dp^{\mu}}{d\lambda}= -\frac{1}{2}R^{\mu}_{\nu\rho\sigma}u^{\nu}S^{\rho\sigma} \tag{1}\] \[\frac{DS^{\mu\nu}}{d\lambda}= p^{\mu}u^{\nu}-u^{\mu}p^{\nu}, \tag{2}\]
where the spinning test particle is characterized by a velocity vector, \(u^{\mu}\), and momentum, \(p^{\mu}\), in a curved spacetime background with the Riemann tensor defined as
\[R^{\mu}_{\nu\kappa\lambda}=\Gamma^{\mu}_{\kappa\alpha}\Gamma^{\alpha}_{\lambda \nu}-\Gamma^{\mu}_{\lambda\alpha}\Gamma^{\alpha}_{\kappa\nu}-\partial_{ \lambda}\Gamma^{\mu}_{\kappa\nu}+\partial_{\kappa}\Gamma^{\mu}_{\lambda\nu}. \tag{3}\]
In Eqs. (1) and (2), \(\frac{D}{d\lambda}\equiv u^{\mu}\nabla_{\mu}\) is the absolute derivative and \(\lambda\) is an affine parameter. The antisymmetric tensor, \(S^{\mu\nu}=-S^{\nu\mu}\), is related to the particle's spin.
As mentioned above, note that in contrast to non-spinning test particles, where the absolute derivative of the four-momentum \(p^{\mu}\) vanishes (the geodesic equation), spinning test particles follow an equation of motion coupled to \(S^{\mu\nu}\) and \(R^{\mu}_{\nu\kappa\lambda}\). From the physical point of view, this means that spinning test particles do not follow a geodesic. As a consequence, \(u^{\mu}\) and \(p^{\mu}\) are not parallel, and one needs to introduce two concepts of mass, the dynamical and the kinematical rest masses, defined by
\[\mu^{2}= -p_{\alpha}p^{\alpha}, \tag{4}\] \[m= -p_{\alpha}u^{\alpha}. \tag{5}\]
The non-parallel behavior between the velocity and the momentum is obtained by contracting the second MPD Eq. (2) with the velocity and taking into account the normalization condition \(u_{\alpha}u^{\alpha}=-1\). Hence, one obtains the following relation
\[p^{\alpha}=mu^{\alpha}+p^{\alpha}_{\rm hidden}, \tag{6}\]
where the term \(p^{\alpha}_{\rm hidden}=u_{\beta}\frac{DS^{\alpha\beta}}{d\lambda}\) is called the _hidden momentum_, which depends on the behavior of the spin tensor along the trajectory.
The MPD system of Eqs. (1) and (2), together with the normalization condition for the velocity, is a set of eleven equations relating fourteen unknown variables: \([p^{\alpha},u^{\alpha},S^{\alpha\beta}]\). To "close" this system, it is necessary to introduce a constraint equation, usually known as the spin supplementary condition (SSC); one can find several SSCs in the literature represented by the general form
\[V_{\mu}S^{\mu\nu}=0, \tag{7}\]
where \(V^{\mu}\) is a future-oriented reference vector satisfying the normalization condition \(V_{\alpha}V^{\alpha}=-1\). As mentioned above, while developing the spinning test particle dynamics, Mathisson, Papapetrou, Pirani, and Tulczyjew used different SSCs (1951, 1952, 1953; 1954). Other examples are the Ohashi-Kyrian-Semerak (OKS) (1951, 1952, 1952), Corinaldesi-Papapetrou (1953), and the Newton-Wigner (1954) SSCs. From the physical point of view, choosing a particular reference vector \(V^{\mu}\) corresponds to fixing the centroid of the body.
The MPD equations have been used widely in the literature, see [26; 27; 28; 29; 30; 31; 32; 33; 34; 35] and references therein. For example, the effects of the spin-curvature interaction were discussed by Wald (1972) and Barker & O'Connell (1979), Refs. [26] and [27], respectively. The motion of spinning test particles in the field of a black hole was investigated by K. P. Tod et al. in Ref. [28], while the dynamics in Vaidya's radiating metric and the Kerr-Newman spacetime were considered in Refs. [29] and [30], respectively. Later, Semerak, and Kyrian and Semerak, numerically solved the MPD equations to investigate the trajectories of spinning particles in the Kerr black hole using the TD SSC [31; 32]. There, when the pole-dipole approximation is considered, the authors found that no significant spin effects are expected if one considers astrophysical scenarios. Nevertheless, during the inspiral of a spinning particle onto a rotating compact body, important effects may occur that would modify the gravitational waves generated by the system. The gravitational waves generated by a spinning test particle falling into or orbiting a black hole were investigated by Masaru Shibata and by Yasushi Mino et al. in Refs. [33] and [34], respectively.
Recently, the MPD equations have been used to investigate the motion of spinning test particles in different spacetimes [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. The Kerr spacetime was considered in Refs. [36; 37; 38; 39]; in the last reference, the authors also investigated the motion of spinning test particles in the Schwarzschild case. The properties of the innermost stable circular orbit (ISCO) for spinning test particles were investigated using the Kerr-Newman background in Ref. [40]. The \(\gamma\)-metric, the Maxwell dilaton black hole, the charged Hayward black hole background, and the rotating black hole surrounded by the perfect fluid dark matter were studied in Refs. [41], [42], [43] and [48], respectively. The dynamics of spinning test particles in a quantum-improved rotating black hole (RBH) spacetime was examined by M. Ladino and E. Larranaga in Ref. [49]. In the case of wormholes, the motion of spinning test particles was investigated in Refs. [45] and [46] using the Morris-Thorne non-rotating traversable wormhole [50] and the rotating traversable wormhole obtained by Teo [51], respectively.
Finally, in Refs. [44] and [47], the authors examine whether the equatorial circular orbits around a massive black hole are affected when the particle's centroid (associated with the SSC) changes to another centroid (different SSC). To do so, the authors established an analytical algorithm to obtain the orbital frequency of a spinning body moving around an arbitrary stationary, axisymmetric spacetime, focusing on three SSCs: the Tulzcyjew-Dixon (TD), the Mathisson-Pirani (MP), and the Ohashi-Kyrian-Semerak (OKS) SSCs. In the case of the Schwarzschild black hole, they investigated the discrepancies in the orbital frequency employing a power series expansion in the spin for each SSC, imposing corrections to improve the convergence between the SSCs. They found that, when shifting from one circular equatorial orbit to another, the coincidence between the SSCs holds only up to the third order in the orbital frequency.
In the Kerr spacetime, on the other hand, the authors considered the convergence of the orbital frequency for prograde and retrograde equatorial circular orbits. Following a similar approach used for the Schwarzschild case, they expanded the orbital frequencies in powers of the particle's spin for the same SSCs, i.e., the TD, MP, and OKS SSCs. The authors also introduced a novel method to compute the ISCO radius for any SSC. Similar to the Schwarzschild case, there is a convergence in the power series of the frequencies of the SSCs. However, there is a limit to this convergence because, in the spinning body approximation, one only considers the first two multipoles (pole-dipole) of the body and ignores the higher ones.
In this work, we follow Refs. [44; 47] to compare the orbital frequency of equatorial circular orbits in the background of a traversable non-rotating wormhole. We consider three of the most well-known SSCs: the TD [3; 10], MP [19; 20], and the OKS [21; 22; 23] supplementary conditions. This work is organized as follows. In Sec. II, we briefly describe the Morris-Thorne traversable wormhole. In Sec. III, we obtain the orbital frequencies for equatorial circular orbits using the analytical algorithm proposed in Ref. [44] and derive the analytical expressions for the Morris-Thorne wormhole using different centroids. Then, in Sec. IV, we discuss the orbital parameters; i.e., the orbital frequency and the ISCO radius, using different SSCs. In Sec. V, we use the centroid corrections to explain the differences between the SSCs. Here we consider the corrections to the position of the centroid and corrections to the spin. Finally, in Sec. VI, we review and conclude our work.
In the manuscript, we use dimensionless units, the Riemann curvature tensor is defined as in Eq. (3), and the metric has the signature \((-,+,+,+)\).
## II Morris-Thorne wormholes
The Morris-Thorne traversable wormhole is a spherically symmetric spacetime given by the line element [50; 52]
\[ds^{2}=-e^{2\Phi(r)}dt^{2}+\frac{dr^{2}}{1-b(r)}+r^{2}\left(d\theta^{2}+\sin^{ 2}\theta d\varphi^{2}\right), \tag{8}\]
where \(\Phi(r)\) and \(b(r)\) are arbitrary functions of the radial coordinate \(r\) known as the "_redshift function_" and the "_shape function_", respectively. The fact that Eq. (8) represents a wormhole with a throat connecting two different regions of the spacetime can be easily depicted by embedding the line element in a three-dimensional space at a fixed time slice \(t\), see Fig. 1.
The metric of Eq. (8) was obtained by first assuming the wormhole's spacetime to be spherically symmetric and then computing the corresponding energy-momentum tensor via the field equations. In contrast to black holes, wormholes
do not have an event horizon or singularities; this means the redshift function \(\Phi(r)\) is everywhere finite, and the spacetime has a throat connecting two asymptotically flat regions of the same universe.
On the other hand, a wormhole is traversable if the tidal gravitational forces experienced by any traveler are bearably small [50]. Moreover, the time needed to cross the wormhole must be finite and reasonably small. From the physical point of view, this means that the proper time measured by a traveler and the observers outside the wormhole must be finite and small. Hence, in the case of a zero-tidal-force solution, the redshift and the shape functions have the form [53]
\[\Phi(r)=-\frac{b_{0}}{r}\quad\text{ and }\quad b(r)=\left(\frac{b_{0}}{r} \right)^{\gamma}, \tag{9}\]
where \(b_{0}\) is the throat of the wormhole, usually associated with the wormhole's mass. We use these functions to compare the different SSCs, focusing on the values \(\gamma=1\) and \(\gamma=2\).
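For readers who wish to reproduce the geometric quantities entering Eq. (3), the sketch below computes a few Christoffel symbols of the metric (8) with the zero-tidal-force functions (9) and \(\gamma=1\) using sympy; it is an illustrative check, not the computation used in the rest of the manuscript.

```python
# Sketch: symbolic Christoffel symbols of the gamma = 1 zero-tidal-force wormhole metric.
import sympy as sp

t, r, th, ph, b0 = sp.symbols("t r theta varphi b_0", positive=True)
x = [t, r, th, ph]
Phi = -b0 / r                                  # redshift function, Eq. (9)
b = b0 / r                                     # shape function with gamma = 1
g = sp.diag(-sp.exp(2 * Phi), 1 / (1 - b), r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

def christoffel(mu, nu, lam):
    """Gamma^mu_{nu lam} = (1/2) g^{mu a} (d_nu g_{a lam} + d_lam g_{a nu} - d_a g_{nu lam})."""
    return sp.simplify(sum(sp.Rational(1, 2) * ginv[mu, a]
                           * (sp.diff(g[a, lam], x[nu]) + sp.diff(g[a, nu], x[lam])
                              - sp.diff(g[nu, lam], x[a])) for a in range(4)))

print("Gamma^r_tt     =", christoffel(1, 0, 0))
print("Gamma^r_phiphi =", christoffel(1, 3, 3))
print("Gamma^t_tr     =", christoffel(0, 0, 1))
```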
Fig. 2 illustrates the behavior of the shape function \(b(r)\) as a function of \(r\) for various values of the wormhole throat \(b_{0}\), and for both the \(\gamma=1\) and \(\gamma=2\) cases of the solution. Here we provide a preliminary comparison of the properties of the two wormhole solutions. The figure shows that as the wormhole throat \(b_{0}\) increases, the shape function \(b(r)\) also increases in both solutions. Furthermore, the value of \(b(r)\) for the \(\gamma=2\) solution decays faster with \(r\), remaining smaller and closer to zero than that of the \(\gamma=1\) solution. Note that the shape function \(b(r)\) takes the same value in all cases at the wormhole throat \(r=b_{0}\).
The properties mentioned above are essential for traversable wormholes, what Morris and Thorne refer to as _basic wormhole criteria_. These criteria are deeply related to the form of the energy-momentum tensor, which depends on the matter and fields that generate wormholes. Even though the energy-momentum tensor is not physically reasonable, since exotic matter (negative energy density) is required to create the wormhole's spacetime curvature at its throat, it is possible to tune the wormhole's parameters to make the matter that constitutes it compatible with the forms of matter allowed by the laws of physics [50].
## III Circular equatorial orbits
Without loss of generality, we can consider a spherically symmetric and static spacetime, described by the following line element:
\[ds^{2}=g_{tt}dt^{2}+g_{rr}dr^{2}+g_{\theta\theta}d\theta^{2}+g_{\varphi\varphi} d\varphi^{2}. \tag{10}\]
For circular equatorial orbits, we fix the spatial coordinates as \(r=\text{constant}\), \(\theta=\frac{\pi}{2}\), and \(\varphi=\Omega t\), where \(\Omega=u^{\varphi}/u^{t}\) is the orbital frequency of the spinning test body. To maintain circularity in all the SSCs, the radial and polar four-velocity and four-momentum components must vanish; i.e., \(u^{r}=u^{\theta}=0\) and \(p^{r}=p^{\theta}=0\), respectively. Moreover, the normalization condition of the four-velocity implies
\[u^{t}=\frac{1}{\sqrt{-g_{tt}-g_{\varphi\varphi}\Omega^{2}}}. \tag{11}\]
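As a quick numerical illustration of (11), the sketch below evaluates \(u^{t}\) for the \(\gamma=1\) wormhole; the sample values of \((r,b_{0},\Omega)\) are arbitrary and in the dimensionless units of the manuscript.

```python
# Sketch: u^t from the normalization condition (11) for a circular equatorial orbit.
import numpy as np

def u_t(r, b0, Omega):
    g_tt = -np.exp(-2 * b0 / r)   # Phi(r) = -b0/r, so g_tt = -exp(2*Phi)
    g_phph = r**2                 # equatorial plane, theta = pi/2
    return 1.0 / np.sqrt(-g_tt - g_phph * Omega**2)

print(u_t(r=10.0, b0=1.0, Omega=0.028))
```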
Considering a spinning test body moving in the equatorial plane, with a spin \(S\) aligned (or anti-aligned) with the total angular momentum \(J_{z}\) (perpendicular to the equatorial plane), it follows that the spin four-vector \(S_{\mu}\) takes the form:
\[S_{\mu}:=-\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}V^{\nu}S^{\rho\sigma}, \tag{12}\]
and
\[S^{\rho\sigma}=-\epsilon^{\rho\sigma\nu\kappa}S_{\nu}V_{\kappa}. \tag{13}\]
Therefore, we can write \(S^{\mu}\equiv S^{\theta}\delta^{\mu}_{\theta}\), and then the magnitude of the spin would be \(S=\pm\sqrt{g_{\theta\theta}}S^{\theta}\). Hence, introducing the SSC in the general form of Eq. (7) and
Figure 1: Wormhole spacetime embedded in a three-dimensional space. We consider \(b_{0}=4\).
Figure 2: Plot of b(r), the “_shape function_”, as function of r, for different values of \(b_{0}\) and for both cases of wormhole solution \(\gamma=1,2\).
considering the definition of the spin angular momentum as
\[S^{2}=\frac{1}{2}S^{\mu\nu}S_{\mu\nu}, \tag{14}\]
the only non-vanishing components of the spin tensor are
\[\begin{cases}&S^{tr}=-S^{rt}=-S\sqrt{-\frac{g_{\theta\theta}}{g}}V_{\varphi},\\ &S^{r\varphi}=-S^{\varphi r}=-S\sqrt{-\frac{g_{\theta\theta}}{g}}V_{t},\end{cases} \tag{15}\]
with \(g=g_{tt}g_{rr}g_{\theta\theta}g_{\varphi\varphi}\) the determinant of the metric. To relate the non-vanishing components of the spin tensor with the conserved quantities, we use the conserved quantity associated with a Killing vector field,
\[C_{\xi}=\xi^{\mu}P_{\mu}-\frac{1}{2}S^{\mu\nu}\nabla_{\nu}\xi_{\mu}; \tag{16}\]
where \(\xi^{\mu}\) is a Killing vector field associated with the conserved quantity.
In the case of a static and spherically symmetric spacetime, represented by the line element of Eq. (10), two Killing vectors exist. These Killing vectors, given by \(\xi^{\mu}_{(t)}=\delta^{\mu}_{t}\) and \(\xi^{\mu}_{(\varphi)}=\delta^{\mu}_{\varphi}\), are related to the conservation of both the energy \(C_{(t)}=-E\) and the \(z\) component of the total angular momentum of the spinning test particle \(C_{(\varphi)}=J_{z}\), respectively. Using Eqs. (15), these conserved quantities can be expressed as [44]
\[\begin{cases}&E=-p_{t}-\partial_{r}g_{tt}\frac{S}{2}\sqrt{-\frac{g_{\theta \theta}}{g}}V_{\varphi},\\ &J_{z}=p_{\varphi}-\partial_{r}g_{\varphi\varphi}\frac{S}{2}\sqrt{-\frac{g_{ \theta\theta}}{g}}V_{t}.\end{cases} \tag{17}\]
Hence, with the help of the above considerations and results, the MPD equations for equatorial circular orbits reduce to [44]
\[\Gamma^{r}_{\rho\sigma}u^{\rho}p^{\sigma}= -\frac{1}{2}R^{r}_{\nu\rho\sigma}u^{\nu}S^{\rho\sigma} \tag{18}\] \[\Gamma^{t}_{\rho r}u^{\rho}S^{r\varphi}+\Gamma^{\varphi}_{\rho r}u^{\rho}S^{tr}= p^{t}u^{\varphi}-p^{\varphi}u^{t}. \tag{19}\]
In the following subsections, we apply the MPD Eqs. (18) and (19) to different SSCs.
### Tulczyjew-Dixon SSC
In the case of the Tulczyjew-Dixon SSC (TD-SSC), the reference four-vector, \(V^{\mu}\), is substituted by \(p^{\mu}/\mu\). Therefore, after replacing the non-zero components of the spin tensor \(S^{\mu\nu}\) shown in Eq. (15) into Eqs. (18) and (19), one obtains [44]
\[\begin{split}\frac{p_{t}}{p_{\varphi}}&=f_{1}(r, \Omega;S),\\ \frac{p_{t}}{p_{\varphi}}&=f_{2}(r,\Omega;S).\end{split} \tag{20}\]
After equating \(f_{1}\) and \(f_{2}\), it is possible to obtain a second-order polynomial in \(\Omega\)[44]
\[\rho_{2}\Omega^{2}+\rho_{1}\Omega+\rho_{0}=0, \tag{21}\]
from which [44]
\[\Omega_{\pm}=\frac{-\rho_{1}\pm\sqrt{\rho_{1}^{2}-4\rho_{2}\rho_{0}}}{2\rho_{2 }}, \tag{22}\]
where \(\Omega_{+}\) corresponds to the circular orbits in which \(E>0\) and \(J_{z}>0\), while \(\Omega_{-}\) to \(E>0\) and \(J_{z}<0\). This is the usual convention valid for all the SSC considered in this paper. In the case of the Morris-Thorne wormhole (8) with \(\gamma=1\), \(\rho_{2}\), \(\rho_{1}\) and \(\rho_{0}\) take the form
\[\begin{split}\rho_{2}&=\frac{1}{2}re^{-\frac{2b_{0} }{r}}\left(b_{0}S^{2}r-2\mu^{2}r^{4}\right),\\ \rho_{1}&=\frac{1}{2}b_{0}\mu Se^{-\frac{3b_{0}}{r}} \sqrt{r\left(r-b_{0}\right)}\left(2b_{0}-5r\right),\\ \rho_{0}&=\frac{b_{0}e^{-\frac{4b_{0}}{r}}\left(b_{0} S^{2}\left(-7b_{0}r+2b_{0}^{2}+4r^{2}\right)+2\mu^{2}r^{5}\right)}{2r^{3}}, \end{split} \tag{23}\]
where we used Eq. (1) in the appendix A of Ref. [44]. The corresponding expressions for \(\gamma=2\) are given in Appendix VII.
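For reference, the quadratic (21)-(23) can be evaluated numerically as in the following minimal sketch (plain NumPy, geometrized units; the function and argument names are ours and not part of any released code):

```python
import numpy as np

def td_orbital_frequencies(r, b0=1.0, S=0.0, mu=1.0):
    """Circular-orbit frequencies Omega_+/- under the TD-SSC for the
    gamma = 1 Morris-Thorne solution, transcribed from Eqs. (21)-(23)."""
    rho2 = 0.5 * r * np.exp(-2.0 * b0 / r) * (b0 * S**2 * r - 2.0 * mu**2 * r**4)
    rho1 = 0.5 * b0 * mu * S * np.exp(-3.0 * b0 / r) \
           * np.sqrt(r * (r - b0)) * (2.0 * b0 - 5.0 * r)
    rho0 = b0 * np.exp(-4.0 * b0 / r) \
           * (b0 * S**2 * (-7.0 * b0 * r + 2.0 * b0**2 + 4.0 * r**2)
              + 2.0 * mu**2 * r**5) / (2.0 * r**3)
    # quadratic formula of Eq. (22)
    disc = np.sqrt(rho1**2 - 4.0 * rho2 * rho0)
    return (-rho1 + disc) / (2.0 * rho2), (-rho1 - disc) / (2.0 * rho2)

# example: a slowly spinning particle at r = 10 b0
print(td_orbital_frequencies(r=10.0, b0=1.0, S=0.05, mu=1.0))
```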
On the other hand, from the definition of the dynamical mass (\(\mu:=\sqrt{-p_{\nu}p^{\nu}}\)), the expressions for \(p^{t}\) and \(p^{\varphi}\) for the Morris-Thorne wormhole are given by
\[\begin{split} p^{\varphi}&=\frac{\mu}{r\sqrt{r^{4} e^{\frac{2b_{0}}{r}}\mathcal{F}^{2}(\Omega,r;S)-1}},\\ \\ p^{t}&=-\frac{\mu r^{2}e^{\frac{2b_{0}}{r}}\mathcal{F} (\Omega,r;S)}{\sqrt{r^{4}e^{\frac{2b_{0}}{r}}\mathcal{F}^{2}(\Omega,r;S)-1}}, \end{split} \tag{24}\]
where we defined \(\mathcal{F}(\Omega,r;S)\) and \(\mathcal{A}(\Omega,r;S)\) as
\[\mathcal{F}(\Omega,r;S)=\frac{\mathcal{A}(\Omega,r;S)}{b_{0}\left(2b_{0}\mu+S \Omega e^{\frac{b_{0}}{r}}\sqrt{r\left(r-b_{0}\right)}-2\mu r\right)} \tag{25}\]
and
\[\mathcal{A}(\Omega,r;S)=\left(\frac{b_{0}Se^{-\frac{b_{0}}{r}}\sqrt{r-b_{0}}\left(-7b_{0}r+2b_{0}^{2}+4r^{2}\right)}{r^{9/2}}+2\mu\Omega\left(r-b_{0}\right)\right). \tag{26}\]
These expressions were obtained using Eqs. (19), (22), and (23) of Ref. [44].
Under the TD-SSC, the energy and the total angular momentum of the spinning test particle around the wormhole solution with \(\gamma=1\) are
\[\begin{cases}&E=-p_{t}+e^{-\frac{b_{0}}{r}}\frac{Sb_{0}}{\mu r^{2}}\sqrt{\frac{r-b_{0}}{r^{3}}}p_{\varphi},\\ &J_{z}=p_{\varphi}-e^{-\frac{b_{0}}{r}}\frac{S}{\mu}\sqrt{\frac{r-b_{0}}{r}}p_{t}.\end{cases} \tag{27}\]
To obtain the equatorial circular orbits and the ISCO, we use the effective potential \(V_{\rm eff}^{\rm TD}\). In the TD-SSC, the effective potential is defined from the following relation [45; 46; 41]
\[(p_{r})^{2}\propto(E-V_{+})(E-V_{-}), \tag{28}\]
where \(V_{\pm}\) is the root of \((p_{r})^{2}=0\). Since Eq. (28) is a quadratic equation in \(E\), \(V_{\pm}\) is given by
\[V_{\pm}=-\frac{aJ_{z}}{b}\pm\sqrt{\frac{a^{2}J_{z}^{2}}{b^{2}}+\frac{c-dJ_{z}^ {2}}{b}}, \tag{29}\]
where \(a\), \(b\), \(c\), and \(d\) depend on the metric components and their derivative with respect to the radial coordinate \(r\), see Eq. (35) of Ref. [45]. In the following, we shall focus on the case in which test particles have positive energy and therefore explore the effective potential given by \(V_{\rm eff}^{\rm TD}=V_{+}\). Hence, the ISCO can be obtained by solving (numerically) the system of non-linear equations
\[\frac{dV_{\rm eff}^{\rm TD}}{dr}=0\ \ \text{and}\ \ \frac{d^{2}V_{\rm eff}^{\rm TD }}{dr^{2}}=0 \tag{30}\]
for \(r\) and \(J_{z}\), for a given value of the particle's spin \(S\).
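The system (30) can be solved with a standard root finder; the sketch below uses central finite differences and treats the effective potential \(V_{\rm eff}^{\rm TD}\) of Eq. (29) as a user-supplied callable (the step size and solver choice are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import fsolve

def isco_from_potential(V_eff, r_guess, Jz_guess, S, h=1.0e-5):
    """Solve dV/dr = 0 and d2V/dr2 = 0 (Eq. (30)) for (r, Jz) at fixed
    spin S.  V_eff(r, Jz, S) stands for V_+ of Eq. (29)."""
    def dV(r, Jz):
        return (V_eff(r + h, Jz, S) - V_eff(r - h, Jz, S)) / (2.0 * h)

    def d2V(r, Jz):
        return (V_eff(r + h, Jz, S) - 2.0 * V_eff(r, Jz, S)
                + V_eff(r - h, Jz, S)) / h**2

    def system(x):
        r, Jz = x
        return [dV(r, Jz), d2V(r, Jz)]

    r_isco, Jz_isco = fsolve(system, x0=[r_guess, Jz_guess])
    return r_isco, Jz_isco
```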
### Mathisson-Pirani SSC
Equation (7) introduces the Mathisson-Pirani condition (MP-SSC) by selecting a future pointing time-like reference vector, which is precisely the four-velocity, \(V^{\mu}=u^{\mu}\). This implies that the spin is defined as spatial for an observer who is moving in the same direction as the particle's four-velocity [54].
Additionally, as shown in [22], the equation that provides the evolution of the four-velocity under the MP-SSC is
\[\frac{Du^{\mu}}{d\lambda}=-\frac{1}{S^{2}}\left(\frac{1}{2m}R_{\rho\nu\kappa \sigma}S^{\rho}u^{\nu}S^{\kappa\sigma}S^{\mu}+p_{\kappa}S^{\mu\kappa}\right). \tag{31}\]
Following the procedure carried out in [44], and assuming that the MP-SSC holds, we can determine the values of the four-momentum components by using the definition of the kinematical rest mass, \(m=-p^{\mu}u_{\mu}\), and Eqs. (15) and (19). In the case of the solution with \(\gamma=1\), this results in
\[p^{t}=\frac{e^{\frac{b_{0}}{r}}\left[-mr^{3/2}\left(e^{\frac{2b_{0}}{r}}r^{2}\Omega^{2}-1\right)+e^{\frac{3b_{0}}{r}}r^{3}S\Omega^{3}\sqrt{r-b_{0}}-e^{\frac{b_{0}}{r}}Sb_{0}\Omega\sqrt{r-b_{0}}\right]}{\left(r-e^{\frac{2b_{0}}{r}}r^{3}\Omega^{2}\right)^{3/2}}, \tag{32}\]
\[p^{\varphi}=\frac{e^{\frac{b_{0}}{r}}r^{3}\Omega\left[m\sqrt{r}\left(1-e^{ \frac{2b_{0}}{r}}r^{2}\Omega^{2}\right)+e^{\frac{b_{0}}{r}}S\Omega\sqrt{r-b_ {0}}\right]-Sb_{0}\sqrt{r-b_{0}}}{r^{7/2}\left(1-e^{\frac{2b_{0}}{r}}r^{2} \Omega^{2}\right)^{3/2}}, \tag{33}\]
where the contributions of \(p^{t}_{\rm hidden}\) and \(p^{\varphi}_{\rm hidden}\) have already been added. Then, if we replace the two previous expressions in Eq. (18), we can obtain the following quartic equation for the orbital frequency
\[\xi_{0}+\xi_{1}\Omega+\xi_{2}\Omega^{2}+\xi_{3}\Omega^{3}+\xi_{4}\Omega^{4}=0, \tag{34}\]
with
\[\begin{split}\xi_{4}=& 2e^{\frac{b_{0}}{r}}mr^{5},\\ \xi_{3}=&-e^{\frac{3b_{0}}{r}}S\sqrt{r\left(r-b_{0} \right)}\left(2r^{2}-7rb_{0}+2b_{0}^{2}\right),\\ \xi_{2}=&-2e^{\frac{3b_{0}}{r}}mr^{2}\left(r+b_{0} \right),\\ \xi_{1}=&-3e^{\frac{b_{0}}{r}}Sb_{0}\sqrt{1-\frac{b _{0}}{r}},\\ \xi_{0}=& 2mb_{0}.\end{split} \tag{35}\]
Out of the four roots of the orbital frequency polynomial, only two are physically meaningful. These correspond to the corotation frequency \(\Omega_{+}\) and the counterrotation frequency \(\Omega_{-}\). Fortunately, it is possible to obtain these roots for wormhole solutions with \(\gamma=1\) and \(\gamma=2\) analytically. Nevertheless, we do not present them here due to their extensive form.
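As an illustration, the quartic (34)-(35) can be solved numerically and the two physical branches selected as sketched below (the sign-based selection of roots is our own heuristic, chosen so that the Keplerian branch is recovered for \(S\to 0\)):

```python
import numpy as np

def mp_frequencies(r, b0=1.0, S=0.0, m=1.0):
    """Roots of the MP-SSC quartic (34) with coefficients (35) for the
    gamma = 1 solution; numpy.roots expects coefficients from the
    highest power of Omega down."""
    e = np.exp(b0 / r)
    xi4 = 2.0 * m * r**5 * e
    xi3 = -e**3 * S * np.sqrt(r * (r - b0)) * (2.0 * r**2 - 7.0 * r * b0 + 2.0 * b0**2)
    xi2 = -2.0 * e**3 * m * r**2 * (r + b0)
    xi1 = -3.0 * e * S * b0 * np.sqrt(1.0 - b0 / r)
    xi0 = 2.0 * m * b0
    roots = np.roots([xi4, xi3, xi2, xi1, xi0])
    real = np.sort(roots[np.abs(roots.imag) < 1e-10].real)
    # corotating branch: smallest positive real root (Keplerian-like);
    # counter-rotating branch: negative real root closest to zero
    omega_plus = real[real > 0][0] if np.any(real > 0) else np.nan
    omega_minus = real[real < 0][-1] if np.any(real < 0) else np.nan
    return omega_plus, omega_minus
```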
In Appendix VIII, we show the analogous expressions of \(p^{t}\), \(p^{\varphi}\) and the polynomial coefficients of the orbital frequency for the case of the wormhole solution with \(\gamma=2\).
On the other hand, under the MP-SSC, the energy and z component of the total angular momentum of the spinning test particle around the wormhole solution with \(\gamma=1\) are
\[\begin{cases}&E=-p_{t}+e^{-\frac{b_{0}}{r}}\frac{Sb_{0}}{r^{2}}\sqrt{\frac{r-b _{0}}{r}}u_{\varphi},\\ &J_{z}=p_{\varphi}-e^{-\frac{b_{0}}{r}}S\sqrt{\frac{r-b_{0}}{r}}u_{t},\end{cases} \tag{36}\]
where \(p_{t}=g_{tt}p^{t}\) and \(p_{\varphi}=g_{\varphi\varphi}p^{\varphi}\) can be calculated using Eqs. (32) and (33), respectively. To obtain equatorial circular orbits and determine the ISCO properties, we employ the treatment presented in [39]. This treatment, when applied to the MP-SSC, involves the use of three effective potentials, in contrast to the TD-SSC where only one is needed. The first potential is derived from the radial component of Eq. (31), which yields \(\frac{du^{r}}{d\lambda}=-\frac{V_{\rm eff}^{\rm MP1}}{2Sg_{rr}\sqrt{-g}}\) where
\[V_{\rm eff}^{\rm MP1}:= 2g_{rr}\sqrt{g_{\theta\theta}}\left(g_{\varphi\varphi}u^{\varphi }p_{t}-g_{tt}u^{t}p_{\varphi}\right) \tag{37}\] \[-S\sqrt{-g}\left[\frac{\partial g_{tt}}{\partial r}\left(u^{t} \right)^{2}+\frac{\partial g_{\varphi\varphi}}{\partial r}(u^{\varphi})^{2} \right]. \tag{38}\]
By rewriting the kinematical rest mass definition, the second potential can be obtained as
\[V_{\rm eff}^{\rm MP2}:=u^{r}p_{r}=-(m+u^{t}p_{t}+u^{\varphi}p_{\varphi}). \tag{39}\]
In the last expressions for the potential, \(p_{t}\) and \(p_{\varphi}\) are replaced using Eqs. (32) and (33) to get \(V_{\rm eff}=V_{\rm eff}(r,E,J_{z},S,u_{t},u_{\varphi})\).
Finally, we can determine the third potential by utilizing the normalization condition of the reference vector with the MP-SSC, which matches the four-velocity normalization, \(u_{\alpha}u^{\alpha}=-1\). From here we have that \(u^{r}=\pm\sqrt{\frac{V_{\rm eff}^{\rm MP3}}{g_{rr}}}\) where
\[V_{\rm eff}^{\rm MP3}:=-\left[g_{tt}\left(u^{t}\right)^{2}+g_{\varphi\varphi} \left(u^{\varphi}\right)^{2}+1\right]. \tag{40}\]
Hence, to determine the ISCO properties, we solve a system of nine equations derived from the three previous potentials equaled to zero, along with their first and second derivatives with respect to \(r\) also equaled to zero. Then, for a given spin \(S\), this calculation yields the values of nine unknown variables at the ISCO, namely \(r\), \(E\), \(J_{z}\), \(u_{t}\), \(u_{t}^{\prime}\), \(u_{t}^{\prime\prime}\), \(u_{\varphi}\), \(u_{\varphi}^{\prime}\) and \(u_{\varphi}^{\prime\prime}\), where prime denotes the derivation with respect to \(r\).
### Ohashi-Kyrian-Semerak SSC
The Ohashi-Kyrian-Semerak condition (OKS-SSC) is introduced through Eq. (7) by choosing a future pointing time-like reference vector, \(V^{\mu}=w^{\mu}\), satisfying the conditions [21, 22, 23, 54]
\[w_{\mu}w^{\mu}= -1 \tag{41}\] \[\frac{Dw^{\mu}}{d\lambda}= 0. \tag{42}\]
Due to these assumptions, it is straightforward to show that the hidden momentum vanishes so that the momentum and the velocity are proportional [39],
\[p^{\mu}=mu^{\mu}, \tag{43}\]
which implies that \(\mu=m\) is a constant of motion as well as the spin angular momentum, \(S\). The OKS-SSC also implies that the spin tensor Eq. (2) reduces to
\[\frac{DS^{\mu\nu}}{d\lambda}=0. \tag{44}\]
Since the components of the reference vector are not completely constrained by the definition given in (41) and (42), we can choose \(w^{r}=w^{\theta}=0\); a natural choice for equatorial circular orbits. Hence, from the normalization condition (41) it is possible to relate the non-zero components of the reference vector as
\[w^{t}= \sqrt{-\frac{1+g_{\varphi\varphi}(w^{\varphi})^{2}}{g_{tt}}}, \tag{45}\]
while the second MPD Eq. (19) gives the component \(w^{\varphi}\) in terms of the orbital frequency of the test particle, \(\Omega\), as [39]
\[w^{\varphi}= \pm\sqrt{-\frac{g_{tt}(\Gamma_{tr}^{t}+\Omega\Gamma_{\varphi r}^{t}) ^{2}}{g_{\varphi\varphi}^{2}(\Gamma_{tr}^{\varphi}+\Omega\Gamma_{\varphi r}^{ \varphi})^{2}+g_{tt}g_{\varphi\varphi}(\Gamma_{tr}^{t}+\Omega\Gamma_{\varphi r }^{t})^{2}}}. \tag{46}\]
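Given the metric and Christoffel components, Eqs. (45)-(46) can be evaluated directly; a small sketch (positive branch of Eq. (46); argument names are ours) reads:

```python
import numpy as np

def oks_reference_vector(Omega, g_tt, g_phph,
                         Gam_t_tr, Gam_t_phr, Gam_ph_tr, Gam_ph_phr):
    """Non-zero components (w^t, w^phi) of the OKS reference vector from
    Eqs. (45)-(46), given the orbital frequency, the metric components
    g_tt, g_phiphi and the Christoffel symbols Gamma^t_{tr}, Gamma^t_{phi r},
    Gamma^phi_{tr}, Gamma^phi_{phi r} evaluated on the orbit."""
    num = -g_tt * (Gam_t_tr + Omega * Gam_t_phr)**2
    den = g_phph**2 * (Gam_ph_tr + Omega * Gam_ph_phr)**2 \
          + g_tt * g_phph * (Gam_t_tr + Omega * Gam_t_phr)**2
    w_ph = np.sqrt(num / den)                       # positive branch of Eq. (46)
    w_t = np.sqrt(-(1.0 + g_phph * w_ph**2) / g_tt)  # normalization, Eq. (45)
    return w_t, w_ph
```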
In Appendix IX, we obtain a general expression for the 6th order polynomial (80) which gives the orbital frequency under the OKS-SSC. Once the geometric expressions, such as connections and Riemann tensor, for the metric (8) are replaced, we obtain a very long expression that is not highly elucidating and therefore, it will not be provided in this paper (but is available in the supplementary material). Finally, the energy and the angular momentum for the spinning test particle under the OKS-SSC are given, in terms of the reference vector components (45) and (46), as
\[\begin{cases}&E=-p_{t}+e^{-\frac{b_{0}}{r}}\frac{Sb_{0}}{r^{2}}\sqrt{\frac{r- b_{0}}{r}}w_{\varphi},\\ &J_{z}=p_{\varphi}-e^{-\frac{b_{0}}{r}}S\sqrt{\frac{r-b_{0}}{r}}w_{t},\end{cases} \tag{47}\]
In order to describe equatorial circular orbits and in particular to obtain the ISCO, again, we follow the treatment presented in [39], which in this case is also based on the use of three effective potentials (similarly to the MP-SSC). The first potential is obtained from the normalization of the momentum, which gives \(p_{r}=\pm\sqrt{\frac{V_{\rm eff}^{\rm OKS1}}{g^{rr}}}\) where
\[V_{\rm eff}^{\rm OKS1}:=-\left(m^{2}+g^{tt}p_{t}^{2}+g^{\varphi\varphi}p_{ \varphi}^{2}\right)=0 \tag{48}\]
and we demand to be zero in order to represent circular trajectories.
The second potential arises from the normalization condition for the reference vector of Eq. (41), establishing
\[V_{\rm eff}^{\rm OKS2}:=1+g^{tt}w_{t}^{2}+g^{\varphi\varphi}w_{\varphi}^{2}=0. \tag{49}\]
The third potential is obtained from the radial component of the Eq. (42),
\[\frac{dw_{r}}{d\lambda}=-\frac{V_{\rm eff}^{\rm OKS3}}{2mg_{tt}^{2}g_{\varphi \varphi}^{2}}=0, \tag{50}\]
where
\[V_{\rm eff}^{\rm OKS3}:=w_{t}g_{\varphi\varphi}^{2}p_{t}\partial_{r}g_{tt}+w_ {\varphi}g_{tt}^{2}p_{\varphi}\partial_{r}g_{\varphi\varphi}=0. \tag{51}\]
The ISCO properties are obtained by solving the set of nine equations given by the three potentials and its first and second derivatives with respect to \(r\) equal to zero. This system will give the nine unknown variables \(r\), \(E\), \(J_{z}\), \(w_{t}\), \(w_{t}^{\prime}\), \(w_{t}^{\prime\prime}\), \(w_{\varphi}\), \(w_{\varphi}^{\prime}\) and \(w_{\varphi}^{\prime\prime}\) at the ISCO.
## IV Orbital parameters using the spin supplementary conditions
### Orbital Frequency
As presented above, each SSC gives a polynomial for the orbital frequency. For the TD-SSC, the 2nd-degree polynomial is given by Eq. (21); for the MP-SSC, we obtain the 4th-degree polynomial (34); and in the case of the OKS-SSC, Eq. (80) is a 6th-degree polynomial. Following the method presented in [44], we introduce the dimensionless quantities \(\bar{r}=\frac{r}{b_{0}}\), \(\hat{\Omega}=b_{0}\Omega\) and \(\sigma=\frac{S}{mb_{0}}\) (for the MP-SSC) or \(\sigma=\frac{S}{\mu b_{0}}\) (for the TD-SSC and the OKS-SSC) in these equations. Then, we expand the resulting polynomials in powers of the dimensionless spin \(\sigma\) by introducing \(\hat{\Omega}=e^{-\frac{1}{\bar{r}}}\sum_{n=0}^{3}\hat{\Omega}_{n}\sigma^{n}+\mathcal{O}(\sigma^{4})\) (we only consider terms up to the 3rd order in \(\sigma\) because at that order the differences between the results arising from the three SSCs appear).
It is clear that, depending on the order of the polynomial, we obtain many roots and we need to choose between them which one is physically relevant. In particular, the selection criterion for the order \(\hat{\Omega}_{0}\) will be that the Keplerian frequency is recovered for vanishing spin and for the first order of approximation in the exponential terms (involving the approximation \(r\gg b_{0}\)).
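The order-by-order procedure can be automated symbolically; the sketch below is generic and assumes the caller supplies the dimensionless frequency polynomial and the chosen zeroth-order (Keplerian) branch:

```python
import sympy as sp

def expand_frequency(P, Omega, sigma, Omega0, order=3):
    """Substitute Omega = Omega0 + Omega_1 sigma + ... + Omega_order sigma^order
    into the frequency polynomial P(Omega, sigma) = 0 and solve order by order
    in sigma.  Omega0 is the zeroth-order branch chosen by the caller (it must
    satisfy P(Omega0, 0) = 0); the higher orders appear linearly and are
    determined uniquely."""
    coeffs = sp.symbols(f'Omega1:{order + 1}')
    ansatz = Omega0 + sum(c * sigma**(n + 1) for n, c in enumerate(coeffs))
    expanded = sp.expand(sp.series(P.subs(Omega, ansatz), sigma, 0, order + 1).removeO())
    solution = {}
    for n, c in enumerate(coeffs, start=1):
        eq = expanded.coeff(sigma, n).subs(solution)
        solution[c] = sp.simplify(sp.solve(eq, c)[0])
    return solution
```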
From the results compiled in Table 2 it is clear that the three SSCs produce equivalent results up to the linear order \(\mathcal{O}(\sigma^{1})\). At the order \(\mathcal{O}(\sigma^{2})\), TD- and MP-SSCs give the same contribution to the frequency but the OKS-SSC gives a different result, and when calculating the contribution of order \(\mathcal{O}(\sigma^{3})\) all three results differ.
### ISCO parameters
The ISCO radius, \(\bar{r}_{ISCO}\), is an essential parameter that characterizes the gravitational dynamics near compact objects like wormholes. This quantity plays a crucial role in various possible astrophysical phenomena associated with wormholes, such as the accretion of matter, gravitational lensing, gravitational radiation, and the formation of jets, among others. Thus, the study of \(\bar{r}_{ISCO}\) provides valuable insights into the properties of the wormhole. This is why it is chosen as the first parameter to compare the different SSCs.
Recall that to find \(\bar{r}_{ISCO}\) we have to use a different approach depending on each SSC. For the TD-SSC, we will use the effective potential of Eq. (29) together with the conditions of Eqs. (30). For the MP-SSC, the three potentials of the Eqs. (38), (39) and (40) are used, together with their first and second derivatives equal to zero. And finally, for the OKS-SSC, the three potentials of the Eqs. (48), (49) and (51) will be required, along with their first and second derivatives equal to zero too.
In Fig. 3, we show the ISCO radius \(\bar{r}_{ISCO}\) as a function of the particle's spin \(\sigma\) for each SSC and both wormhole solution cases \(\gamma=1,2\); as a first result, we can see that the value of \(\bar{r}_{ISCO}\) calculated with the three different SSCs behaves very similarly, especially in the vicinity of \(\sigma=0\). In all cases, it is possible to see clearly that \(\bar{r}_{ISCO}\) decreases as the value of spin \(\sigma\) increases. Moreover, although the figure shows that \(\bar{r}_{ISCO}\) behaves similarly for \(\gamma=1\) and \(\gamma=2\), the former generally has slightly larger values. On the other hand, for high values of the spin, note that the MP- and TD-SSCs behave similarly in contrast to the OKS-SSC. However, for the MP-SSC, the numerical computation of \(\bar{r}_{ISCO}\) fails with \(\sigma<-0.6\) and \(\sigma<-0.5\) for the two wormhole solutions, \(\gamma=1\) and \(\gamma=2\), respectively. Meanwhile, in the case of the OKS-SSC, the numerical computation of \(\bar{r}_{ISCO}\) fails when \(\sigma>0.2\) and \(\sigma>0.15\) for \(\gamma=1\) and \(\gamma=2\), respectively. The failures in the numerical calculations for the MP- and OKS-SSCs are similar to those obtained for the ISCO of spinning particles around Kerr black holes in [55]. Indeed, the reason for the different behavior among the SSCs lies in the fact that each condition represents a
different reference point for the position of the centroid. These differences stem from attempting to describe extended bodies using only their first two multipoles. However, it is important to remark that extended bodies have an infinite number of multipoles that are neglected intentionally in the pole-dipole approximation used to obtain the MPD equations [55].
A second way to compare the different SSCs is by using a procedure similar to the one carried out in [55]. There, the authors argue that to provide a full gauge invariant discussion one can use the following ISCO orbital parameter
\[x_{ISCO}\equiv(b_{0}\hat{\Omega}_{ISCO})^{2/3}. \tag{52}\]
Then, the relative difference of the ISCO frequency parameters given by the MP- or OKS-SSCs with respect to the ISCO frequency parameters given by the TD-SSC is defined by
\[\Delta x_{ISCO}=\frac{\left|x_{ISCO}^{SSC}-x_{ISCO}^{TD}\right|}{x_{ISCO}^{TD}}, \tag{53}\]
where \(x_{ISCO}^{SSC}\) can correspond to the ISCO frequency parameter given by the MP- or OKS-SSCs.
In Fig. 4, top panels, we show the ISCO orbital frequency parameter \(x_{ISCO}\) for each SSC. In the lower panels, the figure also shows the relative differences \(\Delta x_{ISCO}\) of MP- and OKS-SSCs with respect to the TD-SSC. The results are plotted as a function of the particle's spin \(\sigma\) and for both wormhole solution cases \(\gamma=1,2\). Since \(x_{ISCO}=x(\bar{r}_{ISCO})\), the behavior of the orbital frequency parameter is similar to that of \(\bar{r}_{ISCO}\) for each SSC. Therefore, the numerical computations for finding \(x_{ISCO}\) have the same validity regions as those for \(\bar{r}_{ISCO}\). Additionally, we observe that the parameter \(x_{ISCO}\) increases until reaching a maximum point, especially for TD- and MP-SSCs, which coincides with the location of the inflection points found in \(\bar{r}_{ISCO}\). This maximum point is greater for the TD-SSC. Notably, the TD- and OKS-SSCs exhibit a divergence at \(\sigma\approx 0.15\) and \(\sigma\approx 1\), respectively, in the case of the solution with \(\gamma=2\). When examining the relative differences near \(\sigma=0\), the differences between the MP- and TD-SSCs are smaller than those between the OKS- and TD-SSCs. In any case, the differences between the SSCs become more significant in the regions close to the inflection points of \(\bar{r}_{ISCO}\).
Each SSC is defined using a different reference centroid. Consequently, to obtain equivalent results from the different SSCs, it is necessary to apply a correction to the centroid. Nevertheless, although centroid corrections give account for the differences between SSCs, it is worth noting that corrections could deviate a circular trajectory from circularity; this is because both the spin and the position of the centroid would undergo changes, and the worldline that was previously on the ISCO for one centroid may not remain on the ISCO for another centroid [55]. Therefore, when approximating the results of each SSC, it's crucial to apply a correction to both the spin and the position of the centroid, as long as the pole-dipole approximation allows it [44].
\begin{table}
\begin{tabular}{||c c c c||} \hline \(\hat{\Omega}_{n}\) & TD & MP & OKS \\ \hline \hline \(\mathcal{O}\left(\sigma^{0}\right)\) & \(\frac{1}{r^{3/2}}\) & \(\frac{1}{r^{3/2}}\) & \(\frac{1}{r^{3/2}}\) \\ \hline \(\mathcal{O}\left(\sigma^{1}\right)\) & \(-\frac{(5r-2)\sqrt{r-1}}{4\sqrt{r^{9}}}\) & \(-\frac{(5r-2)\sqrt{r-1}}{4\sqrt{r^{9}}}\) & \(-\frac{(5r-2)\sqrt{r-1}}{4\sqrt{r^{9}}}\) \\ \hline \(\mathcal{O}\left(\sigma^{2}\right)\) & \(\frac{(r-1)(r(65r-36)+4)}{32r^{15/2}}\) & \(\frac{(r-1)(r(65r-36)+4)}{32r^{15/2}}\) & \(\frac{(5r-2)\left(11r^{2}-r-2\right)}{32r^{15/2}}\) \\ \hline \(\mathcal{O}\left(\sigma^{3}\right)\) & \(-\frac{\sqrt{r-1}(5r-2)}{8\sqrt{r^{15}}}\) & \(-\frac{\sqrt{r-1}(5r-2)(7r-4)}{8\sqrt{r^{15}}}\) & \(-\frac{(5r-2)\left(32r^{3}-17r^{2}-8r+4\right)}{32\sqrt{r^{15}}\sqrt{r-1}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Orbital frequency orders in the expansion in powers of \(\sigma\) for arbitrary circular equatorial orbits for the case \(\gamma=1\)
Table 2: Orbital frequency orders in the expansion in powers of \(\sigma\) for arbitrary circular equatorial orbits for the case \(\gamma=2\)
## V Centroids' corrections
The differences in the orbital frequency obtained using the three spin supplementary conditions, shown in Tables 1 and 2, are usually explained by noting that each SSC defines a particular centroid [47, 55] and therefore, all the moments are evaluated correspondingly. We will assume that the choice of the SSC corresponds to a centroid's correction in the form \(z^{\mu}\rightarrow\tilde{z}^{\mu}=z^{\mu}+\delta z^{\mu}\). In order to compare the results obtained using the three SSCs presented in this paper, we will consider the TD-SSC as a reference because the corresponding centroid is uniquely determined. Hence, the quantities under the TD-SSC will be denoted by a tilde, \(\tilde{\cdot}\).
Although the momentum components do not depend on the choice of the centroid, the spin tensor does change
Figure 4: On top panels, we present the ISCO orbital frequency parameter \(x_{ISCO}\) for each SSC as reference. While in the lower panels, we show the relative differences \(\Delta x_{ISCO}\) of MP and OKS SSCs with respect to TD SSC, using Eq. (53). These quantities are given as a function of spin \(\sigma\) and for both cases \(\gamma=1,2\).
Figure 3: ISCO radius for different values of spin \(\sigma\) for both cases \(\gamma=1,2\).
according to
\[S^{\mu\nu}\rightarrow\tilde{S}^{\mu\nu}=S^{\mu\nu}+p^{\mu}\delta z^{\nu}-p^{\nu} \delta z^{\mu}. \tag{54}\]
Hence, following the discussion in Appendix B of [44], we will impose the constraint \(V_{\alpha}\delta z^{\alpha}=0\), which is equivalent to the relation
\[\tilde{p}_{\alpha}\delta z^{\alpha}=0 \tag{55}\]
together with the condition that the centroid cannot have non-radial shifts between the TD- and the MP-/OKS-SSCs. This implies that the change in the centroid will
Figure 5: ISCO radius for each SSC compared to those corrected by \(\tilde{r}\neq\bar{r}\), for different values of spin \(\sigma\) and for both cases \(\gamma=1,2\).
Figure 6: On top panels, we present the ISCO orbital frequency parameter \(x_{ISCO}\) for each SSC compared to those corrected by \(\tilde{r}\neq\bar{r}\). While in the lower panels, we show the relative differences \(\Delta x_{ISCO}\) of the MP and OKS SSCs with respect to the TD-MP and TD-OKS SSCs, using Eqs. (57) and (58), respectively. These quantities are given as a function of spin \(\sigma\) and for both cases \(\gamma=1,2\).
be
\[\delta z^{\alpha}=\delta r=\frac{\tilde{p}_{\mu}S^{\mu r}}{\tilde{\mu}^{2}}, \tag{56}\]
where \(\tilde{\mu}^{2}=-\tilde{g}_{\alpha\beta}p^{\alpha}p^{\beta}\). Furthermore, in the previous section, we analyzed the relative differences of the ISCO frequency parameters \(\Delta x_{ISCO}\) for the MP- and OKS-SSCs with respect to the TD-SSC, as done in [55]. Meanwhile, in this section, to compare the different centroid corrections, we will not calculate the relative difference, \(\Delta x_{ISCO}\), with respect to the TD-SSC but in terms of each SSC correction. In this sense, to relate the ISCO frequency parameters \(x_{ISCO}\) given by the MP-SSC centroid correction applied to the TD-SSC (denoted as TD-MP) with \(x_{ISCO}\) given by the MP-SSC without correction, we introduce the relative difference:
\[\Delta x_{ISCO}=\frac{\left|x_{ISCO}^{MP}-x_{ISCO}^{TD-MP}\right|}{x_{ISCO}^{ TD-MP}}. \tag{57}\]
Similarly, the relative differences given by the OKS-SSC without any correction with respect to the OKS-SSC centroid correction applied to the TD-SSC (denoted as TD-OKS) are
\[\Delta x_{ISCO}=\frac{\left|x_{ISCO}^{OKS}-x_{ISCO}^{TD-OKS}\right|}{x_{ISCO}^ {TD-OKS}}. \tag{58}\]
Additionally, we need to keep in mind that in the MP-SSC, the dimensionless spin is expressed as \(\sigma=\frac{S}{mb_{0}}\), while in the TD-SSC, it takes the form of \(\sigma=\frac{S}{\mu b_{0}}\). To obtain the dimensionless spin \(\tilde{\sigma}\) measured in the TD reference frame in terms of \(\sigma\) measured in the MP frame, we need to use the relationship between \(m\) and \(\mu\), which is given by [35, 44]
\[\mu^{2}=m^{2}+\frac{S^{\alpha\lambda}S_{\lambda\beta}p^{\beta}p_{\alpha}}{S^{ 2}}. \tag{59}\]
The last expression is very useful for our numerical analysis of the centroid's corrections to the radius and the orbital frequency of the ISCO orbit that we will present below.
### Corrections to the position of the centroid
In this subsection, we will consider the corrections to the reference position of the centroid applied to the radius of the ISCO (\(\bar{r}_{ISCO}\)) and the corresponding orbital frequency parameter \(x_{ISCO}\). To begin with, restricting to a linear radial correction of the centroid, the ISCO radius will change as
\[\tilde{r}_{ISCO}=\bar{r}_{ISCO}+\delta\bar{r}_{ISCO}, \tag{60}\]
where Eqs. (15) give the quantity
\[\delta r= \frac{p_{t}S^{tr}+p_{\varphi}S^{\varphi r}}{\mu^{2}} \tag{61}\] \[= -\frac{S}{\mu^{2}}\sqrt{-\frac{g_{\theta\theta}}{g}}\left[p_{t}V _{\varphi}-p_{\varphi}V_{t}\right]. \tag{62}\]
The evaluation of this correction for the ISCO depends on the SSC. In the case of the MP-SSC, for example, we use the velocity as the reference vector and Eqs. (32) and (33) for the momentum. Meanwhile, for the OKS-SSC we use the reference vector given in Eqs. (45) and (46).
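For completeness, Eq. (62) translates directly into a small routine (argument names are ours; under the MP-SSC the reference vector is \(u^{\mu}\), under the OKS-SSC it is \(w^{\mu}\)):

```python
import numpy as np

def radial_centroid_shift(S, mu, g_tt, g_rr, g_thth, g_phph, p_t, p_ph, V_t, V_ph):
    """Radial shift of the centroid, Eq. (62):
    delta_r = -(S/mu^2) sqrt(-g_thth/g) [p_t V_phi - p_phi V_t],
    with g the determinant of the metric of Eq. (10)."""
    g = g_tt * g_rr * g_thth * g_phph
    return -(S / mu**2) * np.sqrt(-g_thth / g) * (p_t * V_ph - p_ph * V_t)
```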
Figure 5 shows the ISCO radius \(\bar{r}_{ISCO}\) calculated for each SSC and \(\tilde{r}_{ISCO}\) derived by making the centroid correction due to the radial shift induced by the MP- and the OKS-SSCs on the TD-SSC. Meanwhile, in Fig. 6, we show the ISCO orbital frequency parameter \(x_{ISCO}\) for each SSC compared to those corrected by \(\tilde{r}\neq\bar{r}\), and the relative differences \(\Delta x_{ISCO}\) of the MP- and OKS-SSCs with respect to the TD-MP- and TD-OKS-SSCs, using Eqs. (57) and (58), respectively. The plots show that the first-order corrections to the ISCO parameters, \(\bar{r}\) and \(x_{ISCO}\), based on \(\tilde{r}_{ISCO}\), effectively bring the TD-SSC results closer to the behavior of the other SSCs. Comparing the relative differences \(\Delta x_{ISCO}\) obtained in the previous section with those obtained by applying the correction for the radial position of the centroid confirms that the results of the TD-MP and TD-OKS-SSCs exhibit behavior closer to that of the MP- and OKS-SSCs, in contrast to the TD-SSC. However, it is important to note that this behavior does not occur for all spin values. For instance, the correction represented by the TD-OKS-SSC for values of \(\sigma<0\) exhibits a behavior that differs significantly from the OKS-SSC, compared to that obtained with the TD-SSC, which already behaves more similarly to the OKS-SSC in any case. Furthermore, the correction provided by \(\tilde{r}\neq\bar{r}\) enables the calculation of ISCO parameters for values of \(\sigma\) that were not previously accessible in the numerical calculations using the MP- and OKS-SSCs. For example, in the TD-OKS-SSC, \(\bar{r}_{ISCO}\) and \(x_{ISCO}\) could be computed for values \(\sigma>0.2\) and \(\sigma>0.15\) for solutions with \(\gamma=1\) and \(\gamma=2\), respectively, where the computational routine had previously failed.
We attempted to calculate the corrections to second order in \(\delta r\) due to the radial shift of the position of the centroid. However, we did not achieve satisfactory results and opted not to present them, mainly because the second-order corrections were further away from the first-order corrections in \(\delta r\); this conclusion is consistent with similar studies on the Schwarzschild and Kerr black holes [44, 47], which noted that higher-order corrections did not improve the approximation between the behavior of the different SSCs, since the pole-dipole approximation becomes invalid and it is necessary to consider multipoles of higher order in the calculations.
### Corrections to the Spin
Equation (54) describes how the spin tensor changes when the centroid is measured relative to another four-vector. Because \(\tilde{S}\neq S\), it is possible to derive an equation that relates the change in the measured spin value to the radial shift. We obtain this expression by applying Eq. (54) to the non-vanishing components of the spin tensor and assuming that both centroids move on circular equatorial orbits, as described in [44]. Then, if we expand the definition of the spin angular momentum of Eq. 14 in terms of the radial shift, this yields [44]
\[\tilde{S}^{2}= S^{2}+\delta r\left\{g_{rr}\Big{[}\partial_{r}g_{\varphi\varphi} \left(S^{r\varphi}\right)^{2}+\partial_{r}g_{tt}\left(S^{tr}\right)^{2}\right.\] \[\left.+2\left(p_{t}S^{tr}-p_{\varphi}S^{r\varphi}\right)\Big{]}+ \frac{S^{2}\partial_{r}g_{rr}}{g_{rr}}\right\}+\mathcal{O}\left(\delta r^{2} \right). \tag{63}\]
Next, depending on the SSC, we need to use the expressions for the spin tensor and the radial displacement.
#### iv.2.1 Spin Corrections for the TD-MP SSC
To determine the dimensionless spin \(\tilde{\sigma}\) for the spin transition from the MP- to the TD-SSC, we must divide both sides of Eq. (63) by \(\tilde{\mu}^{2}b_{0}^{2}\). Expanding \(1/\tilde{\mu}^{2}b_{0}^{2}\) to linear order in \(\delta r\), and then replacing \(\mu\) in terms of \(m\) using Eq. (59), one obtains [44]
\[\tilde{\sigma}^{2}= \frac{\sigma^{2}}{\sigma^{2}-g_{rr}\left(p_{t}\sigma^{tr}-p_{\varphi}\sigma^{r\varphi}\right)^{2}/m^{2}}\left\{\sigma^{2}+\delta r\left\{\frac{\sigma^{4}}{\sigma^{2}-g_{rr}\left(p_{t}\sigma^{tr}-p_{\varphi}\sigma^{r\varphi}\right)^{2}/m^{2}}\left[\partial_{r}g_{tt}\left(\frac{p_{t}}{mg_{tt}}\right)^{2}+\partial_{r}g_{\varphi\varphi}\left(\frac{p_{\varphi}}{mg_{\varphi\varphi}}\right)^{2}\right]\right.\right.\] \[\left.\left.+g_{rr}\left[\partial_{r}g_{\varphi\varphi}\left(\sigma^{r\varphi}\right)^{2}+\partial_{r}g_{tt}\left(\sigma^{tr}\right)^{2}+\frac{2}{mb_{0}}\left(p_{t}\sigma^{tr}-p_{\varphi}\sigma^{r\varphi}\right)\right]+\frac{\sigma^{2}\partial_{r}g_{rr}}{g_{rr}}\right\}\right\}, \tag{64}\]
where
\[\sigma^{\kappa\nu}=\frac{S^{\kappa\nu}}{mb_{0}} \tag{65}\]
is the normalized spin tensor. Applying a power series expansion in \(\sigma\), the previous expression for the wormhole with \(\gamma=1\) reduces to
\[\tilde{\sigma}=\sigma+\frac{\left(5\bar{r}-2\right)\left(2\bar{r}^{2}-\bar{r }-2\right)\sigma^{4}}{4\sqrt{\bar{r}-1}\bar{r}^{7}}+\mathcal{O}\left(\sigma^{ 5}\right) \tag{66}\]
In the case of the wormhole solution with \(\gamma=2\), the power series expansion of the spin correction yields
\[\tilde{\sigma}=\sigma+\frac{\left(1-7\bar{r}^{2}-3\bar{r}^{3}+4\bar{r}^{4}+2 \bar{r}^{5}\right)\sigma^{4}}{\bar{r}^{17/2}\sqrt{\bar{r}^{2}-1}}+\mathcal{O} \left(\sigma^{5}\right) \tag{67}\]
Hence, we will utilize the spin correction provided by \(\tilde{\sigma}\neq\sigma\), as outlined in the previous two expressions, to recalculate the ISCO parameters; before doing so, however, let us first consider the form of the spin correction for the TD-OKS SSC.
#### iv.2.2 Spin Corrections for the TD-OKS SSC
Because of the relation in Eq. (43), we have \(m=\mu\) in the OKS-SSC case, and using this result in Eq. (63), it is possible to write the following equation in terms of the velocity,
\[\tilde{\sigma}^{2}=\sigma^{2}+\delta r\Bigg{\{}g_{rr}\left[( \sigma^{r\varphi})^{2}\partial_{r}g_{\varphi\varphi}+(\sigma^{tr})^{2} \partial_{r}g_{tt}+\frac{2}{b_{0}}\left(u_{t}\sigma^{tr}-u_{\varphi}\sigma^{r \varphi}\right)\right]\] \[+\sigma^{2}\left[\frac{\partial_{r}g_{rr}}{g_{rr}}+\left(\frac{u _{t}}{g_{tt}}\right)^{2}\partial_{r}g_{tt}+\left(\frac{u_{\varphi}}{g_{\varphi \varphi}}\right)^{2}\partial_{r}g_{\varphi\varphi}\right]\Bigg{\}} \tag{68}\]
where the normalized spin tensor is given again by Eq. (65). Using Eqs. (11), (45) and (46), we obtain for the wormhole spacetime with \(\gamma=1\) the following correction to the normalized spin
\[\tilde{\sigma}=\sigma+\frac{\left(5\bar{r}-2\right)\left(\bar{r}^{2}+2\bar{r}-2 \right)}{2\left(\bar{r}-1\right)\bar{r}^{5}}\sigma^{3}+\mathcal{O}\left(\sigma ^{4}\right). \tag{69}\]
Similarly, the spin correction for the wormhole with \(\gamma=2\) is given by the expression
Figure 8: On top panels, we present the ISCO orbital frequency parameter \(x_{ISCO}\) for each SSC compared to those corrected by taking \(\tilde{\sigma}\neq\sigma\). While in the lower panels, we show the relative differences \(\Delta x_{ISCO}\) of the MP and OKS-SSCs with respect to the TD-MP and TD-OKS-SSCs corrections, using the Eqs. (57) and (58), respectively. These quantities are given as a function of spin \(\sigma\) and for both cases \(\gamma=1,2\).
Figure 7: ISCO radius for each SSC compared to those corrected by taking \(\tilde{\sigma}\neq\sigma\), for different values of spin \(\sigma\) and for both cases \(\gamma=1,2\).
\[\tilde{\sigma}=\sigma+\frac{(5\bar{r}-2)}{2\left(\bar{r}-1\right)\bar{r}^{5}} \left[\frac{(5\bar{r}-2)}{2}+\frac{\left(\bar{r}\left(\bar{r}^{2}+\bar{r}-2 \right)-1\right)}{\bar{r}^{1/2}\sqrt{\bar{r}+1}}\right]\sigma^{3}+\mathcal{O} \left(\sigma^{4}\right). \tag{70}\]
We will now recalculate the ISCO parameters by applying the spin corrections provided by Eqs. (66) and (67) for the TD-MP-SSC, and Eqs. (69) and (70) for the TD-OKS-SSC, for the cases \(\gamma=1\) and \(\gamma=2\), respectively.
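These four truncated series translate directly into the following helper functions (a direct transcription of Eqs. (66), (67), (69) and (70); function names are ours):

```python
import numpy as np

def sigma_td_mp(sigma, rbar, gamma=1):
    """Leading spin correction sigma -> sigma_tilde for the TD-MP case,
    Eq. (66) for gamma = 1 and Eq. (67) for gamma = 2."""
    if gamma == 1:
        return sigma + (5*rbar - 2)*(2*rbar**2 - rbar - 2)*sigma**4 \
               / (4*np.sqrt(rbar - 1)*rbar**7)
    return sigma + (1 - 7*rbar**2 - 3*rbar**3 + 4*rbar**4 + 2*rbar**5)*sigma**4 \
           / (rbar**8.5*np.sqrt(rbar**2 - 1))

def sigma_td_oks(sigma, rbar, gamma=1):
    """Leading spin correction sigma -> sigma_tilde for the TD-OKS case,
    Eq. (69) for gamma = 1 and Eq. (70) for gamma = 2."""
    if gamma == 1:
        return sigma + (5*rbar - 2)*(rbar**2 + 2*rbar - 2)*sigma**3 \
               / (2*(rbar - 1)*rbar**5)
    pref = (5*rbar - 2) / (2*(rbar - 1)*rbar**5)
    bracket = (5*rbar - 2)/2 + (rbar*(rbar**2 + rbar - 2) - 1) \
              / (np.sqrt(rbar)*np.sqrt(rbar + 1))
    return sigma + pref*bracket*sigma**3
```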
Figure 7 displays the calculated ISCO radius \(\bar{r}_{ISCO}\) for each SSC, as well as the \(\bar{r}_{ISCO}\) derived by utilizing the spin correction. On the other hand, Figure 8 presents the ISCO orbital frequency parameter \(x_{ISCO}\) for each SSC, as well as those corrected through the use of \(\tilde{\sigma}\neq\sigma\), and the relative differences \(\Delta x_{ISCO}\) between the MP- and OKS-SSCs with respect to the TD-MP and TD-OKS-SSCs corrections, as obtained through Eqs. (57) and (58), respectively. These plots demonstrate that the spin corrections to the ISCO parameters, \(\bar{r}_{ISCO}\) and \(x_{ISCO}\), based on \(\tilde{\sigma}_{ISCO}\), effectively bring the TD-SSC results closer to the behavior of the MP-SSC. In particular, comparing \(\Delta x_{ISCO}\) obtained in Fig. 4 with those obtained by taking \(\tilde{\sigma}\neq\sigma\) confirms that the results of TD-MP-SSC exhibit a behavior closer to that of the MP-SSCs, in contrast to the TD-SSC. However, the correction represented by TD-OKS-SSC does not exhibit such an improvement. In fact, it is possible to see more discrepancies than the case without the correction.
It is important to point out that the use of the spin correction once more allows for the calculation of the ISCO parameters for values of \(\sigma\) where the computational routine had previously failed, without the spin correction. Therefore, it is clear that the spin corrections for the TD-MP and TD-OKS-SSCs effectively eliminate the divergences observed in both TD-SSC and OKS-SSC before the implementation of these corrections. Furthermore, by comparing the results obtained through the spin correction with those achieved previously using the centroid position correction, we can identify situations where one correction produces better results over the other for specific spin values. For instance, in the case of the TD-MP-SSC spin correction, smaller \(\Delta x_{ISCO}\) values are obtained at \(\sigma=0.3\), compared to those obtained through the centroid position correction presented in Fig. 6. However, for \(\sigma=0.7\), the spin correction produces higher \(\Delta x_{ISCO}\) values than those obtained through the centroid position correction; this confirms that the two corrections approach the ISCO parameters of each SSC differently. Moreover, as discussed in [44], this is because combining the two corrections (simultaneously) does not necessarily improve the approximation between the results of each SSC. Therefore, we will not combine the two corrections simultaneously in this study either.
## VI Conclusion
In this work, we compare different SSCs for spinning test particles moving in equatorial circular orbits around a Morris-Thorne traversable wormhole. We consider two wormhole solutions, \(\gamma=1\) and \(\gamma=2\), and the best-known SSCs in the literature, i.e. the TD-, MP-, and OKS-SSCs.
We begin by investigating the influence of each SSC on the particle's orbital frequency, \(\hat{\Omega}\). To do so, we expand the orbital frequency in powers of the dimensionless particle's spin, \(\sigma\); we carried out the expansion up to third order, where differences among all the SSCs start to appear. As expected, our results show that the zeroth-order frequency is the same for all SSCs; this frequency corresponds to the well-known Keplerian frequency \(1/\bar{r}^{3/2}\), see Tables 1 and 2 for \(\gamma=1\) and \(\gamma=2\), respectively. Moreover, the equivalence in the orbital frequency extends to first order in both wormhole solutions; nevertheless, we start seeing some differences when considering the second order of approximation. For example, we found that the OKS-SSC begins to differ from the TD- and MP-SSCs, which still have the same behavior at this order. Then, when we considered the third order of approximation, we found that all the SSCs differ. The same conclusion appears for the Schwarzschild and Kerr spacetimes [44; 47]. Therefore, the fact that the TD-SSC is more compatible with the MP-SSC than the OKS-SSC also extends to the Morris-Thorne traversable wormhole.
In the case of the ISCO, we found that it decreases as the particle's spin increases; this is a general feature in all SSCs and for both wormhole solutions considered in this paper. Moreover, although \(r_{ISCO}\) has the same behavior in the vicinity of \(\sigma=0\), it is worth noticing that, in the OKS-SSC, the ISCO diverges at some particular value of \(\sigma\); for example, when \(\gamma=1\), the ISCO radius diverges at \(\sigma\approx 0.2\). A similar behavior occurs at \(\sigma\approx 0.15\) when \(\gamma=2\) (note that increasing \(\gamma\) shifts the limit value to the left). This behavior is mainly a consequence of choosing a specific SSC. Recall that, from the physical point of view, a different SSC corresponds to a different location of the particle's centroid.
On the other hand, according to the relative difference with respect to the TD-SSC, our results show that the MP-SSC behaves similarly to the TD-SSC when the particle's spin belongs to the interval \(-0.2<\sigma<0.4\) for the wormhole solution with \(\gamma=1\). This interval reduces when \(\gamma=2\); in that case, the MP-SSC behaves similarly to the TD-SSC if \(-0.2<\sigma<0.3\), where \(\Delta x_{ISCO}\ll 0.01\). In the case of the OKS-SSC with \(\gamma=1\), its behavior is similar to the TD-SSC in the interval \(-0.4<\sigma<0.0\) (with relative differences smaller than
0.005). When \(\gamma=2\), the OKS-SSC is similar to TD-SSC only in the region near \(\sigma=0\). Therefore, the wormhole parameter \(\gamma\) does influence the ISCO radius depending on the SSC. In particular, when spinning test particles move with \(\sigma>0\).
Through the particles' centroid corrections, it is possible to explain (up to some degree) the differences between each SSC. Therefore, to see how close each SSC is to the others, we investigate (separately) the radial and spin corrections taking the TD-SSC as a reference. Our results show that, in the case of the wormhole solution with \(\gamma=1\), the TD-SSC behaves very similarly to the MP-SSC after the radial correction; in this case, the values of \(\Delta x_{ISCO}\) are smaller over a longer interval, even for positive values of the particle's spin \(\sigma\). Nevertheless, the situation is different for the wormhole solution with \(\gamma=2\); although there is an improvement in the convergence, the correction of the TD-SSC towards the MP-SSC is not as good as for the wormhole solution with \(\gamma=1\). On the other hand, the radial correction of the TD-SSC towards the OKS-SSC shows an improvement in the interval \(-0.2<\sigma<0.2\) for both wormhole solutions in contrast to the TD-SSC without correction. Hence, the wormhole parameter \(\gamma\) influences the radial corrections.
As mentioned above, the ISCO radius diverges for some value of \(\sigma\) (depending on the value of the wormhole parameter \(\gamma\)) in the case of the OKS-SSC. However, when considering the spin correction in the TD-SSC, the divergence disappears independently of the wormhole solution. Similarly, the divergence of \(\Delta x_{ISCO}\) found for the solution with \(\gamma=2\) in the TD-SSC vanishes when one considers the spin corrections of the MP-SSC.
Finally, from the algorithm proposed in [44] to investigate the orbital frequencies in the three SSCs, it is clear that the differences are deeply connected with the radial correction given in Eq. (56), since the spin tensor, \(S^{\mu\nu}\), depends on the reference four-vector \(V^{\mu}\). In this sense, an equatorial circular orbit may degenerate into a non-circular one when changing from one SSC to another; this is why the SSCs become different at some order in the spin expansion (the third order for the Schwarzschild and Kerr black holes and for Morris-Thorne traversable wormholes). However, the fact that the ISCO is a special limit, and that additional improvements such as the spin corrections in Eqs. (64) and (68) were not enough to explain the differences at the ISCO, also suggests that the pole-dipole approximation no longer suffices in curved spacetime and it becomes necessary to include higher-order terms in the multipole expansion.
## Acknowledgements
This work was supported by the Universidad Nacional de Colombia, Hermes Grant Code 57057, and by the Research Incubator No. 64 on Computational Astrophysics of the Observatorio Astronomico Nacional. C.A.B.G. acknowledges the support of the Ministry of Science and Technology of China (grant No. 2020SKA0110201) and the National Science Foundation of China (grant No. 11835009).
|
2309.13777 | Diffeomorphic Multi-Resolution Deep Learning Registration for
Applications in Breast MRI | In breast surgical planning, accurate registration of MR images across
patient positions has the potential to improve the localisation of tumours
during breast cancer treatment. While learning-based registration methods have
recently become the state-of-the-art approach for most medical image
registration tasks, these methods have yet to make inroads into breast image
registration due to certain difficulties-the lack of rich texture information
in breast MR images and the need for the deformations to be diffeomorphic. In
this work, we propose learning strategies for breast MR image registration that
are amenable to diffeomorphic constraints, together with early experimental
results from in-silico and in-vivo experiments. One key contribution of this
work is a registration network which produces superior registration outcomes
for breast images in addition to providing diffeomorphic guarantees. | Matthew G. French, Gonzalo D. Maso Talou, Thiranja P. Babarenda Gamage, Martyn P. Nash, Poul M. Nielsen, Anthony J. Doyle, Juan Eugenio Iglesias, Yaël Balbastre, Sean I. Young | 2023-09-24T23:16:38Z | http://arxiv.org/abs/2309.13777v2 | # Diffeomorphic Multi-Resolution Deep Learning Registration for Applications in Breast MRI
###### Abstract
In breast surgical planning, accurate registration of MR images across patient positions has the potential to improve the localisation of tumours during breast cancer treatment. While learning-based registration methods have recently become the state-of-the-art approach for most medical image registration tasks, these methods have yet to make inroads into breast image registration due to certain difficulties--the lack of rich texture information in breast MR images and the need for the deformations to be diffeomorphic. In this work, we propose learning strategies for breast MR image registration that are amenable to diffeomorphic constraints, together with early experimental results from in-silico and in-vivo experiments. One key contribution of this work is a registration network which produces superior registration outcomes for breast images in addition to providing diffeomorphic guarantees.
## 1 Introduction
Globally, breast cancer is the most diagnosed cancer for women, accounting for 11.7% of cancer incidence [25]. Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is commonly used for detecting breast cancer in women with a high risk of developing breast cancer. This imaging is performed in the prone position to reduce breathing artifacts. Breast Conservation Therapy (BCT) is the most common treatment for patients with early-stage breast cancer. BCT involves localisation of the cancer lesion in the supine position, followed by lumpectomy (excision of the lesion) and often radiotherapy to eliminate any residual disease. The success of BCT depends on the accurate localisation of tumours inside the breast in the supine position. The deformation between prone and supine can vary significantly depending on e.g. breast size, tissue density, age, etc. Carbonaro et al. [8] report a median lesion displacement between
prone and supine breast MRI as ranging from 30-60mm. Such lesions can range from very small (\(<\)10mm) to very large (\(>\)60mm); however, 20mm is the most common size at which breast cancer is diagnosed [24, 19]. Due to the breast's intrinsically high lipid content, the non-linear stress-strain relationship of the skin (which highly restricts tissue deformation), and changes in arm positioning, the breast tissue exhibits a large and complex deformation between the diagnostic and pre-operative positions. This makes tracking tumours between the prone and supine positions extremely challenging. Computed Tomography imaging is typically used to guide radiotherapy; however, such intra-operative image guidance is typically not available during lumpectomy.
Techniques for localising tumour positions during lumpectomy, e.g. using guide wires, fail to outline the tumour in its entirety. Such challenges may contribute to the 20-40% reoperation rates reported in the literature [21]. Previous studies proposed the acquisition of an additional pre-operative supine MRI to overcome the challenges of tumour localisation [1]. However, due to respiratory artifacts influencing the image quality, clinicians do not generally acquire contrast-enhanced images in the supine position. Respiratory artifacts in the breast tissue are absent in the prone position as the patient's chest is fixed relative to the coils, i.e. the coils are positioned around the breast and the torso against the MR table. In the supine position, even positioning coils against the breast (e.g. air blanket coils) will produce respiratory artifacts as the posterior region of the torso is constrained by the MR table and the expansion of the torso will mainly be manifested in the anterior region. Therefore, tumour locations still need to be identified from diagnostic DCE-MRI in the prone position, and mapped to the proposed pre-operative non-contrast MRI in the supine position. This is challenging due to the large and complex deformations that the breast undergoes between these positions, limiting the clinical applicability of this approach. Developing robust techniques to map the breast tissue between the diagnostic and pre-operative positions can potentially improve BCT outcomes by providing accurate tumour localisation.
Figure 1: Breast MR Image Registration (viewed in a transverse plane). We propose a diffeomorphic registration-based approach to localise in-vivo breast tissue between multiple positions. An arms down image (a) is registered to an arms up image (b) in the prone position. We show our predicted arms up image (c) along with the false colour composite of the predicted (magenta), ground-truth (cyan) images, depicting purple in the regions of agreement (most of the tissue) (d).
In this work, we propose a learning-based, diffeomorphic registration method for localising breast tissue across positions (Fig. 1). At the core of our registration method is SVFlowNet, a novel Stationary Velocity Field (SVF)-based registration approach that is constrained to retrieve diffeomorphic transformations. Our results show a better performance in comparison to state-of-the-art non-diffeomorphic deep learning approaches on two breast MRI datasets.
1. We extend the dual-stream network architecture [13] to diffeomorphic registration to provide diffeomorphic guarantees.
2. We introduce a differentiable SVF composition layer based on the BCHD (or Baker-Campbell-Hausdorff-Dynkin) formula [18].
3. We evaluate the effect of supervision strategy and diffeomorphic encoding on the accuracy of breast MR image registration.
## 2 Related Work
In recent years, there has been an effort to map breast deformations between diagnostic and pre-operative positions using a combination of computational biomechanics and medical image registration techniques [10, 11, 3, 16, 4].
While biomechanics approaches show promise, limitations exist in their ability to recover the large deformations the breast can undergo. For example, accounting for the change in relative positions of the pectoral muscles and the base of the breast (deep, superficial fascia) as the individual and their arms change position between the diagnostic and pre-operative positions is a challenge that has not yet been addressed with biomechanical modelling. This results in the shoulder joint and the arms rolling posteriorly, stretching the pectoral muscles, and flexing the ribcage, resulting in complex breast deformation. Developing methods to quantify and understand these complex deformations would help identify approaches to improve predictions from biomechanical models and enable their application for navigational guidance when 3D imaging is unavailable e.g. during surgical interventions.
Image registration is a deeply nonlinear and nonconvex problem, which has historically been solved using iterative methods. The seminal work of Lucas and Kanade [15] and Horn and Schunck [12] showed that a linearization of the equations of motion leads to a linear relationship between the spatio-temporal gradients of two images and the motion flow. While this linearization forms the basis of all gradient-descent registration algorithms, it only holds true in the small deformation regime. More advanced deformation models have been proposed to solve large-deformation registration, such as the large deformation diffeomorphic metric mapping (LDDMM) framework [7] and its log-Euclidean variant [2], which assumes SVFs. Both approaches ensure that the resulting deformations are diffeomorphic, and therefore one-to-one and onto.
Iterative approaches have now been superseded by learning-based approaches [6]. Inspired by iterative registration, a number of works have investigated pyramidal representations of the flow field [17, 14, 13, 27]. However, [13, 27] do not enforce bijectivity, and while [17, 14] encode deformations using SVFs, they do
not take advantage of the properties of the Lie algebra when combining flows across scales. In this work, we propose a principled extension of the dual-stream architecture from [13, 27] that properly handles SVFs.
## 3 Computational Framework
Accurately aligning breast MR images relies heavily not only on the proposed registration network (SVFlowNet) but also on the learning strategies (supervised and unsupervised) and loss functions used. We discuss each of these in turn.
### Constructing SVFlowNet
**Flow U-Net Architecture.** SVFlowNet extends Flow U-Net [27], which forms the basis of our approach and serves as our non-diffeomorphic baseline. Flow U-Net [27] proposes two major modifications to the U-Net architecture to form a dual-stream pyramid network for registration (see Appendix A). The work of [27] propagates the deformations via addition, \(\phi^{(l+1)}=\psi_{\text{up}}(\phi^{(l-1)})+\phi^{(l)}\), which is a first-order approximation of the composition. We extend the work of Young et al. [27] by propagating the deformations via composition (linear interpolation) to avoid the unnecessary error introduced by deformation addition. We denote the convolutions that extract the flow and perform upsampling by \(\psi_{\text{conv}}\) and \(\psi_{\text{up}}\), respectively. We implement the upsampling (\(\psi_{\text{up}}\)) operator as linear interpolation between the resolutions at layers \(l\) and \(l+1\).
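A minimal sketch of this composition step, for displacement fields expressed in normalised \([-1,1]\) units, is given below (PyTorch; the channel ordering and helper names are our assumptions, not the exact Flow U-Net or SVFlowNet code):

```python
import torch
import torch.nn.functional as F

def identity_grid(shape, device):
    """Normalised identity sampling grid in [-1, 1] with shape
    (1, D, H, W, 3), ordered (x, y, z) as expected by grid_sample."""
    d, h, w = shape
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, d, device=device),
        torch.linspace(-1, 1, h, device=device),
        torch.linspace(-1, 1, w, device=device), indexing='ij')
    return torch.stack((xx, yy, zz), dim=-1).unsqueeze(0)

def compose(u_coarse_up, u_fine):
    """Compose two displacement fields of shape (B, 3, D, H, W), assumed
    to use (x, y, z) channel ordering in normalised units:
    phi = (id + u_coarse_up) o (id + u_fine), so the composed
    displacement is u_fine plus u_coarse_up resampled at the fine warp
    (trilinear interpolation rather than plain addition)."""
    grid = identity_grid(u_fine.shape[2:], u_fine.device)
    sample_at = grid + u_fine.permute(0, 2, 3, 4, 1)
    warped = F.grid_sample(u_coarse_up, sample_at, mode='bilinear',
                           padding_mode='border', align_corners=True)
    return u_fine + warped
```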
**Baker-Campbell-Hausdorff-Dynkin Layers.** SVFlowNet parameterises the multi-resolution output of the Flow U-Net flow blocks [27] as SVFs. The SVF at each resolution \(\mathbf{v}^{(l)}\) are integrated via scaling and squaring to obtain the corresponding deformation \(\phi^{(l)}\) which by construction is diffeomorphic. With the addition of the SVF parameterisation of the flow block output and the scaling and squaring (denoted by \(\exp\)), a SVF at the layer \(l\) can be expressed as follows
\[\mathbf{v}^{(l-1)^{\prime}} =\psi_{\text{up}}\left(\mathbf{v}^{(l-1)}\right) \tag{1}\] \[\phi^{(l-1)} =\exp\left(\mathbf{v}^{(l-1)^{\prime}}\right)\] (2) \[\mathbf{v}^{(l)^{\prime}} =\psi_{\text{conv}}^{(l)}\left(H\left(f_{0}^{(l)},f_{1}^{(l)} \circ\phi^{(l-1)}\right)\right)\] (3) \[\mathbf{v}^{(l)} =\zeta\left(\mathbf{v}^{(l-1)},\mathbf{v}^{(l)^{\prime}}\right)\, \tag{4}\]
in which the \(\zeta\) operator denotes the series expansion resulting from the work of Baker, Campbell, Hausdorff and Dynkin (BCHD) [18]. In the series limit, the BCHD operator ensures
\[\exp(\zeta(\mathbf{v}^{(l-1)},\mathbf{v}^{(l)}))=\exp(\mathbf{v}^{(l-1)}) \circ\exp(\mathbf{v}^{(l)}); \tag{5}\]
see [18] for details.
The operator \(\zeta\) enables the implicit propagation of deformations through the explicit propagation of SVFs, avoiding integration error introduced by scaling and squaring \(\mathbf{v}\) to obtain \(\phi=\exp(\mathbf{v})\). Additionally, \(\zeta\) propagation accommodates the propagation of both non-commutative and commutative multi-resolution SVFs. This can be verified by observing that, by construction, the BCHD formula yields propagation via summation \((\zeta:(\mathbf{v}^{(l-1)},\mathbf{v}^{(l)})\mapsto\mathbf{v}^{(l-1)}+\mathbf{v}^{(l)})\) for the case where \((\mathbf{v}^{(l-1)},\mathbf{v}^{(l)})\) commute. This is a theoretical improvement on the work of [17] and [14], who propose propagation via summation. In this work, we implement \(\zeta\) as the BCHD series truncated after the fourth-order term (see Appendix B).
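A possible realisation of the \(\zeta\) layer is sketched below, with the Jacobi-Lie bracket evaluated by central finite differences; the bracket sign convention, the (z, y, x) channel ordering, and the truncation at third order (the network keeps terms up to fourth order) are illustrative assumptions rather than the exact SVFlowNet implementation:

```python
import torch

def jacobian(v):
    """Spatial Jacobian of an SVF v with shape (B, 3, D, H, W), returned
    as (B, 3, 3, D, H, W) with J[:, i, j] = d v_i / d x_j; assumes the
    velocity channels are ordered (z, y, x) to match the spatial axes."""
    grads = [torch.stack(torch.gradient(v[:, i], dim=(1, 2, 3)), dim=1)
             for i in range(v.shape[1])]
    return torch.stack(grads, dim=1)

def lie_bracket(u, v):
    """Jacobi-Lie bracket [u, v] = (Dv) u - (Du) v of two SVFs."""
    Ju, Jv = jacobian(u), jacobian(v)
    return torch.einsum('bijdhw,bjdhw->bidhw', Jv, u) - \
           torch.einsum('bijdhw,bjdhw->bidhw', Ju, v)

def bchd_compose(u, v, order=3):
    """Truncated BCHD series zeta(u, v) ~ log(exp(u) o exp(v)), where
    exp denotes scaling-and-squaring integration of the SVF."""
    w = u + v
    if order >= 2:
        uv = lie_bracket(u, v)
        w = w + 0.5 * uv
    if order >= 3:
        w = w + (lie_bracket(u, uv) - lie_bracket(v, uv)) / 12.0
    return w
```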
**Hadamard Transform.** As in Flow U-Net [27], we reparametrise the flow block feature input using \(H:(f_{0}^{(l)},f_{1}^{(l)^{\prime}})\mapsto(f_{0}^{(l)}+f_{1}^{(l)^{\prime}}, f_{0}^{(l)}-f_{1}^{(l)^{\prime}})\), which can be thought of as the Hadamard transform [22] of features across channels. It should be noted that, in this case only, we let \(f_{1}^{(l)^{\prime}}=f_{1}^{(l)}\circ\psi_{\mathrm{up}}(\phi^{(l-1)})\) to avoid complicating the expressions introduced above.
### Learning Strategies
In this work, we consider both supervised and unsupervised learning approaches to optimize Flow U-Net and the SVFlowNet variants over the in-silico dataset. For the in-vivo task, we use the unsupervised learning approach as the ground-truth deformation is unknown. Consider a deformation field computed by a neural network \(g\) with parameters \(\theta\), i.e., \(g_{\theta}(f_{0},f_{1})=\phi\), \(\forall(f_{0},f_{1})\in\mathcal{D}\), where \(\mathcal{D}\) is a given registration dataset. Using this notation, the supervised and unsupervised learning frameworks are defined in the following manner.
**Supervised Learning.** The supervised learning approach uses the ground-truth deformations \(\hat{\phi}\in\mathcal{D}\) to optimise \(\theta\) via
\[\hat{\theta}=\arg\min_{\theta}\mathcal{L}(g_{\theta}(f_{0},f_{1}),\hat{\phi}) \tag{6}\]
in which \(\mathcal{L}\) denotes the mean squared error between the predicted deformation \(\phi=g_{\theta}(f_{0},f_{1})\) and the ground-truth deformation \(\hat{\phi}\).
**Unsupervised Learning.** The unsupervised learning approach uses the image pair \((f_{0},f_{1})\in\mathcal{D}\) and the predicted deformation \(\phi\) alone to optimise the parameters \(\theta\) of the neural network by exploiting the composition mapping \(f_{1}\circ\phi\), which should approximate \(f_{0}\), i.e. \(f_{1}\circ\phi\approx f_{0}\). Minimising the similarity loss \(\mathcal{L}_{\mathrm{sim}}\) between \(f_{1}\circ\phi\) and \(f_{0}\) alone leads to an ill-posed problem; therefore, an additional smoothness term, or regularizer, \(\mathcal{L}_{\mathrm{smooth}}\) (weighted by \(\lambda\in\mathbb{R}\)) is introduced to improve the well-posedness of the problem. Thus, the training problem in the unsupervised case can be posed as
\[\hat{\theta}=\arg\min_{\theta}\ (1-\lambda)\mathcal{L}_{\mathrm{sim}}(f_{1} \circ g_{\theta}(f_{0},f_{1}),f_{0})+\lambda\mathcal{L}_{\mathrm{smooth}}(g_{ \theta}(f_{0},f_{1})) \tag{7}\]
in which \(\mathcal{L}_{\text{sim}}(f_{1}\circ\phi,f_{0})\) denotes the negated normalised cross correlation (NCC) of \(f_{1}\circ\phi\) and \(f_{0}\); and
\[\mathcal{L}_{\text{smooth}}=\frac{1}{3|\Omega|}\sum_{\mathbf{x}\in\Omega}|| \nabla^{n}\mathbf{u}(\mathbf{x})||_{2}^{2} \tag{8}\]
is the regularization term. Here, the first-order (\(n=1\)) gradient is used for the in-silico task and the second-order (\(n=2\)) gradient is used for the in-vivo task to accommodate piece-wise linear intensity boundaries.
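A minimal sketch of these two loss terms is given below, assuming image tensors and a displacement field \(\mathbf{u}\) of shape \((3,D,H,W)\) and using a global (whole-image) NCC rather than any windowed variant; it is illustrative rather than the training code itself.

```python
import torch

def ncc_loss(warped, fixed, eps=1e-8):
    """Negated global normalised cross-correlation between two image tensors."""
    w = warped - warped.mean()
    f = fixed - fixed.mean()
    denom = torch.sqrt((w * w).sum()) * torch.sqrt((f * f).sum()) + eps
    return -(w * f).sum() / denom

def smoothness_loss(u, n=1):
    """Mean squared n-th order finite differences of a displacement field u
    of shape (3, D, H, W), approximating Eq. (8)."""
    loss = 0.0
    for axis in range(1, 4):
        d = torch.diff(u, n=n, dim=axis)
        loss = loss + d.pow(2).mean()
    return loss / 3.0

def unsupervised_loss(warped, fixed, u, lam=0.1):
    # Weighted combination from Eq. (7).
    return (1 - lam) * ncc_loss(warped, fixed) + lam * smoothness_loss(u)
```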
**Implementation.** Our method is implemented in PyTorch [20] and experiments are performed on an NVIDIA A100 GPU with 80 GB of memory. Stochastic Gradient Descent with momentum (\(\beta=0.9\)) is used as the network optimiser with an initial learning rate of \(10^{-2}\). The Reduce On Plateau [20] learning rate scheduler is applied with a reduction factor of 0.5. The training is stopped once the learning rate is less than \(10^{-6}\). The data is fed to the network in batches of 8 samples. For the in-silico task, a ratio of 80/10/10 is used to split the 1000 samples for training, validation and testing. The test dataset is not used during the learning/optimisation process to determine the optimal parameters of the network.
Footnote 6: [https://www.nvidia.com/en-us/data-center/a100/](https://www.nvidia.com/en-us/data-center/a100/)
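The optimisation setup described above can be sketched as follows; the network, data and loss are stand-ins (the real model is SVFlowNet trained with the losses above), so only the optimiser, scheduler and stopping rule mirror the text.

```python
import torch
import torch.nn as nn

net = nn.Conv3d(2, 3, kernel_size=3, padding=1)           # stand-in for SVFlowNet
optimizer = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5)

for step in range(10_000):                                 # stand-in training loop
    x = torch.randn(8, 2, 16, 16, 16)                      # batch of 8 samples
    loss = net(x).pow(2).mean()                            # stand-in loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())                            # reduce LR on plateau
    if optimizer.param_groups[0]["lr"] < 1e-6:             # stopping rule from the text
        break
```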
## 4 Experimental Results
To assess the applicability of SVFlowNet to breast MR image registration, we conduct an extensive quantitative analysis of the deformations produced by SVFlowNet, Flow U-Net [27] and a U-Net [23, 9], comparing their statistics. We hypothesize that SVFlowNet will achieve the best performance as there is no tissue coming in or out of the field of view in this dataset, and the mechanical deformation will not violate mass conservation, preserving breast tissue between poses. This is guaranteed by a bijective transformation.
### In-Silico Experiments
Our T2-weighted MRI (\(\approx 1\text{mm}^{3}\)) in-silico dataset consists of the breast region of a volunteer in a prone position which we deform with \(10^{3}\) randomly sampled B-spline based deformations.
The deformations are generated by B-spline interpolation over the image domain \(\mathbf{x}\in\Omega\) via
\[\phi_{\text{B-spline}}(\mathbf{x})=\sum_{\gamma\in\Gamma}\gamma\beta^{(n,D)}( \mathbf{x}). \tag{9}\]
where \(\beta^{(n,D)}\) is an \(n^{\text{th}}\)-order multi-variate B-spline [26] over a random grid \(\Gamma\) of control points \(\gamma\), and \(D=3\) is the number of spatial dimensions. We use the \(5^{\text{th}}\)-order B-spline and sample a \(3\times 3\times 3\) grid of random control points \(\gamma\sim s\,\mathcal{N}(0,1)\), i.e. scaled samples from a standard normal distribution. This approach leads to large non-linear deformations that are smooth and approximately meet the incompressibility requirements of true breast deformation (see Fig. 2). The mean Jacobian determinant and mean displacement over the in-silico dataset are \(0.95\pm 0.07\) (local volume change) and \(9.56\pm 3.22\) mm, respectively.
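One simple way to realise such sampling in code is to spline-interpolate a coarse grid of random control points to the full image resolution; the sketch below is illustrative (the image size, grid, scale, spline order and seed are placeholder values) rather than the exact generation pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def random_bspline_displacement(shape=(64, 64, 64), grid=(3, 3, 3),
                                scale=5.0, order=5, seed=0):
    """Displacement field of shape (3, *shape), in voxel units, obtained by
    5th-order spline interpolation of a coarse grid of control points ~ scale * N(0, 1)."""
    rng = np.random.default_rng(seed)
    control = scale * rng.standard_normal((3, *grid))
    factors = [s / g for s, g in zip(shape, grid)]
    return np.stack([zoom(control[c], factors, order=order) for c in range(3)])
```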
For the in-silico task we apply unsupervised learning over \(\lambda\in\{0.1,0.01,0.001\}\) to characterise the sensitivity of the approach to the regularisation weight. An analysis of the similarity between \(\hat{\phi}\) (the ground-truth deformation) and the optimal deformations \(\phi=g_{\hat{\theta}}\) obtained with the SVFlowNet variants and Flow U-Net was performed over the test data using the sum of squared errors (SSE) to measure the flow discrepancy (\(\varepsilon_{\text{flow}}\)).
**Supervised.** Considering the median \(\varepsilon_{\text{flow}}\), slight improvement can be observed from SVFlowNet for the two SVF propagation techniques (i.e. summation and \(\zeta\) propagation) compared to Flow U-Net in the supervised case (Fig. 3). See Appendix C for an ablation study with SVFlowNet variants.
**Unsupervised.** Over all \(\lambda\), \(\zeta\)-propagation yields the highest accuracy (with \(\lambda=0.1\)) on the deformation discrepancy \(\varepsilon_{\text{flow}}\), achieving an \(\varepsilon_{\text{flow}}\) of \(0.021\pm 0.0074\) voxels (where \(\pm 0.0074\) is the standard deviation). This is an improvement over Flow U-Net, which achieves an \(\varepsilon_{\text{flow}}\) of \(0.042\pm 0.0012\) voxels. Furthermore, \(\zeta\) propagation outperforms summation propagation, which achieves an \(\varepsilon_{\text{flow}}\) of \(0.023\pm 0.0080\) voxels; summation nevertheless still outperforms Flow U-Net. This is evidence that SVFlowNet with implicit propagation (summation or \(\zeta\)) improves on the deformation discrepancy of Flow U-Net. Furthermore, \(\zeta\) propagation is the best-performing propagation technique with respect to the deformation discrepancy (Fig. 3). See Appendix C for an ablation study with variants of SVFlowNet.
### In-vivo Experiments
For the in-vivo task, a pair of breast images, one with arms up and the other with arms down, is manually obtained from a specified region of interest that
Figure 2: Breast MR image datasets. (a) and (b) depict the moving and fixed images from the in-silico and in-vivo datasets, respectively, from the Breast Biomechanics Research Group dataset at the Auckland Bioengineering Institute. In each false-colour composite, the moving and fixed images are magenta and cyan, respectively.
encompasses all tissues of a single breast (see Figure 2). These images are derived from high-resolution (\(1\,\text{mm}^{3}\)) isotropic T1-weighted MR images of the full torso in both arms-up and arms-down positions. For the in-vivo task, unsupervised learning is applied with \(\lambda=10^{-4}\) to both Flow U-Net and the \(\zeta\)-propagation variant of SVFlowNet. To evaluate the compressibility of the optimal deformations obtained using both Flow U-Net and SVFlowNet, we count the number of regions in which self-folding occurs (\(\varepsilon_{\text{reg}}\)), i.e. where the Jacobian determinant \(\det(J_{\phi})\leq 0\). The accumulated self-folding over the image is defined as the number of voxels where the tissue collapses onto itself, i.e.,
Footnote 7: Ethical approval was obtained for this study from the Auckland Health Research Ethics Committee (AH24096), and written informed consent was obtained from each participant.
\[\varepsilon_{reg}(\phi)=\sum_{\mathbf{x}\in\Omega}F_{\phi}(\mathbf{x}) \tag{10}\]
where \(F\) is the folding at a given voxel \(\mathbf{x}\) defined as
\[F_{\phi}(\mathbf{x})=\begin{cases}1&\det(J_{\phi})(\mathbf{x})\leq 0\\ 0&\text{elsewhere}.\end{cases} \tag{11}\]
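In practice, this measure can be computed from the displacement field by evaluating \(\det(I+\nabla\mathbf{u})\) at every voxel with finite differences and counting the non-positive entries; the sketch below is illustrative rather than the evaluation code itself.

```python
import numpy as np

def jacobian_determinant(u):
    """det(J_phi) for phi = id + u, with u of shape (3, D, H, W) in voxel units."""
    J = np.zeros(u.shape[1:] + (3, 3))
    for i in range(3):
        grads = np.gradient(u[i])               # [du_i/dx0, du_i/dx1, du_i/dx2]
        for j in range(3):
            J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

def folding_count(u):
    """epsilon_reg: number of voxels with det(J_phi) <= 0 (Eqs. 10-11)."""
    return int((jacobian_determinant(u) <= 0).sum())
```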
Besides field compressibility (i.e. value of Jacobian determinant) and self-folding, we also require \(f_{1}\circ\tilde{\phi}\approx f_{0}\), i.e. the registered image to approximate its pair. Therefore, we evaluate the image error (\(\varepsilon_{\text{img}}\)) using the NCC.
The registration preserved and aligned the anatomical structures of the nipple, pectoral muscle and fibroglandular tissue, without introducing image artifacts (see Figure 4). On the evaluation of the optimal deformations (see Table 1), Flow U-Net and SVFlowNet yield similarly accurate results, with an NCC error of 0.973 and 0.968, respectively. These deformations are optimal in the sense of the loss function minimised by Stochastic Gradient Descent during the training phase of the neural network. In this context, "optimal" means that the deformation reduces
Figure 3: SVFlowNet variants and Flow U-Net performance on the in-silico data assessed using the deformation discrepancy \(\epsilon_{\text{flow}}\) (SSE). We show the results of \(\epsilon_{\text{flow}}\) (SSE) for supervised learning, and unsupervised learning over a range of regularization weights \(\lambda\).
intensity mismatch after registration while avoiding compressible behaviour in the tissue. Although Flow U-Net and SVFlowNet have similar performance in terms of \(\varepsilon_{\text{img}}\), the deformation predicted by Flow U-Net contains regions of self-folding (see Table 1). This behaviour is incompatible with the deformation of breast tissue, as the tissue cannot vanish or interpenetrate itself. SVFlowNet, on the other hand, achieves similar image similarity while its diffeomorphic constraints avoid these non-physical behaviours, yielding, as a consequence, an invertible deformation, i.e., \(\det(J_{\phi})(\mathbf{x})>0\), \(\forall\mathbf{x}\in\Omega\). See Appendix D for more visual results.
## 5 Conclusion
This work has presented learning strategies for breast MR image registration by introducing SVFlowNet, a novel network architecture that integrates diffeomorphic
\begin{table}
\begin{tabular}{|c|c|c|} \hline Method & \(\varepsilon_{\text{reg}}(\bar{\phi})\) & \(\varepsilon_{\text{img}}(f_{0},f_{1}\circ\bar{\phi})\) \\ \hline Flow U-Net & 219257 & 0.973 \\ SVFlowNet & 0 & 0.968 \\ \hline \end{tabular}
\end{table}
Table 1: Evaluation of in-vivo arms up and down MR image registration
Figure 4: Optimal deformations (a) determined by SVFlowNet and Flow U-Net for the unsupervised arms up and down breast MR image task. The Jacobian determinant (b) and folding map (c) show SVFlowNet with zero self-folding, compared to Flow U-Net, which exhibits regions of self-folding. Data are sampled from a sagittal plane. The displacement field is visualised by normalising each component and encoding the anterior, lateral and cranial directions in the red, green and blue channels, respectively.
constraints into a dual-stream pyramid registration network architecture [27]. We have demonstrated that our method of propagating diffeomorphic deformations via the \(\zeta\) operator (BCHD) outperforms the common method of summing SVFs [17], and outperforms the state-of-the-art non-diffeomorphic baseline on an in-silico breast MR image unsupervised registration task (see Appendix C). Furthermore, by construction SVFlowNet complies with the underlying breast biomechanics, which enforce mass conservation. For this task, we use a measure of the discrepancy between the ground-truth and optimal deformations in our analysis of the performance.
On the in-vivo breast MR image arms up and down unsupervised registration task, we showed that the \(\zeta\) variant of SVFlowNet achieves image accuracy similar to the state-of-the-art non-diffeomorphic baseline Flow U-Net. With SVFlowNet, we achieve an image alignment with a normalised cross-correlation (\(\varepsilon_{\mathrm{img}}\)) of 0.968 with zero self-folding. This is an improvement on the non-diffeomorphic baseline, for which approximately \(2\times 10^{5}\) voxels (\(\varepsilon_{\mathrm{reg}}\)) have a Jacobian determinant less than or equal to zero. Thus, we have demonstrated that our approach can achieve image alignment similar to a state-of-the-art non-diffeomorphic baseline while simultaneously producing a physically valid estimate.
## Acknowledgments
The authors are grateful for financial support from the New Zealand Ministry for Business, Innovation and Employment (UOAX1004), the University of Auckland Foundation (F-IBE-BIP), and the New Zealand Breast Cancer Foundation (R1704).
|
2309.03564 | Supervised Learning and Large Language Model Benchmarks on Mental Health
Datasets: Cognitive Distortions and Suicidal Risks in Chinese Social Media | On social media, users often express their personal feelings, which may
exhibit cognitive distortions or even suicidal tendencies on certain specific
topics. Early recognition of these signs is critical for effective
psychological intervention. In this paper, we introduce two novel datasets from
Chinese social media: SOS-HL-1K for suicidal risk classification and
SocialCD-3K for cognitive distortions detection. The SOS-HL-1K dataset
contained 1,249 posts and the SocialCD-3K dataset was a multi-label classification
dataset containing 3,407 posts. We propose a comprehensive evaluation
using two supervised learning methods and eight large language models (LLMs) on
the proposed datasets. From the prompt engineering perspective, we experimented
with two types of prompt strategies, including four zero-shot and five few-shot
strategies. We also evaluated the performance of the LLMs after fine-tuning on
the proposed tasks. The experimental results show that there is still a huge
gap between LLMs relying only on prompt engineering and supervised learning. In
the suicide classification task, this gap is 6.95% points in F1-score, while in
the cognitive distortion task, the gap is even more pronounced, reaching 31.53%
points in F1-score. However, after fine-tuning, this difference is
significantly reduced. In the suicide and cognitive distortion classification
tasks, the gap decreases to 4.31% and 3.14%, respectively. This research
highlights the potential of LLMs in psychological contexts, but supervised
learning remains necessary for more challenging tasks. All datasets and code
are made available. | Hongzhi Qi, Qing Zhao, Jianqiang Li, Changwei Song, Wei Zhai, Dan Luo, Shuo Liu, Yi Jing Yu, Fan Wang, Huijing Zou, Bing Xiang Yang, Guanghui Fu | 2023-09-07T08:50:46Z | http://arxiv.org/abs/2309.03564v3 | Supervised Learning and Large Language Model Benchmarks on Mental Health Datasets: Cognitive Distortions and Suicidal Risks in Chinese Social Media
###### Abstract
In the realm of social media, users frequently convey personal sentiments, with some potentially indicating cognitive distortions or suicidal tendencies. Timely recognition of such signs is pivotal for effective interventions. In response, we introduce two novel annotated datasets from Chinese social media, focused on cognitive distortions and suicidal risk classification. We propose a comprehensive benchmark using both supervised learning and large language models, especially from the GPT series, to evaluate performance on these datasets. To assess the capabilities of the large language models, we employed three strategies: zero-shot, few-shot, and fine-tuning. Furthermore, we deeply explored and analyzed the performance of these large language models from a psychological perspective, shedding light on their strengths and limitations in identifying and understanding complex human emotions. Our evaluations underscore a performance difference between the two approaches, with the models often challenged by subtle category distinctions. While GPT-4 consistently delivered strong results, GPT-3.5 showed marked improvement in suicide risk classification after fine-tuning. This research is groundbreaking in its evaluation of large language models for Chinese social media tasks, accentuating the models' potential in psychological contexts. All datasets and code are made available at: [https://github.com/HongzhiQ/SupervisedVsLLM-EfficacyEval](https://github.com/HongzhiQ/SupervisedVsLLM-EfficacyEval).
Large language model, Deep learning, Natural language processing, Mental health, Social media Further author information: (Send correspondence to Guanghui Fu, guanghui.fu@inria.fr)
## 1 Introduction
The omnipresent specter of mental illness, particularly depression, continues to impose significant challenges globally [1]. According to the World Health Organization (WHO), an estimated 3.8% of the global population experiences depression [1]. Specifically in China, the prevalence of depression is notably high, with estimates around 6.9% [2], underscoring the escalating mental health concerns in the nation. Such severe depression can often precipitate suicidal behaviors [3]. As digital avenues for communication flourish, social media platforms like Twitter and Sina Weibo have evolved into reflective mirrors, offering glimpses into the emotional landscapes of countless users [4]. Within these platforms, a specific subset of topics recurrently surfaces, with users frequently conveying deep-seated negative emotions and, alarmingly, pronounced suicidal inclinations [5, 6].
Artificial intelligence (AI), especially the branches of deep learning and natural language processing, is an avenue that holds promise in addressing this challenge [7]. Over recent years, AI research has resulted in the formulation of several algorithms tailored for emotion recognition within textual data [8]. However, these advancements are not without obstacles [9]. Constructing a potent deep learning model often demands considerable time and financial resources. The intricacies of data labeling, predominantly the need to enlist domain experts, and the model's variance in performance when shifted across different application areas highlight pressing
challenges [10]. This highlights a compelling need for more agile and adaptable algorithmic solutions especially in medical domain [11]. It is in this context that the emergence and proliferation of large language models are particularly noteworthy.
Large language models, characterized by their expansive parameters and the depth of their training datasets, stand as the state-of-the-art in the framework of computational linguistics [12]. Their potential lies in their ability to comprehend and emulate human-like text nuances. Despite their promising potential, several studies have sought to validate their practical implications. For instance, Xu et al. [13] examined four public datasets related to online social media sentiment detection. However, their study focused solely on English data, and the classification granularity was relatively broad. To date, there is a notable gap in research concerning the Chinese context, particularly in the area of fine-grained emotion recognition, which is often of greater significance. The lack of comprehensive evaluations and practical tests has inadvertently led to a cautious approach, especially in sectors demanding high reliability, like medicine and healthcare [14].
Motivated by the need to better understand mental health sentiments on Chinese social media platforms, our research embarks on a rigorous evaluation of supervised learning and large language models. We offer the following contributions:
* We introduce and publicly release two new expert-annotated social media datasets in the mental health domain, specifically focusing on cognitive distortion and suicide risk classification. These datasets not only serve as valuable resources for the community but also have profound real-world implications, potentially informing strategies for suicide prevention and interventions for cognitive distortions.
* We propose a comprehensive benchmark using both traditional supervised learning and large language models on these datasets. By employing a variety of strategies, including zero-shot, few-shot, and fine-tuning, we seek to determine the most effective methods for leveraging these models in the context of mental health tasks on Chinese social media.
* Lastly, our study pioneers the exploration of fine-tuning capabilities of GPT-3.5, leveraging real-world data. This endeavor seeks to determine the adaptability and specialized performance enhancements possible with the model currently unexplored in the literature.
## 2 Related Work
The intertwining of artificial intelligence (AI) with different fields has spurred innovations and transformations at an unprecedented scale. An example of this is the fusion of natural language processing (NLP) tools, notably deep learning based model, with domains as critical as the mental health field [15]. Additionally, as digital interactions burgeon, especially on social media, the urgency to understand and analyze human sentiments becomes paramount. In this section, we will introduce deep learning techniques for sentiment analysis utilizing text data (Section 2.1). Subsequently, we will discuss the evolution, potential, and current research on large language models in this domain (Section 2.2).
### Text sentiment analysis
In the swiftly evolving digital era, social networking platforms have emerged as pivotal channels for expressing emotions globally. These platforms generate vast amounts of unstructured data every second. Accurately and promptly discerning the emotions embedded within this data presents a formidable challenge to computational algorithms [8]. Fu et al. [16] presented a distant supervision method designed to build systems that classify high and low suicide risk levels using Chinese social media data. This approach minimizes the need for human experts of varying expertise levels to perform annotations. By integrating this model with crucial psychological features extracted from user blogs, they attained an F1 score of 77.98%. Singh et al. [17] employed a BERT-based model for sentiment analysis on tweets sourced globally and another dataset specifically from India, both focusing on the topic of COVID-19. They reported achieving an accuracy of 94%. Wan [18] introduced a method for sentiment analysis of comments on Weibo platforms, leveraging deep neural networks. The data undergoes feature extraction through multilevel pooling and convolution layers. Comments are preprocessed and transformed into
text representations using the word2vec algorithm. Subsequently, key features are extracted from the feature matrix using a CNN. For the final classification and sentiment analysis, the softmax logistic regression method is employed. Zhang et al. [19] explored the correlations among emotion labels, social interactions, and temporal patterns within an annotated Twitter dataset. They introduced a factor graph-based emotion recognition model that seamlessly integrates these correlations into a unified framework. This model adeptly identifies multiple emotions by applying a multi-label learning approach to Twitter datasets. Wang et al. [20] introduced a topic modeling technique, termed LDA, to examine the primary concerns expressed on Weibo during the COVID-19 pandemic. They assessed the emotional inclinations of these topics, determined their proportional distributions, and conducted user behavior analysis based on metrics such as likes, comments, and retweets. Furthermore, they explored shifts in user concerns and variations in engagement among residents from different regions of mainland China. Such insights guide public sentiment and actions during health emergencies, emphasizing the importance of vigilant social media monitoring.
Although deep learning algorithms typically demonstrate impressive results, they often require a significant volume of labeled data to perform optimally. The distant supervision approach highlighted in Fu et al.'s research [16] aims to reduce the need for labeling, but it still requires the involvement of three different expert groups at various expertise levels to yield desired results. Nonetheless, when applying these models to new datasets or tasks, domain adaptation issues often arise. These trained models can see a decline in their efficacy, making deep learning algorithms both costly and inflexible. Given these hurdles, there's a growing demand for efficient and user-centric methods to assist individuals in emotion detection on social media platforms. The recent advancements in large language models present a potential solution to this challenge, but their precise impact still warrants examination from multiple perspectives and specialists.
### Large language model and its applications in medical domain
The advent of Large Language Models (LLMs), such as OpenAI's ChatGPT [12], has revolutionized the field of natural language processing [21]. These LLMs demonstrate emergent abilities that significantly outperform those of their smaller, pre-trained models [22]. Initially conceived for understanding and generating human-like text, LLMs have found diverse applications ranging from content generation [23], medical report assistant [24], coding assistance [25], education [26], and answering medical related questions [27]. The sheer scale of these models enables them to generate complex, contextually relevant content. LLMs have garnered significant attention in medical domain [14]. For instance, Jiang et al. [28] developed a clinical LLM named NYUTron to assist physicians and healthcare administrators in making time-sensitive decisions. This model can process on unstructured clinical notes from electronic health record. And it can achieve good performance with AUC score ranging from 78.7-94.9%. The model has been successfully deployed in a prospective trial, indicating its potential for real-world application in providing point-of-care guidance to physicians.
Concurrently, research in psychology-related domains has also been conducted by other researcher [29]. Qin et al. [30] devised an interpretable and interactive depression detection system employing large language models (LLMs). This innovative approach allows for the detection of mental health indicators through social media activity and encourages users to interact with the system using natural language. While this facilitates a more personalized understanding of an individual's mental state, it also raises ethical concerns. The absence of human oversight could lead to biased outcomes, thereby posing potential risks to users. Additionally, if this system were to become a foundational diagnostic tool for future psychological counseling, issues related to user privacy could become a point of concern. Chen et al. [31] developed a tool designed to improve the realism of psychiatist-patient simulations using ChatGPT-based chatbots. Their approach involved using distinct prompts to enable large language models (LLMs) to emulate the roles of both a depressed patient and a psychiatist. The study confirmed the feasibility of utilizing ChatGPT-driven chatbots in psychiatric contexts. However, the research also acknowledged limitations: individual patients and counselors have unique communication styles, and some patients may be reluctant to engage in conversation. These nuances present a challenge for achieving truly realistic simulations with ChatGPT. Addressing the simulation of diverse personalities in a meaningful way remains a key area for further investigation. Fu et al. [32] developed a counseling support system designed to augment the capabilities of non-professional counselors. The system provides multiple features, including mental health analysis, evaluation of therapist responses, and suggested interventions. This application serves as a valuable use case for language models in the mental health sector. Ten professional psychologists assessed the system on five
critical dimensions, and the findings were favorable, with a 78% expert approval rate indicating that the system can deliver effective treatment strategies. Ayers et al. [33] developed a ChatGPT-based chatbot and compared its responses with those of physicians to patient inquiries on a social media forum. Notably, 78.6% of the evaluators preferred the chatbot's responses, citing their speed and greater empathetic tone. However, a key limitation of this study lies in its exclusive focus on interactions within online forums. Such settings may not accurately reflect the nuances of real-world patient-physician dialogues, as physicians often tailor their responses based on pre-existing relationships and the context of a clinical setting. In summary, there is active research into the utilization of LLMs in the field of psychology, and these research demonstrate considerable potential. However, delineating the limitations of LLMs remains a crucial issue that warrants further investigation. Additional studies are needed to comprehensively evaluate the capabilities and boundaries of LLMs in psychological applications.
Xu et al. [13] present a pioneering evaluation of multiple Large Language Models (LLMs) across various mental health prediction tasks using four publicly available online text datasets. Their insights offer guidance to practitioners on optimizing the use of LLMs for specific applications. While their research stands as a monumental verification of LLMs' potential in the mental health domain, it is noteworthy that their datasets are exclusively in English and do not address multi-label classification tasks. Yang et al. [34] assessed ChatGPT's capabilities in mental health analysis and emotional reasoning by evaluating its performance on 11 datasets across five tasks. The study also investigated the impact of different emotion-based prompting strategies. Experimental results indicate that while ChatGPT surpasses traditional neural network-based approaches, it still lags behind more advanced, task-specific methods. Nevertheless, ChatGPT demonstrates significant potential in the area of explainable mental health analysis. In conclusion, while the integration of LLMs in medicine presents compelling prospects, there's an imperative to ensure privacy and uphold ethical standards. Responses generated may not always be flawless [35]. Particularly in mental health, relying solely on LLM-driven systems for diagnosis or support introduces numerous unpredictable variables. It's crucial to recognize that LLMs warrant meticulous scrutiny and validation [36]. Evaluation should be considered an essential discipline to facilitate the more effective development of large language models (LLMs) [37].
## 3 Methods
We conducted experiments to classify suicide risk and cognitive distortions on Chinese social media data using supervised learning methods and large language models (LLMs). Within the framework of supervised learning, we explored two models BERT [38] and LSAN [39] as baseline, detailed in Section 3.1. For the large language models, we utilized zero-shot prompt, few-shot prompt, and fine-tuning methods. Subsequent sections provide a comprehensive introduction of these methods.
### Baseline supervised learning model
We experimented with two representative models: LSAN [39] and BERT [38]. LSAN is adept at uncovering the relationships between labels, making it particularly suitable for our cognitive distortion recognition task. On the other hand, BERT represents a groundbreaking pre-trained model architecture that had achieved state-of-the-art (SOTA) on 11 distinct NLP tasks. We discuss each in detail below:
* LSAN: The LSAN model is engineered to utilize label semantics for identifying the relationships between labels and documents, thereby creating a label-specific document representation. The model also employs a self-attention mechanism to focus on this representation, which is derived from the document's content. An adaptive fusion strategy integrates these components effectively, facilitating the generation of a comprehensive document representation suitable for multi-label text classification. The LSAN model has proven effective, particularly in predicting low-frequency labels.
* BERT: Bidirectional Encoder Representations from Transformers (BERT) has been a pivotal development in natural language processing (NLP). Unlike traditional NLP models that process text unidirectionally, BERT uses a bidirectional approach, facilitated by the Transformer architecture, to understand the full context of each word. It is pre-trained using a masked language model objective, where random words are replaced with a '[MASK]' token and the model predicts the original word. This design has enabled BERT to set new performance standards in diverse NLP tasks, such as question-answering and sentiment analysis, especially when fine-tuned on specific task data.
### Large language models
Given that our data is in Chinese, we explored the open-source models ChatGLM2-6B and GLM-130B[40], both of which support Chinese language processing. The primary distinction between these two models lies in the number of parameters they possess. GPT-3.5[41] stands as a flagship large-scale language model. We experimented with various prompt word constructions and sought to integrate prior knowledge from the psychological domain, along with the most recent public fine-tuning functionalities. GPT-4[42], being the latest iteration, was also included in our assessment. Detailed introduction on these models are provided in the subsequent sections.
* **ChatGLM2-6B:** ChatGLM2-6B is an open-source bilingual language model with 6.2 billion parameters, optimized for Chinese question-answering and dialogue. It employs similar technology to ChatGPT and is trained on roughly 1TB of Chinese and English text data. The model can be fine-tuned through various techniques like supervised learning and human feedback. It also features an efficient tuning method based on P-Tuning v2, requiring at least 7GB of GPU memory for customization. Due to quantization techniques, it can run on consumer-grade graphics cards with only 6GB of memory.
* **GLM-130B:** GLM-130B is a bilingual pre-trained language model optimized for both English and Chinese, boasting a substantial 130 billion parameters. This model aims to provide an open-source alternative of a scale comparable to GPT-3, while shedding light on the complexities of training such large-scale models. Impressively, GLM-130B surpasses GPT-3 175B on multiple English benchmarks and outperforms ERNIE TITAN 3.0 260B[43], the largest existing Chinese language model, on relevant benchmarks. A distinctive feature of GLM-130B is its capability for INT4 quantization without substantial performance degradation, thus facilitating efficient inference on widely available GPUs.
* **GPT-3.5:** GPT-3.5 is a cutting-edge language model developed by OpenAI, designed to offer enhanced conversational capabilities. Building on the foundation of its predecessor, GPT-3, this iteration introduces improvements in both performance and cost-efficiency. OpenAI's commitment to refining and advancing the capabilities of their models is evident in GPT-3.5, which provides users with a more coherent, context-aware, and responsive conversational experience. As part of OpenAI's mission to ensure that artificial general intelligence benefits all of humanity, GPT-3.5 is a testament to the organization's dedication to innovation and excellence in the realm of natural language processing.
* **GPT 4:** GPT-4 is a groundbreaking multimodal model capable of processing both image and text inputs to generate text-based outputs. Marking a significant advancement over its predecessors, GPT-4 exhibits human-level performance across a range of professional and academic benchmarks, including a top 10% score on a simulated bar exam. Built upon the Transformer architecture, the model is initially trained to predict subsequent tokens in a given sequence and later undergoes a post-training alignment process to improve its factuality and behavior. A critical component of the project involved the development of scalable infrastructure and optimization techniques that function consistently across various sizes, allowing the team to extrapolate GPT-4's performance metrics based on smaller models. Despite its notable capabilities, GPT-4 does inherit certain limitations from earlier versions, such as occasional content "hallucinations" and a constrained context window.
Large language models are widely recognized as being pre-trained on vast amounts of text data. However, the manner in which prompts are provided is crucial, as it directly influences the LLM's comprehension and output for a given task. In light of this, we have formulated the following prompts.
**LLM Zero-shot Prompting.** We initiate our exploration with prompt design tailored for tasks within a zero-shot paradigm. This process encompasses various strategies, including direct task requests (serving as the basic strategy), role-definition, scene-definition, and hybrid approaches. For illustrative purposes, the cognitive distortion classification task serves as the focal point. The design is elaborated as follows (a sketch of issuing such a prompt through an LLM API is given after the list):
1. **Basic:** A direct task directive devoid of specific contextual emphasis.
1. English translation: "Please conduct a multi-classification task to ascertain if it encompasses any of the specified 12 cognitive distortions ([list of cognitive distortions])." 2. Formulaic representation: \(M(T,12CD)\), where \(M\) stands for multi-classification, \(T\) symbolizes the task, and \(12CD\) represents the 12 cognitive distortions.
2. **Role-definition Prompting:** The prompt delineates the role of the respondent (in this case, a psychologist) and emphasizes reliance on psychological insights. 1. English translation: "Assuming the role of a psychologist and leveraging psychological insights, please conduct a multi-classification task to discern if it integrates any of the 12 cognitive distortions ([list of cognitive distortions])." 2. Formulaic representation: \(R(M(T,12CD))\), where \(R\) embodies the role-definition of being a psychologist.
3. **Scene-definition Prompting:** The context of a social media setting is introduced, highlighting user identifiers to preclude ambiguity. 1. English translation: "Considering the provided user ID and the associated posts on social media, please based on the post content, engage in a multi-classification task to determine the presence of any of the 12 cognitive distortions ([list of cognitive distortions])." 2. Formulaic representation: \(S(M(T,12CD))\), with \(S\) denoting the scene, which in this scenario, pertains to the user's ID and corresponding social media posts.
4. **Hybrid Prompting:** A synthesis of both role and scene definitions, offering an integrative instruction. 1. English translation: "With the given user ID and their respective social media posts, and adopting the role of a psychologist fortified with psychological expertise, please execute a multi-classification task to verify the inclusion of any of the 12 cognitive distortions ([list of cognitive distortions])." 2. Formulaic representation: \(S+R(M(T,12CD))\), intertwining the scene context (\(S\)) with the role-definition (\(R\)).
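For illustration, a zero-shot prompt of this kind can be issued through the `openai` Python package's ChatCompletion interface (pre-1.0 versions of the package), as sketched below; the prompt text is an English paraphrase of the hybrid strategy above, and the model name, temperature and wording are placeholders rather than the exact settings used in our experiments.

```python
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

def classify_post(post: str, temperature: float = 0.1) -> str:
    prompt = (
        "Given the provided user ID and the user's social media post, and "
        "acting as a psychologist with relevant expertise, perform a "
        "multi-classification task to determine which of the 12 cognitive "
        "distortions (if any) the post contains.\n\nPost: " + post
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response["choices"][0]["message"]["content"]
```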
**LLM Few-shot Prompting.** In this segment, few-shot prompting is construed as the provision of prior knowledge or a batch of \(n\) training instances to the LLMs, thereby enabling them to internalize this information and adeptly execute the stipulated task. This methodology unfolds as follows:
1. **Background Knowledge:** The model is furnished with psychological definitions supplemented by emblematic cases, followed by one of the four prompting strategies devised from zero-shot prompting. Prompts that integrate background knowledge and employ the hybrid strategy from zero-shot prompting are detailed as follows: 1. English translation: "Given the definitions of cognitive distortions denoted by \(D\) and the prototypical cases represented by \(C\), and in light of the supplied user ID and associated social media posts, you are assumed to be a psychological expert well-versed in the aforementioned definitions and cases. Drawing from this backdrop, please conduct a multi-classification task to evaluate the correlation with any of the 12 cognitive distortions ([list of cognitive distortions])." 2. Formulaic representation: \(D+C+S+R(M(T,12CD))\), where \(D\) encapsulates the background definition, and \(C\) signifies the prototypical instances from academic literature, \(S\) represents scene-definition and \(R\) stands for role-definition.
2. **Training with \(n\) Samples per Category:** In this approach, \(n\) training instances are randomly selected for each category to train the LLM, followed by one of the four prompting strategies designed from zero-shot prompting. These instances are represented as \(train_{n}\) in the following tables. Prompts that incorporate the training instances and employ the hybrid strategy from zero-shot prompting are detailed as follows:
1. English translation: "You are provided with learning samples denoted by \(T\).In light of the supplied user ID and associated social media posts, and assuming your role as a psychologist with the relevant expertise.Drawing from this backdrop, please conduct a multi-classification task to evaluate the correlation with any of the 12 cognitive distortions ([list of cognitive distortions])." 2. Formulaic representation: \(T+S+R(M(T,12CD))\), integrating \(T\) as the training set with the scene-definition (\(S\)) and role-definition (\(R\)).
3. **Background knowledge and training with \(n\) samples per category:** This approach investigates whether enhancing sample diversity in few-shot prompting augments the LLM's comprehension of psychological health tasks. It incorporates psychological definitions, symbolic examples, and provides \(n\) training instances per category for LLM training. A command is subsequently issued using a previously described few-shot prompting strategy. The following example integrates background knowledge and training instances, and poses a query using the hybrid strategy from zero-shot prompting: 1. English translation: "Given the definitions of cognitive distortions represented by \(D\) and the prototypical cases denoted by \(C\), you are also provided with learning samples represented by \(T\). Assuming your expertise as a psychological expert familiar with the aforementioned definitions and cases, and in consideration of the supplied user ID and associated social media posts, please conduct a multi-classification task to evaluate the correlation with any of the 12 cognitive distortions ([list of cognitive distortions])." 2. Formulaic representation: \(D+C+T+S+R(M(T,12CD))\), where \(D\) encapsulates the background definition, \(C\) signifies the prototypical instances from academic literature, \(T\) is integrated as the training set, \(S\) denotes the scene-definition, and \(R\) represents the role-definition.
**LLM Fine-tuning.** Fine-tuning is a capability provided by OpenAI that enables users to optimize the performance of pre-trained models such as GPT-3.5. While GPT-3.5 is inherently trained on an expansive text corpus, the fine-tuning process sharpens its proficiency for specialized tasks by exposing it to additional task-specific instances (a sketch of the training-data preparation is given after the list). Following the fine-tuning, our evaluation retained the role, scene and hybrid definitions from the zero-shot prompting for consistency and comparative assessment:
1. **Role-definition Prompting:** Post fine-tuning with relevant training samples, we employed the prompt delineated in the role-definition section (refer to Section 3.2).
2. **Scene-definition Prompting:** Analogously, after the fine-tuning process, we reverted to the prompt illustrated in the scene-definition segment of the zero-shot prompting.
3. **Hybrid Prompting:** Similarly, after the fine-tuning process, we adopted the prompt presented in the hybrid strategy segment of the zero-shot prompting.
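For reference, the chat-formatted JSONL training file expected by OpenAI's fine-tuning endpoint can be assembled as sketched below; the system message, example labels and file name are placeholders, and the actual posts and labels in our data are in Chinese.

```python
import json

examples = [{"post": "……", "label": "high suicide risk"}]   # annotated training samples

with open("suicide_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {"messages": [
            {"role": "system",
             "content": "You are a psychologist assessing suicide risk in social media posts."},
            {"role": "user", "content": ex["post"]},
            {"role": "assistant", "content": ex["label"]},
        ]}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```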
## 4 Experiments and Results
### Datasets and Evaluation Metrics
We undertook two psychology-related classification tasks: suicide risk and cognitive distortion. The suicide risk task primarily differentiates between high and low suicide risks, while the cognitive distortion task focuses on classifications defined by Burns [44]. We sourced our data by crawling comments from the "Zoufan" blog within the Weibo social platform. Subsequently, a team of qualified psychologists were enlisted to annotate the data. Given that this data is publicly accessible, there are no concerns related to privacy breaches.
For the suicide detection data, there were 648 records with low suicide risk and 601 records with high suicide risk. The dataset for cognitive distortion consists of a total of 910 entries. The classification labels employed for this data are as follows: all-or-nothing thinking, over-generalization, mental filter, disqualifying the positive, mind reading, the fortune teller error, magnification, emotional reasoning, should statements, labeling and mislabeling, blaming oneself and blaming others. For both sets of data, the training set and test set are divided according
to the ratio of 4:1. The statistics of these two datasets are listed in Table 1, where \(N_{train}\) and \(N_{test}\) denote the number of training and test samples, respectively. \(L\) is the total number of classes, \(\overline{L}\) is the average number of labels per sample, and \(\overline{W}\) is the average number of words per sample. We utilize three evaluation metrics to measure the performance of different algorithms for our two tasks: precision, recall, and \(F_{1}\) score. Precision is the ratio of correctly predicted positive observations to the total predicted positives and recall (or sensitivity) represents the ratio of correctly predicted positive observations to all the actual positives. These two metrics provide a comprehensive view of the algorithm's performance in terms of its positive predictions. The \(F_{1}\) score offers a more holistic view of the model's performance, especially when the distribution of the true positive and true negative rates is uneven.
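These metrics can be computed, for example, with scikit-learn; the snippet below uses toy labels purely for illustration.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Binary suicide-risk task (1 = high risk, 0 = low risk); toy labels.
y_true, y_pred = [1, 0, 1, 1, 0], [1, 0, 0, 1, 1]
print(precision_score(y_true, y_pred),
      recall_score(y_true, y_pred),
      f1_score(y_true, y_pred))

# Multi-label cognitive-distortion task: binary indicator matrices of shape
# (n_samples, 12), e.g. f1_score(Y_true, Y_pred, average="weighted", zero_division=0).
```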
### Experiment design
Our experimental methodology is both hierarchical and greedy. Using cognitive distortions as an example to show our points, our evaluations spanned several dimensions:
* Prompt Design Perspective: Initially, we assessed four prompting strategies within the zero-shot learning framework. Subsequently, based on their performance metrics, the top two strategies were selected for further evaluation in the few-shot learning setting across various LLMs.
* LLM Performance Perspective: Across all zero-shot prompts, ChatGLM2-6B's performance was found to be lacking, resulting in our decision to omit it from subsequent few-shot prompting experiments. For GPT-3.5, its token limitation prevented us from entering five samples for each category during few-shot prompting. Consequently, we reserved the \(train_{5}\) approach exclusively for GPT-4.
* Fine-tuning Perspective: A discernible performance gap exists between GPT-3.5 and GPT-4. However, OpenAI's recent introduction of fine-tuning capabilities for GPT-3.5 and reports from official channels suggest that, under specific conditions, GPT-3.5 might outperform GPT-4 post fine-tuning. Consequently, our attention was centered on the fine-tuning of GPT-3.5. Regrettably, the current iteration of GPT-4 lacks fine-tuning functionalities, curtailing our capacity to assess its potential in this dimension.
The detailed experimental setup is as follows:
* **LSAN:** We used word2vec to train 300-dimensional embeddings for both document and randomly-initialized label texts. The attention mechanism helped us compute word contributions to labels and create label-specific document representations. Dot products between these document and label vectors refined these relationships further. These two types of document representations were then fused using weighted combinations. For predictions, we employed a fully connected layer, followed by RELU and a sigmoid function. Losses were calculated using a cross-entropy function during training.
* **BERT:** We employ BERT to extract 768-dimensional vectors from Chinese sentences. To mitigate overfitting, a dropout function is applied to these sentence vectors. Subsequently, a fully connected layer is introduced to independently classify suicide risk and cognitive distortions. The sigmoid function serves as the activation function for the output layer. Both the BERT layer and the fully connected layer are trained simultaneously (a minimal sketch of this classifier is given after this list).
* **LLM-zero shot:** Both GPT-3.5 and GPT-4 are closed-source and available through API provided by OpenAI. We picked the gpt-3.5-turbo, one of the most capable and cost-effective models in the GPT-3.5 family, and the gpt-4, more capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. As for the GLM models, we employed the smaller, open-source variant, ChatGLM2-6B, suitable
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Task & \(N_{train}\) & \(N_{test}\) & \(L\) & \(\overline{L}\) & \(\overline{W}\) \\ \hline cognitive distortion & 728 & 182 & 12 & 1.27 & 53 \\ \hline suicide detection & 999 & 250 & 1 & 1 & 47.79 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of Experimental Datasets
for deployment on consumer-grade hardware. Given the extensive parameter count of GLM-130B, it posed deployment challenges due to its elevated operational costs. Furthermore, its API lacked the capability to handle cognitive distortion multi-label classification task, leading us to conduct tests via its official website. Acknowledging the inherent variability in LLM's outputs, our experimental design involved averaging the outcomes over five runs. For GPT-3.5, GPT-4, and ChatGLM2-6B, we adjusted the temperature to values of 0.1, 0.3, 0.5, 0.7, and 0.9, conducting experiments at each setting. Given the absence of a temperature setting for GLM-130B on its platform, we simply executed five repeated runs and computed the mean performance. For zero-shot evaluations, we initiated performance validation on the basic strategy across the LLMs, subsequently examining the efficacy of role-definition, scene-definition, and hybrid strategies, aiming to discern the influence of domain-specific information on LLM's performance.
* **LLM-few shot:** We conducted an assessment using the top two performing prompt strategies from the zero-shot tests, determined by their F1-scores. The impact on performance was assessed when augmenting these strategies with background, \(train_{n}\), and their combination (background + \(train_{n}\)). Specifically, background strategy denotes the incorporation of prior knowledge, \(train_{n}\) represents the addition of training samples, where \(n\) is the number of positive samples chosen for each category. background + \(train_{n}\) suggests simultaneous enrichment with prior knowledge and training samples. Given the varying token input constraints among different models, the sample size selected for each model differed. In addition, we also experimented with the integration of basic, role, scene, and hybrid strategies in the zero-shot prompting scenario.
* **LLM-fine-tuning:** We fine-tuned the GPT-3.5 Turbo model for predicting suicide risk and cognitive distortions using the API interface provided by OpenAI. We utilized three types of prompts: role-based, scene-based, and hybrid strategies.
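The BERT baseline described in the list above can be sketched as follows; the pre-trained checkpoint, dropout rate and toy input are illustrative choices rather than the exact training configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertClassifier(nn.Module):
    def __init__(self, num_labels, dropout=0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(768, num_labels)           # 768-d pooled sentence vector

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = self.dropout(out.pooler_output)
        return torch.sigmoid(self.head(pooled))          # per-label probabilities

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertClassifier(num_labels=12)                    # 12 cognitive distortions
batch = tokenizer(["示例文本"], return_tensors="pt", padding=True, truncation=True)
probs = model(batch["input_ids"], batch["attention_mask"])
```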
## 5 Results
In our study, we focused on two specific tasks: suicide classification and multi-label classification of cognitive distortions. The results can be seen in Table 2 and Table 3, respectively. Our analysis examined these two tasks in Section 5.1 and Section 5.2, respectively, from three distinct aspects: training strategy, prompt construction, and a comparative evaluation across various LLMs. Ultimately, we assessed and compared the models' performance on these two psychological tasks to draw conclusions in Section 5.3. Considering the intricate nature and distinctiveness of the cognitive distortion task, LLMs demonstrate suboptimal performance on it. We have therefore included a human evaluation stage, conducted by psychology experts, of the large models' predictions in Section 5.4.
### Suicide Risk
**Training strategies.** In our training strategy comparison, we observed varying degrees of effectiveness across different models. The pre-trained BERT model exhibited a performance enhancement over the LSAN model, registering a 2.23% increase in F1-score. In contrast, fine-tuning GPT-3.5 led to a substantial performance gain, achieving an F1-score of 78.45%. This represented a notable 11.5% improvement in F1-score when compared to its base model (fine-tuning hybrid vs. zero-shot hybrid), bringing its performance closer to that of supervised learning models.
**Design of prompts.** Our investigation into prompt design for large language models revealed nuanced outcomes across different strategies and models. In the context of zero-shot prompts, we found that while the hybrid strategy yielded satisfactory results, the performance differences among various types of prompts were not statistically significant. Upon enhancing the basic strategy with three additional strategies (role-define, scene-define, and hybrid), the performance differences in comparison to the basic strategy are illustrated in Table 4. For few-shot prompts, adding more data did not consistently improve performance; this was evident in the ChatGLM2-6B model where additional data sometimes reduced effectiveness. Conversely, GPT-4's performance remained stable irrespective of the data size. Notably, the background+train\({}_{n}\)+hybrid strategy emerged as the most effective across multiple models.
We also studied the impact of extra training data in few-shot scenarios and observed that using role-define and train\({}_{n}\)+role-define prompts often led to diminished performance. The role of background knowledge was model-dependent; in smaller models like ChatGLM2-6B, incorporating background knowledge led to a performance increase from 53.74% to 64.41%. However, this could not be universally verified due to token limitations. Finally, our comparison between few-shot and zero-shot prompts showed that few-shot prompts did not significantly outperform their zero-shot counterparts.
**Comparison of LLMs.** In our comparative analysis of large language models, we observed several trends that highlight the complexities of model performance. Generally, GPT-4 outperformed GPT-3.5, and GLM-130B excelled over ChatGLM2-6B, suggesting the benefits of larger model architectures and more extensive training data. Yet, this trend was interrupted when GPT-3.5 underwent fine-tuning, outperforming GPT-4 by a differential of 2.64%. Additionally, GLM-130B demonstrated a performance comparable to GPT-4 and superior to GPT-3.5 for the specific task under study. These findings indicate that while larger models typically offer advantages, fine-tuning and task-specific capabilities can alter the performance landscape significantly.
### Cognitive Distortion
**Training strategies.** Our investigation into training strategies for large language models revealed nuanced performance outcomes. Initially, the pre-trained BERT model demonstrated a 2.83% performance advantage over LSAN trained from scratch. However, this difference was not statistically significant, implying that the
Table 2: Suicide risk classification results (precision, recall and F1-score) of the supervised learning baselines (LSAN, BERT) and the LLMs (ChatGLM2-6B, GLM-130B, GPT-3.5 and GPT-4) under zero-shot, few-shot and fine-tuning settings.
observed discrepancy may not be meaningful. On the other hand, fine-tuning GPT-3.5 surprisingly led to a decrease in performance rather than the anticipated improvement. This underscores the complexity of model training and the need for careful consideration when implementing fine-tuning strategies.
**Design of prompts.** In the design of prompts for large language models, our study examined the performance of both zero-shot and few-shot prompts. For zero-shot prompts, scene and role settings help only when they are designed meticulously; otherwise, a basic task-oriented prompt is generally more effective. The changes in performance metrics for the various strategies are shown in Table 4. For few-shot prompts, we observed that prompts providing specific data points outperformed those that simply offered background knowledge. Interestingly, increasing the amount of training data in these prompts did not lead to better performance.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Model** & **Suicide** & **Cognitive Distortion** \\ \hline \(\Delta\_\)ChatGLM2-6B\_role-define & \(\downarrow\) -1.92\% & — \\ \(\Delta\_\)ChatGLM2-6B\_scene-define & \(\uparrow\)+4.94\% & — \\ \(\Delta\_\)ChatGLM2-6B\_hybrid & \(\uparrow\)+5.67\% & — \\ \(\Delta\_\)GLM-130B\_ role-define & \(\uparrow\)+0.5\% & \(\downarrow\) -0.61\% \\ \(\Delta\_\)GLM-130B\_ scene-define & \(\downarrow\)-0.12\% & \(\downarrow\) -1.2\% \\ \(\Delta\_\)GLM-130B\_ hybrid & \(\uparrow\)+2.68\% & \(\uparrow\)+0.92\% \\ \(\Delta\_\)GPT-3.5\_ role-define & \(\uparrow\)+3.17\% & \(\downarrow\) -0.31\% \\ \(\Delta\_\)GPT-3.5\_ scene-define & \(\uparrow\)+0.34\% & \(\downarrow\) -1.15\% \\ \(\Delta\_\)GPT-3.5\_ hybrid & \(\uparrow\)+1.53\% & \(\downarrow\) -0.11\% \\ \(\Delta\_\)GPT-4\_ role-define & \(\uparrow\)+0.38\% & \(\downarrow\) -0.32\% \\ \(\Delta\_\)GPT-4\_ scene-define & \(\uparrow\)+1.67\% & \(\uparrow\)+1.25\% \\ \(\Delta\_\)GPT-4\_ hybrid & \(\uparrow\)+0.58\% & \(\downarrow\) -1.34\% \\ \hline \end{tabular}
\end{table}
Table 4: Performance Differences in Zero-Shot Enhancement Strategies Compared to Basic Strategy
\begin{table}
\begin{tabular}{|c|c|l|l|c|c|c|c|c|} \hline Model category & Model name & Type & Sub-type & Train data & Test data & Precision & Recall & F1-score \\ \hline \end{tabular}
\end{table}
Table 3: Result for cognitive distortion multi-label classification task.
A comparative analysis revealed that although few-shot prompts outperformed zero-shot prompts, they still fell short of fully meeting the task requirements, as evidenced by GPT-4's F1-score of approximately 30%.
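To make the prompt strategies above concrete, the following minimal sketch illustrates how a zero-shot 'hybrid' prompt (role plus scene setting) and a few-shot prompt built from labelled training examples might be assembled; the wording is hypothetical English rather than the exact Chinese prompts used in this study, and the function names are illustrative.

```python
# Illustrative only: hypothetical prompt templates, not the exact wording used in this study.
def zero_shot_hybrid_prompt(comment: str) -> str:
    return (
        "You are an experienced psychologist reviewing social-media comments. "   # role setting
        "The comments were posted under a Weibo 'tree hole' post where users share distress. "  # scene setting
        "Decide whether the following comment indicates suicide risk. Answer only 'yes' or 'no'.\n"
        f"Comment: {comment}\nAnswer:"
    )

def few_shot_prompt(comment: str, examples: list[tuple[str, str]]) -> str:
    # `examples` holds a small number of (comment, label) pairs drawn from the training set
    shots = "\n".join(f"Comment: {c}\nAnswer: {label}" for c, label in examples)
    return (
        "Decide whether each comment indicates suicide risk. Answer only 'yes' or 'no'.\n"
        f"{shots}\nComment: {comment}\nAnswer:"
    )
```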
**Comparison of LLMs.** Consistently, larger models like GPT-4 outperformed their smaller counterparts such as GPT-3.5. For the complex tasks in our study, the performance of ChatGLM2-6B was insufficient, while GLM-130B fared better but was still outdone by GPT-4. Given that our dataset consists of comments from social networks, the text is generally concise. As a result, token length did not substantially affect the performance of the models in our tasks. Rather, the selection of representative data for prompt construction emerged as a more crucial factor than merely increasing the number of tokens.
### Cross-Task Comparison
As task complexity increased from binary to multi-label classification, large language models did not sustain their performance. In contrast, supervised learning models maintained a relatively stable F1-score close to 80% across both types of tasks. This highlights the limitations of large language models in replacing supervised learning for specialized tasks. While fine-tuning may benefit simpler tasks, it does not adequately address the challenges posed by complex tasks, calling for further investigation into fine-tuning mechanisms for large language models.
### Expert evaluation and feedback
Owing to the subpar classification results of cognitive distortions by LLMs, we engaged in a manual analysis of these classification outcomes with the expertise of psychological scholars, focusing primarily on the most efficacious strategy in GPT-4, the train\({}_{2}\)+basic strategy. Based on the analysis, it was observed that, given sufficient textual information, GPT-4 can aptly identify cognitive distortion categories. The brevity and directness inherent in social media texts often deprive them of ample contextual information. However, GPT-4 can introduce relevant conjunctions and modality markers to infer the context (as delineated in Example 1 of Figure 1). Yet, in certain specific scenarios, the model does demonstrate errors:
* Most prominently, regarding the categorization of "the fortune teller error", instances arise where patients articulate negative anticipations and feelings of desolation about their future, provide retrospectives of their past experiences, or convey apprehensions about potential challenges in forthcoming life events. Such articulations primarily embody the patients' reflections and should not be deemed conclusive. Yet, GPT-4 has mistakenly classified these under the "the fortune teller error" category (refer to Example 2 of Figure 1).
* Additionally, challenges arose in the categorization of "should statements". Such statements predominantly manifest in patients' regrets regarding past events. However, GPT-4 erroneously categorized patients' expectations about the future as "should statements" as well (see Example 3 of Figure 1).
* In specific contexts, GPT-4 mistakenly classified patients' negative self-assessments as "blaming oneself". However, such classifications lacked the reasoning that ascribes the responsibility for external events to oneself, leading to misjudgments. The appropriate labels for these instances might be "disqualifying the positive" or "mental filter" (refer to Example 4 of Figure 1).
* The model occasionally exhibits ambiguity among the categories of "over-generalization", "all-or-nothing thinking", and "magnification". Instances inherently aligned with Category A are often misclassified into Category B or Category C (refer to Example 5 of Figure 1).
Overall, due to the distinct characteristics of social media data, the task of discerning cognitive distortions within such data is inherently challenging. Even specialists within the domain of psychology inevitably introduce certain subjectivity when categorizing and discerning cognitive distortions in social media texts. In certain contexts, the LLMs can be more detailed, occasionally eliminating biases that may arise during human annotation (as illustrated in Example 6 of Figure 1).
Figure 1: Typical examples of true labels versus GPT-4 predicted labels in cognitive distortion.
## 6 Discussion
Our study systematically evaluated the effectiveness of large language models (LLMs) across two mental health related tasks on Chinese social media: suicide risk classification and cognitive distortion multi-label classification. Our results also reveal the nuanced role of prompt design. While the 'hybrid' prompt performed well in zero-shot settings, adding more data to few-shot prompts was not universally beneficial. For more straightforward tasks, adding background knowledge appeared to help smaller models (ChatGLM2-6B), but its utility diminished for more complex models or tasks. This calls for a more customized approach to prompt engineering, tailored to the specific task and the size of the model being used. If high-quality data is unavailable or prompt design proves challenging, allowing an LLM to handle the task directly may still yield acceptable performance.
Larger language models like GPT-4 and GLM-130B generally outperform smaller variants such as GPT-3.5 and ChatGLM2-6B. However, these large models are not always competent at handling complex tasks and should not be seen as replacements for supervised learning algorithms. For simpler tasks, such as the suicide risk classification task examined in our study, the performance of LLMs is satisfactory. Interestingly, after fine-tuning, GPT-3.5 even outperforms GPT-4, achieving results that are nearly on par with those obtained through supervised learning methods.
While there is often a preference for large input limits in LLMs, it is crucial to tailor these settings to the specific task at hand. For tasks involving shorter texts, such as our study of social network comments, the long-input capability of an LLM may not be a primary concern. Our experiments indicate that extending the input data to construct few-shot prompts does not necessarily lead to improved performance. Therefore, it is important to carefully consider the nature of the task when configuring the input parameters of an LLM.
Our study does have some limitations. For instance, due to token constraints, we were unable to conduct certain tests--particularly those involving smaller models supplemented with background knowledge--across all tasks. Looking ahead, we plan to conduct more comprehensive studies that encompass a wider variety of tasks and models. This will allow us to draw more definitive conclusions regarding the comparative effectiveness of large language models and supervised learning algorithms. Additionally, the fine-tuning mechanisms of LLMs warrant further exploration, particularly for more efficient handling of complex tasks. The development of advanced prompt engineering techniques could also help optimize the performance of LLMs across various tasks.
## 7 Conclusion
In this study, we evaluated the performance of multiple large language models (LLMs) in two psychology-related tasks and compared their efficacy with that of supervised learning algorithms. Although LLMs show promise in various natural language processing applications, they are not yet a comprehensive substitute for supervised learning, particularly in domain-specific tasks. Fine-tuning LLMs can enhance performance on simpler tasks but is less effective for more complex challenges. The success of different training strategies and prompt designs is highly contingent on both the task and the size of the model, underscoring the necessity for task-specific customization. In summary, our research suggests that while LLMs offer considerable potential, significant work remains to make them universally effective across a broad array of complex tasks.
## 8 Dataset and Code Availability
The experimental texts for our study are sourced from comments on a Sina Weibo post by the user "Zoufan," which can be viewed at: [https://www.weibo.com/xiaofan1167is_all=1](https://www.weibo.com/xiaofan1167is_all=1). An expert-annotated dataset for the study of cognitive distortions and suicide risk, along with the prompt of the large language model and the supervised learning model code, are now available at: [https://github.com/HongzhiQ/SupervisedVsLLM-EfficacyEval](https://github.com/HongzhiQ/SupervisedVsLLM-EfficacyEval).
Here are the models mentioned earlier along with their corresponding source code and online demo links:
* **ChatGLM2-6B:**
* Source code: [https://github.com/thudm/chatglm2-6b](https://github.com/thudm/chatglm2-6b)
* Unofficial demo: [https://huggingface.co/spaces/mikeee/chatglm2-6b-4bit](https://huggingface.co/spaces/mikeee/chatglm2-6b-4bit)
* **GLM-130B:**
* Source code: [https://github.com/THUDM/GLM-130B](https://github.com/THUDM/GLM-130B)
* Official online demo: [https://chatglm.cn/detail](https://chatglm.cn/detail)
* **GPT series:**
* Web application: [https://chat.openai.com/](https://chat.openai.com/)
* GPT-3.5 Fine-tuning details: [https://platform.openai.com/docs/guides/fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
## 9 Acknowledgments
This work was supported by grants from the National Natural Science Foundation of China (grant numbers:72174152, 72304212 and 82071546), Fundamental Research Funds for the Central Universities (grant numbers: 2042022kf1218; 2042022kf1037), the Young Top-notch Talent Cultivation Program of Hubei Province. Guanghui Fu is supported by a Chinese Government Scholarship provided by the China Scholarship Council (CSC).
## 10 Authors contributions
Hongzhi Qi was responsible for the experiment design and programming. Qing Zhao and Jianqiang Li collaborated in the proposal of the AI-related aspects of the project, with Zhao focusing on data analysis and interpretation and Li serving as the leader of the computer science aspect of the project. Both also reviewed the manuscript. Dan Luo and Huijing Zou contributed to the manuscript writing, carried out experimental verification, and collected data. Changwei Song and Wei Zhai were responsible for code development and served as auxiliary programmers. Guanghui Fu proposed the central idea of the study and was a major contributor in writing the manuscript. Shuo Liu, Yi Jing Yu and Fan Wang took the lead in result evaluation and contributed psychological perspectives to the idea proposal. Bing Xiang Yang proposed the psychological aspects of the idea, performed experimental verification, and led the project from the psychology angle. All authors read and approved the final manuscript.
## 11 Competing interests
All authors declare no financial or non-financial competing interests.
|
2309.15306 | The Beyond-Halo Mass Effects of the Cosmic Web Environment on Galaxies | Galaxy properties primarily depend on their host halo mass. Halo mass, in
turn, depends on the cosmic web environment. We explore if the effect of the
cosmic web on galaxy properties is entirely transitive via host halo mass, or
if the cosmic web has an effect independent of mass. The secondary galaxy bias,
sometimes referred to as ``galaxy assembly bias'', is the beyond-mass component
of the galaxy-halo connection. We investigate the link between the cosmic web
environment and the secondary galaxy bias in simulations. We measure the
secondary galaxy bias through the following summary statistics: projected
two-point correlation function, $w_{\rm p}(r_{\rm p})$, and counts-in-cylinders statistics,
$P(N_{\rm CIC})$. First, we examine the extent to which the secondary galaxy bias can
be accounted for with a measure of the environment as a secondary halo
property. We find that the total secondary galaxy bias preferentially places
galaxies in more strongly clustered haloes. In particular, haloes at fixed mass
tend to host more galaxies when they are more strongly associated with nodes or
filaments. This tendency accounts for a significant portion, but not the
entirety, of the total secondary galaxy bias effect. Second, we quantify how
the secondary galaxy bias behaves differently depending on the host halo
proximity to nodes and filaments. We find that the total secondary galaxy bias
is relatively stronger in haloes more associated with nodes or filaments. We
emphasise the importance of removing halo mass effects when considering the
cosmic web environment as a factor in the galaxy-halo connection. | Kuan Wang, Camille Avestruz, Hong Guo, Wei Wang, Peng Wang | 2023-09-26T23:35:17Z | http://arxiv.org/abs/2309.15306v2 | # The Beyond-Halo Mass Effects of the Cosmic Web Environment on Galaxies
###### Abstract
Galaxy properties primarily depend on their host halo mass. Halo mass, in turn, depends on the cosmic web environment. We explore if the effect of the cosmic web on galaxy properties is entirely transitive via host halo mass, or if the cosmic web has an effect independent of mass. The secondary galaxy bias, sometimes referred to as "galaxy assembly bias", is the beyond-mass component of the galaxy-halo connection. We investigate the link between the cosmic web environment and the secondary galaxy bias in simulations. We measure the secondary galaxy bias through the following summary statistics: projected two-point correlation function, \(w_{\rm p}(r_{\rm p})\), and counts-in-cylinders statistics, \(P(N_{\rm CIC})\). First, we examine the extent to which the secondary galaxy bias can be accounted for with a measure of the environment as a secondary halo property. We find that the total secondary galaxy bias preferentially places galaxies in more strongly clustered haloes. In particular, haloes at fixed mass tend to host more galaxies when they are more strongly associated with nodes or filaments. This tendency accounts for a significant portion, but not the entirety, of the total secondary galaxy bias effect. Second, we quantify how the secondary galaxy bias behaves differently depending on the host halo proximity to nodes and filaments. We find that the total secondary galaxy bias is relatively stronger in haloes more associated with nodes or filaments. We emphasise the importance of removing halo mass effects when considering the cosmic web environment as a factor in the galaxy-halo connection.
keywords: cosmology: large-scale structure of Universe - galaxies: formation - galaxies: haloes - galaxies: statistics - methods: numerical
## 1 Introduction
Numerical simulations and galaxy surveys have shown that the large-scale structure of the Universe can be described by an intricate network of voids, sheets, filaments, and nodes, which is known as the _cosmic web_(Joeveer et al., 1978; de Lapparent et al., 1986; Bond et al., 1996). The cosmic web originates from primordial density fluctuations and evolves under gravitational interactions, creating a variety of cosmic environments (see Bond et al., 2010; Cautun et al., 2014, and references therein). In general, matter tends to flow out of voids and onto surrounding sheets, and accrete through filaments into nodes. On smaller, nonlinear scales, virialised dark matter haloes populate the cosmic web, and galaxies form and evolve in the potential wells of these haloes (see, e.g., Mo et al., 2010). While voids dominate the volume of the Universe, filaments and nodes contain most of the mass, as well as haloes and galaxies (Pimbblet et al., 2004; Aragon-Calvo et al., 2010).
Tidal forces from the environment affect dark matter haloes. Depending on the type of environment, haloes experience different tidal effects and display different assembly characteristics (e.g., Gottlober et al., 2001; Jing et al., 2007; Hahn et al., 2009; Paranjape et al., 2018). Studies have revealed that the formation time, spin, concentration, and shape of haloes are related to their position in the cosmic web, as measured by their distances to neighbouring structures and / or local densities (e.g., Sheth & Tormen, 2004; Wechsler et al., 2006; Hahn et al., 2007; Wang et al., 2011). For example, halo shapes tend to align with neighbouring sheets and / or filaments, leading to alignments between haloes, while halo spins have a mass-dependent tendency to be parallel or perpendicular to their parent structure (e.g., Kasun & Evrard, 2005; Hahn et al., 2007; Zhang et al., 2009; Trovland et al., 2013; Forero-Romero et al., 2014). More recent work (e.g., Borzyszkowski et al., 2017; Tojeiro et al., 2017; Yang et al., 2017; Musso et al., 2018; Ramakrishnan et al., 2019) has also shown that the cosmic web environment has an influence on halo assembly bias, the dependence of halo clustering on halo properties other than mass (Gao et al., 2005; Gao & White, 2007; Li et al., 2008).
Haloes are the main drivers of galaxy formation and evolution (White & Rees, 1978; Blumenthal et al., 1984). By modelling the statistical relationship between galaxy properties and the properties of their host haloes, we can interpret cosmological observations (e.g., Zehavi et al., 2011; Guo et al., 2015; Vakili et al., 2016; Lange et al.,
2019; Wechsler and Tinker 2018). One of the simplest forms of this galaxy-halo connection assumes that the mass of a halo completely determines the characteristics of the galaxies it contains (e.g., Zheng et al. 2007). However, this mass-only assumption is insufficient for precision cosmology (e.g., Wu et al. 2008; Zentner et al. 2014; McCarthy et al. 2019). Subsequently, more recent galaxy-halo models include an additional dependence of galaxy properties on secondary halo properties at a fixed halo mass (e.g., Hearin et al. 2016; Lehmann et al. 2017).
We can differentiate between the internal and environmental halo properties. The connection between galaxy properties and internal halo properties, such as halo concentration, is known as galaxy assembly bias (e.g., Croton et al. 2007). Galaxy assembly bias has an effect on galaxy clustering, which can be detected in observations (e.g., Cooper et al. 2010; Wang et al. 2013; Zentner et al. 2019). On the other hand, environmental halo properties, such as matter density on intermediate scales, are naturally linked to halo clustering. Any dependence of galaxy properties on these environmental halo properties will be reflected in galaxy clustering as well (e.g., Artale et al. 2018; Zehavi et al. 2018; Xu et al. 2021). Since internal and environmental halo properties are usually correlated, these two types of dependencies are also connected. Following the ideas of Mao et al. (2018), we suggest the use of the term secondary galaxy bias (SGB) to refer to all dependencies of galaxy properties on internal or environmental halo properties at a fixed halo mass.
Since the cosmic web has a major impact on dark matter haloes, it is reasonable to expect that the cosmic web also plays a role in shaping galaxy properties. In fact, studies have demonstrated that star formation, colour, morphology, and stellar mass are all strongly and non-trivially dependent on the type and density of the environment (e.g., Dressler 1980; Kodama et al. 2001; Blanton et al. 2005; Gonzalez and Padilla 2009; Sobral et al. 2011; Eardley et al. 2015; Kraljic et al. 2018; Alam et al. 2019; Aragon Calvo et al. 2019). Additionally, there is evidence of statistical alignments between galaxies and their large-scale environment (e.g., Sales and Lambas 2004; Azzaro et al. 2007; Faltenbacher et al. 2009; Hahn et al. 2010; Zhang et al. 2013). Furthermore, both numerical and observational studies have confirmed the correlation between galaxy spins and the environment (Navarro et al. 2004; Paz et al. 2008; Tempel et al. 2013; Tempel and Libeskind 2013), which is caused by tidal torques (Efstathiou and Jones 1979; White 1984).
We note that the majority of research on the relationship between galaxies and their environment does _not_ differentiate between the environmental effect and the halo mass effect. This leads us to ask: _Does the galaxy-halo connection have a component driven by the cosmic web independent of halo mass?_ We consider two aspects: (i) how much of the total SGB can be attributed to the cosmic web environment as a secondary halo property; and (ii) whether the SGB effect behaves differently in different cosmic web environments.
In this paper, we utilise the IllustrisTNG hydrodynamical simulation (e.g., Pillepich et al. 2018; Nelson et al. 2019) to explore the connection between the SGB and the cosmic web. We quantify the environment of galaxies by measuring their proximity to nodes or filaments in the cosmic web, identified using the DisPerSE cosmic web finder (Sousbie 2011; Sousbie et al. 2011). We measure the strength of the SGB effect using the shuffling procedure developed in Croton et al. (2007) (see also McCarthy et al. 2019; Xu et al. 2021; Yuan et al. 2022, for recent applications of this technique). Our findings shed light on the relationship between the secondary galaxy bias and the cosmic web environment, thereby helping to elucidate the physics of galaxy formation and evolution in the context of the large-scale structure.
This paper is organised as follows. In Section 2, we introduce the data set and methods that we use in our analyses. In Section 3, we examine the dependence of the directly measured halo occupation distribution on different cosmic environments. In Section 4, we treat the cosmic web environment as a secondary halo property, and study its contribution to the total secondary galaxy bias. In Section 5, we study how the secondary galaxy bias differs in different cosmic web environments. We discuss our findings in Section 6 and draw conclusions in Section 7. Appendix A and Appendix B describe additional tests.
## 2 Data and Methods
### IllustrisTNG simulation
This work utilises the TNG300-1 run of the IllustrisTNG simulation suite (Marinacci et al. 2018; Naiman et al. 2018; Nelson et al. 2018; Pillepich et al. 2018; Springel et al. 2018; Nelson et al. 2019), which is a set of large-scale, cosmological, gravo-magnetohydrodynamical simulations conducted with the AREPO code (Springel 2010). The simulations are based on the Planck 2015 cosmology (Planck Collaboration et al. 2016), with \(\Omega_{\Lambda,0}=0.6911\), \(\Omega_{m,0}=0.3089\), \(\Omega_{b,0}=0.0486\), \(\sigma_{8}=0.8159\), \(n_{s}=0.9667\) and \(h=0.6774\). The TNG300-1 run is the high-resolution full-physics run with the largest volume, having a box size of \(L_{\rm box}=205h^{-1}{\rm Mpc}\) and dark matter and baryon mass resolution of \(4\times 10^{7}h^{-1}{\rm M}_{\odot}\) and \(7.6\times 10^{6}h^{-1}{\rm M}_{\odot}\), respectively.
In the TNG simulation, the haloes are identified using the standard friends-of-friends (FoF) algorithm (e.g., Davis et al. 1985), and the virial masses of the haloes are taken from the group catalogue. Subhaloes, which contain individual galaxies, are identified with the Subfind algorithm (Springel et al. 2001). The stellar masses and positions of the galaxies are obtained from Subfind, and we focus on galaxies with stellar masses higher than \(10^{8}{\rm M}_{\odot}\) in this paper, unless otherwise specified.
### Cosmic web classification
We use the DisPerSE cosmic web finder (Sousbie 2011; Sousbie et al. 2011) to identify structures in the simulation volume. DisPerSE provides automatic identification of topological structures such as nodes, filaments, walls and voids, namely the cosmic web, based on Discrete Morse theory (e.g., Forman 2002). DisPerSE uses discrete distributions of particles in simulations or sparse observational catalogues to estimate a density field. Nodes are critical points in the density field, with filaments being the unique integral lines connecting them. Saddle points are minima along filaments. In this study, considering the mass resolution of the simulation, the sample size, and comparability with observational data, we choose galaxies with stellar masses above \(10^{8.5}h^{-1}{\rm M}_{\odot}\) as the input tracers of DisPerSE for the cosmic web identification. Our application of the DisPerSE algorithm follows that of Galarraga-Espinosa et al. (2020), adopting a signal-to-noise threshold of \(3\sigma\) to identify filaments, and we verify that our results are very similar to their catalogues. We obtain a total of 11,446 filaments and record the distance of each galaxy to its nearest filament and nearest node.
We quantify the cosmic web environment of galaxies in terms of their proximity to nearby dense structures, namely, the distance to the nearest node, \(d_{\rm node}\), and the nearest filament, \(d_{\rm filament}\). We investigate the effects of nodes and filaments separately. Galaxies that are close to nodes are mainly affected by the node environment
and are not as sensitive to the filaments around them, and in our analyses of \(d_{\rm filament}\), we exclude galaxies with \(d_{\rm node}<2h^{-1}\)Mpc. We refer to this sample as the non-node sample, in contrast to the full sample, which includes all galaxies with stellar masses greater than \(10^{8}\)M\({}_{\odot}\). In Figure 1, we illustrate these distances with a scatter plot of galaxies in a thin slice of the simulation box. We observe that most galaxies are distributed around nodes and along filaments, forming a web-like structure, as expected. In Figure 2, we show the fractions of galaxies with different distances to nodes and filaments, as functions of galaxy stellar mass. It is clear from the figure that more massive, brighter galaxies tend to inhabit node and filament environments. These proxies do not capture all the information from the cosmic web environment, and we will discuss other possibilities in Section 7.
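As an illustration, a minimal sketch of how such distance proxies and the non-node cut could be computed with a periodic k-d tree is given below; the arrays are random stand-ins for the galaxy positions, the DisPerSE node positions, and points sampled along the filament spines, and are not part of our actual pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

L_BOX = 205.0   # h^-1 Mpc, TNG300-1 box size
rng = np.random.default_rng(0)

# toy stand-ins for the real catalogues, all in h^-1 Mpc
gal_pos  = rng.uniform(0.0, L_BOX, size=(5000, 3))    # galaxy positions
node_pos = rng.uniform(0.0, L_BOX, size=(300, 3))     # DisPerSE node positions
fil_pos  = rng.uniform(0.0, L_BOX, size=(20000, 3))   # points sampled along filament spines

def nearest_distance(points, structures, boxsize=L_BOX):
    """Periodic distance from each point to the nearest structure point."""
    tree = cKDTree(structures, boxsize=boxsize)
    dist, _ = tree.query(points, k=1)
    return dist

d_node = nearest_distance(gal_pos, node_pos)
d_filament = nearest_distance(gal_pos, fil_pos)

# non-node sample used in the filament analysis: drop node-dominated galaxies
non_node = d_node > 2.0          # h^-1 Mpc
d_filament_non_node = d_filament[non_node]
```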
### Statistics
The connection between galaxies and haloes, together with the halo distribution, determines the spatial distribution of galaxies, and we measure galaxy clustering through summary statistics. We select two distinct statistics, the projected two-point correlation function \(w_{\rm p}(r_{\rm p})\) and the counts-in-cylinders statistic \(P(N_{\rm CIC})\), both of which are based on finding pairs of galaxies. We use the real-space positions of galaxies from the simulation, disregarding peculiar velocities. To make it easier to compare our results with those from observations, which will be explored in our future work, we still use projected statistics, which are less affected by redshift space uncertainties. We take the \(z\)-axis as the line-of-sight direction for our measurements. We use the halotools package (Hearin et al., 2017) to make our measurements.
Two-point correlation functions encode the majority of information in near-Gaussian fields and are used as standard statistics in the literature. We measure the projected two-point correlation function,
\[w_{\rm p}(r_{\rm p})=2\int_{0}^{\pi_{\rm max}}d\pi\,\xi(r_{\rm p},\pi), \tag{1}\]
where \(\xi(r_{\rm p},\pi)\) is the excess probability of finding galaxy pairs with projected and line-of-sight separations \(r_{\rm p}\) and \(\pi\), respectively. We choose \(\pi_{\rm max}=40h^{-1}\)Mpc, and compute \(w_{\rm p}(r_{\rm p})\) in 10 logarithmically spaced radial bins between \(r_{\rm p}=0.1h^{-1}\)Mpc and \(r_{\rm p}=31.6h^{-1}\)Mpc.
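For concreteness, a minimal sketch of this measurement, assuming the halotools API referred to above and a random stand-in galaxy catalogue, could look as follows; the eleven logarithmically spaced edges define the ten \(r_{\rm p}\) bins quoted in the text.

```python
import numpy as np
from halotools.mock_observables import wp

L_BOX = 205.0                                         # h^-1 Mpc, TNG300-1 box size
rng = np.random.default_rng(0)
gal_pos = rng.uniform(0.0, L_BOX, size=(10000, 3))    # stand-in for the galaxy catalogue

rp_bins = np.logspace(np.log10(0.1), np.log10(31.6), 11)   # 11 edges -> 10 bins
pi_max = 40.0                                         # h^-1 Mpc

# halotools integrates xi(rp, pi) along the z-axis, the assumed line of sight
wp_rp = wp(gal_pos, rp_bins, pi_max, period=L_BOX)
```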
In Wang et al. (2019, 2022), we have demonstrated that the counts-in-cylinders statistics are an informative complement to the two-point statistics, because they encode higher-order information of the galaxy field. We measure the counts-in-cylinders statistic for a sample of galaxies by constructing cylinders of radius \(r_{\rm cyl}\) and half-length \(l_{\rm cyl}\) along the line of sight, centring them on each galaxy in the sample. We then count the number of companion galaxies in the sample that fall into each cylinder and use the distribution of this companion count as our summary statistic of the galaxy spatial distribution (illustrated in Figure 3). In real space, we use the same \(r_{\rm cyl}\) and \(l_{\rm cyl}\) of \(5h^{-1}\)Mpc to probe sufficiently large scales in the spatial distribution of galaxies, and denote the count statistics with this cylinder size as \(P_{S}(N_{\rm CIC})\). We have tested that different cylinder sizes do not affect our qualitative results (see Appendix B).
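A corresponding sketch for the counts-in-cylinders measurement, under the same assumptions (halotools API, stand-in catalogue), is given below; the self-count of each cylinder's central galaxy is removed so that only companions are tallied.

```python
import numpy as np
from halotools.mock_observables import counts_in_cylinders

L_BOX = 205.0                                         # h^-1 Mpc
rng = np.random.default_rng(0)
gal_pos = rng.uniform(0.0, L_BOX, size=(10000, 3))    # stand-in for the galaxy catalogue

r_cyl, l_cyl = 5.0, 5.0                               # cylinder radius and half-length, h^-1 Mpc
n_cic = counts_in_cylinders(gal_pos, gal_pos, r_cyl, l_cyl, period=L_BOX)
n_cic = n_cic - 1                                     # remove the self-count

# probability distribution of companion counts (unit-width bins)
bins = np.arange(n_cic.max() + 2)
p_ncic, _ = np.histogram(n_cic, bins=bins, density=True)
```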
### Catalogue shuffling
We measure the strength of the SGB with the shuffling technique, first proposed by Croton et al. (2007). Here, we present a detailed description of the shuffling method, which is also illustrated in Figure 4.
#### 2.4.1 Mass shuffling
The SGB is the dependence of galaxy occupation on some secondary halo property (denoted as \(x\) in Figure 4), which is either internal or environmental. This effect can be detected in galaxy clustering through its combination with the underlying halo clustering. To isolate the SGB, galaxies are randomly shuffled among haloes of the same mass, while preserving the phase-space distribution of satellite galaxies with respect to the central galaxy. This erases any dependence of galaxy occupation on halo properties other than mass. Comparing the shuffled and original clustering provides a quantification of the SGB. If there is no SGB present, the shuffling has no impact on the measured galaxy clustering. However, if there is SGB in the sample, the shuffling alters the galaxy clustering. In practice, galaxies are shuffled among haloes in narrow mass bins of 0.1 dex, over which the scatter introduced by the mass dependence is typically small.
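A minimal sketch of this shuffling step (for illustration only, not the exact code used in this work) is given below: haloes are binned in 0.1 dex of log halo mass, the galaxy content of the haloes in each bin is randomly permuted among them, and galaxies keep their offsets from the halo centre so that the 1-halo phase-space structure is preserved. The toy arrays stand in for the TNG halo and galaxy tables.

```python
import numpy as np

L_BOX = 205.0
rng = np.random.default_rng(42)

# toy stand-in catalogues (replace with the TNG halo / galaxy tables)
n_halo, n_gal = 1000, 3000
halo_mass = 10.0 ** rng.uniform(10.5, 14.5, n_halo)            # virial masses
halo_pos = rng.uniform(0.0, L_BOX, size=(n_halo, 3))           # halo centres
gal_host = rng.integers(0, n_halo, n_gal)                      # host halo index per galaxy
gal_offset = rng.normal(0.0, 0.3, size=(n_gal, 3))             # galaxy position minus halo centre

def shuffle_within_bins(bin_index, halo_pos, gal_host, gal_offset, rng, boxsize=L_BOX):
    """Randomly reassign the galaxy content of haloes among haloes sharing the
    same bin_index; galaxies keep their offsets from the (new) halo centre."""
    new_host = np.arange(len(bin_index))
    for b in np.unique(bin_index):
        members = np.where(bin_index == b)[0]
        new_host[members] = rng.permutation(members)
    return np.mod(halo_pos[new_host[gal_host]] + gal_offset, boxsize)

# mass-only shuffle: 0.1 dex bins in log halo mass
logm = np.log10(halo_mass)
mass_bin = np.floor((logm - logm.min()) / 0.1).astype(int)
shuffled_gal_pos = shuffle_within_bins(mass_bin, halo_pos, gal_host, gal_offset, rng)
```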
#### 2.4.2 Double shuffling
It is possible to further investigate the origins of the SGB with the double shuffling technique. This technique fixes the halo mass and a secondary halo property when reassigning galaxies among haloes. This eliminates any dependence of galaxy occupation on halo properties other than mass and the fixed secondary halo property. By comparing the mass shuffles, the double shuffles, and the original galaxy distribution, it is possible to determine the portion of the SGB that can be attributed to the secondary halo property, such as \(d_{\rm node}\) or \(d_{\rm filament}\), and the portion that cannot, thus indicating its relative importance in determining galaxy occupation.
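The double shuffle can reuse the same permutation step, with the bins defined jointly by halo mass and the secondary property; a sketch under the same assumptions as the previous block:

```python
import numpy as np

def joint_bin_index(halo_mass, halo_x, dlogm=0.1, n_quantiles=4):
    """Bin haloes jointly by 0.1 dex of log mass and by the quantile of the
    secondary property x (e.g. d_node or d_filament) within each mass bin."""
    logm = np.log10(halo_mass)
    mbin = np.floor((logm - logm.min()) / dlogm).astype(int)
    xbin = np.zeros_like(mbin)
    for b in np.unique(mbin):
        sel = mbin == b
        ranks = np.argsort(np.argsort(halo_x[sel]))        # 0 ... n-1 within the mass bin
        xbin[sel] = (ranks * n_quantiles) // max(sel.sum(), 1)
    return mbin * n_quantiles + xbin

# passing this joint index to shuffle_within_bins (previous sketch) in place of the
# mass-only bin index performs the double shuffle, which preserves any dependence
# of galaxy occupation on both halo mass and x.
```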
### Secondary galaxy bias strength
#### 2.5.1 Ratio measurement
We measure the strength of the secondary galaxy bias (SGB) by comparing the statistics of the original and shuffled galaxy distributions, \(\mathbf{d}_{1}/\mathbf{d}_{2}\), where \(\mathbf{d}\) is the data vector. The deviation of these ratios from 1 can be used to detect the SGB, as the difference reflects the effect of the SGB on the statistics. For instance, ratios greater than 1 indicate that the SGB present in the original sample increases the value of the statistics, and vice versa. When \(\mathbf{d}_{1}\) and \(\mathbf{d}_{2}\) are measured from the original sample and mass shuffle, respectively, \(\mathbf{d}_{1}/\mathbf{d}_{2}\) reveals the full extent of SGB from all sources; when \(\mathbf{d}_{1}\) and \(\mathbf{d}_{2}\) are measured from the double shuffle and the mass shuffle, respectively, \(\mathbf{d}_{1}/\mathbf{d}_{2}\) indicates the SGB that can be attributed to the secondary halo property.
#### 2.5.2 Uncertainty estimation
We provide an estimate of the statistical significance of our SGB signal by computing jackknife uncertainties of the ratios, as was done in Hadzhiyska et al. (2021). We divide the original simulation box and the shuffled simulation box into \(5\times 5\) cuboid cells, each of size \(41\ h^{-1}\)Mpc\(\times\)41 \(h^{-1}\)Mpc\(\times\)205 \(h^{-1}\)Mpc. The long axis of each cuboid is the same as the length of the simulation box and is assumed to lie along the line of sight. We calculate the data vector for the jackknife subsamples, excluding one cuboid at a time. The jackknife ratios are calculated between pairs of jackknife subsamples that exclude the same cuboid, so that the ratios are only dependent on the changes in the galaxy occupation of haloes, not differences in host haloes themselves. We use the jackknife errors from these ratios to represent the uncertainty.
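Schematically, the jackknife errors on the ratios can be obtained as in the following sketch (illustrative; `statistic` stands for any of the summary statistics above):

```python
import numpy as np

def jackknife_ratio_error(pos1, pos2, statistic, boxsize=205.0, ncells=5):
    """Leave-one-cell-out jackknife error on statistic(pos1) / statistic(pos2).

    pos1, pos2 : (N, 3) positions of the two catalogues being compared
    statistic  : callable returning a data vector (e.g. w_p or P(N_CIC))
    """
    width = boxsize / ncells

    def cell_id(pos):
        ix = np.floor(pos[:, 0] / width).astype(int)
        iy = np.floor(pos[:, 1] / width).astype(int)
        return ix * ncells + iy

    c1, c2 = cell_id(pos1), cell_id(pos2)

    ratios = []
    for c in range(ncells * ncells):
        # exclude the same cell from both catalogues in each realisation
        d1 = statistic(pos1[c1 != c])
        d2 = statistic(pos2[c2 != c])
        ratios.append(np.asarray(d1) / np.asarray(d2))
    ratios = np.asarray(ratios)

    n = len(ratios)
    mean = ratios.mean(axis=0)
    return np.sqrt((n - 1) / n * np.sum((ratios - mean) ** 2, axis=0))
```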
## 3 Measured halo occupation distribution
The Halo Occupation Distribution (HOD) (e.g., Berlind & Weinberg, 2002; Kravtsov et al., 2004; Zheng et al., 2007) is a commonly used technique for modelling the relationship between galaxies and haloes. In its simplest form, the HOD assumes that the number of galaxies in a halo is determined solely by the halo mass. Secondary galaxy bias, however, implies a violation of this assumption. The HOD depends on the selection of the galaxy sample, as galaxies of different properties inhabit haloes in different ways. In this section, we will look at the HODs measured from the galaxy samples in IllustrisTNG and investigate whether and how they vary depending on the environment.
### HODs of the entire samples
The HODs of central and satellite galaxies are usually modelled separately since dark matter haloes acquire them in distinct ways. In the top row of Figure 5, we display the central and satellite occupation as a function of halo mass, for both the full sample and the non-node sample. These are calculated from the ratio of galaxy number to halo number within each narrow halo mass bin, in a nonparametric form. The measured HODs are in agreement with expectations. Each halo can have either 0 or 1 central galaxy, but any number of satellite galaxies, and more massive haloes host more galaxies.
Figure 1: In this figure we show a thin slice of the simulation projected in the \(z\) direction. We plot the spatial distribution of galaxies with stellar masses above \(10^{8}\)M\({}_{\odot}\), where each scatter point represents a galaxy. Left panel shows galaxies colour coded by distance to the nearest node, \(d_{\rm node}\). Right panel shows galaxies colour coded by distance to the nearest filament, \(d_{\rm filament}\). Orange dots in the right panel show “node galaxies” (with host haloes within \(2h^{-1}\)Mpc of the nearest node). We exclude node galaxies from our analysis with \(d_{\rm filament}\) to isolate the impact of filaments, because neighbouring nodes dominate their environment.
Figure 2: In this figure we show the respective fractions of galaxies in different distance bins. Left panel shows distances to nodes, \(d_{\rm node}\), as a function of the galaxy stellar mass M\({}_{\rm star}\). Right panel shows distances to filaments, \(d_{\rm filament}\), as functions of stellar mass. We label the \(d_{\rm node}\) and \(d_{\rm filament}\) bins within the figure. The right panel excludes all node galaxies to isolate the impact of the filament environment.
The occupation for the non-node sample is limited to lower halo masses, since the most massive haloes and their galaxies are excluded.
### Environmental dependence of the HOD
If the properties of galaxies depend not only on the mass of the halo, but also on a secondary halo property, \(x\) (in this case, \(d_{\rm node}\) or \(d_{\rm filament}\)), haloes of the same mass with different values of \(x\) will have different numbers of galaxies in a given sample. We divide the haloes in each narrow mass bin into four quartiles of \(d_{\rm node}\) (\(d_{\rm filament}\)), and by comparing the HODs of the quartiles, the SGB associated with \(d_{\rm node}\) (\(d_{\rm filament}\)) can be determined.
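For illustration, the sketch below shows this measurement with toy stand-in arrays in place of the TNG halo catalogue: within each 0.1 dex mass bin, haloes are split into quartiles of the distance proxy and the mean central occupation of each quartile is compared with that of all haloes in the bin; the satellite occupation is treated in the same way.

```python
import numpy as np

rng = np.random.default_rng(1)
n_halo = 20000
halo_mass = 10.0 ** rng.uniform(10.5, 14.0, n_halo)      # stand-in virial masses
d_node = rng.exponential(5.0, n_halo)                    # stand-in distances, h^-1 Mpc
n_cen = rng.integers(0, 2, n_halo)                       # 0 or 1 central galaxy per halo
# (the satellite occupation n_sat would be handled identically)

logm = np.log10(halo_mass)
mass_bin = np.floor((logm - logm.min()) / 0.1).astype(int)

mean_ncen_all, mean_ncen_quartile = {}, {}
for b in np.unique(mass_bin):
    sel = mass_bin == b
    mean_ncen_all[b] = n_cen[sel].mean()
    edges = np.quantile(d_node[sel], [0.0, 0.25, 0.5, 0.75, 1.0])
    quart = np.clip(np.searchsorted(edges, d_node[sel], side="right") - 1, 0, 3)
    mean_ncen_quartile[b] = [
        n_cen[sel][quart == q].mean() if np.any(quart == q) else np.nan
        for q in range(4)
    ]
```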
We compare the occupation of the central galaxy for each quartile with respect to the entire sample in the middle row of Figure 5. We find that haloes closer to dense structures are more likely to host a central galaxy, with the preference being stronger for the full sample with nodes. The mean central number transitions from 0 at low halo masses to 1 at high halo masses, and the difference between quartiles is most prominent at masses slightly below \(10^{11}h^{-1}\)M\({}_{\odot}\). Furthermore, the dependence on node (filament) proximity is stronger at lower \(d_{\rm node}\) (\(d_{\rm filament}\)), indicating that nodes and filaments mostly affect their immediate surroundings, and environments become less distinct from each other when they are far from these dense structures.
In the bottom row of Figure 5, we compare the mean satellite occupation of each quartile to that of the entire sample, \(\left<N_{\rm sat}\right>_{Q}/\left<N_{\rm sat}\right>\), as a function of the halo mass, \(M_{\rm h}\). The satellite statistics are noisy due to the low number densities of massive objects at the high mass end and the incomplete halo sample at the low mass end, caused by the finite-mass resolution of the simulation. Nevertheless, we still observe a trend similar to the central occupation, where haloes closer to nodes or filaments tend to host more satellite galaxies, with the dependence weakening as the distance increases. This measurement suggests that the effects of nodes and filaments on satellite occupation are comparable.
In conclusion, we have demonstrated that there is a greater presence of node-related SGB than filament-related SGB in the central galaxy component of the TNG galaxy sample. Nevertheless, this method is restricted to individual secondary halo properties and cannot measure the total SGB from all sources. To address this, we will use the shuffling technique to determine the relative contribution of the environment-related SGB to the total SGB in the following sections.
## 4 Environment as secondary halo property
We seek to answer two questions in this section: (i) what is the total amount of secondary galaxy bias (SGB) present in the IllustrisTNG galaxy sample, and (ii) how much of the total SGB can be attributed to the environmental properties, as quantified by the distances to nodes and filaments, \(d_{\rm node}\) and \(d_{\rm filament}\). To answer the first question, we compare \(w_{\rm p}(r_{\rm p})\) and \(P(N_{\rm CIC})\) measurements between the original galaxy sample and the mass shuffled sample. To answer the second question, we compare measurements from the sample shuffled by both halo mass and the environment property, \(d_{\rm node}\) or \(d_{\rm filament}\), and the sample shuffled by halo mass alone.
### Original measurements
We first measure the statistics of the original galaxy sample from TNG300-1. In Figure 6, we show the measurements for both the full and non-node samples, with \(w_{\rm p}(r_{\rm p})\) in the left panel and \(P(N_{\rm CIC})\) in the right panel. The two-point clustering is significantly reduced when node galaxies are excluded, and the fraction of groups with higher companion counts also decreases, resulting in a higher probability of having fewer companion galaxies in a cylinder, which is in line with our expectation, as node galaxies are a major contributor to the abundance of pairs and neighbours.
### Secondary galaxy bias signal
We now repeat the measurements for different galaxy catalogues and compare the results between the original and shuffled samples. We make four sets of comparisons:
Figure 4: In this figure, we explain the catalogue shuffling technique with which we measure the strength of secondary galaxy bias (SGB), where \(x\) is some secondary halo property. We schematically demonstrate the effect of shuffling galaxies between haloes with the same masses, and compare between cases with and without SGB. A detailed account of the method can be found in Section 2.4.
Figure 3: In this figure, we illustrate the definition of the counts-in-cylinders statistics. The cylinder is placed around each galaxy in the sample along the line of sight, and has a radius of \(r_{\rm cyl}\) and half-length of \(l_{\rm cyl}\). The number of companions that fall in the cylinder is then counted for each galaxy. Counts-in-cylinders provide sensitivity to higher order statistics and to less dense regions of the galaxy distribution, both of which are complementary to information in the two-point correlation function.
1. The original full sample versus the full sample shuffled by mass;
2. The full sample shuffled by mass and \(d_{\rm node}\) versus the full sample shuffled by mass;
3. The original non-node sample versus the non-node sample shuffled by mass;
4. The non-node sample shuffled by mass and \(d_{\rm filament}\) versus the non-node sample shuffled by mass.
For the full sample, (i) evaluates the total SGB and (ii) evaluates the SGB that can be attributed to \(d_{\rm node}\). For the non-node sample, (iii) evaluates the total SGB, and (iv) assesses the SGB that can be attributed to \(d_{\rm filament}\).
The results are shown in terms of the ratio between the statistics taken from the samples we are comparing. As we have discussed in Section 2.5, any difference from unity in the ratio can be seen as a sign of SGB in the sample, and the errors are determined from jackknife subsamples of the simulation box, providing an estimate of the statistical importance of the signal. Although the results here are based on one random shuffle of each type, we have tested that our results are consistent regardless of the random seed used in the shuffling process.
#### 4.2.1 Full sample and \(d_{\rm node}\) effect
We first examine the SGB in the full sample, along with the contribution of \(d_{\rm node}\). In the top row of Figure 7, we show the results of comparisons (i) and (ii), which indicate the strength of the total SGB present in the full sample. The solid curves with error bars represent comparison (i), the total SGB. We observe considerable SGB in the sample, as indicated by both \(w_{\rm p}(r_{\rm p})\) and \(P(N_{\rm CIC})\). The \(w_{\rm p}(r_{\rm p})\) results demonstrate that SGB increases clustering in the range of scales that we investigate. The most prominent effect is seen at intermediate scales, since the shuffling process preserves the 1-halo term in the clustering, diminishing the difference at small scales, while the underlying secondary halo bias weakens at large scales.
Figure 5: In this figure, we present the HOD measured from the full sample (left column) and the non-node sample (right column). In each column, the top panel shows the mean number of central and satellite galaxies in the entire sample as functions of halo mass, in solid and dashed curves respectively, as labelled in the panels. The middle panel shows the dependence of the central galaxy occupation on \(d_{\rm node}\) or \(d_{\rm filament}\) at fixed halo masses. We plot the dependence in terms of the difference of the central galaxy occupation between each \(d_{\rm node}\) or \(d_{\rm filament}\) quartile, \((N_{\rm cen}(M_{\rm h}))_{Q}\), and the entire sample, \(\langle N_{\rm cen}(M_{\rm h})\rangle\). Line colours correspond to different quartiles, labelled in the middle panels. Similarly, the bottom panel shows the dependence for the satellite occupation, in terms of \((N_{\rm sat})_{Q}/\langle N_{\rm sat}\rangle\) (\(M_{\rm h}\)), and in logarithmic scale. Both central and satellite galaxies preferentially populate haloes that are more strongly associated with either nodes or filaments.
The \(P(N_{\rm CIC})\) results show that SGB increases the likelihood of having either a large or a small number of companion galaxies, while reducing the proportion of intermediate companion counts. This suggests that the overall effect of SGB is that the more clustered haloes tend to host more galaxies, adding to the large groups of galaxies in the \(N_{\rm CIC}\) distribution. At the same time, the less clustered haloes host fewer galaxies, resulting in more empty space in the galaxy distribution, which is reflected in the increased probability of low \(N_{\rm CIC}\).
The dotted lines with error bars represent the SGB associated with \(d_{\rm node}\). The comparison between the dotted and solid lines shows that \(d_{\rm node}\) has an effect on galaxy occupation in the same way as the total effect, namely, haloes with lower values of \(d_{\rm node}\) (which are closer to their neighbouring nodes) tend to host more galaxies at the same mass. This is in agreement with the results from Section 3. Although \(d_{\rm node}\) contributes significantly to the total SGB, it is not the only factor. It is not possible to determine the exact amount of contribution from \(d_{\rm node}\) due to scale dependences and the fact that the ratios do not translate directly to a physical fraction.
#### 4.2.2 Non-node sample and \(d_{\rm filament}\) effect
We investigate the SGB in the non-node sample, from which node galaxies are excluded, and the contribution from \(d_{\rm filament}\). The bottom row of Figure 7 displays the results. The solid curves with error bars represent comparison (iii), the total SGB, and the dotted curves correspond to comparison (iv), the \(d_{\rm filament}\)-related SGB. The signals we detect are similar to those in Section 4.2.1, with \(w_{\rm p}(r_{\rm p})\) enhanced on all scales and \(P(N_{\rm CIC})\) increased for small and large companion counts. This implies that even when node galaxies are excluded, haloes that are more clustered tend to host more galaxies. For \(d_{\rm filament}\) in particular, more galaxies are found in haloes closer to the filaments, which is consistent with Section 3. The effect of \(d_{\rm filament}\) can explain a significant part of the total secondary bias, but not all of it.
The discrepancies between the full sample and the non-node sample are evident. The uncertainties in the ratios are lower for the latter, suggesting that the effect of SGB is less reliant on the environment when nodes are excluded, implying that in the extreme environment of nodes, galaxy occupation has more varied behaviour. The peak of the difference in \(w_{\rm p}(r_{\rm p})\) is on a slightly smaller scale than for the full sample, which is likely due to the smaller radii of haloes farther away from the nodes, and thus the earlier emergence of the 2-halo term. The effect of SGB on the small scale \(w_{\rm p}(r_{\rm p})\) is reduced by the exclusion of node regions, which is likely the cause of the small discontinuity at a few \(h^{-1}\)Mpc. The \(P(N_{\rm CIC})\) measurements are cut off at a smaller \(N_{\rm CIC}\) (around \(N_{\rm CIC}\sim 400\)) for the non-node sample, due to the reduced group sizes without the node galaxies.
## 5 Dependence of secondary galaxy bias on environment
In the preceding section, we have studied the relative contribution of environmental measures as secondary halo properties to the total SGB in a galaxy sample. In this section, we explore the role of the cosmic web environment in the SGB from a different angle: whether galaxy samples with similar halo mass distributions but different environments display different levels of SGB. It is well known that both the secondary halo bias and the secondary galaxy bias are sensitive to halo mass (e.g., Wechsler et al., 2006; Wang et al., 2022). This, combined with the fact that halo masses are strongly correlated with the environment, presents a challenge for our analysis. Therefore, when comparing the SGB in different cosmic web environments, we need to separate the effect of \(d_{\rm node}\) or \(d_{\rm filament}\) from the effect of the halo mass. To accomplish this, we divide the galaxy sample at the 50th percentile of the host \(d_{\rm node}\) or \(d_{\rm filament}\) within each narrow bin of host halo masses, instead of percentiles in the entire sample. This approach ensures that the split subsamples have similar distributions of halo masses and prevents the halo mass dependence of SGB from masquerading as a dependence on \(d_{\rm node}\) or \(d_{\rm filament}\).
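A minimal sketch of this mass-controlled split (illustrative function and variable names; per-galaxy arrays of host halo log-mass and host distance are assumed):

```python
import numpy as np

def mass_controlled_split(host_logm, host_dist, dlogm=0.1):
    """Return a boolean mask that is True for the 'low-distance' half.

    The split is taken at the median of host_dist within each narrow bin of
    host halo mass, so the low- and high-distance subsamples share essentially
    the same halo-mass distribution.
    """
    mass_bin = np.floor((host_logm - host_logm.min()) / dlogm).astype(int)
    low = np.zeros(host_dist.size, dtype=bool)
    for b in np.unique(mass_bin):
        sel = mass_bin == b
        low[sel] = host_dist[sel] <= np.median(host_dist[sel])
    return low
```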
### Secondary galaxy bias at different \(d_{\rm node}\)
We investigate the dependence of SGB on the node environment by comparing the SGB signals in the two galaxy subsamples with low and high \(d_{\rm node}\). The upper row of Figure 8 shows our measurements of the SGB effect of both statistics. The two subsamples with low
Figure 6: In this figure we present the measurements of the projected two-point correlation function \(w_{\rm p}(r_{\rm p})\) (left panel) and counts-in-cylinders statistics \(P(N_{\rm CIC})\) with a cylinder size of \(5h^{-1}\)Mpc (right panel). Orange lines indicate measurements from the full galaxy sample for the \(d_{\rm node}\) analysis and magenta lines indicate measurements from the non-node sample for the \(d_{\rm filament}\) analysis, as labelled in the left panel. We plot the projected two-point correlation function \(w_{\rm p}\) as a function of the projected separation \(r_{\rm p}\). The counts-in-cylinders statistics are represented as the probability distribution of the number of companions \(N_{\rm CIC}\), normalised by the bin widths. The exclusion of node galaxies suppresses the two-point correlation function and reduces the fraction of groups with higher companions.
Figure 8: In this figure, we show the strength of the secondary galaxy bias (SGB) signal for galaxies in different node or filament environments. The solid and dotted curves correspond to the respective total and distance related SGB as in Figure 7. But, the different colours here correspond to subsamples for the low and high \(d_{\rm node}\) or \(d_{\rm filament}\) subsamples separately, as labelled in the left column. The total SGB is stronger for the subsample closer to nodes or filaments. Within each subsample, the environmental component weakens.
Figure 7: In this figure we show the relative contribution of environmental measures in dotted lines, compared to the total secondary galaxy bias (SGB) in solid lines. We plot the measured signals for both the full sample (top row) and the non-node sample (bottom row). The left column shows results from \(w_{\rm p}(r_{\rm p})\), and the right column shows results from \(P(N_{\rm CIC})\). We illustrate the SGB signal in terms of the ratio between the original and shuffled measurements, where any deviation of the ratio from unity is a signal of SGB. The error bars are the jackknife errors of the ratios. The dotted lines illustrate a statistically significant measurement of the node and filament contribution to the total SGB.
The two subsamples with low and high \(d_{\rm node}\) are represented by different colours, as labelled in the top left panel. The solid curves and the dotted curves correspond to the total SGB and the \(d_{\rm node}\)-related SGB, respectively, as labelled in the top right panel, similar to Figure 7. We can see that the total SGB signal is stronger in the low \(d_{\rm node}\) subsample than in the high \(d_{\rm node}\) subsample.
We find that for both subsamples, the total SGB increases the two-point clustering in the range of scales we investigate, and shifts companion counts in cylinders towards more extreme values1, similar to the results from the entire sample. This can be interpreted as a positive correlation between halo clustering and galaxy occupation. In other words, in both subsamples, the haloes that are more strongly clustered also contain more galaxies.
Footnote 1: The only exception is in the lowest \(N_{\rm CIC}\) bin for the subsample closer to nodes, where the effect of the SGB reduces the probability, suggesting that the SGB in dense environments disfavours extreme isolation of galaxies.
By comparing the two subsamples, it is evident that the total SGB is significantly stronger at lower \(d_{\rm node}\). The contrast in \(w_{\rm p}(r_{\rm p})\) between the two subsamples is most noticeable at small scales, where the low \(d_{\rm node}\) subsample shows a positive signal, while the high \(d_{\rm node}\) curve is consistent with zero. This can be explained by the shuffling procedure, which preserves the 1-halo term and the larger separations between haloes in environments farther away from nodes, resulting in the 2-halo term appearing at larger scales. For \(P(N_{\rm CIC})\), there is an overall decrease in counts in the high \(d_{\rm node}\) subsample compared to the low \(d_{\rm node}\) subsample, due to the lower number density of galaxies away from nodes, as well as the weakening of the SGB signal.
We now investigate the \(d_{\rm node}\)-related SGB, which is represented by the dotted lines. In the low \(d_{\rm node}\) subsample, there is a weak \(d_{\rm node}\)-related SGB signal, while the high \(d_{\rm node}\) subsample shows little evidence of \(d_{\rm node}\)-related SGB. This is in agreement with our findings in Section 3, which suggest that the HODs of samples further away from the nodes are less distinct from each other. Additionally, for both subsamples, the proportion of \(d_{\rm node}\)-related SGB to the total SGB in the subsamples is lower than in the entire sample, indicating that the dependence of galaxy occupation on \(d_{\rm node}\) is largely explained by the coarse division of galaxies into low and high \(d_{\rm node}\) subsamples. This also shows that the signal of \(d_{\rm node}\)- and \(d_{\rm filament}\)-related SGB in Figure 7 can be largely ascribed to galaxy pairs across different environments.
### Secondary galaxy bias at different \(d_{\rm filament}\)
We investigate the dependence of the SGB on \(d_{\rm filament}\) in the non-node sample. The bottom row of Figure 8 displays the results. The \(d_{\rm filament}\)-related SGB effect is similar to the \(d_{\rm node}\) effect discussed in Section 5.1, with the total SGB being stronger at lower \(d_{\rm filament}\) than higher \(d_{\rm filament}\), although the difference is not as pronounced as in the \(d_{\rm node}\) case. The \(w_{\rm p}(r_{\rm p})\) statistic reveals a weak signal of \(d_{\rm filament}\)-related SGB in the low \(d_{\rm filament}\) subsample, while \(P(N_{\rm CIC})\) hardly shows a signal. In contrast, neither statistic detects a strong \(d_{\rm filament}\)-related SGB in the high \(d_{\rm filament}\) subsample.
In summary, regardless of whether the sample is divided into low or high \(d_{\rm filament}\), it is evident that more clustered haloes contain more galaxies. However, the preference is more pronounced in the low \(d_{\rm filament}\) subsample. Furthermore, the amount of \(d_{\rm filament}\)-related SGB is significantly reduced in both subsamples after the splitting, implying that the number of galaxies is mainly determined by the general type of environment in relation to nearby filaments, rather than by minor \(d_{\rm filament}\) variations.
## 6 Discussion
In this study, we investigated the impact of the cosmic web on the SGB by examining its effect on galaxy clustering measurements. We will now discuss the implications of our findings, as well as some of the restrictions of this study.
In Section 3, we compared the halo occupation distribution in different environments. We found that at fixed halo mass, haloes close to nodes and filaments host more galaxies. Rather than parametrised fits, this halo occupation distribution measurement is a direct measurement of the SGB attributable to the distance between the host halo and dense cosmic web structures. Previous studies, for example, Croft et al. (2012); Zehavi et al. (2018) and Bose et al. (2019), have also found that the halo occupation distribution is higher for haloes in environments with higher intermediate-scale overdensities. Their findings are broadly consistent with ours, although we use different proxies for the environment compared to the overdensity criteria used in these works: haloes located near nodes and filaments tend to have surrounding overdensities higher than those of other haloes.
Our research is one of the first to investigate the role of the cosmic web in the secondary galaxy bias effect. Hadzhiyska et al. (2020) used the IllustrisTNG simulation to explore the effect of local environment by employing a proxy of the local mass density and found that galaxies in similar environments tend to cluster together, which is in line with our results that the coarse division of the galaxy sample by types largely explains the environment-related secondary galaxy bias. Xu et al. (2021) studied the relative contribution of cosmic web environment types to the total secondary galaxy bias using the Millennium simulation and concluded that the environment type measured on scales of \(5-10h^{-1}\)Mpc constitutes a considerable portion of the total secondary bias signature. This is in agreement with our findings from Section 4, although we use different indicators of the environment. As we were completing this manuscript, we became aware of an independent analysis by Montero-Dorta and Rodriguez (2023), who also used distances from cosmic web structures to describe the environment. They found that at fixed halo mass, objects closer to dense structures cluster more strongly, which accounts for a significant portion of the dependence of galaxy clustering on halo formation time, also in qualitative agreement with our findings.
In Section 5, we present a novel element in the relation between the cosmic web and the SGB: to what extent haloes in different cosmic web environments exhibit different SGB behaviours. We find that haloes close to nodes and filaments are subject to stronger SGB. We argue that this is an important component of the cosmic web effect, as it sheds light on fundamental differences in the physics of galaxy formation and evolution between different environments.
In this work, we have focused on the connection between the cosmic web and the SGB, in other words, the response of the galaxy-halo connection to the environment. We note that there is a relatively larger volume of work on the influence of the environment on the halo bias. For example, Pujol et al. (2017) and Shi and Sheth (2018) claimed that halo clustering is completely determined by the local environment. Paranjape et al. (2018) found that haloes in isotropic and anisotropic environments show different halo assembly biases, Ramakrishnan et al. (2019) showed that halo clustering depends on internal halo properties only through tidal anisotropy, and Mansfield and Kravtsov (2020) proposed the tidal and gravitational effects of the surrounding large-scale structure as main causes of low-mass halo assembly bias. These results indicate that the environments of haloes play a physically fundamental role in determining the halo clustering, which is connected to the traditionally studied halo assembly bias.
We discuss how the cosmic web effect on SGB connects to some
of the more commonly studied secondary halo properties. In particular, studies have extensively examined the secondary galaxy bias associated with halo concentration, formation time, spin, etc. Each of these halo properties affects the galaxy occupation beyond halo mass (see, e.g., Xu et al., 2021, for a systematic study). Haloes in different cosmic web environments have systematically different assembly histories that are reflected in their secondary properties. For example, haloes that frequently merge are likely to have later formation times and lower concentrations. It has also been shown that low-concentration haloes tend to host more satellite galaxies (e.g., Wang et al., 2022), consistent with late formers having more frequent recent mergers. Although the cosmic web and traditional secondary properties are connected, we argue that the cosmic web environment provides a more fundamental view of the factors that affect galaxy formation and evolution.
The cosmic web descriptors are linked to the causal elements of the assembly histories. For instance, node haloes often experience frequent mergers, which are supplied by the filaments that connect them. Moreover, haloes in different cosmic web environments experience different tidal fields. At the most extreme end of the environmental range, massive node haloes have a major influence on their tidal environment and affect nearby haloes through anisotropic tidal forces. These cosmic web descriptors include distances to dense structures, which are used in this work, and measurements of the surrounding density, which are used in other works. By using cosmic web descriptors as a secondary feature, we can investigate their role in the formation of galaxies.
In this work, we study the secondary galaxy bias, which explicitly excludes the effect of halo mass on galaxy properties, and we underline the importance of disentangling the halo mass effect from the contribution of any secondary factor to galaxy formation and evolution. The success of various galaxy-halo connection models (see Wechsler and Tinker, 2018, and references therein) has demonstrated that halo mass (or some mass-like measure) is the predominant determinant of the properties of its galaxies. As halo mass is known to correlate with almost all other halo properties (e.g., Wechsler et al., 2002; Maccio et al., 2007; Knebe and Power, 2008), any apparent sensitivity of galaxy properties to secondary halo properties could, in fact, have a root in the halo mass dependence. It is crucial to always account for halo mass in the theoretical framework, and while it is more challenging to estimate halo masses in observational data, careful considerations of its effect should be made before drawing conclusions on physical factors that impact galaxy formation and evolution.
One might posit that any environmental dependence of galaxy occupation might be due to a halo mass dependence: we expect more massive haloes to prefer overdense regions of our Universe. However, our research has revealed that galaxies prefer to live near nodes and filaments, even when the halo mass is taken into consideration. This preference indicates that the cosmic web has a more complex effect on galaxy physics. These effects could be due to the different halo assembly histories, as well as surrounding tidal anisotropies, which we have discussed above. Galaxies in haloes with different assembly histories will form in different potential wells and have different merger histories, leading to different star formation histories, dynamical states and morphologies. On the other hand, the anisotropic tidal field may strip galaxies of their cold gas, or heat the gas reservoir, thus suppressing star formation as galaxies move through the cosmic web (e.g., Guo et al., 2021, 2023).
Our findings are based on the cosmic web structure identified by the DisPerSE cosmic web finder. Other algorithms, such as those summarised in Table 1 of Libeskind et al. (2018), may lead to different descriptions of the environment of individual objects. Nevertheless, the general behaviours of these algorithms are in agreement with each other, and we do not anticipate our primary conclusions to be altered by alternative cosmic web identification methods. It is worth noting that our quantification of the environment of haloes and galaxies, i.e., the distance to nearby dense structures, is not a comprehensive description of the environment information. For instance, this metric does not take into account the relative location of an object along a filament, nor does it differentiate between nodes or filaments with different densities and sizes. We do not consider cosmic sheets and voids in this work either. Therefore, we cannot definitively rule out the possibility that the secondary galaxy bias is completely rooted in the cosmic web environment.
Our analysis demonstrates the ability of \(P(N_{\rm CIC})\) to investigate the nuances of secondary galaxy bias, with a statistically significant measurement of environmental contributions to the SGB. As argued in Wang et al. (2022), while the two-point correlation function mainly concentrates on the densest parts of the galaxy distribution, the counts-in-cylinders statistic, \(P(N_{\rm CIC})\), is sensitive to all but the most extreme underdensities, and measures higher-order statistics of the galaxy field. In forward modelling approaches, \(P(N_{\rm CIC})\) provides additional information on the two-point statistics, and in our shuffling procedure, the changes in \(P(N_{\rm CIC})\) also reveal a level of detail that contributes to our understanding of the underlying physics.
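As an illustration of how a counts-in-cylinders statistic of this kind is assembled, the sketch below computes companion counts for a toy periodic catalogue; the box size, cylinder radius and half-length used here are placeholders rather than the exact values adopted in this work.

```python
import numpy as np

def counts_in_cylinders(pos, box_size, r_cyl=5.0, half_length=10.0):
    """Number of companions of each galaxy inside a cylinder of projected
    radius r_cyl and line-of-sight half-length half_length (periodic box).
    pos: (N, 3) positions with the z-axis taken as the line of sight."""
    n_gal = len(pos)
    n_cic = np.zeros(n_gal, dtype=int)
    for i in range(n_gal):
        d = pos - pos[i]
        d -= box_size * np.round(d / box_size)        # periodic wrapping
        r_p = np.hypot(d[:, 0], d[:, 1])              # projected separation
        in_cyl = (r_p < r_cyl) & (np.abs(d[:, 2]) < half_length)
        n_cic[i] = in_cyl.sum() - 1                   # exclude the galaxy itself
    return n_cic

# toy usage: P(N_CIC) normalised by bin width
rng = np.random.default_rng(42)
gal = rng.uniform(0.0, 205.0, size=(2000, 3))         # box side in Mpc/h
n_cic = counts_in_cylinders(gal, box_size=205.0)
counts, edges = np.histogram(n_cic, bins=np.arange(n_cic.max() + 2) - 0.5)
p_ncic = counts / counts.sum() / np.diff(edges)
```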
## 7 Conclusions
In this study, we explore a link between the cosmic web environment and the galaxy-halo connection. First, we treat the host halo proximity to nodes and filaments as a secondary halo property, and quantify its relative contribution to the total secondary galaxy bias. Second, we compare the behaviour of the secondary galaxy bias in different environments. Our findings are summarised as follows.
* We identify dense structures in the cosmic web (nodes and filaments) in the TNG300-1 run of the IllustrisTNG simulation, using the DisPerSE algorithm. We use halo distances to these dense structures as an environmental measure. We illustrate general features of the cosmic web with these measures in Figure 1 and Figure 2.
* We directly measure the halo occupation distribution for our galaxy sample with stellar masses above \(10^{8}\)M\({}_{\odot}\), and find that haloes closer to nodes or filaments tend to host more galaxies at fixed halo mass (Figure 5).
* We compare summary statistics of shuffled and original galaxy samples to quantify the total secondary galaxy bias and the component that can be attributed to our environmental measures (see Figure 4 for a schematic illustration). In addition to the projected two-point correlation function, \(w_{\rm p}(r_{\rm p})\), we include a novel perspective with the counts-in-cylinders statistics, \(P(N_{\rm CIC})\) (see Figure 3 for the definition of \(P(N_{\rm CIC})\)). Figure 6 provides examples of both statistics.
* In our chosen summary statistics, we confirm that the secondary galaxy bias causes an enhancement in the two-point clustering, and we expose a nuanced effect with the counts-in-cylinders statistics, which manifests as a redistribution of galaxies from intermediate sized companion groups into groups with either very large or very small numbers of galaxies (solid curves in Figure 7). We conclude that the total effect of secondary galaxy bias is for galaxies to preferentially reside in more strongly clustered haloes at similar halo masses.
* We find that the host halo distance to nodes or filaments can account for a significant portion of the total secondary galaxy bias,
but not the entire effect (see comparison between solid and dotted curves in Figure 7).
* We find that the total effect of secondary galaxy bias is relatively stronger for subsamples that are closer to nodes or filaments (see comparison between different coloured solid curves in Figure 8). Within each subsample, the environmental component of SGB weakens (see reduced deviation of the ratio from unity in dotted curves in Figure 8). This trend indicates that while host haloes closer to or further from dense cosmic web structures have different galaxy occupations, the finer details of the environment beyond this qualitative classification are less important in the galaxy-halo connection. We stress the importance of comparing galaxy samples with similar host halo mass distributions to isolate the effects of environment.
This work lays out a framework to comprehensively investigate the role of the cosmic web in the galaxy-halo connection, and constitutes a critical step towards understanding the role of the environment in galaxy formation and evolution. In the future, we will explore alternative descriptions of the cosmic web, and extend the analysis to observational data.
## Acknowledgements
We thank Johannes Lange, Risa Wechsler, Andrew Zentner and Qiong Zhang for useful discussions.
KW acknowledges support from the Leinweber Postdoctoral Research Fellowship at the University of Michigan. CA acknowledges support from the Leinweber Center for Theoretical Physics and DOE grant DE-SC009193. HG is supported by the National SKA Program of China (grant No. 2020SKA0110100), National Natural Science Foundation of China (Nos. 11922305, 1183305), the CAS Project for Young Scientists in Basic Research (No. YSBR-092) and the science research grants from the China Manned Space Project with Nos. CMS-CSST-2021-A02. PW is sponsored by the Shanghai Pujiang Program (No. 22PJ1415100).
This research made use of Python, along with many community-developed or maintained software packages, including IPython (Perez & Granger, 2007), Jupyter (jupyter.org), Matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011), SciPy (Jones et al., 2001), and Astropy (Astropy Collaboration et al., 2013). This research made use of NASA's Astrophysics Data System for bibliographic information.
## Data Availability
The simulations underlying this article were accessed from publicly available sources: [https://www.tng-project.org/data/](https://www.tng-project.org/data/). The catalogues including the cosmic web information will be shared on reasonable request to the corresponding authors. The additional derived data are available in the article.
|
2309.12469 | Constraints for the X17 boson from compacts objects observations | We investigate the hypothetical X17 boson on neutron stars and Quark Stars
(QSs) using various hadronic Equation of States (EoSs) with phenomenological or
microscopic origin. Our aim is to set realistic constraints on its coupling
constant and the mass scaling, with respect to causality and various possible
upper mass limits and the dimensionless tidal deformability $\Lambda_{1.4}$. In
particular, we pay special attention on two main phenomenological parameters of
the X17, the one is related to the coupling constant $\mathrm{g}$ that it has
with hadrons or quarks and the other with the in-medium effects through the
regulator $\mathrm{C}$. Both are very crucial concerning the contribution on
the total energy density and pressure. In the case of considering the X17 as a
carrier of nuclear force in Relativistic Mean Field (RMF) theory, an admixture
into vector boson segment was constrained by 20\% and 30\%. In our
investigation, we came to the general conclusion that the effect of the
hypothetical X17 both on neutron and QSs constrained mainly by the causality
limit, which is a specific property of each EoS. Moreover, it depends on the
interplay between the main two parameters that is the interaction coupling
$\mathrm{g}$ and the in-medium effects regulator $\mathrm{C}$. These effects
are more pronounced in the case of QSs concerning all the bulk properties. | A. Kanakis-Pegios, V. Petousis, M. Veselsky, Jozef Leja, Ch. C. Moustakidis | 2023-09-21T20:30:34Z | http://arxiv.org/abs/2309.12469v1 | # Constraints for the X17 boson from compacts objects observations
###### Abstract
We investigate the effect of the hypothetical X17 boson on neutron stars and Quark Stars (QSs) using various hadronic Equations of State (EoSs) of phenomenological or microscopic origin. Our aim is to set realistic constraints on its coupling constant and mass scaling, with respect to causality, various possible upper mass limits and the dimensionless tidal deformability \(\Lambda_{1.4}\). In particular, we pay special attention to two main phenomenological parameters of the X17: one is the coupling constant g that it has with hadrons or quarks, and the other describes the in-medium effects through the regulator C. Both are crucial for the contribution to the total energy density and pressure. In the case of considering the X17 as a carrier of the nuclear force in Relativistic Mean Field (RMF) theory, the admixture into the vector boson sector was constrained to 20% and 30%. In our investigation, we came to the general conclusion that the effect of the hypothetical X17, both on neutron stars and QSs, is constrained mainly by the causality limit, which is a specific property of each EoS. Moreover, it depends on the interplay between the two main parameters, that is, the interaction coupling g and the in-medium effects regulator C. These effects are more pronounced in the case of QSs concerning all the bulk properties.
keywords: X17 Boson; Neutron star; Quark Stars; Equation of State
Footnote †: journal: Physics Letters B
## 1 Introduction
In 2016 an article by Krasznahorkay et al. appeared [1], where an anomaly in the angular correlation of the \(e^{-}e^{+}\) decay of the \(1^{+}\) excited level of the \({}^{8}\)Be nucleus at 18.15 MeV was reported, and specifically an enhancement at certain folding angles was observed. Since the first report, Krasznahorkay and his group have additionally reported the same anomaly, observed in the angular correlation of the \(e^{-}e^{+}\) emission, in the excited states of \({}^{4}\)He and \({}^{12}\)C [2; 3; 4]. The reported anomalies in the folding-angle distributions were interpreted as a signature of a new neutral boson with a mass of about \(m_{X}=17\) MeV.
These reported observations placed the hypothetical X17 boson as a dark matter candidate, and in that spirit several theoretical works have since pursued this claim [5; 6]. However, an explanation relating this particle to the QCD vacuum was also proposed [7], in the conjecture that the 17 MeV particle could mediate the nucleon-nucleon interactions at large distances in an unbound cluster configuration. Since the assumption that the 17 MeV boson is the only carrier of nuclear interactions is somewhat extreme, we investigated the possible influence of the hypothetical 17 MeV boson on nuclear matter and its influence on the structure of compact astrophysical objects like neutron stars [8].
A further investigation [9] explores the hypothetical 17 MeV boson in the frame of Relativistic Mean Field (RMF) theory, constructing a universal Equation of State (EoS) that satisfies all of the well-known experimental constraints, from finite nuclei and heavy-ion collisions all the way to neutron stars, allowing the reproduction of masses from \(\cong 1.4\) M\({}_{\odot}\) up to \(\cong 2.5\) M\({}_{\odot}\). The values of the radius are in agreement with the recent measurements by NICER [10; 11]. Also, the value of the maximum mass is in good agreement with the recently reported pulsar mass of 2.35 M\({}_{\odot}\) [12] and potentially also with the mass of the secondary component of the gravitational wave event GW190814 [13]. A previous investigation [14], following an alternative direction, tried to set the ratio \(g^{2}/\mu^{2}\) for a Weakly Interactive Light Boson (WILB) inside the neutron star. In that investigation, the ratio \(g^{2}/\mu^{2}\) was estimated to be less than 2 GeV\({}^{-2}\), a value that looks suitable for the neutron star environment, taking into account also the experimental constraints on the symmetry and binding energies. Also, using the fact that the presence of a WILB does not affect the crust properties of neutron star matter, they obtained a quite restrictive constraint on the WILB characteristic scale. They also showed that the in-medium modification effect cannot be neglected, and for their investigation they assumed that the WILB mass follows the same scaling as the Brown-Rho one [15].
Another similar study [16] constrains the ratio \(g^{2}/\mu^{2}\) of a WILB to be less than 50 GeV\({}^{-2}\) and larger than 25 GeV\({}^{-2}\). They also investigated an upper value of \(g^{2}/\mu^{2}=100\) GeV\({}^{-2}\), which deviates considerably from the restrictions that must be fulfilled concerning the binding energy and the symmetry energy, even though it reproduces an acceptable mass, according to the GW190814 observation [13], close to \(\cong 2.5\) M\({}_{\odot}\), which is comparable with the
astrophysical observation. In the case of Quark Stars (QSs), a study [17] considering non-Newtonian gravity effects, for current quark masses of \(m_{u}=2.16\) MeV, \(m_{d}=4.67\) MeV, and \(m_{s}=93\) MeV, estimates a range of \(g^{2}/\mu^{2}\) between 4.58 GeV\({}^{-2}\) and 9.32 GeV\({}^{-2}\), reaching a maximum mass \(\approx 2.4\) M\({}_{\odot}\).
In all the aforementioned investigations, the ratio \(g^{2}/\mu^{2}\) for a WILB, and only the ratio, was thoroughly investigated. No attempt was made to estimate the coupling g separately from the boson mass \(\mu\) for the WILB, which was reasonable, because the mass of a possible candidate boson was not known or guessed at that time.
In general, the WILB scenario inside a neutron star and its connection with the hypothetical U boson, which in our case could also be represented by the hypothetical X17 boson, has been investigated in the past [18; 19; 20; 21; 22; 23; 24; 25].
In our present work, having as a candidate a 17 MeV boson (hereafter X17), we investigate the effects of non-Newtonian gravity together with a series of different models: hadronic EoSs with phenomenological or microscopic origin, from the RMF to the Momentum Dependent Interaction (MDI) model, as well as QSs. Our main motivation is to set realistic constraints on the X17 coupling constant g and the scaling of its mass, which is affected by the changes in the baryon density well above its saturation value inside the neutron star and the QS. All of our results are compared with astrophysical observations reported by LIGO-Virgo and from pulsars.
The paper is organized as follows: in Section 2, we briefly describe the non-Newtonian gravity model while in Section 3, we present the nuclear models in the context of which we are investigating our constraints for the X17 boson. In Section 4, we present the basic formalism of the tidal deformability while in Section 5, we display and discuss the results of the present study. In Section 6, we finalize with our concluding remarks.
## 2 The non-Newtonian gravity model
The deviation from Newton's gravitational potential, known as non-Newtonian gravity [26], is usually parameterized in the form:
\[V(r)=-\frac{Gm_{1}m_{2}}{r}\left(1+\alpha_{G}e^{-r/\lambda}\right)=V_{N}(r)+V_ {Y}(r) \tag{1}\]
where \(V_{N}(r)\) is the Newtonian potential, \(V_{Y}(r)\) is the Yukawa correction, \(G=6.67\times 10^{-11}\) N m\({}^{2}\)/kg\({}^{2}\) is the universal gravitational constant, \(\alpha_{\rm G}\) is the dimensionless coupling constant of the Yukawa force and \(\lambda\) represents the range of the Yukawa force mediated by the exchange of a boson with mass \(\mu\). The above quantities are related according to the following relations:
\[\alpha_{\rm G}=\pm\frac{{\rm g}^{2}\hbar c}{4\pi Gm_{b}^{2}},\qquad\lambda= \frac{\hbar}{\mu c} \tag{2}\]
where the \(\pm\) sign refers to scalar(+) and vector(-) boson, g is the boson-baryon coupling constant and \(m_{b}\) is the baryon mass.
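For orientation, Eq. (2) can be evaluated directly for a 17 MeV boson; in the minimal sketch below the value of g is only illustrative of the range considered later, and the neutron mass is used for \(m_{b}\).

```python
import numpy as np

HBAR_C_MEV_FM = 197.327      # MeV fm
HBAR_C_SI = 3.1615e-26       # J m
G_SI = 6.674e-11             # m^3 kg^-1 s^-2
M_NEUTRON_KG = 1.6749e-27    # kg

def yukawa_parameters(g, m_boson_MeV=17.0, m_baryon_kg=M_NEUTRON_KG):
    """Dimensionless Yukawa strength alpha_G and range lambda of Eq. (2)."""
    alpha_G = g**2 * HBAR_C_SI / (4.0 * np.pi * G_SI * m_baryon_kg**2)
    lam_fm = HBAR_C_MEV_FM / m_boson_MeV        # range hbar/(mu c) in fm
    return alpha_G, lam_fm

alpha_G, lam = yukawa_parameters(g=0.011)       # illustrative coupling
print(f"alpha_G ~ {alpha_G:.2e}, range ~ {lam:.1f} fm")
```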
## 3 The nuclear models
### The Relativistic Mean Field (RMF) model
In the case of the RMF theory using the extended Dirac-Hartree approximation, the energy density and pressure of neutron matter are given by the following expressions [27]:
\[{\cal E} = \frac{(\hbar c)^{3}g_{v}^{2}}{2(m_{v}c^{2})^{2}}n_{b}^{2}+\frac{(\hbar c)^{3}\left(\frac{g_{\rho}}{2}\right)^{2}}{2(m_{\rho}c^{2})^{2}}\rho_{I}^{2}+\frac{(m_{s}c^{2})^{2}}{2g_{s}^{2}(\hbar c)^{3}}(m_{b}c^{2}-m_{b}^{*}c^{2})^{2}+\frac{\kappa}{6g_{s}^{3}}(m_{b}c^{2}-m_{b}^{*}c^{2})^{3}+\frac{\lambda}{24g_{s}^{4}}(m_{b}c^{2}-m_{b}^{*}c^{2})^{4}+\sum_{i=n,p}\frac{\gamma}{(2\pi)^{3}}\int_{0}^{k_{F_{i}}}4\pi k^{2}\sqrt{(\hbar ck)^{2}+(m_{i}^{*}c^{2})^{2}}\,dk \tag{3}\]
\[P = \frac{(\hbar c)^{3}g_{v}^{2}}{2(m_{v}c^{2})^{2}}n_{b}^{2}+\frac{(\hbar c)^{3}\left(\frac{g_{\rho}}{2}\right)^{2}}{2(m_{\rho}c^{2})^{2}}\rho_{I}^{2}-\frac{(m_{s}c^{2})^{2}}{2g_{s}^{2}(\hbar c)^{3}}(m_{b}c^{2}-m_{b}^{*}c^{2})^{2}-\frac{\kappa}{6g_{s}^{3}}(m_{b}c^{2}-m_{b}^{*}c^{2})^{3}-\frac{\lambda}{24g_{s}^{4}}(m_{b}c^{2}-m_{b}^{*}c^{2})^{4}+\sum_{i=n,p}\frac{1}{3}\frac{\gamma}{(2\pi)^{3}}\int_{0}^{k_{F_{i}}}\frac{4\pi k^{2}(\hbar ck)^{2}}{\sqrt{(\hbar ck)^{2}+(m_{i}^{*}c^{2})^{2}}}\,dk \tag{4}\]
where \({\cal E}\) is the energy density, \(P\) is the pressure, \(g_{s}\), \(g_{v}\) and \(g_{\rho}\) are the couplings of the scalar boson, vector boson, and isovector \(\rho\)-meson respectively, \(m_{s}\), \(m_{v}\) and \(m_{\rho}\) are the rest masses of the scalar and vector bosons and of the \(\rho\)-meson respectively, the term \(\rho_{I}\) involves the difference between the proton and neutron densities (important for finite nuclei), \(\kappa\) and \(\lambda\) are the couplings of the cubic and quartic self-interaction of the scalar boson, \(m_{b}\) and \(m_{b}^{*}\) are the rest mass and the effective mass of the nucleon, \(n_{b}\) is the nucleonic density, \(k_{F}\) is the Fermi momentum of nucleons at zero temperature and \(\gamma\) is the degeneracy, with value \(\gamma=4\) for symmetric nuclear matter and \(\gamma=2\) for neutron matter (used in this investigation).
Considering the possibility that the hypothetical 17 MeV boson [1], could contribute as a second vector boson, we can write an effective mass term in the following form [9]:
\[m_{v}^{*2}=a_{X}^{2}m_{X}^{2}+(1-a_{X})^{2}m_{\omega}^{2} \tag{5}\]
where \(a_{X}\) is the admixture coefficient of the \(m_{X}=17\) MeV boson to the total vector potential. Depending on the value of \(a_{X}\), the effective mass can range from \(m_{\omega}=782.5\) MeV to 17 MeV.
Using two different values for the admixture coefficient (\(a_{X}\)), 0.2 (20%, \(m_{v}^{*}=626\) MeV) and 0.3 (30%, \(m_{v}^{*}=547.8\) MeV), and increasing the value of the standard \(\rho\)-coupling by 5% and 10% (effective \(\rho\)-coupling: g\({}_{\rho}^{*}\)), respectively, we constructed a set of 13 EoSs, fulfilling experimental constraints in analogy to [9], where properties of nuclear matter and finite nuclei were considered.
All combinations and parameters that were used are shown in Table.1 and Table.2. The corresponding Mass-Radius diagrams are depicted in Fig.1.
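The effective vector masses quoted above follow directly from Eq. (5); a quick numerical check:

```python
import numpy as np

def effective_vector_mass(a_X, m_X=17.0, m_omega=782.5):
    """Effective vector-boson mass of Eq. (5), in MeV."""
    return np.sqrt(a_X**2 * m_X**2 + (1.0 - a_X)**2 * m_omega**2)

for a_X in (0.2, 0.3):
    print(a_X, round(float(effective_vector_mass(a_X)), 1))  # 626.0 and 547.8 MeV
```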
### The Momentum Dependent Interaction (MDI) model
The Momentum Dependent Interaction (MDI) model used here, was already presented and analyzed in a previous paper [29; 30]. The MDI is designed to reproduce the results of the microscopic calculations of both nuclear and neutron rich matter at zero temperature and can be extended to finite temperature. The energy density of the baryonic matter, is given by:
\[{\cal E}(u,I) = \frac{3}{10}E_{F}^{0}n_{0}\left[(1+I)^{5/3}+(1-I)^{5/3}\right]u^{5/3}+\frac{1}{3}{\cal A}n_{0}\left[\frac{3}{2}-\left(\frac{1}{2}+x_{0}\right)I^{2}\right]u^{2}+\frac{\frac{2}{3}{\cal B}n_{0}\left[\frac{3}{2}-\left(\frac{1}{2}+x_{3}\right)I^{2}\right]u^{\sigma+1}}{1+\frac{2}{3}{\cal B}^{\prime}\left[\frac{3}{2}-\left(\frac{1}{2}+x_{3}\right)I^{2}\right]u^{\sigma-1}}+u\sum_{i=1,2}\left[C_{i}\left({\cal J}_{n}^{i}+{\cal J}_{p}^{i}\right)+\frac{(C_{i}-8Z_{i})}{5}I\left({\cal J}_{n}^{i}-{\cal J}_{p}^{i}\right)\right] \tag{6}\]
In Eq. (6), \(E_{F}^{0}\) is the Fermi energy of symmetric nuclear matter at the equilibrium density \(n_{0}=0.16\) fm\({}^{-3}\), \(I=(n_{n}-n_{p})/n\) and \(u=n/n_{0}\). The parameters \({\cal A},{\cal B},\sigma,{\cal C}_{1},{\cal C}_{2}\), and \({\cal B}^{\prime}\), which appear in the description of symmetric nuclear matter, and the additional parameters \(x_{0}\), \(x_{3}\), \(Z_{1}\), and \(Z_{2}\), used to determine the properties of asymmetric nuclear matter, are treated as parameters constrained by empirical knowledge [29]. By suitable choice of the above parameters we can regulate the stiffness of the corresponding EoS. This stiffness is well reflected by the values of the slope parameter \(L\), which is defined as:
\[L=3n_{0}\left(\frac{\partial E_{sym}(n)}{\partial n}\right)_{n=n_{0}} \tag{7}\]
where the symmetry energy, in general is defined as,
\[E_{sym}(n)=\frac{1}{2!}\left(\frac{\partial^{2}E(n,I)}{\partial I^{2}}\right)_{I=0} \tag{8}\]
and \(E(n,I)={\cal E}(u,I)/n\) is the energy per baryon. Moreover, the quantity \({\cal J}_{\tau}^{i}(n,I,T)\) is defined as:
\[{\cal J}_{\tau}^{i}(n,I,T)=2\int\frac{d^{3}k}{(2\pi)^{3}}\,{\cal G}(k,\Lambda_{i})\,f_{\tau} \tag{9}\]
where \(f_{\tau}\) (for \(\tau={\rm n}\), p) is the Fermi-Dirac distribution function. The function \({\cal G}(k,\Lambda_{i})\), suitably chosen to simulate finite-range effects, is of the following form:
\[{\cal G}(k,\Lambda_{i})=\left[1+\left(\frac{k}{\Lambda_{i}}\right)^{2}\right]^{-1} \tag{10}\]
where the finite-range parameters are \(\Lambda_{1}=1.5k_{F}^{0}\) and \(\Lambda_{2}=3k_{F}^{0}\) and \(k_{F}^{0}\) is the Fermi momentum at the saturation point \(n_{0}\).
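Given any energy per baryon \(E(n,I)\), Eqs. (7) and (8) can be evaluated numerically by finite differences. The sketch below uses a deliberately simplified stand-in for \(E(n,I)\) (not the MDI expression) purely to illustrate the procedure.

```python
import numpy as np

N0 = 0.16   # saturation density in fm^-3

def energy_per_baryon(n, I):
    """Illustrative stand-in for E(n, I) in MeV (NOT the MDI expression),
    chosen to give E_sym(n0) ~ 30 MeV with a mild density dependence."""
    u = n / N0
    return -16.0 * (2.0 - u) * u + 30.0 * u**0.7 * I**2

def e_sym(n, dI=1e-3):
    # E_sym = (1/2) d^2E/dI^2 at I = 0, via a central second difference
    return 0.5 * (energy_per_baryon(n, dI) - 2.0 * energy_per_baryon(n, 0.0)
                  + energy_per_baryon(n, -dI)) / dI**2

def slope_L(dn=1e-4):
    # L = 3 n0 dE_sym/dn at n = n0, via a central first difference
    return 3.0 * N0 * (e_sym(N0 + dn) - e_sym(N0 - dn)) / (2.0 * dn)

print(e_sym(N0), slope_L())   # ~30 MeV and ~63 MeV for this toy form
```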
### Contribution on energy and pressure of the X17 boson
The energy density of the WILB boson in neutron star matter is given by [16; 31]:
\[{\cal E}_{\rm B}=\pm\frac{(\hbar c)^{3}}{2}\left(\frac{{\rm g}}{m_{B}c^{2}}\right)^{2}n_{b}^{2} \tag{11}\]
where \({\rm g}\) is the coupling constant of the interaction and \(m_{B}c^{2}\) is the mass of the boson. The sign (+) corresponds to a vector boson (repulsive interaction) and (-) to a scalar boson (attractive interaction). The corresponding energy per baryon is then given by \(E_{bar}={\cal E}_{\rm B}/n_{b}\).
The corresponding contribution to the pressure is defined as:
\[P_{B} = n_{b}^{2}\frac{\partial({\cal E}_{\rm B}/n_{b})}{\partial n_{b}} = \frac{(\hbar c)^{3}}{2}\left(\frac{{\rm g}}{m_{B}c^{2}}\right)^{2}n_{b}^{2}\left(1-\frac{2n_{b}}{m_{B}c^{2}}\frac{\partial(m_{B}c^{2})}{\partial n_{b}}\right) \tag{12}\]
In the specific case where the mass does not depend on the density, the pressure contribution is identical to that of the energy density. Now, the total equation of state is just the sum of the baryon and boson contributions, that is:
\[{\cal E}={\cal E}_{\rm bar}\pm{\cal E}_{\rm B},\;\;\;\;P=P_{\rm bar}\pm P_{\rm B} \tag{13}\]
In previous works [14; 16; 31] the ratio \(\left({\rm g}/m_{B}c^{2}\right)^{2}\) was varied in the range \([0-200]\) GeV\({}^{-2}\). However, in the present study we consider that the coupling g varies in the interval \([0.01-0.022]\), which corresponds (for \(m_{B}c^{2}=17\) MeV) to the interval \([0.346,1.675]\) GeV\({}^{-2}\) for \(({\rm g}/m_{B}c^{2})^{2}\).
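A minimal sketch of how the boson terms of Eqs. (11) and (12) enter in practice; the derivative of the in-medium mass is taken numerically, and the function m_of_n passed in can be any adopted scaling, for instance one of the parametrizations discussed below.

```python
import numpy as np

HBAR_C = 197.327   # MeV fm
M_X = 17.0         # MeV

def ratio_GeV2(g, m_MeV=M_X):
    """(g / m_B c^2)^2 expressed in GeV^-2."""
    return (g / (m_MeV * 1e-3))**2

def boson_energy_density(n_b, g, m_MeV=M_X, sign=+1):
    """Eq. (11): vector (+) or scalar (-) boson term, in MeV fm^-3."""
    return sign * 0.5 * HBAR_C**3 * (g / m_MeV)**2 * n_b**2

def boson_pressure(n_b, g, m_of_n, sign=+1, dn=1e-5):
    """Eq. (12) with a density-dependent mass m_of_n(n_b) given in MeV."""
    m = m_of_n(n_b)
    dm_dn = (m_of_n(n_b + dn) - m_of_n(n_b - dn)) / (2.0 * dn)
    return (sign * 0.5 * HBAR_C**3 * (g / m)**2 * n_b**2
            * (1.0 - 2.0 * n_b * dm_dn / m))

print(ratio_GeV2(0.01), ratio_GeV2(0.022))   # ~0.346 and ~1.675 GeV^-2
```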
According to Brown and Rho [15], the in-medium modification of the mesons follows the linear scaling:
\[m_{\rm B}^{*}\equiv m_{\rm B}\left(1-C\frac{n_{b}}{n_{0}}\right)\,({\rm MeV}) \tag{14}\]
\begin{table}
\begin{tabular}{l l l l l l l} EoS & g\({}_{v}^{+}\) & g\({}_{v}\) & g\({}_{s}\) & m\({}_{s}[MeV]\) & \(\kappa[{\rm MeV}]\) & \(\lambda\) \\ \hline \hline E1 & +5\% & 7.61 & 6.78 & 406.6 & 19.0 & -60.0 \\ E2 & +5\% & 8.00 & 6.76 & 391.4 & 17.0 & -63.3 \\ E3 & +5\% & 8.00 & 7.03 & 405.6 & 19.5 & -80.0 \\ \hline \hline E4 & +10\% & 7.23 & 7.27 & 451.9 & 25.0 & -33.3 \\ E5 & +10\% & 7.23 & 7.27 & 451.9 & 25.5 & -46.7 \\ E6 & +10\% & 7.23 & 7.51 & 467.0 & 28.5 & -56.7 \\ E7 & +10\% & 7.61 & 7.03 & 421.7 & 21.0 & -60.0 \\ E8 & +10\% & 7.61 & 7.03 & 421.7 & 21.5 & -73.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Constrained parameter sets for eight EoSs with incompressibilities K\({}_{0}\)=245-260 MeV, \(a_{X}=0.2\) (20%, \(m_{v}^{*}=626\) MeV), standard \(\rho\)-coupling = 4.47 [28] and effective \(\rho\)-coupling g\({}_{\rho}^{*}\) increased (+) by 5% and 10% compared to its standard value.
\begin{table}
\begin{tabular}{l l l l l l} EoS & g\({}_{v}^{+}\) & g\({}_{v}\) & g\({}_{s}\) & m\({}_{s}[MeV]\) & \(\kappa[{\rm MeV}]\) & \(\lambda\) \\ \hline \hline E9 & +5\% & 7.61 & 8.08 & 451.9 & 19.0 & -103.3 \\ E10 & +5\% & 8.00 & 8.35 & 451.9 & 18.5 & -123.3 \\ E11 & +5\% & 7.23 & 8.33 & 482.2 & 26.0 & -150.0 \\ \hline \hline E12 & +10\% & 6.47 & 5.77 & 346.1 & 14.5 & -33.3 \\ E13 & +10\% & 7.23 & 8.07 & 467.0 & 23.0 & -123.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Constrained parameter sets for five EoSs with incompressibilities K\({}_{0}\)=245-260 MeV, \(a_{X}=0.3\) (30%, \(m_{v}^{*}=547.8\) MeV), standard \(\rho\)-coupling = 4.47 [28] and effective \(\rho\)-coupling g\({}_{\rho}^{*}\) increased (+) by 5% and 10% compared to its standard value.
We consider, following the suggestion in Ref. [14], that, at least at low densities, the in-medium modification of the X17 mass takes a similar form to Eq. (14). The parameter C is fixed in order for the predicted EoS to be compatible with the bulk properties of symmetric nuclear matter (\(E_{\rm bind}=-16\) MeV and \(n_{0}=0.16\) fm\({}^{-3}\)).
A suitable parametrization is also the following:
\[m_{\rm B}^{*}\equiv\frac{17}{1-C}\left(1-C\frac{n_{b}}{n_{0}}\right)\,\,\mbox{( MeV)} \tag{15}\]
which ensures that at the saturation density \(n_{0}\), \(m_{\rm B}^{*}=17\) MeV. To a good approximation, the contribution of the WILB begins after the crust-core interface.
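The two in-medium parametrizations above differ only in their normalization at \(n_{0}\); a minimal sketch, which can also be passed to the pressure routine given earlier:

```python
def m_star_linear(n_b, C, m_B=17.0, n0=0.16):
    """Brown-Rho-type linear scaling, Eq. (14), in MeV."""
    return m_B * (1.0 - C * n_b / n0)

def m_star_anchored(n_b, C, n0=0.16):
    """Alternative parametrization, Eq. (15): m* = 17 MeV exactly at n_b = n0."""
    return 17.0 / (1.0 - C) * (1.0 - C * n_b / n0)

print(m_star_anchored(0.16, C=0.05))   # 17.0 MeV at saturation, by construction
```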
In previous studies, it was found that in the case of a vector boson the total equation of state becomes very stiff, leading to very large values of the maximum mass and correspondingly large radii. On the other hand, in the case of a scalar boson the EoS becomes so soft that it leads to very small values of the maximum mass (outside the observations). In any case, the majority of the studies focus only on the maximum mass and not on the radius and the tidal deformability. The latter quantity is directly related to information obtained from the detection of gravitational waves.
### Quark Stars (QSs)
In the present work we use the Color-Flavor Locked (CFL) model for the Quark Stars (QSs), following the previous work of Lugones and Horvath [32]. In particular, we use the lowest order approximation, in which the EoSs are given in a simple analytical form and the contributions of the pairing correlations and of the strange quark mass appear explicitly. The pressure, the energy density and the baryon density are given by the expressions [32] (see also [33]):
\[P_{Q}=\frac{3\mu^{4}}{4\pi^{2}(\hbar c)^{3}}-\frac{3(m_{s}c^{2})^{2}\mu^{2}}{ 4\pi^{2}(\hbar c)^{3}}+\frac{3\Delta^{2}\mu^{2}}{\pi^{2}(\hbar c)^{3}}-B \tag{16}\]
\[\mathcal{E}_{Q}=\frac{9\mu^{4}}{4\pi^{2}(\hbar c)^{3}}-\frac{3(m_{s}c^{2})^{2} \mu^{2}}{4\pi^{2}(\hbar c)^{3}}+\frac{3\Delta^{2}\mu^{2}}{\pi^{2}(\hbar c)^{3} }+B \tag{17}\]
and
\[n_{b}=\frac{\mu^{3}}{\pi^{2}(\hbar c)^{3}}-\frac{(m_{s}c^{2})^{2}\mu}{2\pi^{2}(\hbar c)^{3}}+\frac{2\Delta^{2}\mu}{\pi^{2}(\hbar c)^{3}}=\frac{\mu^{3}}{\pi^{2}(\hbar c)^{3}}+\frac{3\mu\alpha}{\pi^{2}(\hbar c)^{3}} \tag{18}\]
where
\[\mu^{2}=-3\alpha+\sqrt{9\alpha^{2}+\frac{4}{3}\pi^{2}(P_{Q}+B)(\hbar c)^{3}} \tag{19}\]
and
\[\alpha=-\frac{(m_{s}c^{2})^{2}}{6}+\frac{2\Delta^{2}}{3} \tag{20}\]
In this study we use the parametrization of the EoS according to the paper [34]. In particular, each equation of state is denoted as CFLX[\(B\),\(\Delta\),\(m_{s}\)], where \(X\) is the numbering of the specific EoS and \(B\) (in MeV fm\({}^{-3}\)), \(\Delta\) (in MeV), \(m_{s}\) (in MeV) are the bag constant, the pairing gap and the mass of the strange quark, respectively. It is worthwhile to notice here that, in general, the lower the values of \(B\) and \(m_{s}\) the stiffer the EoS, while lower values of \(\Delta\) lead to a softer EoS. In the present study we use the sets CFL2[60,50,150] (intermediate stiffness), CFL10[80,150,150] (stiff) and CFL13[100,100,150] (soft) in order to cover a large region of stiffness.
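For illustration, the analytical CFL expressions of Eqs. (16)-(20) can be evaluated directly; in the sketch below the default parameters correspond to the CFL2[60,50,150] set, with \(\mu\), \(\Delta\) and \(m_{s}\) in MeV and \(B\), \(P\), \({\cal E}\) in MeV fm\({}^{-3}\).

```python
import numpy as np

HBAR_C = 197.327   # MeV fm

def cfl_eos(mu, B=60.0, Delta=50.0, m_s=150.0):
    """CFL pressure, energy density (MeV fm^-3) and baryon density (fm^-3)
    at quark chemical potential mu (MeV), Eqs. (16)-(18)."""
    hc3 = HBAR_C**3
    P = (3*mu**4 / (4*np.pi**2*hc3) - 3*m_s**2*mu**2 / (4*np.pi**2*hc3)
         + 3*Delta**2*mu**2 / (np.pi**2*hc3) - B)
    E = (9*mu**4 / (4*np.pi**2*hc3) - 3*m_s**2*mu**2 / (4*np.pi**2*hc3)
         + 3*Delta**2*mu**2 / (np.pi**2*hc3) + B)
    n_b = (mu**3 / (np.pi**2*hc3) - m_s**2*mu / (2*np.pi**2*hc3)
           + 2*Delta**2*mu / (np.pi**2*hc3))
    return P, E, n_b

def mu_of_P(P, B=60.0, Delta=50.0, m_s=150.0):
    """Analytical inversion of P(mu), Eqs. (19)-(20)."""
    alpha = -m_s**2 / 6.0 + 2.0 * Delta**2 / 3.0
    mu2 = -3.0*alpha + np.sqrt(9.0*alpha**2
                               + (4.0/3.0)*np.pi**2*(P + B)*HBAR_C**3)
    return np.sqrt(mu2)

P, E, n_b = cfl_eos(mu=400.0)
print(round(float(mu_of_P(P)), 1))   # recovers mu = 400.0 MeV
```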
The energy density of the bosons in the case of quark stars is given by [17; 24]:
\[\mathcal{E}_{\rm B}=\pm\frac{9(\hbar c)^{3}}{2}\left(\frac{\rm g}{m_{B}c^{2}} \right)^{2}n_{b}^{2} \tag{21}\]
where the contribution to the pressure is given by an expression similar to Eq. (12). The total equation of state is just the sum of the quark and boson contributions, that is:
\[\mathcal{E}=\mathcal{E}_{\rm Q}\pm\mathcal{E}_{\rm B},\quad P=P_{\rm Q}\pm P_{ \rm B} \tag{22}\]
It is worth clarifying here that the main difference from the case of neutron stars is the extra factor of 9 in the contribution to the pressure and the energy density. This additional contribution plays, as we shall see, an important role in the effect of the boson on the basic properties of compact objects.
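A minimal sketch of the corresponding boson term in quark matter, where the prefactor 9/2 (instead of 1/2) encodes the extra factor of 9 mentioned above; for a density-independent mass the pressure term is identical.

```python
HBAR_C = 197.327   # MeV fm

def boson_energy_density_quark(n_b, g, m_B=17.0, sign=+1):
    """X17 term in quark matter, Eq. (21), in MeV fm^-3; note the prefactor
    9/2 instead of the 1/2 of the nucleonic case."""
    return sign * 4.5 * HBAR_C**3 * (g / m_B)**2 * n_b**2
```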
### Speed of sound
The speed of sound is a very crucial quantity since it is directly related to the stiffness of the EoS. In particular, the adiabatic speed of sound is defined as:
\[v_{s}/c=\sqrt{\left(\frac{dP}{d\mathcal{E}}\right)_{S}} \tag{23}\]
where \(S\) is the entropy per baryon. More importantly, causality introduces an upper limit on the stiffness of the EoS, according to which the speed of sound cannot exceed the speed of light. Bedaque and Steiner [35] have provided simple arguments that support as an upper limit the value \(c/\sqrt{3}\) in nonrelativistic and/or weakly coupled theories. These authors pointed out that the existence of neutron stars with masses of about two solar masses, combined with the knowledge of the EoS of hadronic matter at low densities, is not consistent with this bound. In any case, in studies related to the prediction of bulk properties of compact objects, special attention must be given to the density dependence of the speed of sound, and the upper limits must be carefully taken into account.
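In practice Eq. (23) is evaluated numerically on a tabulated EoS; a minimal sketch of such a causality check, applied here to a toy \(P({\cal E})\) table rather than to any of the EoSs of this work:

```python
import numpy as np

def sound_speed_squared(P, E):
    """(v_s/c)^2 = dP/dE from tabulated pressure and energy density
    (same units for both), using centred differences."""
    cs2 = np.gradient(P, E)
    if np.any(cs2 > 1.0):
        print("warning: the tabulated EoS violates causality somewhere")
    return cs2

# toy table with a constant slope dP/dE = 1/3, used only to test the routine
E_tab = np.linspace(100.0, 2000.0, 400)
P_tab = E_tab / 3.0 - 20.0
print(sound_speed_squared(P_tab, E_tab).max())   # ~0.333
```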
## 4 Tidal Deformability
Very important sources of gravitational waves are those produced during the inspiral phase of a binary system of neutron stars before they finally merge [36; 37; 38]. These kinds of sources lead to the measurement of various properties of neutron stars. During the inspiral phase the tidal effects can be detected [37].
The \(k_{2}\) parameter, also known as the tidal Love number, depends on the EoS and describes the response of a neutron star to the external tidal field \(E_{ij}\) through the induced quadrupole field \(Q_{ij}\) [37; 38]. Their exact relation is given below:
\[Q_{ij}=-\frac{2}{3}k_{2}\frac{R^{5}}{G}E_{ij}\equiv-\lambda E_{ij}, \tag{24}\]
where \(R\) is the neutron star radius and \(\lambda=2R^{5}k_{2}/3G\) is the tidal deformability. Also, another quantity that is well measured is the effective tidal deformability \(\tilde{\Lambda}\), which is given by [39]
\[\tilde{\Lambda}=\frac{16}{13}\frac{(12q+1)\Lambda_{1}+(12+q)q^{4}\Lambda_{2}}{(1 +q)^{5}}, \tag{25}\]
where the mass ratio \(q=m_{2}/m_{1}\) lies within the range \(0\leq q\leq 1\) and \(\Lambda_{i}\) is the dimensionless deformability [39]
\[\Lambda_{i}=\frac{2}{3}k_{2}\left(\frac{R_{i}c^{2}}{M_{i}G}\right)^{5}\equiv \frac{2}{3}k_{2}\beta_{i}^{-5},\quad i=1,2. \tag{26}\]
The effective tidal deformability \(\tilde{\Lambda}\) plays an important role in the neutron star merger process and is one of the main quantities that can be inferred from the detection of the corresponding gravitational waves.
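Eqs. (25) and (26) are simple algebraic relations once \(k_{2}\) and the stellar structure are known; a minimal sketch, where the values \(k_{2}=0.09\) and \(R=12\) km are purely illustrative inputs:

```python
def dimensionless_lambda(k2, R_km, M_solar):
    """Eq. (26): Lambda_i = (2/3) k2 / beta^5, with beta = G M / (R c^2)."""
    beta = 1.4766 * M_solar / R_km     # G M_sun / c^2 ~ 1.4766 km
    return (2.0 / 3.0) * k2 / beta**5

def effective_lambda(L1, L2, q):
    """Eq. (25): binary effective tidal deformability, with q = m2/m1 <= 1."""
    return 16.0/13.0 * ((12.0*q + 1.0)*L1 + (12.0 + q)*q**4*L2) / (1.0 + q)**5

# e.g. two identical 1.4 Msun stars: effective_lambda reduces to Lambda_1.4
L14 = dimensionless_lambda(k2=0.09, R_km=12.0, M_solar=1.4)
print(round(L14), round(effective_lambda(L14, L14, q=1.0)))
```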
## 5 Results and Discussion
Starting with the RMF nuclear model of Pure Neutron Matter (PNM), we see that its parameterization makes the EoSs present two different behaviors. The general tendency, as we can see in Fig. 1, is that the increase of the admixture fraction coefficient \(a_{X}\) leads to a stiffer EoS while, on the other hand, the increase of the \(\rho\)-coupling leads to a softer EoS. The combination of these two parameters can be optimized, resulting in a reasonable EoS which fits inside the accepted limits with respect to the observational data.
Performing this optimization we can deduce that an admixture of 30% in the \(a_{X}\) coefficient combined with a \(\rho\)-coupling of +5% results in a stiff EoS with a maximum mass of 2.33 \(\mathrm{M}_{\odot}\). Furthermore, an admixture of 20% in \(a_{X}\) in combination with the \(\rho\)-coupling varying between +5% and +10% results in EoSs with maximum masses from 2.1 \(\mathrm{M}_{\odot}\) to 2.27 \(\mathrm{M}_{\odot}\) and radii between 12.5 km and 13 km, respectively. We notice that none of the total EoSs of the RMF model that we used in our work can describe the HESS J1731-347 remnant [40].
In addition, as we can see from Fig. 2, in the \(\tilde{\Lambda}-q\) diagram concerning the effective tidal deformability with respect to the binary mass ratio \(q\), the aforementioned optimization lies well below the upper limit specified by the first gravitational wave observation GW170817 given by LIGO [39]. In our study we adopted the estimations of the GW170817 detection concerning the component masses and the chirp mass of the system [39]. As one can observe from Fig. 2, the observational upper limit on \(\tilde{\Lambda}\) in general favors the 20% admixture over the 30% admixture, as the former provides softer EoSs. For the same amount of admixture, the EoSs with higher \(\rho\)-coupling provide lower values of \(\tilde{\Lambda}\). However, this is not decisive for the exclusion of the 30% admixture, since there are cases that fulfill the observational upper limit on \(\tilde{\Lambda}\).
When considering \(\beta\)-equilibrated (npe) matter instead of PNM in the RMF model, with the admixture into the vector boson sector constrained to 20% and 30%, we see little difference in the neutron star radius. An appreciable reduction appears in the maximum mass, as shown in Fig. 3. Looking at the corresponding \(\tilde{\Lambda}\), we see good agreement with the constraints from the GW170817 observational data, as depicted in Fig. 4.
Furthermore, we studied the effect of the X17 boson using the MDI model. To be more specific, we used as a base three main MDI EoSs with different slope parameters L: L = 65 \(\mathrm{MeV}\), L = 72.5 \(\mathrm{MeV}\), and L = 80 \(\mathrm{MeV}\), shown with blue, green, and red color in the corresponding figures, respectively. The coupling constant g is allowed to take values up to g \(\simeq\) 0.022 to fulfill the properties of symmetric nuclear matter at n\({}_{0}\), while the parameter C ranges in the interval C \(\in[0,\mathrm{C}_{\mathrm{max}}]\), where \(\mathrm{C}_{\mathrm{max}}\) is the highest allowed value, unique for each EoS, derived from the non-violation of causality. In particular, the value
Figure 1: The M-R diagram corresponds to the RMF model of PNM for various EoSs (for more details see the text). The shaded regions from bottom to top represent the HESS J1731-347 remnant [40], the GW170817 event [39], PSR J1614-2230 [41], PSR J0348+0432 [42], PSR J0740+6620 [43], and PSR J0952-0607 [44] pulsar observations for the possible maximum mass.
Figure 2: The \(\tilde{\Lambda}-q\) dependence corresponds to the RMF model of PNM. The shaded region shows the excluded values derived from the GW170817 event [39].
of g\({}_{\rm max}\simeq 0.022\) arises from the allowed deviation of the binding energy of symmetric nuclear matter at the saturation density, i.e. \(-17\) MeV \(\leq\) E\({}_{\rm bind}\)(n\({}_{0}\)) \(\leq-15\) MeV.
In Fig. 5 the mass-radius dependence for all the cases of the MDI model is displayed. The solid curves correspond to the three initial EoSs without the contribution of the X17 boson, while the dashed and dash-dotted curves correspond to the EoSs in the presence of the X17 boson for g = 0.011 and g = 0.022, respectively. Within the same set of EoS and g, the darker colored curve corresponds to C = 0, while the lighter one to C = C\({}_{\rm max}\). At first sight, the higher value of L provides a stiffer EoS, regardless of the amount of the X17 contribution. The EoSs with g = 0.011 (dashed curves) lie close to their corresponding initial EoS (solid curves), leading to a slight increment of M\({}_{\rm max}\) and of the radius R, while the larger differentiation from the initial EoSs occurs for g = 0.022. Also, the effect of the X17 contribution depends more strongly on the coupling constant g than on the parameter C. We notice that all the EoSs lie outside the estimated region for the HESS J1731-347 remnant [40].
The expansion of the aforementioned EoSs through their application to the observational data of the GW170817 event allowed us to study directly their behavior through the \(\tilde{\Lambda}-q\) dependence, as one can observe in Fig. 6.
Figure 4: The \(\tilde{\Lambda}-q\) dependence corresponds to the RMF model of \(\beta\)-equilibrated (npe) matter. The shaded region shows the excluded values derived from the GW170817 event [39].
Figure 5: The M-R diagram corresponds to the MDI models. The shaded regions from bottom to top represent the HESS J1731-347 remnant [40], the GW170817 event [39], PSR J1614-2230 [41], PSR J0348+0432 [42], PSR J0740+6620 [43], and PSR J0952-0607 [44] pulsar observations for the possible maximum mass.
Figure 3: The M-R diagram corresponds to the RMF model of \(\beta\)-equilibrated (npe) matter for various EoSs (for more details see the text). The shaded regions from bottom to top represent the HESS J1731-347 remnant [40], the GW170817 event [39], PSR J1614-2230 [41], PSR J0348+0432 [42], PSR J0740+6620 [43], and PSR J0952-0607 [44] pulsar observations for the possible maximum mass.
Figure 6: The \(\tilde{\Lambda}-q\) dependence corresponds to the MDI models. The shaded region shows the excluded values derived from the GW170817 event [39].
In this figure, the first two parametrizations for L (blue and green curves) are at first glance in good agreement with the observational upper limit on \(\tilde{\Lambda}\). The third and stiffest branch of EoSs (red curves) also lies inside the acceptance area (imposed by the upper limit of \(\tilde{\Lambda}\)), but it is much closer to the limit, with the stiffest of the MDI EoSs lying almost exactly on it. Additionally, as we increase the slope parameter L, the EoSs spread out more, e.g. for L = 65 MeV the effect of the X17 is smaller compared to the other EoSs. At a second level, across EoSs with the same L, this effect depends more on the coupling constant g, as we already observed from Fig. 5. Therefore, a more detailed examination of the dependence of the EoSs on the two main parameters g and C is needed.
In Figure 7, we examined the dependence of the EoSs on the C parameter using observational constraints on M\({}_{\rm max}\) and \(\Lambda_{1.4}\). By focusing on the left panel of Figure 7, we notice that at the highest contribution of the X17, the maximum mass is increased by \(\Delta\)M\({}_{\rm max}\simeq\) 0.066 M\({}_{\odot}\) for L = 65 MeV, \(\Delta\)M\({}_{\rm max}\simeq\) 0.068 M\({}_{\odot}\) for L = 72.5 MeV, and \(\Delta\)M\({}_{\rm max}\simeq\) 0.069 M\({}_{\odot}\) for L = 80 MeV. However, for the same value of (g, C) among the three different sets of MDI EoSs, the \(\Delta\)M is higher for the set with the lower L; e.g. for g = 0.022 and C = 0.06 the corresponding values are \(\Delta\)M\({}_{\rm max}^{\rm L=65}\simeq\) 0.06 M\({}_{\odot}\), \(\Delta\)M\({}_{\rm max}^{\rm L=72.5}\simeq\) 0.045 M\({}_{\odot}\), and \(\Delta\)M\({}_{\rm max}^{\rm L=80}\simeq\) 0.043 M\({}_{\odot}\). Hence, from this perspective, the relative effect of the X17 is larger for the softer EoSs. Moreover, the EoS with L = 65 MeV fails to provide a neutron star with M\({}_{\rm max}\geq\) 2.0 M\({}_{\odot}\) and requires a high value of the coupling constant g in order to provide a neutron star with M\({}_{\rm max}\simeq\) 1.9 M\({}_{\odot}\), such as PSR J1614-2230 [41]. In the other two cases of L, the presence of the X17 could lead them to predict a \(\geq\) 2.0 M\({}_{\odot}\) neutron star (for a variety of combinations of g and C), but not higher than the \(\sim\) 2.05 M\({}_{\odot}\) value. The "shark-fin" shaded region arises from the constraints that the non-violation of causality implies on C\({}_{\rm max}\), with the peak corresponding to the (g = 0.022, C = C\({}_{\rm max}\)) pair of values for each one of the three MDI sets of EoSs. In addition, the shaded regions for L = 72.5 MeV and L = 80 MeV can hardly be distinguished.
To examine further how the dependence of the X17 contribution and of the corresponding EoSs on the C parameter evolves, we constructed Fig. 7(b), which shows the behavior of the dimensionless tidal deformability \(\Lambda_{1.4}\) of a 1.4 M\({}_{\odot}\) neutron star as a function of C for all the MDI EoSs. The shape of the shaded regions is similar to that of the left panel of Fig. 7, and each peak corresponds to the (g = 0.022, C = C\({}_{\rm max}\)) combination, unique for each set of EoSs. In both panels, as we move to higher values of L, the peak, corresponding to the highest contribution of the X17 for each EoS, is shifted to higher values of C. The diagonal colored lines indicate the region of values of g and C that violate causality.
Contrary to the behavior that we observed in the M\({}_{\rm max}-\)C diagram, the L = 72.5 MeV and L = 80 MeV families of EoSs are clearly distinguished. In particular, the L = 80 MeV set of EoSs lies almost entirely inside the observational data from LIGO (green shaded area), with a small peak violating the observational upper limit of \(\Lambda_{1.4}\). The other two families of MDI EoSs lie inside the observational data for all the combinations of g and C that we used. Therefore, even if the L = 72.5 MeV and L = 80 MeV sets of EoSs provide almost identical maximum masses, as we showed in Fig. 7(a), their dependence on \(\Lambda_{1.4}\) (i.e. on the radius R\({}_{1.4}\)) highlights their differences, and especially the sensitivity of the radius to the X17 contribution. Another issue that arises is the tension between the preference for a softer EoS from the tidal deformability perspective and the requirement for a stiffer EoS in order to provide a sufficiently high maximum neutron star mass. We used this tension, through the corresponding observations, as a tool in our study to impose constraints on the contribution of the X17.
As we mentioned before, the study of the X17 in relation to the radius R\({}_{1.4}\) is of interest. In Fig. 8 we show the behavior of R\({}_{1.4}\) with respect to the coupling constant g for all three
Figure 7: (a) The maximum mass, and (b) the tidal deformability \(\Lambda_{1.4}\) of a 1.4 M\({}_{\odot}\) neutron star related to the parameter C for the three MDI EoSs. The horizontal shaded areas on the left panel correspond to those of Figure 1. The green shaded area on the right panel indicates the constraints from GW170817 [45], while the colored diagonal lines show the excluded regions from the violation of causality for each EoS.
sets of MDI EoSs. The horizontal shaded regions indicate various estimations of R\({}_{1.4}\)[46; 47; 48]. The L = 72.5 MeV and L = 80 MeV EoSs lie inside only on the estimated area of Ref. [48]. On the other hand, the EoSs with L = 65 MeV lie inside the values predicted by Refs. [46; 47], and above the coupling constant g \(\simeq\) 0.011 the corresponding EoSs insert the region of Ref. [48], but violate the upper limit of Ref. [47]. As a general remark, as we move to higher values of g the effect of the X17 boson becomes bigger, leading to bigger radii. Also, for higher values of g the dependence from the C parameter is stronger (thickening of the curved shaded region), contrary to the lower values of g where the impact of different C on the radius R\({}_{1.4}\) vanishes.
So far we studied the two main parameters, g and C, through their dependence on the macroscopic properties of neutron stars, derived from the EoS, and by exploiting the available observations. In general, only for higher coupling constant g (g \(\gtrapprox\) 0.009) the EoS becomes more sensitive to the C. In order to examine further how g and C affects the EoSs, we introduced a g \(-\) C parameter space, displayed in Fig. 9. This figure is suitable for an overall view on these parameters and the way that the constraints affect them. The green shaded region shows the allowed parameter space with respect to causality, for all three set of MDI EoSs. The green arrows show the direction of the accepted region. The green solid curve indicates the limit for the L = 80 MeV EoS, the dashed one indicates the limit for the L = 72.5 MeV EoS, while the dash-dotted curve indicates the limit for the L = 65 MeV EoS.
As one can observe, the constraint imposed from the non-violation of causality, cuts off the very high values of g and C (grey shaded area). Additionally, as we move to EoSs with lower L, this bound is shifted to even lower values of the parameter space, minimizing further the "window". The red solid curve shows the maximum mass limit of M\({}_{\text{max}}\) = 2 M\({}_{\odot}\) for the L = 80 MeV EoS; for the parameter space on the left of this curve, the combination of (g,C) leads to neutron stars with M\({}_{\text{max}}\) \(<\) 2 M\({}_{\odot}\). The light orange curve, shows the observational upper limit of \(\Lambda_{1.4}\) for the L = 80 MeV EoS; only the combination of parameters on the left side of this curve can be accepted, without violating the observational value. Therefore, for the L = 80 MeV EoS, if we looking for a neutron star with at least M\({}_{\text{max}}\)\(\simeq\) 2 M\({}_{\odot}\) with respect to \(\Lambda_{1.4}\) and causality, then we must search inside the curved triangular parameter space that the three curves form. We notice that this space is unique for each set of EoS, and the corresponding M\({}_{\text{max}}\) requirement. Summarizing, strong constraints are introduced both by observational data and causality leading to a significant limitation of the allowed range of parameters; a behavior that is present correspondingly in the case of QSs, as we will demonstrate below.
Beyond the hadronic EoSs, we expanded our study of the contribution of the X17 boson to QSs. The effect of this contribution is depicted on the mass-radius dependence in Fig. 10. In particular, we used three different -concerning the parametrization- sets of EoSs running from soft (CFL13 EoS) and medium (CFL2 EoS) to stiff (CFL10 EoS) behavior, covering the most possible cases. We notice that in the case of QSs we chose the same upper value for g so that the comparison with the hadronic EoSs of the MDI model to be more clear (the quark EoSs does not have to respect the properties of symmetric nuclear matter at the saturation density). The dashed curves correspond to coupling constant g = 0.011 while the dash-dotted ones correspond to g = 0.022. For the same value of g the lighter colored curves correspond to higher C. As we underlined in the hadronic EoSs, the contribution of X17 in general depends stronger on g than on C. It is worth noticing that in the case of a pure QS the effects of X17 are more pronounced compared to the neutron star case, beucase of the additional contribution of factor of 9 on energy density and pressure (see Eq. 21). Especially, the possible existence of X17 affects dramatically the provided maximum mass in each set of EoSs. The
Figure 8: The radius R\({}_{1.4}\) of a 1.4 M\({}_{\odot}\) neutron star related to the coupling constant g for all the MDI EoSs that we used in our study. The horizontal shaded regions correspond to different constraints on R\({}_{1.4}\)[46; 47; 48].
Figure 9: Constraints on g and C for the three MDI EoSs with respect to causality, the possible upper mass limit M\({}_{\text{max}}\) = 2 M\({}_{\odot}\) and the dimensionless tidal deformability \(\Lambda_{1.4}\) = 580 for the MDI (L=80 MeV) EoS.
effect on the radius of a 1.4 M\({}_{\odot}\) star is moderate but sufficient to affect the tidal deformability (see Fig. 12). It is also noteworthy that the CFL10 (stiff) and CFL13 (soft) EoSs simultaneously satisfy both the observational data and the corresponding constraints.
An important microscopic quantity for studying the properties of the EoS is the speed of sound (see Eq. 23). So far in this study we have used the upper bound on the speed of sound derived from causality: the speed of sound should never exceed the speed of light, (\(\mathrm{v_{s}/c}\))\({}^{2}\leq 1\). In Fig. 11 we demonstrate the relation between (\(\mathrm{v_{s}/c}\))\({}^{2}\) and the pressure P for all the CFL EoSs. The square points indicate the pairs of values which provide the M\({}_{\mathrm{max}}\) for each CFL EoS. All EoSs respect the causality bound. We notice the strong dependence of the speed of sound on the value of the coupling constant g. The dependence on C can also be observed; it becomes stronger for higher values of g.
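As a minimal illustration of this check, assuming the EoS is available as tabulated arrays of energy density and pressure in the same units (with c = 1), the squared speed of sound is the slope dP/d\(\varepsilon\), and causality simply requires that this slope stay between 0 and 1:

```python
# Causality check for a tabulated EoS; the array names are assumptions of this
# sketch, not quantities defined in the text.
import numpy as np

def sound_speed_squared(energy_density, pressure):
    """(v_s/c)^2 = dP/d(epsilon), estimated by finite differences along the table."""
    return np.gradient(pressure, energy_density)

def respects_causality(energy_density, pressure):
    cs2 = sound_speed_squared(energy_density, pressure)
    return bool(np.all((cs2 >= 0.0) & (cs2 <= 1.0)))
```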
Confronting the quark EoSs with the observational estimations of the properties of the GW170817 event [39], we examined further the behavior of X17 in the CFL EoSs, as shown in Fig. 12, through the \(\tilde{\Lambda}-\mathrm{q}\) dependence. As one can observe, the effect of the X17 depends on the parametrization of both the coupling constant g and the in-medium effects described by the parameter C, leading to an appreciable increase of \(\tilde{\Lambda}\). As in the case of hadronic EoSs, the dependence on g is stronger than on C. In general, the stiffer the EoS, the larger this deviation of \(\tilde{\Lambda}\) from the initial (without X17) EoS.
In order to take a deeper look at the behavior of the quark EoSs which include the X17 boson, we studied the dependence of their properties directly on the parameter C, also imposing the available observational constraints. In Fig. 13 we display the maximum mass M\({}_{\mathrm{max}}\) (left panel) and the tidal deformability \(\Lambda_{1.4}\) (right panel) as functions of C. The blue color corresponds to the CFL13 set of EoSs (soft), the green one to the CFL2 (intermediate), and the purple color corresponds to the CFL10 (stiff). In both panels the colored regions of the EoSs are bounded by the non-violation of causality. In Fig. 13(b) the diagonal colored lines indicate the violation of causality. In both the M\({}_{\mathrm{max}}-\mathrm{C}\) and \(\Lambda_{1.4}-\mathrm{C}\) diagrams, the peak of each set of EoSs occurs when g = 0.022 and C = C\({}_{\mathrm{max}}\), where C\({}_{\mathrm{max}}=0.0456\) for CFL13, C\({}_{\mathrm{max}}=0.0595\) for CFL2, and C\({}_{\mathrm{max}}=0.0653\) for CFL10. From softer to stiffer EoS this peak shifts to higher values of C, with a corresponding growth in M\({}_{\mathrm{max}}\) and \(\Lambda_{1.4}\). For the same value of g, i.e. g = const., the effect of the X17 becomes larger as C increases; we clarify that the upper limit of each shaded region on the left side of each peak corresponds to g = 0.022. The
Figure 11: The speed of sound as a function of the pressure for the three quark EoSs and various parametrizations of the X17 boson. The square points indicate the pairs of values which provide the M\({}_{\mathrm{max}}\) for each CFL EoS.
Figure 12: The dimensionless tidal deformability \(\tilde{\Lambda}\) for the three quark EoSs and various parametrizations of the X17 boson. The shaded region shows the accepted values derived from the GW170817 event [39].
Figure 10: The M-R diagram of the three CFL quark EoSs. The shaded regions from bottom to top represent the HESS J1731-347 remnant [40], the GW170817 event [39], PSR J1614-2230 [41], PSR J0348+0432 [42], PSR J0740+6620 [43], and PSR J0952-0607 [44] pulsar observations.
imprint of the high contribution of the X17 to the quark EoSs can be observed clearly from the deviation of the M\({}_{\rm max}\) and the \(\Lambda_{1.4}\) from the corresponding values of the initial EoS; e.g. for the CFL2 EoS, g = 0.022, and C = C\({}_{\rm max}\) it is \(\Delta\)M\({}_{\rm max}\simeq\) 0.417 M\({}_{\odot}\). Regarding the \(\Lambda_{1.4}\), the corresponding deviation for the CFL13 EoS is \(\Delta\Lambda_{1.4}\approx\) 85, for the CFL2 EoS it is \(\Delta\Lambda_{1.4}\approx\) 99, while for the CFL10 EoS it is \(\Delta\Lambda_{1.4}\simeq\) 167. Hence, the deviation follows the gradual change of stiffness. From Fig. 13(b) we notice that all EoSs lie inside the GW170817 accepted region, except some cases with high g and C for the CFL10 set of EoSs. So far, the available observational data have offered useful constraints on the two parameters g and C, but a more detailed view is needed.
For the purpose of the aforementioned examination, we constructed the g - C parameter space for each set of quark EoSs, applying various upper limits on M\({}_{\rm max}\) and \(\Lambda_{1.4}\) and including the non-violation of causality. In Fig. 14 the three parameter spaces are demonstrated; the change in stiffness (soft to stiff) of the EoS corresponds to the direction from the left to the right panel. This kind of diagram clarifies specifically the role of the two parameters. As we move from softer to stiffer EoS, the range of pairs (g, C) that satisfy the mentioned constraints increases. Therefore, since the range of values of the coupling constant g is common to all cases, the stiffer the EoS the higher the value of C\({}_{\rm max}\), meaning a higher contribution of X17. We underline at this point that within the same set of EoSs the contribution of X17 is characterized primarily by the value of g. The displayed points correspond to the intersections of the colored curves with the causality limit. The curves M\({}_{\rm max}\)(C) and \(\Lambda_{1.4}\)(C) are unique for each set of CFL EoSs. The M\({}_{\rm max}\)(C)
Figure 14: Constraints on g and C for (a) the CFL13 quark EoS (soft), the possible upper mass limits 1.8 M\({}_{\odot}\) and 2.0 M\({}_{\odot}\) and the dimensionless tidal deformability \(\Lambda_{1.4}=\) 580, (b) the CFL2 quark EoS (intermediate), the possible upper mass limits 2.2 M\({}_{\odot}\) and 2.4 M\({}_{\odot}\) and the dimensionless tidal deformability \(\Lambda_{1.4}=\) 580, and (c) the CFL10 quark EoS (stiff), the possible upper mass limit 2.4 M\({}_{\odot}\) and the dimensionless tidal deformability \(\Lambda_{1.4}=\) 580 [45]. In all panels, the green shaded region indicates the allowed parameter space derived from the non-violation of causality, while the yellow ones indicate the regions not included in our study.
Figure 13: (a) The maximum mass, and (b) the tidal deformability \(\Lambda_{1.4}\) of a 1.4 M\({}_{\odot}\) star, as functions of the parameter C for the three CFL EoSs. The horizontal shaded areas on the left panel correspond to those of Fig. 10. The green shaded area on the right panel indicates the constraints from GW170817 [45], while the colored diagonal lines show the regions excluded by the violation of causality for each EoS.
curves indicate that the \(\left(\mathrm{g,C}\right)\) pairs of values on the left of each one of these curves do not provide the desired \(\mathrm{M_{max}}\). These points are a) \(\mathrm{g=0.0105}\), \(\mathrm{C_{max}=0.05325}\) with \(\mathrm{M_{max}=1.8~{}M_{\odot}}\) and \(\mathrm{g=0.012}\), \(\mathrm{C_{max}=0.04581}\) with \(\mathrm{M_{max}=2.0~{}M_{\odot}}\) for the CFL13 EoS (soft case), b) \(\mathrm{g=0.0189}\), \(\mathrm{C_{max}=0.0608}\) with \(\mathrm{M_{max}=2.2~{}M_{\odot}}\) for the CFL2 EoS (intermediate case), and c) \(\mathrm{g=0.0102}\), \(\mathrm{C_{max}=0.079}\) with \(\mathrm{M_{max}=2.4~{}M_{\odot}}\) and \(\mathrm{g=0.017}\), \(\mathrm{C_{max}=0.06948}\) with \(\Lambda_{1.4}=580\) for the CFL10 EoS (stiff case). In the latter case, the observational limit on \(\Lambda_{1.4}\) excludes all \(\left(\mathrm{g,C}\right)\) values on the right of its curve. We observe that as we move to stiffer EoSs, only higher values of \(\mathrm{M_{max}}\) can provide further constraints. Concerning the constraints imposed by \(\Lambda_{1.4}\), these are mainly useful for stiffer EoSs.
Concluding, the softer the equation of state, the more limited is the range of allowed parameters, for both the hadronic and the quark case. In this context, additional observational data concerning the maximum mass, as well as stricter upper and even lower limits on \(\Lambda_{1.4}\), may lead to much more stringent constraints on the coupling constant g and the in-medium-effect regulator C.
## 6 Concluding Remarks
We studied the effect of the hypothetical X17 boson on the EoS of neutron star matter as well as of QSs, and on the corresponding bulk properties including the mass, radius and tidal deformability. In particular, we paid attention to two main phenomenological parameters of the X17: a) the coupling constant g of its interaction with hadrons or quarks, and b) the in-medium effects, described through a regulator C. Both are crucial for the contribution to the total energy density and pressure. We suggest that it is possible to provide constraints on these parameters with respect to causality, various possible upper mass limits, and the dimensionless tidal deformability \(\Lambda_{1.4}\). Moreover, we found that the stiffer the EoS (hadronic or quark), the more discernible are the effects of the X17 on the properties of compact objects. In particular, we found that the effects of the existence of the hypothetical X17 boson are more pronounced in the case of QSs, for all the bulk properties. This is due mainly to the extra factor of 9 in both the energy density and pressure contributions to the total ones. It must be emphasized that in the present study special attention was paid to maintaining the non-violation of causality of the EoSs, while systematically taking into account the in-medium effects on the mass of the hypothetical boson (which have usually been omitted in similar works so far). In addition, an attempt was made to find possible constraints on the hypothetical X17 boson with the help of observational data, mainly those derived from the detection of gravitational waves.
It would also be of great interest to perform similar calculations in the context of modified gravity theories, in order to fully demonstrate the implications of the X17 boson for the properties of various compact objects [49; 50; 51]. Finally, it is worth noticing that the present study can form the framework for similar studies concerning the possible existence of bosons in nuclear matter and their consequences for the structure and basic properties of compact objects. In that case, it may be possible, from both terrestrial experiments and astrophysical observations, to make the best possible estimate of the properties of these particles, concerning both their individual properties and those related to their interaction with the medium in which they are found.
## Acknowledgments
This work is supported by the Czech Science Foundation (GACR Contract No. 21-24281S) and by the Hellenic Foundation for Research and Innovation (HFRI) under the 3rd Call for HFRI PhD Fellowships (Fellowship Number: 5657). One of the authors (Ch.C.M) would like to thank Prof. M.I. Krivoruchenko and Prof. F. Simkovic for useful discussions and correspondence.
|
2309.15312 | MAPTree: Beating "Optimal" Decision Trees with Bayesian Decision Trees | Decision trees remain one of the most popular machine learning models today,
largely due to their out-of-the-box performance and interpretability. In this
work, we present a Bayesian approach to decision tree induction via maximum a
posteriori inference of a posterior distribution over trees. We first
demonstrate a connection between maximum a posteriori inference of decision
trees and AND/OR search. Using this connection, we propose an AND/OR search
algorithm, dubbed MAPTree, which is able to recover the maximum a posteriori
tree. Lastly, we demonstrate the empirical performance of the maximum a
posteriori tree both on synthetic data and in real world settings. On 16 real
world datasets, MAPTree either outperforms baselines or demonstrates comparable
performance but with much smaller trees. On a synthetic dataset, MAPTree also
demonstrates greater robustness to noise and better generalization than
existing approaches. Finally, MAPTree recovers the maximum a posteriori tree
faster than existing sampling approaches and, in contrast with those
algorithms, is able to provide a certificate of optimality. The code for our
experiments is available at https://github.com/ThrunGroup/maptree. | Colin Sullivan, Mo Tiwari, Sebastian Thrun | 2023-09-26T23:43:37Z | http://arxiv.org/abs/2309.15312v3 | # MAPTree: Beating "Optimal" Decision Trees with Bayesian Decision Trees
###### Abstract
Decision trees remain one of the most popular machine learning models today, largely due to their out-of-the-box performance and interpretability. In this work, we present a Bayesian approach to decision tree induction via maximum a posteriori inference of a posterior distribution over trees. We first demonstrate a connection between maximum a posteriori inference of decision trees and AND/OR search. Using this connection, we propose an AND/OR search algorithm, dubbed MAPTree, which is able to recover the maximum a posteriori tree. Lastly, we demonstrate the empirical performance of the maximum a posteriori tree both on synthetic data and in real world settings. On 16 real world datasets, MAPTree either outperforms baselines or demonstrates comparable performance but with much smaller trees. On a synthetic dataset, MAPTree also demonstrates greater robustness to noise and better generalization than existing approaches. Finally, MAPTree recovers the maximum a posteriori tree faster than existing sampling approaches and, in contrast with those algorithms, is able to provide a certificate of optimality. The code for our experiments is available at [https://github.com/ThrunGroup/maptree](https://github.com/ThrunGroup/maptree).
## 1 Introduction
Decision trees are amongst the most widely used machine learning models today due to their empirical performance, generality, and interpretability. A decision tree is a binary tree in which each internal node corresponds to an if/then/else comparison on a feature value; a label for a datapoint is produced by determining the corresponding leaf node into which it falls. The predicted label is usually the majority vote (respectively, mean) of the label of training datapoints at the leaf node in classification (respectively, regression).
Despite recent advances in neural networks, decision trees remain a popular choice amongst machine learning practitioners. Decision trees form the backbone of more complex ensemble models such as Random Forest [1] and XGBoost [1], which have been the leading models in many machine learning competitions and often outperform neural networks on tabular data [12]. Decision trees naturally work with complex data where the features can be of mixed data types, e.g., binary, categorical, or continuous. Furthermore, decision trees are highly interpretable and the prediction-generating process can be inspected, which can be a necessity in domains such as law and healthcare. Furthermore, inference in decision trees is highly efficient as it relies only on efficient feature value comparisons. Given decision trees' popularity, an improvement upon existing decision tree approaches would have widespread impact.
**Contributions:** In this work, we:
* Formalize a connection between maximum a posteriori inference of Bayesian Classification and Regression Trees (BCART) and AND/OR search problems,
* Propose an algorithm, dubbed MAPTree, for search on AND/OR graphs that recovers the maximum a posteriori tree of the BCART posterior over decision trees,
* Demonstrate that MAPTree is significantly faster than previous sampling-based approaches,
* Demonstrate that the tree recovered by MAPTree either a) outperforms current state-of-the-art algorithms in performance, or b) demonstrates comparable performance but with smaller trees, and
* Provide a heavily optimized C++ implementation that is also callable from Python for practitioners.
## 2 Related Work
In this work, we focus on the construction of individual decision trees. We compare our proposed algorithm with four main classes of prior algorithms: greedy algorithms, "Optimal" Decision Trees (ODTs), "Optimal" Sparse Decision Trees (OSDTs), and sampling-based approaches.
The most popular method for constructing decision trees is a greedy approach that recursively splits nodes based on a heuristic such as Gini impurity or entropy (in classification) or mean-squared error (in regression) [13]. However, individual decision trees constructed in this manner often overfit the training data; ensemble methods such as Random Forest and XGBoost attempt to ameliorate overfitting but are significantly more complex than a single decision tree [1, 1].
So-called "optimal" decision trees reformulate the problem of decision tree induction as a global optimization prob
lem, i.e., to find the tree that maximizes a global objective function, such as training accuracy, of a given maximum depth [12, 13, 14, 15]. Though this problem is NP-Hard in general [12], existing approaches can find the global optimum of shallow trees (depth \(\leq 5\)) on medium-sized datasets with thousands of datapoints and tens of features. The original ODT approaches were based on mixed integer programming or binary linear program formulations [13, 14, 15]. Other work attempts to improve upon these methods using caching branch-and-bound search [1], constraint programming with AND/OR search [14], or dynamic programming with bounds [14]. ODTs have been shown to outperform their greedily constructed counterparts with smaller trees [13, 15] but still suffer from several drawbacks. First, choosing the maximum depth hyperparameter is nontrivial, even with cross-validation, and the maximum depth cannot be set too large as the runtime of these algorithms scales exponentially with depth. Furthermore, ODTs often suffer from overfitting, especially when the maximum depth is set too large. Amongst ODT approaches, the approach of Verhaeghe et al. (2020) formulates the search for an optimal decision tree in terms of an AND/OR graph and is most similar to ours, but still suffers from the aforementioned drawbacks. Additionally, many ODT algorithms exhibit poor anytime behavior [11]. Optimal sparse decision trees attempt to adapt ODT approaches to train smaller and sparser trees by incorporating a sparsity penalty in their objectives. As a result, OSDTs are smaller and less prone to overfitting than ODTs [12, 13]. These approaches, however, often underfit the data [10, 12].
Another class of approaches, called Bayesian Classification and Regression Trees (BCART), introduce a posterior over tree structures given the data and sample trees from this posterior. Initially, BCART methods were observed to generate better trees than greedy methods [12]. Many variations to the BCART methodology were developed using sampling methods based on Markov-Chain Monte Carlo (MCMC), such as Metropolis-Hastings [13] and others [14, 15]. These methods, however, often suffer from exponentially long mixing times in practice and become stuck in local minima [16]. In one study, the posterior over trees was represented as a lattice over itemsets [11]. This approach discovered the maximum a posteriori tree within the hypothesis space of decision trees. However, this approach required enumerating and storing the entire space of decision trees and therefore placed stringent constraints on the search space of possible trees, based on leaf node support and maximum depth. Our method utilises the same posterior over tree structures introduced by BCART. In contrast with prior work, however, we are able to recover the provably maximum a posteriori tree from this posterior in the unconstrained setting.
## 3 Preliminaries and Notation
In this paper, we focus on the binary classification task, though our techniques extend to multi-class classification and regression. We also focus on binary datasets, as is common in the decision tree literature [13, 12, 11] since many datasets can be binarized via bucketing, one-hot encoding, and other techniques.
**General notation:** We assume we are given a binary dataset \(\mathcal{X}\in\{0,1\}^{N\times F}\) with \(N\) samples, \(F\) features, and associated binary labels \(\mathcal{Y}\in\{0,1\}^{N}\). We let \([u]\coloneqq\{1,\ldots,u\}\), \(I\subseteq[N]\) the indices of a subsample of the dataset, and \((x_{i},y_{i})\) denote the \(i\)th sample and its label. We define \(\mathcal{X}|_{\mathcal{I}}\coloneqq\{x_{i}:i\in\mathcal{I}\}\subset\mathcal{X},\mathcal{Y}|_{\mathcal{I}}\coloneqq\{y_{i}:i\in\mathcal{I}\}\subset\mathcal{Y}\), and \(\mathcal{I}|_{f=k}\coloneqq\{i:i\in\mathcal{I}\text{ and }(x_{i})_{f}=k\}\), for \(k\in\{0,1\}\). Finally, we let \(c^{k}(\mathcal{I})\) be the count of points in \(I\) with label \(k\in\{0,1\}\), i.e., \(c^{k}(\mathcal{I})=|\{i:i\in\mathcal{I}\text{ and }y_{i}=k\}|\), and \(\mathcal{V}(I)\) be the set of nontrivial feature splits of the samples in \(I\), i.e., the set of features \(f\) such that neither \(I|_{f=0}\) nor \(I|_{f=1}\) is empty.
**Tree notation:** We let \(T=\{n_{1},n_{2},\ldots,n_{M+L}\}\) be a binary classification tree represented as a collection of its nodes and use \(n\) to refer to a node in \(T\), \(m\) to refer to one of the \(M\) internal nodes in \(T\), and \(l\) to refer to one of the \(L\) leaf nodes in \(T\). Furthermore, we use \(\mathcal{I}(n)\) to denote the indices of the samples in \(\mathcal{X}\) that reach node \(n\) in \(T\), namely \(\{i:x_{i}\in\text{space}(n)\}\), where \(\text{space}(n)\) is the subset of feature space that reaches node \(n\) in \(T\). We also use \(c^{k}_{l}\) to denote the count of points assigned to leaf \(l\) with label \(k\in\{0,1\}\) (i.e., \(c^{k}_{l}=c^{k}(I(l))\)), \(T_{\text{internal}}=\{m_{1},m_{2},\ldots,m_{M}\}\subset T\) to denote the set of internal nodes in tree \(T\), and \(T_{\text{leaves}}=\{l_{1},l_{2},\ldots,l_{L}\}\subset T\) is the set of all leaf nodes in tree \(T\). Finally, we use \(d(n)\) to denote the depth of node \(n\) in \(T\).
### AND/OR Graph Search
We briefly recapitulate the concept of AND/OR graphs and a search algorithm for AND/OR graphs, AO*. AND/OR graph search can be viewed as a generalization of the shortest path problem that allows nodes consisting of independent subproblems to be decomposed and solved separately. Thus, a solution of an AND/OR graph is not a path but rather a subgraph \(\mathcal{S}\) with cost, denoted \(\texttt{cost}(\mathcal{S})\), equal to the sum of the costs of its edges. AND/OR graphs contain two types of nodes: terminal nodes and nonterminal nodes. Nonterminal nodes can be further subdivided into AND nodes and OR nodes, with a special OR node designated as the _root_ or _start_ node \(r\). A _solution graph_ \(\mathcal{S}\) on an AND/OR graph is a connected subset of nodes of \(\mathcal{G}\) in which:
1. \(r\in\mathcal{S}\),
2. for every AND node \(a\in\mathcal{S}\), _all_ the immediate children of \(a\) are also in \(\mathcal{S}\), and
3. for every non-terminal OR node \(o\in\mathcal{S}\)_exactly one_ of \(o\)'s children is also in \(\mathcal{S}\).
Intuitively, the children of an AND node \(a\) represent subtasks that must all be solved for \(a\) to be satisfied (e.g., simultaneous prerequisites), and the children of an OR node \(o\) represent mutually exclusive satisfying choices.
One of the most popular AND/OR graph search algorithms is AO* [14, 15]. The AO* algorithm explores potential paths in an AND/OR graph in a best-first fashion, guided by a heuristic. When a new node is explored, its children are revealed and the cost for that node and all of its ancestors is updated; the search then continues. This process is repeated until the root node is marked as solved, indicating that no immediately accessible nodes could lead to an increase in heuristic value. The AO* algorithm is guaranteed to find the minimal cost solution if the heuristic is _admissible_, i.e., the heuristic estimate of cost is always less than or equal to the actual cost of a node. For more details on the AO* algorithm, we refer the reader to [14]. An example AND/OR graph is given in Figure 1 with its minimal cost solution shown in red.
**Additional AND/OR graph notation:** In addition to the notation defined above, we use \(t\) to refer to a terminal node. When searching over an AND/OR graph, we use \(\mathcal{G}\) to refer to the implicit (entire) AND/OR graph and \(\mathcal{G}^{\prime}\subset\mathcal{G}\) to the explicit (explored) AND/OR graph, as in prior work.
### Bayesian Classification and Regression Trees (BCART)
Bayesian Decision Trees are a family of statistical models of decision trees introduced in Chipman, George, and McCulloch [11] and Denison, Mallick, and Smith [11]. A Bayesian Decision Tree (BDT) is a pair \((T,\Theta)\) where \(T\) is a tree and \(\Theta=(\theta_{l_{1}},\theta_{l_{2}},\ldots,\theta_{l_{L}})\) parameterizes the independent probability distributions over labels in the leaf nodes of tree \(T\). We are interested in the binary classification setting, where each \(\theta_{l}\) parameterizes a Bernoulli distribution \(\text{Ber}(\theta_{l})\) with \(\theta_{l}\in[0,1]\). We denote by \(\text{Beta}(\rho^{1},\rho^{0})\) the Beta distribution with parameters \(\rho^{1},\rho^{0}\in\mathbb{R}^{+}\) and by \(B(c^{1},c^{0})\) the Beta function.
We note that a BDT's tree \(T\) partitions the data such that the sample subsets \(I(l_{1}),I(l_{2}),\ldots,I(l_{L})\) fall into leaves \(l_{1},l_{2},\ldots,l_{L}\). Furthermore, a BDT defines a probability distribution over the respective labels occurring in their leaves: each label in leaf \(l\) is sampled from \(\text{Ber}(\theta_{l})\). Every BDT therefore induces a likelihood function, given in Theorem 1.
**Theorem 1**.: _The likelihood of a BDT \((T,\Theta)\) generating labels \(\mathcal{Y}\) given features \(\mathcal{X}\) is_
\[P(\mathcal{Y}|\mathcal{X},T,\Theta)=\prod_{l\in T_{\text{leaves}}}\prod_{i\in I(l)}\theta_{l}^{y_{i}}\left(1-\theta_{l}\right)^{1-y_{i}} \tag{1}\] \[=\prod_{l\in T_{\text{leaves}}}\theta_{l}^{c_{l}^{1}}\left(1-\theta_{l}\right)^{c_{l}^{0}} \tag{2}\]
The specific formulation of BCART also assumes a prior distribution over \(\Theta\), i.e., that \(\theta\sim\text{Beta}(\rho^{1},\rho^{0})\) for each \(\theta\in\Theta\). With this assumption, we can derive the likelihood function \(P(\mathcal{Y}|\mathcal{X},T)\); see Theorem 2.
**Theorem 2**.: _Assume that each \(\theta\sim\text{Beta}(\rho^{1},\rho^{0})\) for each \(\theta\in\Theta\). Then the likelihood of a tree \(T\) generating labels \(\mathcal{Y}\) given features \(\mathcal{X}\) is_
\[P(\mathcal{Y}|\mathcal{X},T)=\prod_{l\in T_{\text{leaves}}}\frac{B(c_{l}^{1}+\rho^{1},c_{l}^{0}+\rho^{0})}{B(\rho^{1},\rho^{0})} \tag{3}\]
Theorems 1 and 2 are proven in the appendices; we note they have been observed in different forms in prior work [11].
For notational convenience, we define a leaf count likelihood function \(\ell_{\text{leaf}}(c^{1},c^{0})\) for integers \(c^{1}\) and \(c^{0}\):
\[\ell_{\text{leaf}}(c^{1},c^{0})\coloneqq\frac{B(c^{1}+\rho^{1},c^{0}+\rho^{ 0})}{B(\rho^{1},\rho^{0})} \tag{4}\]
and we can rewrite Equation 3 as
\[P(\mathcal{Y}|\mathcal{X},T)=\prod_{l\in T_{\text{leaves}}}\ell_{\text{leaf}}(c_{l}^{1},c_{l}^{0}) \tag{5}\]
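For concreteness, the leaf term of Eqs. (4)-(5) can be evaluated stably in log space with the log-Beta function. The sketch below uses uniform pseudo-counts \(\rho^{1}=\rho^{0}=1\) as an illustrative default; these values, and the function names, are assumptions of the sketch rather than choices prescribed above.

```python
# Log of the leaf likelihood ell_leaf (Eq. 4) and of the tree likelihood (Eq. 5).
from scipy.special import betaln

def log_leaf_likelihood(c1, c0, rho1=1.0, rho0=1.0):
    """log ell_leaf(c1, c0) = log B(c1 + rho1, c0 + rho0) - log B(rho1, rho0)."""
    return betaln(c1 + rho1, c0 + rho0) - betaln(rho1, rho0)

def log_tree_likelihood(leaf_counts, rho1=1.0, rho0=1.0):
    """log P(Y | X, T) for a list of per-leaf (c1, c0) count pairs."""
    return sum(log_leaf_likelihood(c1, c0, rho1, rho0) for c1, c0 in leaf_counts)
```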
In this work, we utilize the original prior over trees from [11], given in Definition 3.
**Definition 3**.: The original BCART prior distribution over trees is
\[P(T|\mathcal{X})=\left(\prod_{l\in T_{\text{leaves}}}p_{\text{leaf}}(d(l),I(l))\right)\times\left(\prod_{m\in T_{\text{internal}}}p_{\text{inner}}(d(m),I(m))\right)\]
where
\[p_{\text{leaf}}(d,I) =\begin{cases}1,&\mathcal{V}(I)=\emptyset\\ 1-p_{\text{split}}(d),&\mathcal{V}(I)\neq\emptyset\end{cases} \tag{6}\] \[p_{\text{inner}}(d,I) =\begin{cases}0,&\mathcal{V}(I)=\emptyset\\ p_{\text{split}}(d)/|\mathcal{V}(I)|,&\mathcal{V}(I)\neq \emptyset\end{cases} \tag{7}\]
and
\[p_{\text{split}}(d)=\alpha(1+d)^{-\beta} \tag{8}\]
Figure 1: An example AND/OR graph, with AND nodes drawn as squares, and OR nodes drawn as solid circles, and terminal nodes drawn as dashed circles. The minimal cost solution is highlighted in red and has cost \(0+0+3+4+1+2=10\).
Intuitively, \(p_{\text{split}}(d)\) is the prior probability of any node splitting and is allocated equally amongst valid splits. This choice of prior, \(P(T|\mathcal{X})\), combined with the likelihood function in Equation 5 induces the posterior distribution over trees \(P(T|\mathcal{Y},\mathcal{X})\):
\[P(T|\mathcal{Y},\mathcal{X})\propto P(\mathcal{Y}|\mathcal{X},T)P(T|\mathcal{X}) \tag{9}\]
Throughout our analysis, we treat the dataset \((\mathcal{X},\mathcal{Y})\) as fixed.
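Combining the likelihood and the prior, the unnormalized log posterior \(\log P(T,\mathcal{Y}|\mathcal{X})\) of a given tree can be computed by a single recursive pass over the data. The sketch below is one way to do this; the nested-dict tree encoding and the default values of \(\alpha\), \(\beta\), \(\rho^{1}\), \(\rho^{0}\) are assumptions of the illustration, not the paper's implementation (the experiments below use \(\alpha=0.95\), \(\beta=0.5\)).

```python
# Recursive evaluation of log P(T, Y | X) for a binary dataset (NumPy arrays).
# A tree is either the string "leaf" or {"feature": f, "left": ..., "right": ...};
# the tree is assumed to split only on features with a valid (nontrivial) split.
import numpy as np
from scipy.special import betaln

def log_joint(tree, X, y, alpha=0.95, beta=0.5, rho1=1.0, rho0=1.0, depth=0):
    n_valid = sum(1 for f in range(X.shape[1])
                  if 0 < X[:, f].sum() < X.shape[0])                 # |V(I)|
    p_split = alpha * (1.0 + depth) ** (-beta)                       # Eq. (8)
    if tree == "leaf":
        c1 = int(y.sum()); c0 = int(len(y) - c1)
        log_prior = 0.0 if n_valid == 0 else np.log(1.0 - p_split)   # p_leaf(d, I)
        log_like = betaln(c1 + rho1, c0 + rho0) - betaln(rho1, rho0) # ell_leaf
        return log_prior + log_like
    f = tree["feature"]
    left, right = X[:, f] == 0, X[:, f] == 1
    log_prior = np.log(p_split / n_valid)                            # p_inner(d, I)
    return (log_prior
            + log_joint(tree["left"], X[left], y[left], alpha, beta, rho1, rho0, depth + 1)
            + log_joint(tree["right"], X[right], y[right], alpha, beta, rho1, rho0, depth + 1))
```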
## 4 Connecting BCART with AND/OR Graphs
Given a dataset \((\mathcal{X},\mathcal{Y})\), we will now construct a special AND/OR graph \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\). We will then show that a minimal cost solution graph on \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) corresponds directly with the maximum a posteriori tree given our choice of prior distributions \(P(T|\mathcal{X})\) and \(P(\Theta)\). Using this construction, the problem of finding the maximum a posteriori tree of our posterior is reduced to that of finding the minimum cost solution graph on \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\).
**Definition 4** (BCART AND/OR graph \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\)).: Given a dataset \((\mathcal{X},\mathcal{Y})\), construct the AND/OR graph \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) as follows:
1. For every possible subset \(\mathcal{I}\subset[N]\) and depth \(d\in\{0,\dots,F\}\), create an OR node \(o_{\mathcal{I},d}\).
2. For every OR node \(o_{\mathcal{I},d}\) created in Step 1, create a terminal node \(t_{\mathcal{I},d}\) and draw an edge from \(o_{\mathcal{I},d}\) to \(t_{\mathcal{I},d}\) with cost \(\texttt{cost}(o_{\mathcal{I},d},t_{\mathcal{I},d})=-\log p_{\text{leaf}}(d, \mathcal{I})-\log\ell_{\text{leaf}}(c^{1}(\mathcal{I}),c^{0}(\mathcal{I}))\).
3. For every OR node \(o_{\mathcal{I},d}\) created in Step 1, create \(F\) AND nodes \(a_{\mathcal{I},d,1},\dots,a_{\mathcal{I},d,F}\) and draw an edge from \(o_{\mathcal{I},d}\) to each \(a_{\mathcal{I},d,f}\) with cost \(\texttt{cost}(o_{\mathcal{I},d},a_{\mathcal{I},d,f})=-\log p_{\text{inner}}(d,\mathcal{I})\).
4. For every pair \(a_{\mathcal{I},d,f}\) and \(o_{\mathcal{I}^{\prime},d+1}\) where \(\mathcal{I}|_{f=k}=\mathcal{I}^{\prime}\) for some \(f\in[F]\) and \(k\in\{0,1\}\), draw an edge from \(a_{\mathcal{I},d,f}\) to \(o_{\mathcal{I}^{\prime},d+1}\) with cost \(\texttt{cost}(a_{\mathcal{I},d,f},o_{\mathcal{I}^{\prime},d+1})=0\).
5. Let \(o_{[N],0}\), the OR node representing all sample indices, be the unique root node \(r\) of \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\).
6. Remove all OR nodes representing empty subsets and their neighbors.
7. Remove all nodes not connected to the root node \(r\).
We note that \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) contains \(F\times 2^{N}\) OR nodes, \(F\times 2^{N}\) terminal nodes (one for each OR node), and \(F^{2}\times 2^{N}\) AND nodes (\(F\) for each OR node) and so is finite.
Intuitively, each OR node \(o_{\mathcal{I},d}\) in \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) corresponds with the subproblem of discovering a maximum a posteriori subtree starting from depth \(d\) and over the subset of samples \(\mathcal{I}\) from dataset \(\mathcal{X},\mathcal{Y}\). Each AND node \(a_{\mathcal{I},d,f}\) then represents the same subproblem but given that a decision was already made to split on feature \(f\) at the root node of this subtree. A valid solution graph on \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) corresponds with a binary classification tree \(T\) on the dataset \((\mathcal{X},\mathcal{Y})\) and the value of a solution is related to the posterior probability of \(T\) given by \(P(T|\mathcal{Y},\mathcal{X})\). We formalize these properties in Theorems 5 and 6.
**Theorem 5**.: _Every solution graph on \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) induces a unique binary decision tree. Furthermore, every decision tree can be represented as a unique solution graph under this correspondence. Thus, there is a natural bijection between solution graphs on \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) and binary decision trees._
**Theorem 6**.: _Under the natural bijection described in Theorem 5, given a solution graph \(\mathcal{S}\) and its corresponding tree \(T\), we have that \(\texttt{cost}(\mathcal{S})=-\log P(T,\mathcal{Y}|\mathcal{X})\). Therefore the minimal cost solution over \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) corresponds with a maximum a posteriori tree._
The bijection constructed in Theorems 5 and 6 is depicted in Figure 3. Due to space constraints, we defer a formal description of this bijection to Appendix A.
## 5 MAPTree
Theorems 5 and 6 imply that it is sufficient to find the minimum cost solution graph on \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) to recover the MAP tree under the BCART posterior. In this section, we introduce MAPTree, an AND/OR search algorithm that finds a minimal cost solution on \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\). MAPTree is shown in Algorithm 1.
A key component of MAPTree is the Perfect Split Heuristic \(h\) that guides the search, presented in Definition 7.
Figure 3: Example map between an example solution of the AND/OR graph \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) depicted in Figure 2 and its corresponding binary classification tree. We see that the resulting tree is a stump which splits on feature \(f_{1}\) at the root.
Figure 2: Example of the defined BCART AND/OR graph \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\). OR nodes are represented as circles with solid borders, terminal nodes as circles with dashed borders, and AND nodes as squares. In this dataset, two feature splits are possible at the root node (\(f_{0}\) and \(f_{1}\)) and no further splits are possible at deeper nodes. The best solution on this AND/OR graph is highlighted in red and corresponds with a stump which splits the root node, corresponding to the entire dataset, on feature \(f_{1}\).
**Definition 7** (Perfect Split Heuristic).: For OR node \(o_{\mathcal{I},d}\) with terminal node child \(t_{\mathcal{I},d}\), let
\[h(o_{\mathcal{I},d})=-\max\left\{\log\ell_{\text{leaf}}(c^{1}(\mathcal{I}),c^{0}(\mathcal{I})),\;\log p_{\text{split}}(d,\mathcal{I})+\log\ell_{\text{leaf}}(c^{1}(\mathcal{I}),0)+\log\ell_{\text{leaf}}(0,c^{0}(\mathcal{I}))\right\} \tag{10}\]
and for AND node \(a_{I,d,f}\) with OR node children \(o_{I|_{f=0},d+1}\) and \(o_{I|_{f=1},d+1}\), let
\[h(a_{I,d,f})=h(o_{I|_{f=0},d+1})+h(o_{I|_{f=1},d+1}) \tag{15}\]
Intuitively, the Perfect Split Heuristic describes the negative log posterior probability of the best potential subtree rooted at the given OR node \(o_{\mathcal{I},d}\): one that perfectly classifies the data in a single additional split. The heuristic guides the search away from subproblems that are too deep or for which the labels have already been poorly divided. We prove that this heuristic is a lower bound (admissible) and consistent in later sections.
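A direct transcription of this heuristic is sketched below; it uses the depth-only form of \(p_{\text{split}}\) from Eq. (8) and uniform pseudo-counts \(\rho^{1}=\rho^{0}=1\), both of which are illustrative assumptions.

```python
# Perfect Split Heuristic h(o_{I,d}) from Definition 7, given the label counts
# (c1, c0) of the subproblem and its depth d.
import numpy as np
from scipy.special import betaln

def log_leaf(c1, c0, rho1=1.0, rho0=1.0):
    return betaln(c1 + rho1, c0 + rho0) - betaln(rho1, rho0)

def perfect_split_heuristic(c1, c0, d, alpha=0.95, beta=0.5):
    p_split = alpha * (1.0 + d) ** (-beta)
    stay_leaf = log_leaf(c1, c0)                                         # stop at a leaf
    perfect_split = np.log(p_split) + log_leaf(c1, 0) + log_leaf(0, c0)  # one perfect split
    return -max(stay_leaf, perfect_split)
```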
### Analysis of MAPTree
We now introduce several key properties of MAPTree. In particular, we show that (1) the Perfect Split Heuristic is consistent and therefore also admissible, (2) MAPTree finds the maximum a posteriori tree of the BCART posterior upon completion, and (3) upon early termination, MAPTree returns the minimum cost solution within the explored explicit graph \(\mathcal{G}^{\prime}\). Theorems 8 - 12 and Corollary 11 are proven in Appendix A.
**Theorem 8** (Consistency of the Perfect Split Heuristic).: _The Perfect Split Heuristic in Definition 7 is consistent, i.e., for any OR node \(o\) with children \(\{t,a_{1},\ldots,a_{F}\}\):_
\[h(o)\leq\min_{c\in\{t,a_{1},\ldots,a_{F}\}}\left[\texttt{cost}(o,c)+h(c)\right] \tag{16}\]
_and for any AND node \(a\) with children \(\{o_{0},o_{1}\}\):_
\[h(a)\leq\sum_{c\in\{o_{0},o_{1}\}}\texttt{cost}(a,c)+h(c) \tag{17}\]
**Theorem 9** (Finiteness of MAPTree).: _Algorithm 1 always terminates._
**Theorem 10** (Correctness of MAPTree).: _Algorithm 1 always outputs a minimal cost solution on \(\mathcal{G}_{\mathcal{X},\mathcal{Y}}\) upon completion._
```
1:Input: OR node \(l\), cost function cost, lower bounds \(LB\)
2:\(\mathcal{V}=\{l\}\)
3:while \(|\mathcal{V}|>0\) do
4: Remove a node \(o\) from \(\mathcal{V}\) with maximal depth
5: Let \(\{a_{1},\dots,a_{F}\}\) be the AND node children of \(o\)
6: Let \(t\) be the terminal node child of \(o\)
7:\(v^{(lb)}_{\text{split}}=\min_{c\in\{a_{1},\dots,a_{F}\}}(\texttt{cost}(o,c)+ LB[c])\)
8:\(v^{(lb)}=\min\{v^{(lb)}_{\text{split}},\texttt{cost}(o,t)\}\)
9:if\(v^{(lb)}>LB[o]\)then
10:\(LB[o]:=v^{(lb)}\)
11: Add all parents of \(o\) to \(\mathcal{V}\)
12:endif
13:endwhile
```
**Algorithm 4** updateLowerBounds
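A dictionary-based sketch of this bottom-up revision is given below. Relative to the pseudocode, the AND layer is handled explicitly (an AND node's bound is the sum over its children, an OR node's bound the minimum); the container names and graph representation are assumptions of the illustration, not the data structures of the released C++ implementation.

```python
# Bottom-up lower-bound revision over the explicit graph G'. `kind[n]` is "OR"
# or "AND", `children`/`parents` map nodes to lists of nodes, `depth[n]` is the
# node depth, `cost[(u, v)]` an edge cost, and `LB` the current lower bounds
# (terminal nodes may be absent from LB, in which case they contribute 0).
import heapq, itertools

def update_lower_bounds(start, kind, children, parents, depth, cost, LB):
    counter = itertools.count()                       # tie-breaker for the heap
    heap, queued = [(-depth[start], next(counter), start)], {start}
    while heap:
        _, _, n = heapq.heappop(heap)                 # deepest pending node first
        queued.discard(n)
        bounds = [cost[(n, c)] + LB.get(c, 0.0) for c in children[n]]
        new_lb = min(bounds) if kind[n] == "OR" else sum(bounds)
        if new_lb > LB.get(n, -float("inf")):
            LB[n] = new_lb                            # bound improved: notify parents
            for p in parents[n]:
                if p not in queued:
                    heapq.heappush(heap, (-depth[p], next(counter), p))
                    queued.add(p)
```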
**Corollary 11**.: _Consider the tree induced by the output of Algorithm 1 under the natural bijection described in Section 4. By Theorems 5 and 6, this tree is the maximum a posteriori tree \(\arg\max P(T|X,Y)\)._
**Theorem 12** (Anytime optimality of MAPTree).: _Upon early termination, Algorithm 1 outputs the minimal cost solution across the explicit subgraph \(\mathcal{G}^{\prime}\) of already explored nodes._
## 6 Experiments
We evaluate the performance of MAPTree in multiple settings. In all experiments in this section, we set \(\alpha=0.95\) and \(\beta=0.5\). We find that our results are not highly dependent on the choices of \(\alpha\) and \(\beta\); see Appendix B.
In the first setting, we compare the efficiency of MAPTree to the Sequential Monte Carlo (SMC) and Markov-Chain Monte Carlo (MCMC) baselines from Lakshminarayanan, Roy, and Teh (2013) and Chipman, George, and McCulloch (1998), respectively. In the second setting, we create a synthetic dataset in which the true labels are generated by a randomly generated tree and measure generalization performance with respect to training dataset size. In the third setting, we measure the generalization accuracy, log likelihood, and tree size of models generated by MAPTree and baseline algorithms across all 16 datasets from the CP4IM dataset repository (Guns, Nijssen, and De Raedt 2011).
### Speed Comparisons against MCMC and SMC
We first compare the performance of MAPTree with the SMC and MCMC baselines from Lakshminarayanan, Roy, and Teh (2013) and Chipman, George, and McCulloch (1998), respectively, on all 16 binary classification datasets from the CP4IM dataset repository (Guns, Nijssen, and De Raedt 2011). We note that all three methods, given infinite exploration time, should recover the maximum a posteriori tree from the BCART posterior. However, it has been observed that the mixing times for Markov-Chain-based methods, such as the MCMC and SMC baselines, are exponential in the depth of the data-generating tree (Kim and Rockova 2023). Furthermore, the SMC and MCMC methods are unable to determine when they have converged, nor can they provide a certificate of optimality upon convergence.
In our experiments, we modify the hyperparameters of each algorithm and measure the training time and log posterior of the data under the output tree (Figure 4). In 12 of the 16 datasets in Figure 6, MAPTree outperforms SMC and MCMC and is able to find trees with higher log posterior faster than the baseline algorithms. Furthermore, in 5 of the 16 datasets, MAPTree converges to the provably optimal tree, i.e., the maximum a posteriori tree of the BCART posterior.
### Fitting a Synthetic Dataset
We measure the generalization performance of MAPTree and various other baseline algorithms as a function of training dataset size on tree-generated data.
**Synthetic Data**: We construct a synthetic dataset where labels are generated by a randomly generated tree. We first construct a random binary tree structure as specified in Devroye and Kruszewski (1995) via recursive random divisions of the available internal nodes to the left or right subtree. Next, features are selected for each internal node uniformly at random such that no internal node splits on the same feature as its ancestors. Lastly, labels are assigned to the leaf nodes in alternating fashion so as to avoid compression of the underlying tree structure. Individual datapoints with 40 features are then sampled with each feature drawn i.i.d. from \(\text{Ber}(1/2)\), and their labels are determined by following the generated tree to a leaf node. We repeat this process 20 times, generating 20 datasets for 20 random trees. We also randomly flip \(\epsilon\) of the training data labels, with \(\epsilon\) ranging from \(0\) to \(0.25\) to simulate label noise.
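A compact sketch of this generator is shown below. The recursive splitting of a budget of internal nodes is a simplified stand-in for the Devroye-Kruszewski construction, and the function names are assumptions of the illustration; the feature constraints, alternating leaf labels, \(\text{Ber}(1/2)\) features, and label flipping follow the description above.

```python
# Synthetic data: a random tree labels points; a fraction eps of labels is flipped.
import numpy as np
rng = np.random.default_rng(0)

def random_tree(n_internal, used, n_features):
    """Recursively divide a budget of internal nodes between the two subtrees."""
    if n_internal == 0 or len(used) == n_features:
        return {"label": None}
    f = int(rng.choice([g for g in range(n_features) if g not in used]))
    k = int(rng.integers(0, n_internal))     # internal nodes assigned to the left subtree
    return {"feature": f,
            "left": random_tree(k, used | {f}, n_features),
            "right": random_tree(n_internal - 1 - k, used | {f}, n_features)}

def label_leaves_alternating(tree, next_label=0):
    """Depth-first pass assigning 0/1 labels to leaves in alternating order."""
    if "feature" not in tree:
        tree["label"] = next_label
        return 1 - next_label
    next_label = label_leaves_alternating(tree["left"], next_label)
    return label_leaves_alternating(tree["right"], next_label)

def predict(tree, x):
    while "feature" in tree:
        tree = tree["right"] if x[tree["feature"]] else tree["left"]
    return tree["label"]

def sample_dataset(tree, n, n_features=40, eps=0.1):
    X = rng.integers(0, 2, size=(n, n_features))
    y = np.array([predict(tree, x) for x in X])
    flip = rng.random(n) < eps               # corrupt a fraction eps of the labels
    y[flip] = 1 - y[flip]
    return X, y

# Example: tree = random_tree(31, set(), 40); label_leaves_alternating(tree)
```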
In our experiments, MAPTree generates trees which outperform both the greedy, top-down approaches and ODT methods in test accuracy for various training dataset sizes and values of label corruption proportion \(\epsilon\) (Figure 6). (We note that though some baseline algorithms demonstrate comparable performance at a single noise level, no baseline algorithm demonstrates test accuracy comparable to MAPTree
across all noise levels). We also emphasize that MAPTree requires no hyperparameter tuning, whereas we experimented with various values of hyperparameters for the baseline algorithms in which performance was highly dependent on hyperparameter values (e.g., DL8.5 and GOSDT); see Appendix B.
On the 16 real-world benchmark datasets from CP4IM, MAPTree either outperforms the baselines in generalization accuracy, or achieves comparable accuracy but produces smaller trees.
## 7 Discussion and Conclusions
We presented MAPTree, an algorithm which provably finds the maximum a posteriori tree of the BCART posterior for a given dataset. Our algorithm is inspired by best-first-search algorithms over AND/OR graphs and the observation that the search problem for trees can be framed as a search problem over an appropriately constructed AND/OR graph.
MAPTree outperforms thematically similar approaches such as SMC- and MCMC-based algorithms, finding higher log-posterior trees faster, and is able to determine when it has converged to the maximum a posteriori tree, unlike prior work. MAPTree also outperforms greedy, ODT, and ODST construction methods in test accuracy on the synthetic dataset constructed in Section 6. Furthermore, on many real world benchmark datasets, MAPTree either a) demonstrates better generalization performance, or b) demonstrates comparable generalization performance but with smaller trees.
A limitation of MAPTree is that it constructs a potentially large AND/OR graph, which consumes a significant amount of memory. We leave optimizations that may permit MAPTree to run on huge datasets to future work. Nonetheless, with the optimizations presented in Section 6, we find that MAPTree was performant enough to run on the CP4IM benchmark datasets used in evaluation of previous ODT benchmarks.
|
2309.06408 | Surface Casimir densities on branes orthogonal to the boundary of
anti-de Sitter spacetime | We investigate the vacuum expectation value of the surface energy-momentum
tensor (SEMT) for a scalar field with general curvature coupling in the
geometry of two branes orthogonal to the boundary of anti-de Sitter (AdS)
spacetime. For Robin boundary conditions on the branes, the SEMT is decomposed
into the contributions corresponding to the self-energies of the branes and the
parts induced by the presence of the second brane. The renormalization is
required for the first parts only and for the corresponding regularization the
generalized zeta function method is employed. The induced SEMT is finite and is
free from renormalization umbiguities. For an observer living on the brane, the
corresponding equation of state is of the cosmological constant type. Depending
on the boundary conditions and on the separation between the branes, the
surface energy densities can be either positive or negative. The energy density
induced on the brane vanishes in special cases of Dirichlet and Neumann
boundary conditions on that brane. The effect of gravity on the induced SEMT is
essential at separations between the branes of the order or larger than the
curvature radius for AdS spacetime. In the large separation limit the decay of
the SEMT, as a function of the proper separation, follows a power law for both
massless and massive fields. For parallel plates in Minkowski bulk and for
massive fields the fall-off of the corresponding expectation value is
exponential. | A. A. Saharian | 2023-09-12T17:21:34Z | http://arxiv.org/abs/2309.06408v2 | # Surface Casimir densities on branes orthogonal to the boundary
###### Abstract
We investigate the vacuum expectation value of the surface energy-momentum tensor (SEMT) for a scalar field with general curvature coupling in the geometry of two branes orthogonal to the boundary of anti-de Sitter (AdS) spacetime. For Robin boundary conditions on the branes, the SEMT is decomposed into the contributions corresponding to the self-energies of the branes and the parts induced by the presence of the second brane. The renormalization is required for the first parts only and for the corresponding regularization the generalized zeta function method is employed. The induced SEMT is finite and is free from renormalization ambiguities. For an observer living on the brane, the corresponding equation of state is of the cosmological constant type. Depending on the boundary conditions and on the separation between the branes, the surface energy densities can be either positive or negative. The energy density induced on the brane vanishes in special cases of Dirichlet and Neumann boundary conditions on that brane. The effect of gravity on the induced SEMT is essential at separations between the branes of the order or larger than the curvature radius for AdS spacetime. In the large separation limit the decay of the SEMT, as a function of the proper separation, follows a power law for both massless and massive fields. For parallel plates in Minkowski bulk and for massive fields the fall-off of the corresponding expectation value is exponential.
**Keywords:** Casimir effect; anti-de Sitter space; surface energy; Robin boundary conditions
## 1 Introduction
Among the interesting directions in the development of the theory of the Casimir effect (for a general introduction and applications see, e.g., [1]-[6]) is the study of the dependence of the expectation values of physical characteristics of quantum fields on the bulk and boundary geometries, as well as on the spatial topology. The interest is motivated by applications in gravitational physics, in cosmology and in condensed matter physics. Exact analytic expressions for physical characteristics are obtained in geometries with a sufficient degree of symmetry. In particular, the respective background geometries include maximally symmetric spacetimes sourced by positive and negative cosmological constants. These geometries, referred to as de Sitter (dS) and anti-de Sitter (AdS) spacetimes, respectively, are among the most popular bulks in quantum field theory on curved backgrounds.
The goal of this paper is to investigate the surface Casimir densities on two parallel branes for a scalar field in AdS spacetime. Quantum field theoretical effects on fixed AdS background have been extensively studied in the literature. The importance of those investigations is motivated by several reasons. The AdS
spacetime is a non-globally hyperbolic manifold with a timelike boundary at spatial infinity and the early interest to the formulation of quantum field theory in that geometry was related to principal questions of quantization [7, 8, 9] (see also the references in [10]). The necessity to control the information through the spatial infinity requires the imposition of boundary conditions on quantum fields (for a discussion of possible boundary conditions on the AdS boundary see, e.g., [11, 12]). The different boundary conditions correspond to physically different field theories. The AdS boundary at spatial infinity plays a central role in models of AdS/Conformal Field Theory (AdS/CFT) correspondence [13]-[16]. The latter establishes duality between conformal field theory living on the boundary of AdS spacetime and supergravity or string theory on AdS bulk. This holographic correspondence between two different theories provides an efficient computational framework for non-perturbative effects, mapping them to the perturbative region of the dual theory. Within this approach interesting results have been obtained in high energy physics, in quantum chromodynamics and in condensed matter physics [14, 17, 18]. The braneworld models [19] with large extra dimensions, both phenomenological and string theory motivated, present another interesting setup where the properties of AdS spacetime play a crucial role. They provide a geometrical solution to the hierarchy problem between the electroweak and gravitational energy scales and serve as an interesting framework to discuss the problems in high energy physics, gravitation and cosmology.
The braneworld models contain two types of fields: fields propagating in the bulk and fields localized on the branes. In simplified models, the interaction between branes and bulk fields is reduced to boundary conditions on the branes. Those conditions modify the spectrum of vacuum fluctuations of bulk quantum fields and give rise to the Casimir type contributions in the expectation values of physical observables, such as the ground state energy and the vacuum forces acting on the branes. The Casimir energy and forces in the geometry of branes parallel to the AdS boundary have been widely studied in the literature (see [20]-[35] for early investigations and [36] for a more complete list of references). The Casimir forces can be used as a possible mechanism for stabilization of interbrane distance that is required to escape the variations of physical constants in the effective theory on the branes. The vacuum fluctuations of bulk field may also provide a mechanism for generation of cosmological constant on branes. More detailed information on the properties of the vacuum state is contained in the expectation values of bilinear combinations of fields, such as the field squared and the energy-momentum tensor. In braneworld models on AdS bulk those expectation values are considered in [32], [37]-[45] for scalar, fermionic and electromagnetic fields. For charged fields, another important local characteristic of the vacuum state is the expectation value of the current density. The combined effects of branes and spatial topology on the vacuum currents for scalar and fermionic fields in locally AdS spacetime, with a part of spatial dimensions compactified on a torus, have been studied in [46]-[51].
In the references cited above the branes are parallel to the AdS boundary (Randall-Sundrum-type models [52, 53]). In a number of recent developments in conformal field theories additional boundaries are present (see, e.g., [54] and references therein). In the context of AdS/CFT correspondence, the respective dual theory on the AdS bulk contains boundaries intersecting the AdS boundary (AdS/BCFT correspondence) [55, 56]. Another interesting problem on AdS bulk with surfaces crossing its boundary is related to the evaluation of the entanglement entropy of a quantum system in conformal field theory with a boundary. In accordance with the procedure suggested in [57, 58], the entanglement entropy in a bounded region from the CFT side on the AdS boundary is expressed in terms of the area of the minimal surface in the AdS bulk that asymptotes to the boundary of the CFT region (see also [59, 60] for reviews). Motivated by those developments, in [61, 62] we have studied the influence of branes, orthogonally intersecting the AdS boundary, on the local properties of the scalar vacuum in a general number of spatial dimensions. As local characteristics of the vacuum state, the expectation values of the field squared and of the energy-momentum tensor have been considered. By using the respective vacuum stresses, the Casimir forces acting on the branes were investigated as well. It has been shown that, in addition to the component perpendicular to the brane, those forces have a nonzero parallel component (shear force). In quantum field theory with boundaries the expectation values of physical quantities may contain contributions
localized on the boundary. The expression for the surface energy-momentum tensor of a scalar field with general curvature coupling parameter and for general bulk and boundary geometries has been derived in [63] by using the standard variational procedure. The corresponding vacuum expectation value in the problem with branes parallel to the AdS boundary is investigated in [64, 65]. The present paper considers the vacuum expectation value of the surface energy-momentum tensor for a scalar field in the problem with two parallel branes orthogonal to the AdS boundary.
The organization of the paper is as follows. In the next section we describe the geometry of the problem and present the expression for the surface energy-momentum tensor. The corresponding vacuum expectation value (VEV) is investigated in Section 3 by using the two-point function from [62]. The surface energy density is decomposed into contributions corresponding to the self-energy of the brane when the second brane is absent and the part induced by the second brane. The renormalization is required only for the first contribution. In the limit of infinite curvature radius we recover the result for parallel plates in Minkowski bulk. Another special case with conformal relation to the Casimir problem in Minkowski spacetime corresponds to a conformally coupled massless field. The behavior of the surface energy-momentum tensor in asymptotic regions of the parameters is discussed in Section 4. The numerical analysis for the induced surface energy density is presented as well. The main results of the paper are summarized in Section 5. The regularization of the self-energy contribution, by using the generalized zeta function approach, is considered in Appendix A. The finite part is separated on the basis of principal part prescription.
## 2 Geometry of the problem
AdS spacetime is the maximally symmetric solution of the Einstein equations with a negative cosmological constant \(\Lambda\) as the only source of the gravitational field. In Poincare coordinates \((t,x^{1},{\bf x},z)\), with \({\bf x}=(x^{2},\ldots,x^{D-1})\) and \(D\) being the number of spatial dimensions, the respective metric tensor \(g_{ik}\) is given by
\[ds^{2}=g_{ik}dx^{i}dx^{k}=\left(\frac{\alpha}{z}\right)^{2}\left[dt^{2}-\left( dx^{1}\right)^{2}-d{\bf x}^{2}-dz^{2}\right]. \tag{1}\]
Here, the parameter \(\alpha=\sqrt{D(1-D)/(2\Lambda)}\) determines the curvature radius of the background spacetime, \(-\infty<x^{i}<+\infty\) for \(i=0,1,2,\ldots,D-1\), and \(0\leq z<\infty\). The \(D\)-dimensional hypersurfaces \(z=0\) and \(z=\infty\) present the AdS boundary and horizon, respectively. The proper distance along the \(z\)-direction is measured by the coordinate \(y=\alpha\ln(z/\alpha)\), \(-\infty<y<+\infty\). In the coordinate system \((t,x^{1},{\bf x},y)\) one has \(g^{\prime}_{DD}=1\) and \(g^{\prime}_{ik}=g_{ik}=e^{-2y/\alpha}\eta_{ik}\), \(i,k=0,1,\ldots,D-1\), with \(\eta_{ik}\) being the metric tensor for Minkowski spacetime.
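As a simple consistency check, inverting the relation for \(\alpha\) gives \(\Lambda\) in terms of the curvature radius and, through the expression for the Ricci scalar used in the field equation below, the constant scalar curvature of AdS spacetime:

\[\Lambda=-\frac{D(D-1)}{2\alpha^{2}},\qquad R=\frac{2(D+1)}{D-1}\Lambda=-\frac{D(D+1)}{\alpha^{2}}.\]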
We aim to investigate the surface Casimir densities induced by quantum fluctuations of a scalar field \(\varphi(x)\) on codimension one parallel branes located at \(x^{1}=a_{1}\) and \(x^{1}=a_{2}\), \(a_{1}<a_{2}\) (see Figure 1 for the geometry of the problem). It will be assumed that the field is prepared in the Poincare vacuum state. For a scalar field with curvature coupling parameter \(\xi\) the corresponding field equation reads
\[\left(\square+\xi R+m^{2}\right)\varphi(x)=0, \tag{2}\]
where \(\square=g^{ik}\nabla_{i}\nabla_{k}\) is the covariant d'Alembertian and \(R=2\Lambda(D+1)/(D-1)\) is the Ricci scalar for AdS spacetime. On the branes, the field operator is constrained by Robin boundary conditions
\[(A_{j}+B_{j}n^{i}_{(j)}\nabla_{i})\varphi(x)=0,\;x^{1}=a_{j}, \tag{3}\]
where \(n^{i}_{(j)}\) is the normal to the brane at \(x^{1}=a_{j}\) pointing into the region under consideration. The branes divide the background space into three regions: \(x^{1}\leq a_{1}\), \(a_{1}\leq x^{1}\leq a_{2}\), and \(x^{1}\geq a_{2}\). In the first and third regions one has \(n^{i}_{(1)}=-\delta^{i}_{1}z/\alpha\) and \(n^{i}_{(2)}=\delta^{i}_{1}z/\alpha\), respectively. For the region \(a_{1}\leq x^{1}\leq a_{2}\) the normal
in (3) is expressed as \(n^{i}_{(j)}=(-1)^{j-1}\delta^{i}_{1}z/\alpha\). In the discussion below we consider the region between the branes. The VEVs for the regions \(x^{1}\leq a_{1}\) and \(x^{1}\geq a_{2}\) are obtained in the limits \(a_{2}\to\infty\) and \(a_{1}\to-\infty\). For the sets of the coefficients \((A_{j},B_{j})=(A_{j},0)\) and \((A_{j},B_{j})=(0,B_{j})\) the constraints (3) are reduced to Dirichlet and Neumann boundary conditions, respectively. For Robin boundary conditions, here the special case \(B_{j}/A_{j}=\alpha\beta_{j}/z\) will be assumed with \(\beta_{j}\), \(j=1,2\), being constants. For this choice, the boundary conditions (3), written in terms of the coordinate \(x^{1}_{(p)}=\alpha x^{1}/z\), take the form
\[(1+\beta_{j}n^{1}_{(j)}\partial_{x^{1}_{(p)}})\varphi(x)=0,\;x^{1}=a_{j}. \tag{4}\]
The latter is the Robin boundary condition with constant coefficient \(\beta_{j}\). This coefficient characterizes the properties of the brane and can be used to model the finite penetration length of quantum fluctuations. Note that the coordinate \(x^{1}_{(p)}\) in (4) measures the proper distance from the brane for fixed \(z\).
For the scalar field modes in the region between the branes the eigenvalues of the quantum number \(k^{1}\), corresponding to the momentum along the direction \(x^{1}\), are quantized by the boundary conditions (4). Those eigenvalues are roots of the transcendental equation (see [62])
\[(\beta_{1}+\beta_{2})\,k^{1}\cos\left(k^{1}a\right)+\left[\beta_{1}\beta_{2}(k^{1})^{2}-1\right]\sin\left(k^{1}a\right)=0, \tag{5}\]
where \(a=a_{2}-a_{1}\). Depending on the values of the Robin coefficients this equation, in addition to an infinite set of roots with real \(k^{1}\), may have purely imaginary roots \(k^{1}=i\chi\) (for the corresponding conditions see [66]). The energy of the scalar modes, with the momentum \({\bf k}=(k^{2},\ldots,k^{D-1})\), \(-\infty<k^{i}<+\infty\), \(i=2,\ldots,D-1\), in the subspace with coordinates \({\bf x}\), is expressed as \(E=\sqrt{(k^{1})^{2}+{\bf k}^{2}+\gamma^{2}}\), where \(0\leq\gamma<\infty\) is the quantum number corresponding to the \(z\)-direction. The dependence of the mode functions on the coordinate \(z\) is expressed in terms of the function \(z^{D/2}J_{\nu}(\gamma z)\), with \(J_{\nu}(u)\) being the Bessel function and
\[\nu=\sqrt{\frac{D^{2}}{4}-D(D+1)\xi+m^{2}\alpha^{2}}. \tag{6}\]
Note that, in contrast to the Minkowski bulk, the energy of the scalar modes with given momentum does not depend on the mass of the field quanta. The mass enters the problem through the parameter \(\nu\geq 0\). Now, we see that in the presence of imaginary roots \(k^{1}=i\chi\), for the scalar field modes with \({\bf k}^{2}+\gamma^{2}<\chi^{2}\) the energy becomes imaginary. This signals an instability of the vacuum state under consideration. In the discussion below we will assume the values of the coefficients \(\beta_{1}\) and \(\beta_{2}\) for which there are no imaginary roots of the eigenvalue equation (5). The corresponding conditions read [66]
\[\beta_{1,2}\leq 0\cup\{\beta_{1}\beta_{2}\leq 0,\beta_{1}+\beta_{2}>1/a\}. \tag{7}\]
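As a practical illustration, the real roots of the transcendental equation (5) can be located numerically. The following minimal sketch (our own addition, not code from [62] or [66]) scans for sign changes and refines each bracketed root by bisection; the values of \(\beta_{1}\), \(\beta_{2}\) and \(a\) are purely illustrative and are chosen inside the stability region (7).

```python
# A minimal numerical sketch (not from the paper): locating the real roots of the
# eigenvalue equation (5) for sample Robin coefficients and separation.
import numpy as np
from scipy.optimize import brentq

beta1, beta2, a = -0.5, -0.3, 1.0   # hypothetical parameters, inside the stability region (7)

def eigen_eq(k):
    """Left-hand side of the transcendental equation (5)."""
    return (beta1 + beta2) * k * np.cos(k * a) + (beta1 * beta2 * k**2 - 1.0) * np.sin(k * a)

# scan a grid for sign changes and refine each bracketed root by bisection
grid = np.linspace(1e-6, 30.0, 20000)
vals = eigen_eq(grid)
roots = [brentq(eigen_eq, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]
print("first few eigenvalues k^1:", np.round(roots[:5], 6))

# For purely imaginary roots k^1 = i*chi one would instead look for zeros of
# (beta1 + beta2)*chi*cosh(chi*a) - (1 + beta1*beta2*chi**2)*sinh(chi*a);
# for beta_1, beta_2 <= 0 this expression is strictly negative for chi > 0,
# so no imaginary roots (and hence no vacuum instability) occur.
```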
Figure 1: The geometry of two branes orthogonal to the AdS boundary.
For a general \((D+1)\)-dimensional spacetime with a smooth boundary \(\partial M_{s}\), the surface energy-momentum tensor (SEMT) \(T^{(\rm s)}_{ik}(x)=\tau_{ik}\delta(x;\partial M_{s})\), localized on the boundary by the one-sided delta function \(\delta(x;\partial M_{s})\), is given by [63]
\[\tau_{ik}=(1/2-2\xi)h_{ik}\varphi n^{l}\nabla_{l}\varphi+\xi K_{ik}\varphi^{2}. \tag{8}\]
Here, \(h_{ik}=g_{ik}+n_{i}n_{k}\) is the induced metric on the boundary, with \(n_{i}\) being the inward-pointing unit normal vector for \(\partial M_{s}\), and \(K_{ik}=h^{l}_{i}h^{m}_{k}\nabla_{l}n_{m}\) is the respective extrinsic curvature tensor. The expression (8) was obtained in [63] by using the standard variational procedure for the action of a scalar field with general curvature coupling parameter and with an appropriate boundary term localized on \(\partial M_{s}\). Denoting the vacuum state by \(|0\rangle\), the VEV of the SEMT is presented as
\[\langle 0|T^{(\rm s)}_{ik}|0\rangle=\delta(x;\partial M_{s})\langle 0|\tau_{ ik}|0\rangle, \tag{9}\]
where the VEV \(\langle\tau_{ik}\rangle\equiv\langle 0|\tau_{ik}|0\rangle\) is written in terms of the Hadamard function \(G^{(1)}(x,x^{\prime})=\langle 0|\varphi(x)\varphi(x^{\prime})+\varphi(x^{ \prime})\varphi(x)|0\rangle\) by the formula
\[\langle\tau_{ik}(x)\rangle=\frac{1}{2}\lim_{x^{\prime}\to x}\left[(1/2-2\xi)h_ {ik}n^{l}\nabla_{l}+\xi K_{ik}\right]G^{(1)}(x,x^{\prime}). \tag{10}\]
The limit in the right-hand side contains two types of divergences. The first one is present already in the case when the point \(x\) does not belong to the boundary. The corresponding divergent part is the same as that in the problem where the branes are absent and it is removed by subtracting from the Hadamard function in (10) the corresponding function in the brane-free geometry. The SEMT is absent in the latter geometry and the brane-free Hadamard function does not contribute to the VEV of the SEMT. The second type of divergence originates from the surface divergences in quantum field theory with boundaries and arises when the point \(x\) belongs to the boundary.
## 3 VEV of the SEMT
### General expression
In the problem under consideration and for the region \(a_{1}\leq x^{1}\leq a_{2}\) the inward-pointing normal is given by \(n_{i}=n_{(j)i}=(-1)^{j}\delta_{i}^{1}\alpha/z\) for the brane at \(x^{1}=a_{j}\). The corresponding induced metric reads \(h_{ik}=g_{ik}\), \(i,k\neq 1\), and \(h_{11}=0\). Now, it can be easily checked that the extrinsic curvature tensor for the branes vanishes, \(K_{ik}=0\). Hence, the VEV of the SEMT is expressed as
\[\langle\tau_{ik}(x)\rangle=\left(\frac{1}{4}-\xi\right)h_{ik}n^{l}\lim_{x^{ \prime}\to x}\nabla_{l}G^{(1)}(x,x^{\prime}). \tag{11}\]
The expression for the Hadamard function in the region between the branes is obtained from the corresponding expression for the Wightman function derived in [62]. It is presented in the decomposed form
\[G^{(1)}(x,x^{\prime}) = G^{(1)}_{j}(x,x^{\prime})+\frac{2(zz^{\prime})^{\frac{D}{2}}}{( 2\pi\alpha)^{D-1}}\int d{\bf k}\,e^{i{\bf k}\Delta{\bf x}}\int_{0}^{\infty}d \gamma\,\gamma J_{\nu}(\gamma z)J_{\nu}(\gamma z^{\prime}) \tag{12}\] \[\times\int_{w}^{\infty}d\lambda\frac{\cosh(\sqrt{\lambda^{2}-w^{2 }}\Delta t)}{\sqrt{\lambda^{2}-w^{2}}}\frac{2\cosh\left[\lambda\left(x^{1}-x^ {\prime 1}\right)\right]+\sum_{l=\pm 1}\left[e^{|x^{1}+x^{\prime 1}-2a_{j}|\lambda}c_{j}( \lambda)\right]^{l}}{c_{1}(\lambda)c_{2}(\lambda)e^{2a\lambda}-1},\]
where \(\Delta{\bf x}={\bf x}-{\bf x}^{\prime}\), \(w=\sqrt{\gamma^{2}+k^{2}}\), \(k=|{\bf k}|\), and
\[c_{j}(\lambda)=\frac{\beta_{j}\lambda-1}{\beta_{j}\lambda+1}. \tag{13}\]
In (12),
\[G_{j}^{(1)}(x,x^{\prime}) = G_{0}^{(1)}(x,x^{\prime})+\frac{(zz^{\prime})^{D/2}}{(2\pi\alpha)^{ D-1}}\int d{\bf k}\,e^{i{\bf k}\Delta{\bf x}}\int_{0}^{\infty}d\gamma\,\gamma J_{ \nu}(\gamma z)J_{\nu}(\gamma z^{\prime}) \tag{14}\] \[\times\int_{0}^{\infty}d\lambda\,\frac{e^{-i\sqrt{\lambda^{2}+w^{ 2}}\Delta t}}{\sqrt{\lambda^{2}+w^{2}}}\,\sum_{l=\pm 1}\left[e^{i[x^{1}+x^{ \prime 1}-2a_{j}]\lambda}c_{j}(i\lambda)\right]^{l},\]
is the Hadamard function in the problem with a brane at \(x^{1}=a_{j}\) when the second brane is absent. Again, it is obtained from the respective Wightman function given in [61, 62]. The first term in the right-hand side, \(G_{0}^{(1)}(x,x^{\prime})\), is the Hadamard function in AdS spacetime without branes. The last term in (12) is interpreted as the contribution to the Hadamard function in the region \(a_{1}\leq x^{1}\leq a_{2}\), induced by the brane at \(x^{1}=a_{j^{\prime}}\) when we add it to the problem with a single brane at \(x^{1}=a_{j}\). Here and below, \(j^{\prime}=1\) for \(j=2\) and \(j^{\prime}=2\) for \(j=1\).
Combining (11) and (12), the SEMT on the brane at \(x^{1}=a_{j}\) is decomposed as
\[\langle\tau_{ik}\rangle_{j}=\langle\tau_{ik}\rangle_{j}^{(0)}+\langle\tau_{ik} \rangle_{j}^{\rm ind}. \tag{15}\]
Here, \(\langle\tau_{ik}\rangle_{j}^{(0)}\) is the VEV of the SEMT when the second brane is absent and \(\langle\tau_{ik}\rangle_{j}^{\rm ind}\) is induced by the second brane at \(x^{1}=a_{j^{\prime}}\). The VEV \(\langle\tau_{ik}\rangle_{j}^{(0)}\) is obtained from (11) with the Hadamard function (14). By taking into account that in the AdS spacetime without branes the SEMT is absent, we get
\[\langle\tau_{i}^{k}\rangle_{j}^{(0)}=(4\xi-1)\frac{\delta_{i}^{k}\beta_{j}z^{D+1}}{(2\pi)^{D-1}\alpha^{D}}\int d{\bf k}\,\int_{0}^{\infty}d\gamma\,\gamma J_{\nu}^{2}(\gamma z)\int_{0}^{\infty}d\lambda\,\frac{1}{\sqrt{\lambda^{2}+w^{2}}}\frac{\lambda^{2}}{1+\lambda^{2}\beta_{j}^{2}},\qquad w=\sqrt{\gamma^{2}+k^{2}}. \tag{16}\]
The vacuum SEMT induced by the second brane comes from the last term in (12). It is presented in the form
\[\langle\tau_{i}^{k}\rangle_{j}^{\rm ind} = (4\xi-1)\frac{2\delta_{i}^{k}\beta_{j}z^{D+1}}{(2\pi)^{D-1}\alpha^{D}}\int d{\bf k}\,\int_{0}^{\infty}d\gamma\,\gamma J_{\nu}^{2}(\gamma z)\int_{w}^{\infty}d\lambda\frac{\lambda^{2}}{\sqrt{\lambda^{2}-w^{2}}} \tag{17}\] \[\times\frac{\beta_{j^{\prime}}\lambda+1}{\beta_{j}\lambda-1}\frac{1}{(\beta_{1}\lambda-1)\,(\beta_{2}\lambda-1)\,e^{2a\lambda}-(\beta_{1}\lambda+1)\,(\beta_{2}\lambda+1)}.\]
The expression (16) for the self-SEMT is divergent and needs a regularization with a subsequent renormalization removing the divergences. This type of surface divergence is well known in quantum field theory with boundaries.
Note that for an observer living on the brane \(x^{1}=a_{j}\) the \(D\)-dimensional line element is obtained from (1) by taking \(dx^{1}=0\). It describes \(D\)-dimensional AdS spacetime generated by a cosmological constant \(\Lambda^{\prime}=(1-2/D)\Lambda\). From the point of view of an observer on the brane, the energy-momentum tensor \(\langle\tau_{i}^{k}\rangle_{j}\) is a source of gravitation with the energy density \(\varepsilon_{j}=\langle\tau_{0}^{0}\rangle_{j}\) and isotropic effective pressure \(p_{j}=-\langle\tau_{2}^{2}\rangle_{j}=\cdots=-\langle\tau_{D}^{D}\rangle_{j}\). The corresponding equation of state reads \(p_{j}=-\varepsilon_{j}\) and, hence, \(\langle\tau_{i}^{k}\rangle_{j}\) is a source of the cosmological constant type. Of course, the latter property is a consequence of the symmetry in the problem under consideration. In accordance with (15), the surface energy density is decomposed into the self-energy and the contribution induced by the second brane:
\[\varepsilon_{j}=\varepsilon_{j}^{(0)}+\varepsilon_{j}^{\rm ind}, \tag{18}\]
where \(\varepsilon_{j}^{\rm ind}=\langle\tau_{0}^{0}\rangle_{j}^{\rm ind}\).
The regularization of the divergent expression in the right-hand side of (16), based on the generalized zeta function approach, is discussed in appendix A. It is decomposed into pole and finite contributions
obtained from (48) in combination with (35). In the principal part prescription the finite self-energy \(\varepsilon_{j}^{(0)}\) is identified with the finite part of the respective Laurent expansion near the physical point \(s=1\). In order to remove the divergent part we note that the VEV \(\langle\tau_{ik}\rangle_{j}\) is a part of a theory which contains other contributions localized on the brane, and the divergences in \(\langle\tau_{ik}\rangle_{j}\) are absorbed by renormalizing the parameters in those contributions. The finite part of the SEMT \(\langle\tau_{ik}\rangle_{j}^{(0)}\) is given by (52). This part contains renormalization ambiguities which can be fixed by imposing additional renormalization conditions. Here the situation is completely parallel to that for the total Casimir energy discussed, for example, in [4]. Similar to (15), the Casimir energy for a system composed of separate bodies is decomposed into the self-energies and the interaction energy. The renormalization is required only for the self-energies.
Unlike the self-energy part \(\varepsilon_{j}^{(0)}\), the surface energy density \(\varepsilon_{j}^{\rm ind}\) and the related SEMT \(\langle\tau_{i}^{k}\rangle_{j}^{\rm ind}\) are finite and uniquely defined. Our main concern in the discussion below is that part of the energy-momentum tensor. Integrating over the angular coordinates of \({\bf k}\), passing to the variable \(u=\sqrt{\lambda^{2}-w^{2}}\), and introducing polar coordinates in the plane \((k,u)\), we integrate over the related polar angle:
\[\langle\tau_{i}^{k}\rangle_{j}^{\rm ind} = \frac{(4\xi-1)\delta_{i}^{k}\beta_{j}z^{D+1}}{2^{D-2}\pi^{\frac{ D-1}{2}}\Gamma(\frac{D-1}{2})\alpha^{D}}\,\int_{0}^{\infty}d\gamma\,\gamma J_{ \nu}^{2}(\gamma z)\int_{0}^{\infty}dr\,r^{D-2}\,\frac{\beta_{j^{\prime}} \lambda+1}{\beta_{j}\lambda-1} \tag{19}\] \[\times\left.\frac{\lambda}{\left(\beta_{1}\lambda-1\right)\left( \beta_{2}\lambda-1\right)e^{2a\lambda}-\left(\beta_{1}\lambda+1\right)\left( \beta_{2}\lambda+1\right)}\right|_{\lambda=\sqrt{\gamma^{2}+r^{2}}}.\]
Next we introduce polar coordinates in the plane \((\gamma,r)\). The angular integral is evaluated by using the result [67]
\[\int_{0}^{1}dxx(1-x^{2})^{\frac{D-3}{2}}J_{\nu}^{2}(ux)=\frac{\Gamma(\frac{D- 1}{2})}{2^{2\nu+1}}u^{2\nu}F_{\nu}(u), \tag{20}\]
with the function
\[F_{\nu}(u)=\frac{{}_{1}F_{2}(\nu+\frac{1}{2};\frac{D+1}{2}+\nu,1+2\nu;-u^{2}) }{\Gamma(\frac{D+1}{2}+\nu)\Gamma(1+\nu)}. \tag{21}\]
Here, \({}_{1}F_{2}(a;b,c;x)\) is the hypergeometric function. This gives
\[\langle\tau_{i}^{k}\rangle_{j}^{\rm ind}=\frac{(4\xi-1)\delta_{i}^{k}\beta_{j }z^{D+2\nu+1}}{2^{D+2\nu-1}\pi^{\frac{D-1}{2}}\alpha^{D}}\,\int_{0}^{\infty}d \lambda\,\frac{\beta_{j^{\prime}}\lambda+1}{\beta_{j}\lambda-1}\frac{\lambda^ {D+2\nu+1}F_{\nu}(\lambda z)}{\left(\beta_{1}\lambda-1\right)\left(\beta_{2} \lambda-1\right)e^{2a\lambda}-\left(\beta_{1}\lambda+1\right)\left(\beta_{2} \lambda+1\right)}. \tag{22}\]
From here it follows that the induced SEMT on the brane \(x^{1}=a_{j}\) vanishes for special cases of Dirichlet and Neumann boundary conditions on that brane. Depending on the coefficients \(\beta_{j}\) and on the separation between the branes, the induced energy density \(\varepsilon_{j}^{\rm ind}\) can be either positive or negative (see numerical examples below). Introducing a new integration variable \(u=\lambda z\), we see that the product \(\alpha^{D}\langle\tau_{i}^{k}\rangle_{j}^{\rm ind}\) depends on the quantities \(z\), \(a_{j}\), \(\beta_{j}\), having dimension of length, in the form of two dimensionless ratios \(a/z\), \(\beta_{j}/z\). Those ratios are the proper values of the quantities, measured by an observer with fixed \(z\), in units of the curvature radius \(\alpha\). This feature is a consequence of the AdS maximal symmetry.
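The representation (22) is well suited for direct numerical evaluation. The sketch below (our own illustration rather than code from the paper) computes \(\alpha^{D}\varepsilon_{1}^{\rm ind}\) with the mpmath library, evaluating the hypergeometric function in (21) through mpmath's hyp1f2; the parameter values are assumptions chosen to be similar to those used in the figures of Section 4, and lengths are measured in units of \(z\).

```python
# A sketch (not code from the paper) evaluating the brane-induced surface energy
# density of Eq. (22) by direct numerical quadrature; all parameter values are
# illustrative, and lengths are measured in units of z (i.e. z = 1).
import mpmath as mp

mp.mp.dps = 30                                     # working precision

D = 4                                              # number of spatial dimensions
xi = mp.mpf(0)                                     # curvature coupling (0 = minimal)
malpha = mp.mpf('0.5')                             # m*alpha
nu = mp.sqrt(D**2/mp.mpf(4) - D*(D + 1)*xi + malpha**2)   # Eq. (6)

beta1, beta2 = mp.mpf('-0.5'), mp.mpf('-0.25')     # beta_j/z
a = mp.mpf(1)                                      # a/z
bj, bjp = beta1, beta2                             # density on brane 1, induced by brane 2

def F_nu(u):
    """F_nu(u) of Eq. (21)."""
    return mp.hyp1f2(nu + mp.mpf('0.5'), (D + 1)/mp.mpf(2) + nu, 1 + 2*nu, -u**2) \
        / (mp.gamma((D + 1)/mp.mpf(2) + nu)*mp.gamma(1 + nu))

def integrand(lam):
    num = (bjp*lam + 1)/(bj*lam - 1)*lam**(D + 2*nu + 1)*F_nu(lam)
    den = (beta1*lam - 1)*(beta2*lam - 1)*mp.exp(2*a*lam) - (beta1*lam + 1)*(beta2*lam + 1)
    return num/den

prefactor = (4*xi - 1)*bj/(mp.mpf(2)**(D + 2*nu - 1)*mp.pi**((D - 1)/mp.mpf(2)))
# the integrand is exponentially suppressed for 2*a*lam >> 1, so a finite cutoff suffices
eps_ind = prefactor*mp.quad(integrand, [0, 30])
print("alpha^D * eps_1^ind =", eps_ind)
```

Varying \(\beta_{2}/z\) or \(a/z\) in this sketch allows one to reproduce curves of the type shown in Figures 2 and 3.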
### Minkowskian limit and a conformally coupled massless field
To clarify the features of the SEMT on the branes we consider special cases and asymptotic regions of the parameters. First we discuss the Minkowskian limit corresponding to \(\alpha\to\infty\) for fixed coordinate \(y\). For the coordinate \(z\), in the leading order, one has \(z\approx\alpha\) and the line element (1) tends to the Minkowskian interval \(ds_{\rm M}^{2}=dt^{2}-\left(dx^{1}\right)^{2}-d{\bf x}^{2}-dy^{2}\). The geometry of the corresponding problem consists of two parallel plates at \(x^{1}=a_{1}\) and \(x^{1}=a_{2}\) with the boundary condition \((1-(-1)^{j}\beta_{j}\partial_{1})\varphi(x)=0\) at \(x^{1}=a_{j}\) in the region \(a_{1}\leq x^{1}\leq a_{2}\). For large values of \(\alpha\) and for a massive field the parameter \(\nu\) is large, \(\nu\approx m\alpha\), and
one needs the asymptotic of the function \(F_{\nu}(\lambda z)\) when both the argument and the order are large. The respective analysis given in [61] shows that the function \(F_{\nu}(\nu\lambda/m)\) is exponentially suppressed for \(\nu\gg 1\) and \(\lambda<m\). For \(\lambda>m\) the leading behavior is approximated by [61]
\[F_{\nu}\left(\frac{\nu}{m}\lambda\right)\approx\frac{\left(\lambda^{2}-m^{2} \right)^{\frac{D}{2}-1}\left(2m/\nu\right)^{2\nu+1}}{2\sqrt{\pi}\Gamma(\frac{D }{2})\lambda^{D+2\nu-1}}. \tag{23}\]
By using this asymptotic for the part of the integral in (22) over the region \(m\leq\lambda<\infty\), one obtains the SEMT on the plate \(x^{1}=a_{j}\) in Minkowski spacetime, \(\langle\tau_{i}^{k}\rangle_{\rm(M)j}^{\rm ind}=\lim_{\alpha\to\infty}\langle \tau_{i}^{k}\rangle_{j}^{\rm ind}\), given by
\[\langle\tau_{i}^{k}\rangle_{\rm(M)j}^{\rm ind}=\frac{(4\xi-1)\delta_{i}^{k} \beta_{j}}{2^{D-1}\pi^{\frac{D}{2}}\Gamma(\frac{D}{2})}\,\int_{m}^{\infty}d \lambda\,\frac{\beta_{j^{\prime}}\lambda+1}{\beta_{j}\lambda-1}\frac{\lambda ^{2}\left(\lambda^{2}-m^{2}\right)^{\frac{D}{2}-1}}{\left(\beta_{1}\lambda-1 \right)\left(\beta_{2}\lambda-1\right)e^{2a\lambda}-\left(\beta_{1}\lambda+1 \right)\left(\beta_{2}\lambda+1\right)}. \tag{24}\]
This result for a massive field was obtained in [64] as a limiting case of the problem with two branes in AdS spacetime parallel to the AdS boundary. In the case of a massless field, the expression for \(\langle\tau_{i}^{k}\rangle_{\rm(M)1}^{\rm ind}+\langle\tau_{i}^{k}\rangle_{\rm(M)2}^{\rm ind}\), obtained from (24), coincides with the result derived in [66]. The VEV of the SEMT for a single Robin boundary in the background of (3+1)-dimensional Minkowski spacetime has also been considered in [68, 69].
In the case of a massless field with conformal coupling one has \(\xi=\xi_{D}=\frac{D-1}{4D}\) and \(\nu=1/2\). By taking into account that \(J_{1/2}(x)=\sqrt{\frac{2}{\pi x}}\sin x\), from (20) we get [61]
\[F_{1/2}(u)=\frac{2}{\sqrt{\pi}u^{2}}\left[\frac{1}{\Gamma\left(\frac{D}{2} \right)}-\frac{J_{\frac{D}{2}-1}(2u)}{u^{\frac{D}{2}-1}}\right]. \tag{25}\]
Substituting this expression in (22) we get
\[\varepsilon_{j}^{\rm ind}=\left(z/\alpha\right)^{D}\varepsilon_{\rm(M)j}^{\rm ind}, \tag{26}\]
with
\[\varepsilon_{\rm(M)j}^{\rm ind} = -\frac{2^{1-D}\beta_{j}}{D\pi^{\frac{D}{2}}}\,\int_{0}^{\infty}d \lambda\,\frac{\beta_{j^{\prime}}\lambda+1}{\beta_{j}\lambda-1}\left[\frac{1} {\Gamma\left(\frac{D}{2}\right)}-\frac{J_{\frac{D}{2}-1}(2\lambda z)}{(\lambda z )^{\frac{D}{2}-1}}\right] \tag{27}\] \[\times\frac{\lambda^{D}}{\left(\beta_{1}\lambda-1\right)\left( \beta_{2}\lambda-1\right)e^{2a\lambda}-\left(\beta_{1}\lambda+1\right)\left( \beta_{2}\lambda+1\right)}.\]
For a conformally coupled massless scalar field the problem we consider is conformally related to the problem of two Robin plates at \(x^{1}=a_{j}\), \(j=1,2\), in Minkowski spacetime described by the interval \(ds_{\rm M}^{2}=dt^{2}-\left(dx^{1}\right)^{2}-d\mathbf{x}^{2}-dz^{2}\), intersected by a Dirichlet plate located at \(z=0\). The presence of the latter is related to the boundary condition for scalar field modes imposed on the AdS boundary \(z=0\). The surface energy density (27) is induced on the plate \(x^{1}=a_{j}\) by the presence of the second plate \(x^{1}=a_{j^{\prime}}\). The part of \(\varepsilon_{\rm(M)j}^{\rm ind}\) coming from the first term in the square brackets is the respective quantity in the geometry where the plate \(z=0\) is absent (see (24) for \(m=0\)). The part with the second term is a consequence of the presence of the plate \(z=0\). Note that \(\varepsilon_{\rm(M)j}^{\rm ind}\) vanishes on that plate: \(\varepsilon_{\rm(M)j}^{\rm ind}|_{z=0}=0\). This is a consequence of Dirichlet boundary condition at \(z=0\).
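As a quick consistency check of the closed form (25), it can be compared numerically with the general hypergeometric representation (21) evaluated at \(\nu=1/2\). The short sketch below (added here as an illustration; the values of \(D\) and \(u\) are arbitrary test values) prints both expressions side by side.

```python
# A small consistency check (not from the paper): for a conformally coupled massless
# field, nu = 1/2, and the closed form (25) should reproduce the definition (21).
import mpmath as mp

D = 4

def F_generic(nu, u):
    """F_nu(u) from Eq. (21)."""
    return mp.hyp1f2(nu + 0.5, (D + 1)/2 + nu, 1 + 2*nu, -u**2) \
        / (mp.gamma((D + 1)/2 + nu) * mp.gamma(1 + nu))

def F_half_closed(u):
    """Closed form (25) for nu = 1/2."""
    return 2/(mp.sqrt(mp.pi)*u**2) * (1/mp.gamma(D/2) - mp.besselj(D/2 - 1, 2*u)/u**(D/2 - 1))

for u in [mp.mpf('0.3'), mp.mpf(1), mp.mpf(3)]:
    print(u, F_generic(mp.mpf('0.5'), u), F_half_closed(u))
```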
## 4 Asymptotics and numerical analysis
In this section, the behavior of the VEV for SEMT in asymptotic regions of the parameters is studied. We start with the asymptotics at small and large separations between the branes. For a given \(z\), the
proper separation between the branes is given by \(a_{(p)}=\alpha a/z\). For small proper separations compared to the curvature radius one has \(a/z\ll 1\) and the integral in (22) is dominated by the contribution of the region with large values of the argument of the function \(F_{\nu}(\lambda z)\). By using the corresponding asymptotic [61]
\[F_{\nu}(u)\approx\frac{2^{2\nu}}{\sqrt{\pi}\Gamma\left(\frac{D}{2}\right)u^{2 \nu+1}},\;u\gg 1, \tag{28}\]
we can see that the relation
\[\langle\tau_{i}^{k}\rangle_{j}^{\rm ind}\approx\left(z/\alpha\right)^{D} \langle\tau_{i}^{k}\rangle_{({\rm M})j}^{\rm ind}|_{m=0}, \tag{29}\]
takes place, where \(\langle\tau_{i}^{k}\rangle_{({\rm M})j}^{\rm ind}|_{m=0}\) is given by (24) with \(m=0\). In the limit under consideration the main contribution to the SEMT comes from the zero-point fluctuations with wavelengths smaller than the curvature radius and the effect of the gravitational field is weak. The asymptotic (29) is further simplified if the separation \(a\) is smaller than the length scales determined by the boundary conditions, \(a/|\beta_{l}|\ll 1\), \(l=1,2\). For Dirichlet boundary condition on the brane \(x^{1}=a_{j^{\prime}}\), \(\beta_{j^{\prime}}=0\), the condition \(a/|\beta_{j}|\ll 1\) is assumed. Under those conditions we have \(\lambda|\beta_{l}|\gg 1\) (\(\lambda|\beta_{j}|\gg 1\) in the case \(\beta_{j^{\prime}}=0\)) for the region of \(\lambda\) that dominates in the integral on the right-hand side of (24) (with \(m=0\)). In the leading order one gets
\[\langle\tau_{i}^{k}\rangle_{j}^{\rm ind}\approx\delta_{i}^{k}\frac{\left(z/ \alpha\right)^{D}\left(4\xi-1\right)}{2^{D}\pi^{\frac{D+1}{2}}a^{D-1}}\zeta \left(D-1\right)\Gamma\left(\frac{D-1}{2}\right)\left\{\begin{array}{ll}1/ \beta_{j^{\prime}},&\beta_{j^{\prime}}\neq 0\\ \left(2^{2-D}-1\right)/\beta_{j},&\beta_{j^{\prime}}=0\end{array}\right., \tag{30}\]
with \(\zeta\left(u\right)\) being the Riemann zeta function. Note that the asymptotic (29) also describes the behavior of the SEMT near the AdS horizon. As it is seen from (30), in the special cases of minimally (\(\xi=0\)) and conformally (\(\xi=\xi_{D}\)) coupled fields and for small separations between the branes the energy density induced on the brane \(x^{1}=a_{j}\) by the second brane is positive for \(\beta_{j^{\prime}}<0\) and negative for \(\beta_{j^{\prime}}>0\). For Dirichlet boundary condition on the second brane (\(\beta_{j^{\prime}}=0\)) the sign of the induced energy density coincides with the sign of the product \(\left(1-4\xi\right)\beta_{j}\).
In the opposite limit of large proper separations compared with the curvature radius, we have \(a/z\gg 1\) and the main contribution to the integral in (22) comes from the region near the lower limit, corresponding to \(\lambda z\ll 1\). In the leading order, replacing the function \(F_{\nu}(\lambda z)\) by
\[F_{\nu}(0)=\frac{1}{\Gamma\left(\nu+1\right)\Gamma\left(\frac{D+1}{2}+\nu \right)}, \tag{31}\]
one gets
\[\langle\tau_{i}^{k}\rangle_{j}^{\rm ind}\approx\frac{8(4\xi-1)\delta_{i}^{k} \left(z/2\right)^{D+2\nu+2}\beta_{j}/z}{\pi^{\frac{D-1}{2}}\Gamma\left(\nu+1 \right)\Gamma\left(\frac{D+1}{2}+\nu\right)\alpha^{D}}\;\int_{0}^{\infty}d \lambda\,\frac{\lambda\beta_{j^{\prime}}+1}{\lambda\beta_{j}-1}\frac{\lambda ^{D+2\nu+1}}{\left(\lambda\beta_{1}-1\right)\left(\lambda\beta_{2}-1\right)e^ {2\lambda a}-\left(\lambda\beta_{1}+1\right)\left(\lambda\beta_{2}+1\right)}. \tag{32}\]
This expression is further simplified for separations larger than the length scales in Robin boundary conditions. Assuming \(a\gg|\beta_{l}|\), \(l=1,2\), we see that \(\lambda|\beta_{l}|\ll 1\) for the region giving the dominant contribution to the integral in (32). For the case of Neumann boundary condition on the brane \(x^{1}=a_{j^{\prime}}\), corresponding to the limit \(|\beta_{j^{\prime}}|\to\infty\), for separations \(a\gg|\beta_{j}|\) one has \(\lambda|\beta_{j}|\ll 1\) in the region with dominant contribution to the integral. For the leading order term in the VEV of the SEMT and for non-Neumann boundary conditions on the second brane we find
\[\langle\tau_{i}^{k}\rangle_{j}^{\rm ind}\approx\delta_{i}^{k}\frac{\left(1-4 \xi\right)\zeta\left(D+2\nu+2\right)\beta_{j}/a}{\pi^{\frac{D}{2}}\Gamma \left(\nu+1\right)\alpha^{D}\left(2a/z\right)^{D+2\nu+1}}\;\left(D+2\nu+1 \right)\Gamma\left(\frac{D}{2}+\nu+1\right). \tag{33}\]
For Neumann boundary condition on the second brane, an additional factor \((2^{-D-2\nu-1}-1)\) should be added in the right-hand side of (33). We see that at large distances between the branes the decay of
the SEMT, as a function of the proper separation, is a power law for both massive and massless fields. This feature for massive fields is in contrast with the corresponding behavior for parallel plates in the Minkowski bulk, where the suppression is exponential, by the factor \(e^{-2ma}\). We note that the formula (32) also gives the asymptotic of the SEMT near the AdS boundary. As seen, for fixed \(\beta_{j}\), the SEMT tends to zero on the AdS boundary like \(z^{D+2\nu+1}\). The asymptotic estimate (33) shows that for \(\beta_{j}<0\) and for non-Neumann boundary conditions on the second brane (\(1/\beta_{j^{\prime}}\neq 0\)), at large separations between the branes the induced energy density \(\varepsilon_{j}^{\rm ind}\) is negative for minimally and conformally coupled fields.
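The accuracy of the large-separation approximation is easy to probe numerically. The sketch below (our own illustration with assumed parameter values) evaluates the exact expression (22) and the leading term (33) for several values of \(a/z\); the two outputs should approach each other as \(a/z\) grows.

```python
# An illustrative sketch (not from the paper) comparing the exact induced energy
# density of Eq. (22) with the large-separation asymptotic (33); z = 1 and all
# parameter values are assumptions made for the example.
import mpmath as mp

mp.mp.dps = 30
D, xi, malpha = 4, mp.mpf(0), mp.mpf('0.5')
nu = mp.sqrt(D**2/mp.mpf(4) - D*(D + 1)*xi + malpha**2)
beta1, beta2 = mp.mpf('-0.5'), mp.mpf('-0.25')     # beta_j/z
bj, bjp = beta1, beta2                             # density on brane 1

def F_nu(u):
    return mp.hyp1f2(nu + mp.mpf('0.5'), (D + 1)/mp.mpf(2) + nu, 1 + 2*nu, -u**2) \
        / (mp.gamma((D + 1)/mp.mpf(2) + nu)*mp.gamma(1 + nu))

def eps_exact(a):
    """alpha^D * eps_1^ind from Eq. (22), z = 1."""
    pref = (4*xi - 1)*bj/(mp.mpf(2)**(D + 2*nu - 1)*mp.pi**((D - 1)/mp.mpf(2)))
    f = lambda lam: (bjp*lam + 1)/(bj*lam - 1)*lam**(D + 2*nu + 1)*F_nu(lam) \
        / ((beta1*lam - 1)*(beta2*lam - 1)*mp.exp(2*a*lam) - (beta1*lam + 1)*(beta2*lam + 1))
    return pref*mp.quad(f, [0, 10])                # integrand negligible beyond the cutoff

def eps_asympt(a):
    """Leading large-separation term, Eq. (33), z = 1."""
    return (1 - 4*xi)*mp.zeta(D + 2*nu + 2)*bj/a*(D + 2*nu + 1) \
        * mp.gamma(D/mp.mpf(2) + nu + 1)/(mp.pi**(D/mp.mpf(2))*mp.gamma(nu + 1)*(2*a)**(D + 2*nu + 1))

for a in [mp.mpf(5), mp.mpf(10), mp.mpf(20)]:
    print("a/z =", a, " exact:", eps_exact(a), " asymptotic:", eps_asympt(a))
```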
Figure 2 presents the VEV of the energy density, induced on the brane at \(x^{1}=a_{1}\) by the brane at \(x^{1}=a_{2}\), as a function of the proper separation between the branes \(a/z\). The graphs are plotted for a scalar field in (4+1)-dimensional AdS spacetime (\(D=4\)), for Robin boundary condition with \(\beta_{1}/z=-0.5\) and with the mass corresponding to \(m\alpha=0.5\). The dependence on the proper separation is displayed for different values of the ratio \(\beta_{2}/z\) (the numbers near the curves) and for Dirichlet and Neumann boundary conditions on the second brane. The left and right panels correspond to conformally and minimally coupled fields, respectively. In accordance with the asymptotic analysis given above, for minimally and conformally coupled fields and at small separations between the branes, the energy density, induced by the second brane, is positive (negative) for non-Dirichlet (Dirichlet) boundary condition on the second brane. At large separations the energy density is negative for non-Neumann boundary conditions on the second brane and is positive for Neumann boundary condition.
In Figure 3, for conformally (left panel) and minimally (right panel) coupled scalar fields in \(D=4\) spatial dimensions, we have plotted the dependence of the energy density \(\varepsilon_{1}^{\rm ind}\) on the Robin coefficient \(\beta_{1}/z\) for different values of the Robin coefficient \(\beta_{2}/z\) on the second brane (the numbers near the curves) and for Dirichlet and Neumann boundary conditions. The graphs are plotted for \(m\alpha=0.5\) and \(a/z=1\).
The dependence of the surface energy density on the mass of the field (in units of \(1/\alpha\)) is displayed in Figure 4 for conformally (left panel) and minimally (right panel) coupled scalar field in spatial dimensions \(D=4\). The graphs are plotted for \(a/z=1\), \(\beta_{1}/z=-0.5\), and for different values of the ratio \(\beta_{2}/z\) (the numbers near the curves). The graphs corresponding to Robin boundary conditions, \(-\infty<\beta_{2}/z<0\), are located between the graphs corresponding to Neumann and Dirichlet boundary conditions on the second brane (\(\beta_{2}/z=-\infty\) and \(\beta_{2}/z=0\), respectively). As seen, the induced energy density, in general, is not a monotonic function of the field mass. In addition, for fixed values of the other parameters it may change
Figure 2: The induced surface energy density on the brane at \(x^{1}=a_{1}\), in units of \(\alpha^{-D}\), versus the proper separation between the branes for \(D=4\), \(m\alpha=0.5\), and \(\beta_{1}/z=-0.5\). The graphs are presented for different values of the ratio \(\beta_{2}/z\) (the numbers near the curves) and for Dirichlet and Neumann boundary conditions on the second brane (\(\beta_{2}/z=0\) and \(\beta_{2}/z=\infty\), respectively).
the sign as a function of the mass. In particular, that is the case for a minimally coupled field with the boundary conditions corresponding to \(\beta_{1}/z=-0.5\) and \(\beta_{2}/z=-0.25\) (see the right panel in Figure 4).
## 5 Conclusion
For a scalar field with general curvature coupling, we have studied the VEV of the SEMT induced on branes in AdS spacetime orthogonal to its boundary. On the branes the field operator is constrained by the boundary conditions (3) or, equivalently, by (4). To ensure the stability of the vacuum state, the values of the parameters in Robin boundary conditions are restricted by (7). For the geometry of the branes under consideration the extrinsic curvature tensor is zero and the general formula for the SEMT is simplified to (11). From the viewpoint of observers living on the branes this SEMT represents a gravitational source with the equation of state of a cosmological constant. In order to evaluate the corresponding VEV we use the Hadamard function, obtained from the positive frequency Wightman function from [62]. In the region between the branes the Hadamard function is decomposed into a single-brane contribution and the contribution induced by the second brane. This allows us to separate from the total VEV of the SEMT the part generated by the second brane. The surface divergences are contained in the self-energy contributions on the branes and the renormalization is required for those parts only. In order to extract the finite parts in the respective VEVs, in Appendix A we have employed the regularization procedure based on the generalized zeta function approach. The divergences appearing in the form of simple poles are absorbed by the renormalization of the respective parameters in the "classical" action localized on the branes. The finite part of the SEMT separated in this way contains renormalization ambiguities and additional conditions are required to obtain a unique result. Here the situation is completely parallel to the one for the self-energy in the Casimir effect in the geometry of a single boundary (see, for example, the respective discussion in [4]).
The part of the SEMT induced on the brane by the presence of the second brane is finite and uniquely defined. The induced SEMT on the brane \(x^{1}=a_{j}\) is given by the expression (22). It vanishes in the special cases of Dirichlet and Neumann boundary conditions on that brane. As a consequence of the maximal symmetry of AdS spacetime, for the general case of Robin boundary conditions the dimensionless quantity \(\alpha^{D}\langle\tau_{i}^{k}\rangle_{j}^{\rm ind}\) is completely determined by the dimensionless ratios \(a/z\) and \(\beta_{j}/z\), \(j=1,2\). The first one is the
Figure 3: The induced surface energy density on the brane at \(x^{1}=a_{1}\) versus the Robin coefficient \(\beta_{1}/z\) for different values of \(\beta_{2}/z\) (the numbers near the curves, \(\beta_{2}/z=0\) and \(\beta_{2}/z=-\infty\) for Dirichlet and Neumann conditions). The graphs are plotted for conformally (left panel) and minimally (right panel) coupled fields and for \(D=4\), \(m\alpha=0.5\), and \(a/z=1\).
proper separation between the branes, measured by an observer with fixed \(z\) in units of the curvature radius \(\alpha\). The VEV of the SEMT for Robin parallel plates in the Minkowski bulk is obtained from (22) in the limit \(\alpha\rightarrow\infty\) and is expressed as (24). The latter includes special cases previously discussed in the literature and coincides with the result obtained in [64] as the limit \(\alpha\rightarrow\infty\) of the SEMT in the geometry of branes parallel to the AdS boundary. For a conformally coupled massless field the problem in the AdS bulk is conformally related to the problem in Minkowski spacetime consisting of two parallel Robin plates perpendicularly intersected by a Dirichlet plate, the latter being the image of the AdS boundary. The VEV in the Minkowski counterpart is given by the formula (27), where the contribution of the Dirichlet plate comes from the term in the square brackets with the Bessel function.
At small separations between the branes, compared to the curvature radius and the length scales determined by the Robin coefficients, the influence of the gravitational field on the SEMT is small and the leading term in the respective expansion is expressed by (30). In this limit and for non-Dirichlet (Dirichlet) boundary conditions on the brane \(x^{1}=a_{j^{\prime}}\) the sign of the surface energy density induced on the brane \(x^{1}=a_{j}\) coincides with the sign of the product \((4\xi-1)\beta_{j^{\prime}}\) (\((1-4\xi)\beta_{j}\)). The effects of the gravitational field are essential at proper separations between the branes of the order of or larger than the curvature scale of the background geometry. Additionally assuming that the separation is larger than the length scales fixed by the boundary conditions, the leading behavior of the induced SEMT is described by (33) for non-Neumann boundary conditions on the second brane. The sign of the energy density coincides with the sign of \((1-4\xi)\,\beta_{j}\). For Neumann condition on the second brane, an additional factor \((2^{-D-2\nu-1}-1)\) should be added in the right-hand side and the energy density at large distances has the opposite sign. An important feature of the large distance behavior of the SEMT is the power law decay as a function of the proper separation. For parallel plates in Minkowski spacetime the respective decay for massive fields is exponential. The induced surface energy density vanishes on the AdS boundary like \(z^{D+2\nu+1}\) and behaves as \(\left(z/\alpha\right)^{D}\) near the AdS horizon.
Figure 4: The dependence of the surface energy density on the first brane, induced by the second brane, versus the field mass for conformally and minimally coupled fields (left and right panels, respectively). The graphs are plotted for \(D=4\), \(a/z=1\), \(\beta_{1}/z=-0.5\) and for separate values of \(\beta_{2}/z\) (the numbers near the curves). The graphs for Dirichlet and Neumann boundary conditions on the second brane are presented as well.
## Acknowledgments
The work was supported by the grant No. 21AG-1C047 of the Higher Education and Science Committee of the Ministry of Education, Science, Culture and Sport RA.
## Appendix A Surface densities for a single brane
We have seen that the VEV of the SEMT for a single brane at \(x^{1}=a_{j}\) is presented in the form (16). The corresponding expression is divergent and we can regularize it by using the generalized zeta function approach (for a general introduction and applications in the theory of the Casimir effect see, e.g., [70, 71, 72]). Let us consider the function
\[F(s,z)=\frac{\mu^{s-1}\beta_{j}z^{D+1}}{(2\pi)^{D-1}}\int_{0}^{\infty}d\gamma \,\gamma J_{\nu}^{2}(\gamma z)\int_{0}^{\infty}d\lambda\,\lambda^{2}\int d{ \bf k}\,\frac{\left(\lambda^{2}+\gamma^{2}+k^{2}\right)^{-\frac{s}{2}}}{1+ \lambda^{2}\beta_{j}^{2}}, \tag{34}\]
with, in general, complex argument \(s\). As will be seen below, the expression on the right-hand side is finite for \(\mathop{\rm Re}\nolimits s>D\). The scale parameter \(\mu\), having dimension of inverse length, is introduced to keep the function \(F(s,z)\) dimensionless. Following the principal part prescription, considered previously in the literature for the total Casimir energy in ultrastatic manifolds with boundaries (see [70, 71, 73]), the SEMT in the geometry of a single brane is obtained as
\[\left\langle\tau_{i}^{k}\right\rangle_{j}^{(0)}=\delta_{i}^{k}\frac{4\xi-1}{ \alpha^{D}}{\rm PP}\left[F(s,z)\right]_{s=1}, \tag{35}\]
where \({\rm PP}\left[F(s,z)\right]_{s=1}\) corresponds to the finite part of the Laurent expansion of the function \(F(s,z)\) near \(s=1\). The evaluation of that part is reduced to the extraction of the pole term.
The integral over \({\bf k}\) in (34) is expressed in terms of the gamma function and we get
\[F(s,z)=\frac{\mu^{s-1}\beta_{j}z^{D+1}}{2^{D-1}\pi^{D/2}}\frac{\Gamma(1-\frac{ D-s}{2})}{\Gamma(\frac{s}{2})}\int_{0}^{\infty}d\gamma\,\gamma J_{\nu}^{2}( \gamma z)\int_{0}^{\infty}d\lambda\,\lambda^{2}\frac{\left(\lambda^{2}+\gamma ^{2}\right)^{\frac{D-s}{2}-1}}{1+\lambda^{2}\beta_{j}^{2}}. \tag{36}\]
For the further transformation of the expression in the right-hand side of (36) we use the integral representation
\[\left(\lambda^{2}+\gamma^{2}\right)^{\frac{D-s}{2}-1}=\frac{1}{\Gamma\left(1- \frac{D-s}{2}\right)}\int_{0}^{\infty}dx\,x^{\frac{s-D}{2}}e^{-\left(\lambda^ {2}+\gamma^{2}\right)x}. \tag{37}\]
With this representation, the integral over \(\gamma\) is evaluated by the formula [67]:
\[\int_{0}^{\infty}d\gamma\,\gamma J_{\nu}^{2}(\gamma z)e^{-\gamma^{2}x}=\frac{ 1}{2x}\exp\left(-\frac{z^{2}}{2x}\right)I_{\nu}\left(\frac{z^{2}}{2x}\right), \tag{38}\]
with \(I_{\nu}\left(u\right)\) being the modified Bessel function. Passing to a new integration variable \(u=z^{2}/(2x)\), one finds
\[F(s,z)=\frac{\mu^{s-1}\beta_{j}z^{s+1}}{2^{\frac{D+s}{2}}\pi^{D/2}\Gamma(\frac{s}{2})}\int_{0}^{\infty}du\,u^{\frac{D-s}{2}-1}e^{-u}I_{\nu}\left(u\right)\int_{0}^{\infty}d\lambda\,\frac{\lambda^{2}e^{-\lambda^{2}\frac{z^{2}}{2u}}}{1+\lambda^{2}\beta_{j}^{2}}. \tag{39}\]
The \(\lambda\)-integral is evaluated in terms of the complementary incomplete gamma function \(\Gamma(-1/2,x)\). As a result, the function \(F(s,z)\) is presented as
\[F(s,z)=\frac{\left(\mu z\right)^{s-1}\beta_{j}z^{2}}{2^{\frac{D+s}{2}+2}\pi^{ \frac{D-1}{2}}\Gamma(\frac{s}{2})|\beta_{j}|^{3}}\int_{0}^{\infty}du\,u^{\frac {D-s}{2}-1}S\left(2\beta_{j}^{2}/z^{2},u\right), \tag{40}\]
where we have introduced the function
\[S(b,u)=e^{-u}I_{\nu}\left(u\right)e^{\frac{1}{bu}}\Gamma\left(-\frac{1}{2},\frac{ 1}{bu}\right). \tag{41}\]
In the limit \(u\rightarrow\infty\) the function (41) tends to the limiting value \(\sqrt{2b/\pi}\) and \(\lim_{u\to 0}S(b,u)=0\). This shows that the representation (40) is valid in the region \(\mbox{Re}\,s>D\) of the complex plane \(s\).
The divergence of the integral in (40) at \(s=1\) comes from the divergence at the upper limit of the integral. By using the expansions of the functions \(e^{-u}I_{\nu}\left(u\right)\) and \(e^{\frac{1}{bu}}\Gamma\left(-\frac{1}{2},\frac{1}{bu}\right)\) (see, for example, [74]) for large values of \(u\), the following expansion is obtained:
\[S(b,u)=\sqrt{\frac{2b}{\pi}}\sum_{n=0}^{\infty}\left[\frac{A_{n}(b)}{u^{n}}- \sqrt{\pi}\frac{B_{n}(b)}{u^{n+\frac{1}{2}}}\right]. \tag{42}\]
For the coefficients one has
\[A_{0} = 1,\;A_{1}=\frac{2}{b}-\frac{1}{2}\left(\nu^{2}-\frac{1}{4} \right),\] \[A_{2} = \frac{4}{3b^{2}}+\left(\nu^{2}-\frac{1}{4}\right)\left[\frac{1}{ 8}\left(\nu^{2}-\frac{9}{4}\right)-\frac{1}{b}\right], \tag{43}\]
and
\[B_{0} = \frac{1}{\sqrt{b}},\;B_{1}=\frac{1}{b^{\frac{3}{2}}}-\frac{1}{2 \sqrt{b}}\left(\nu^{2}-\frac{1}{4}\right),\] \[B_{2} = \frac{1}{2\sqrt{b}}\left[\frac{1}{b^{2}}+\left(\nu^{2}-\frac{1}{ 4}\right)\left(\frac{1}{4}\left(\nu^{2}-\frac{9}{4}\right)-\frac{1}{b}\right) \right]. \tag{44}\]
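The expansion (42) with the coefficients (43) and (44) can be verified numerically against the exact function (41). In the following sketch (added as an illustration; the values of \(\nu\), \(b\) and \(u\) are arbitrary test values) the sum is truncated at \(N=2\) and compared with \(S(b,u)\); the agreement improves as \(u\) grows.

```python
# A small numerical check (not from the paper) of the large-u expansion (42) with
# the coefficients (43)-(44), truncated at N = 2, against the exact S(b,u) of (41).
import mpmath as mp

nu = mp.mpf('2.06')        # roughly the value for D=4, xi=0, m*alpha=0.5 (illustrative)
b = mp.mpf('0.5')          # b_j = 2*beta_j**2/z**2 (illustrative)

def S_exact(u):
    x = 1/(b*u)
    return mp.exp(-u)*mp.besseli(nu, u) * mp.exp(x)*mp.gammainc(mp.mpf('-0.5'), x)

def S_expansion(u):
    A = [1,
         2/b - (nu**2 - mp.mpf('0.25'))/2,
         4/(3*b**2) + (nu**2 - mp.mpf('0.25'))*((nu**2 - mp.mpf('2.25'))/8 - 1/b)]
    B = [1/mp.sqrt(b),
         b**mp.mpf('-1.5') - (nu**2 - mp.mpf('0.25'))/(2*mp.sqrt(b)),
         (1/b**2 + (nu**2 - mp.mpf('0.25'))*((nu**2 - mp.mpf('2.25'))/4 - 1/b))/(2*mp.sqrt(b))]
    return mp.sqrt(2*b/mp.pi) * mp.fsum(A[n]/u**n - mp.sqrt(mp.pi)*B[n]/u**(n + mp.mpf('0.5'))
                                        for n in range(3))

for u in [mp.mpf(5), mp.mpf(20), mp.mpf(80)]:
    print(u, S_exact(u), S_expansion(u))
```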
In order to separate the pole term in (40) we rewrite the function \(F(s,z)\) in the form
\[F(s,z) = \frac{\left(\mu z\right)^{s-1}\beta_{j}z^{2}}{2^{\frac{D+s}{2}+2 }\pi^{\frac{D-1}{2}}\Gamma(\frac{s}{2})|\beta_{j}|^{3}}\left\{\int_{0}^{1}du \,u^{\frac{D-s}{2}-1}S\left(b_{j},u\right)\right. \tag{45}\] \[+\int_{1}^{\infty}du\,u^{\frac{D-s}{2}-1}\left[S\left(b_{j},u \right)-S_{N}\left(b_{j},u\right)\right]+\int_{1}^{\infty}du\,u^{\frac{D-s}{2 }-1}S_{N}\left(b_{j},u\right)\right\},\]
where \(b_{j}=2\beta_{j}^{2}/z^{2}\) and
\[S_{N}(b,u)=\sqrt{\frac{2b}{\pi}}\sum_{n=0}^{N}\left[\frac{A_{n}(b)}{u^{n}}- \sqrt{\pi}\frac{B_{n}(b)}{u^{n+\frac{1}{2}}}\right]. \tag{46}\]
For \(N>(D-3)/2\) the first two integrals in the curly braces in (45) are convergent for \(s=1\). By using (46) in the part coming from the last integral in (45), the corresponding contribution to the function \(F(s,z)\) is presented as
\[\bar{F}(s,z)=-\frac{\left(\mu z/\sqrt{2}\right)^{s-1}z}{2^{\frac{D+1}{2}}\pi^{\frac{D}{2}}\Gamma(\frac{s}{2})\beta_{j}}\sum_{n=0}^{N}\left[\frac{A_{n}(b_{j})}{s+2n-D}-\frac{\sqrt{\pi}B_{n}(b_{j})}{s+1+2n-D}\right]. \tag{47}\]
The function \(\bar{F}(s,z)\) has a simple pole at \(s=1\). The pole comes from the term with \(n=(D-1)/2\) for odd \(D\) and from the term with \(n=D/2-1\) for even \(D\).
Expanding the function (47) near the physical point \(s=1\), the function \(F(s,z)\) is decomposed as
\[F(s,z)=\frac{F_{(\rm p)}(s,z)}{s-1}+F_{(\rm f)}(z)+\cdots, \tag{48}\]
where the ellipsis stands for the part vanishing in the limit \(s\to 1\). Here, the coefficient of the pole term and the finite part are given by the expressions
\[F_{\rm(p)}(s,z)=-\frac{zC_{D}(b_{j})}{(2\pi)^{\frac{D+1}{2}}\,\beta_{j}}, \tag{49}\]
and
\[F_{\rm(f)}(z) = \frac{\beta_{j}z^{2}}{2^{\frac{D+1}{2}+2}\pi^{\frac{D}{2}}|\beta_ {j}|^{3}}\left\{\int_{0}^{1}du\,u^{\frac{D-3}{2}}S\left(b_{j},u\right)+\int_{1 }^{\infty}du\,u^{\frac{D-3}{2}}\left[S\left(b_{j},u\right)-S_{N}\left(b_{j},u \right)\right]\right\} \tag{50}\] \[+ \frac{z}{(2\pi)^{\frac{D+1}{2}}\,\beta_{j}}\left\{C_{D}(b_{j}) \left[\ln\left(\frac{\mu z}{\sqrt{2}}\right)+\frac{1}{2}\psi(1/2)\right]-\sum _{n=0}^{N\prime}\left[\frac{A_{n}(b_{j})}{1+2n-D}-\frac{\sqrt{\pi}B_{n}(b_{j}) }{2+2n-D}\right]\right\},\]
where the prime on the summation sign means that the term \(n=\frac{D-1}{2}\) for odd \(D\) and the term \(n=\frac{D}{2}-1\) for even \(D\) should be omitted. In (50), \(\psi(x)\) is the digamma function with \(\psi(1/2)\approx-1.964\) and
\[C_{D}(b)=\left\{\begin{array}{ll}A_{\frac{D-1}{2}}(b),&\mbox{for odd }D\\ -\sqrt{\pi}B_{\frac{D}{2}-1}(b)&\mbox{for even }D\end{array}\right.. \tag{51}\]
In the principal part prescription, the physical value extracted from the divergent expectation value of the SEMT \(\langle\tau_{i}^{k}\rangle_{j}^{(0)}\) is identified with
\[\langle\tau_{i}^{k}\rangle_{j}^{(0)}=\delta_{i}^{k}\frac{4\xi-1}{\alpha^{D}}F _{\rm(f)}(z). \tag{52}\]
Note that this result contains a scale ambiguity. Under scale change it transforms as
\[\langle\tau_{i}^{k}\rangle_{j}^{(0)}(\mu^{\prime})=\langle\tau_{i}^{k} \rangle_{j}^{(0)}(\mu)+\delta_{i}^{k}\left(4\xi-1\right)\frac{\ln(\mu^{\prime }/\mu)C_{D}(b_{j})z}{\left(2\pi\right)^{\frac{D+1}{2}}\alpha^{D}\beta_{j}}. \tag{53}\]
The logarithmic dependence on the scale \(\mu\) is a characteristic feature of the regularization procedure.
|
2309.00079 | On the Implicit Bias of Adam | In previous literature, backward error analysis was used to find ordinary
differential equations (ODEs) approximating the gradient descent trajectory. It
was found that finite step sizes implicitly regularize solutions because terms
appearing in the ODEs penalize the two-norm of the loss gradients. We prove
that the existence of similar implicit regularization in RMSProp and Adam
depends on their hyperparameters and the training stage, but with a different
"norm" involved: the corresponding ODE terms either penalize the (perturbed)
one-norm of the loss gradients or, conversely, impede its reduction (the latter
case being typical). We also conduct numerical experiments and discuss how the
proven facts can influence generalization. | Matias D. Cattaneo, Jason M. Klusowski, Boris Shigida | 2023-08-31T18:33:05Z | http://arxiv.org/abs/2309.00079v4 | # On the Implicit Bias of Adam
###### Abstract
In previous literature, backward error analysis was used to find ordinary differential equations (ODEs) approximating the gradient descent trajectory. It was found that finite step sizes implicitly regularize solutions because terms appearing in the ODEs penalize the two-norm of the loss gradients. We prove that the existence of similar implicit regularization in RMSProp and Adam depends on their hyperparameters and the training stage, but with a different "norm" involved: the corresponding ODE terms either penalize the (perturbed) one-norm of the loss gradients or, on the contrary, hinder its decrease (the latter case being typical). We also conduct numerical experiments and discuss how the proven facts can influence generalization.
## 1 Introduction
Gradient descent (GD) can be seen as a numerical method solving the ordinary differential equation (ODE) \(\dot{\mathbf{\theta}}=-\nabla E(\mathbf{\theta})\), where \(E(\cdot)\) is the loss function and \(\nabla E(\mathbf{\theta})\) denotes its gradient. Starting at \(\mathbf{\theta}^{(0)}\), it creates a sequence of guesses \(\mathbf{\theta}^{(1)},\mathbf{\theta}^{(2)},\ldots\), which lie close to the solution trajectory \(\mathbf{\theta}(t)\) governed by the aforementioned ODE. Since the step size \(h\) is finite, one could search for a modified differential equation \(\dot{\tilde{\mathbf{\theta}}}=-\nabla\widetilde{E}(\tilde{\mathbf{\theta}})\) such that \(\mathbf{\theta}^{(n)}-\tilde{\mathbf{\theta}}(nh)\) is exactly zero, or at least closer to zero than \(\mathbf{\theta}^{(n)}-\mathbf{\theta}(nh)\), that is, all the guesses of the descent lie exactly on the new solution curve or closer to it than to the original curve. This approach to analysing properties of a numerical method is called backward error analysis in the numerical integration literature (see Chapter IX in Ernst Hairer and Wanner (2006)).
Barrett and Dherin (2021) first used this idea for full-batch gradient descent and found that the modified loss function \(\widetilde{E}(\tilde{\mathbf{\theta}})=E(\tilde{\mathbf{\theta}})+(h/4)\|\nabla E(\tilde{\mathbf{\theta}})\|^{2}\) makes the trajectory of the solution to \(\dot{\tilde{\mathbf{\theta}}}=-\nabla\widetilde{E}(\tilde{\mathbf{\theta}})\) approximate the sequence \(\{\mathbf{\theta}^{(n)}\}_{n=0}^{\infty}\) one order of \(h\) better than the original differential equation, where \(\|\cdot\|\) denotes the Euclidean norm. In related work, Miyagawa (2022) obtained the correction term for full-batch gradient descent up to any chosen order, also studying the global error (uniform in the iteration number) as opposed to the local (one-step) error.
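As a toy illustration of this statement (our own example, not taken from the cited works), the snippet below compares full-batch GD iterates on a simple quadratic loss with the gradient flow of the original loss and with the gradient flow of the modified loss \(E+(h/4)\|\nabla E\|^{2}\), both integrated with much finer Euler steps; the loss, the step size and all other numbers are arbitrary choices. The printed errors show that the modified flow tracks the GD iterates considerably more closely.

```python
# An illustrative sketch (not from the referenced papers): GD iterates on a toy
# quadratic loss versus the gradient flows of the original and the modified loss.
import numpy as np

def grad_E(th):
    # gradient of E(x, y) = (x**2 + 5*y**2)/2  (a simple anisotropic quadratic)
    return np.array([th[0], 5.0*th[1]])

def grad_E_mod(th, h):
    # gradient of the modified loss E + (h/4)*||grad E||^2; for this quadratic,
    # grad ||grad E||^2 = 2 * Hessian @ grad E with Hessian = diag(1, 5)
    H = np.diag([1.0, 5.0])
    return grad_E(th) + (h/4.0)*2.0*H @ grad_E(th)

h, n_steps, substeps = 0.1, 20, 1000
theta_gd = np.array([1.0, 1.0])
flow = np.array([1.0, 1.0])      # gradient flow of E
flow_mod = np.array([1.0, 1.0])  # gradient flow of the modified loss

for _ in range(n_steps):
    theta_gd = theta_gd - h*grad_E(theta_gd)
    for _ in range(substeps):                       # fine Euler steps approximate the ODEs
        flow = flow - (h/substeps)*grad_E(flow)
        flow_mod = flow_mod - (h/substeps)*grad_E_mod(flow_mod, h)

print("GD iterate          :", theta_gd)
print("gradient flow of E  :", flow, " error:", np.linalg.norm(theta_gd - flow))
print("modified-loss flow  :", flow_mod, " error:", np.linalg.norm(theta_gd - flow_mod))
```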
The analysis was later extended to mini-batch gradient descent in Smith et al. (2021). Assume that the training set is split into batches of size \(B\) and there are \(m\) batches per epoch (so the training set size is \(mB\)); the cost function is rewritten \(E(\mathbf{\theta})=(1/m)\sum_{k=0}^{m-1}\hat{E}_{k}(\mathbf{\theta})\) with mini-batch costs denoted \(\hat{E}_{k}(\mathbf{\theta})=(1/B)\sum_{j=kB+1}^{kB+B}E_{j}(\mathbf{\theta})\). It was obtained in that work that after one epoch, the mean iterate of the algorithm, averaged over all possible shuffles of the batch indices, is close to the solution to \(\dot{\mathbf{\theta}}=-\nabla\widetilde{E}_{SGD}(\mathbf{\theta})\), where the modified loss is given by \(\widetilde{E}_{SGD}(\mathbf{\theta})=E(\mathbf{\theta})+h/(4m)\cdot\sum_{k=0}^{m-1}\left\|\nabla\hat{E}_{k}(\mathbf{\theta})\right\|^{2}\).
More recently, Ghosh et al. (2023) studied gradient descent with heavy-ball momentum iteration \(\mathbf{\theta}^{(n+1)}=\mathbf{\theta}^{(n)}-h\nabla E(\mathbf{\theta}^{(n)})+\beta(\mathbf{ \theta}^{(n)}-\mathbf{\theta}^{(n-1)})\), where \(\beta\) is the momentum parameter. In the full-batch
setting, they proved that for \(n\) large enough it is close to the continuous trajectory of the first-order ODE
\[\dot{\mathbf{\theta}}=-\frac{1}{1-\beta}\nabla E(\mathbf{\theta})-\underbrace{h\frac{1+\beta}{4(1-\beta)^{3}}\nabla\big{\|}\nabla E(\mathbf{\theta})\big{\|}^{2}}_{\text{implicit regularization}}. \tag{1.1}\]
Their main theorem also provides the analysis for the general mini-batch case.
In another recent work, Zhao et al. (2022) introduce a regularization term \(\lambda\cdot\big{\|}\nabla E(\mathbf{\theta})\big{\|}\) to the loss function as a way to ensure finding flatter minima, improving generalization. The only difference between their term and the first-order correction coming from backward error analysis (up to a coefficient) is that the norm is not squared and regularization is applied on a per-batch basis.
Using backward error analysis to approximate the discrete dynamics with a modified ODE for adaptive algorithms such as RMSProp(Tieleman et al., 2012) and Adam (Kingma and Ba, 2015) (which is an improvement over RMSProp and AdaGrad (Duchi et al., 2011)) is currently missing in the literature. Barrett and Dherin (2021) note that "it would be interesting to use backward error analysis to calculate the modified loss and implicit regularization for other widely used optimizers such as momentum, Adam and RMSprop". Smith et al. (2021) reiterate that they "anticipate that backward error analysis could also be used to clarify the role of finite learning rates in adaptive optimizers like Adam". In the same context, Ghosh et al. (2023) agree that "RMSProp... and Adam..., albeit being powerful alternatives to SGD with faster convergence rates, are far from well-understood in the aspect of implicit regularization". In a similar context, in Appendix G to Miyagawa (2022) it is mentioned that "its [Adam's] counter term and discretization error are open questions".
This work fills the gap in the literature by conducting backward error analysis for (mini-batch, and full-batch as a special case) Adam and RMSProp. Our main contributions are listed below.
* In Theorem 3.1, we provide a global second-order in \(h\) continuous ODE approximation to Adam in the general mini-batch setting. (A similar result for RMSProp is moved to the supplemental appendix.) For the full-batch special case, it was shown in prior work Ma et al. (2022) that the continuous-time limit of both these algorithms is a (perturbed by \(\varepsilon\)) signGD flow \(\dot{\mathbf{\theta}}=-\nabla E(\mathbf{\theta})/(\big{|}\nabla E(\mathbf{\theta})\big{|}+\varepsilon)\) component-wise, where \(\varepsilon\) is the numerical stability parameter; we make this more precise by finding an additional "bias" term on the right (linearly depending on \(h\)).
* We analyze the full-batch case in more detail. We find that the bias term does something different from penalizing the two-norm of the loss gradient as in the case of gradient descent: it either penalizes the perturbed one-norm of the loss gradient, defined as \(\|\mathbf{v}\|_{1,\varepsilon}=\sum_{i=1}^{p}\sqrt{v_{i}^{2}+\varepsilon}\), or, on the contrary, hinders its decrease (depending on hyperparameters and the training stage). See the summary of our theoretical finding for the full-batch case in Section 2. We also obtain the backward error analysis result for heavy-ball momentum gradient descent (Ghosh et al., 2023) as a special case (Example 2.3).
* We provide numerical evidence consistent with our results. In particular, we notice that often penalizing the perturbed one-norm appears to improve generalization, and hindering its decrease hurts it. The typical absence of implicit regularization appearing from backward error analysis in RMSProp and Adam (as opposed to GD) becomes one more previously unidentified possible explanation for poorer generalization of adaptive gradient algorithms compared to other methods.
### Related work
Backward error analysis of first-order methods.We provide the history of finding ordinary differential equations approximating different algorithms above in the introduction. Recently, there have been other applications of backward error analysis related to machine learning. Kunin et al. (2020) show that the approximating continuous-time trajectories satisfy conservation laws that are broken in discrete time. Franca et al. (2021) use backward error analysis while studying how to discretize continuous-time dynamical systems preserving stability and convergence rates. Rosca et al. (2021) find continuous-time approximations of discrete two-player differential games.
Approximating gradient methods by differential equation trajectories.Ma et al. (2022) prove that the trajectories of Adam and RMSProp are close to signGD dynamics, and investigate different training regimes of these algorithms empirically. SGD is approximated by stochastic differential equations and novel adaptive parameter adjustment policies are devised in Li et al. (2017).
Implicit bias of first-order methods.Soudry et al. (2018) prove that GD trained to classify linearly separable data with logistic loss converges to the direction of the max-margin vector (the solution to the hard margin SVM). This result has been extended to different loss functions in Nacson et al. (2019), to stochastic gradient descent in Nacson et al. (2019) and more generic optimization methods in Gunasekar et al. (2018), to the nonseparable case in Ji and Telgarsky (2018), Ji and Telgarsky (2019). This line of research has been generalized to studying implicit biases of linear networks (Ji and Telgarsky, 2018; Gunasekar et al., 2018), homogeneous neural networks (Ji and Telgarsky, 2020; Nacson et al., 2019; Lyu and Li, 2019). Woodworth et al. (2020) study the gradient flow of a diagonal linear network with squared loss and show that large initializations lead to minimum 2-norm solutions while small initializations lead to minimum 1-norm solutions. Even et al. (2023) extend this work to the case of non-zero step sizes and mini-batch training. Wang et al. (2021) prove that Adam and RMSProp maximize the margin of homogeneous neural networks.
Generalization of adaptive methods.Cohen et al. (2022) empirically investigate the edge-of-stability regime of adaptive gradient algorithms and the effect of sharpness (defined as the largest eigenvalue of the hessian) on generalization; Granziol (2020); Chen et al. (2021) observe that adaptive methods find sharper minima than SGD and Zhou et al. (2020); Xie et al. (2022) argue theoretically that it is the case. Jiang et al. (2022) introduce a statistic that measures the uniformity of the hessian diagonal and argue that adaptive gradient algorithms are biased towards making this statistic smaller. Keskar and Socher (2017) propose to improve generalization of adaptive methods by switching to SGD in the middle of training.
### Notation
We denote the loss of the \(k\)th minibatch as a function of the network parameters \(\mathbf{\theta}\in\mathbb{R}^{p}\) by \(E_{k}(\mathbf{\theta})\), and in the full-batch setting we omit the index and write \(E(\mathbf{\theta})\). \(\nabla E\) means the gradient of \(E\), and \(\nabla\) with indices denotes partial derivatives, e. g. \(\nabla_{ijs}E\) is a shortcut for \(\frac{\partial^{3}E}{\partial\theta_{i}\partial\theta_{j}\partial\theta_{s}}\). The norm without indices \(\left\lVert\cdot\right\rVert\) is the two-norm of a vector, \(\left\lVert\cdot\right\rVert_{1}\) is the one-norm and \(\left\lVert\cdot\right\rVert_{1,\varepsilon}\) is the perturbed one-norm defined as \(\left\lVert\mathbf{v}\right\rVert_{1,\varepsilon}=\sum_{i=1}^{p}\sqrt{v_{i}^{2 }+\varepsilon}\). (Of course, if \(\varepsilon>0\) the perturbed one-norm is not a norm, but \(\varepsilon=0\) makes it the one-norm.)
## 2 Implicit bias of full-batch Adam: an informal summary
To avoid ambiguity and to provide the names and notations for hyperparameters, we define the algorithm below.
**Definition 2.1**.: The _Adam_ algorithm is an optimization algorithm with numerical stability hyperparameter \(\varepsilon>0\), squared gradient momentum hyperparameter \(\rho\in(0,1)\), gradient momentum hyperparameter \(\beta\in(0,1)\), initialization \(\mathbf{\theta}^{(0)}\in\mathbb{R}^{p}\), \(\mathbf{\nu}^{(0)}=\mathbf{0}\in\mathbb{R}^{p}\), \(\mathbf{\mathrm{m}}^{(0)}=\mathbf{0}\in\mathbb{R}^{p}\) and the following update rule: for each \(n\geq 0\), \(j\in\{1,\ldots,p\}\)
\[\begin{split}&\nu_{j}^{(n+1)}=\rho\nu_{j}^{(n)}+(1-\rho)\big{(} \nabla_{j}E_{n}(\mathbf{\theta}^{(n)})\big{)}^{2},\quad m_{j}^{(n+1)}=\beta m_{j} ^{(n)}+(1-\beta)\nabla_{j}E_{n}(\mathbf{\theta}^{(n)}),\\ &\theta_{j}^{(n+1)}=\theta_{j}^{(n)}-h\frac{m_{j}^{(n+1)}/(1- \beta^{n+1})}{\sqrt{\nu_{j}^{(n+1)}/(1-\rho^{n+1})+\varepsilon}}.\end{split} \tag{2.1}\]
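For concreteness, the update rule (2.1) can be transcribed directly into code. The sketch below is such a transcription in NumPy; `grad_fn` is a placeholder for the mini-batch gradient \(\nabla E_{n}(\mathbf{\theta})\) that the user must supply, and the toy usage at the end assumes a simple quadratic loss.

```python
# A direct transcription (as a sketch) of the update rule (2.1) in Definition 2.1,
# with epsilon inside the square root; beta is the gradient-momentum and rho the
# squared-gradient-momentum hyperparameter, matching the notation of the paper.
import numpy as np

def adam(grad_fn, theta0, h=1e-3, beta=0.9, rho=0.999, eps=1e-8, n_steps=1000):
    theta = np.array(theta0, dtype=float)
    m = np.zeros_like(theta)     # first-moment accumulator m^{(n)}
    v = np.zeros_like(theta)     # second-moment accumulator nu^{(n)}
    for n in range(n_steps):
        g = grad_fn(theta, n)                      # nabla E_n(theta^{(n)})
        v = rho*v + (1 - rho)*g**2                 # nu^{(n+1)}
        m = beta*m + (1 - beta)*g                  # m^{(n+1)}
        m_hat = m/(1 - beta**(n + 1))              # bias-corrected first moment
        v_hat = v/(1 - rho**(n + 1))               # bias-corrected second moment
        theta = theta - h*m_hat/np.sqrt(v_hat + eps)
    return theta

# toy usage: full-batch quadratic loss E(theta) = 0.5*||theta||^2, so grad = theta
print(adam(lambda th, n: th, theta0=[1.0, -2.0], h=1e-2))
```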
**Remark 2.2** (The \(\varepsilon\) hyperparameter is inside the square root).: Note that the numerical stability hyperparameter \(\varepsilon>0\), which is introduced in these algorithms to avoid division by zero, is inside the square root in our definition. This way we avoid division by zero in the derivative too: the first derivative of \(x\mapsto\big{(}\sqrt{x+\varepsilon}\big{)}^{-1}\) is bounded for \(x\geq 0\). This is useful for our analysis. In Theorems SA-2.4 and SA-4.4 in the appendix, the original versions of RMSProp and Adam are also tackled, though with an additional assumption which requires that no component of the gradient can come very close to zero in the region of interest. This is true only for the initial period of learning (whereas Theorem 3.1 tackles the whole period). Practitioners do not seem to make a distinction between the version with \(\varepsilon\) inside vs. outside the square root: tutorials with both versions abound on machine learning related websites. Moreover, the popular
Tensorflow variant of RMSProp has \(\varepsilon\) inside the square root [1] even though in the documentation [2] Kingma & Ba (2015) is cited, where \(\varepsilon\) is outside. While conducting experiments, we also noted that moving \(\varepsilon\) inside or outside the square root does not change the behavior of Adam or RMSProp qualitatively.
### Summary of our main result (in the full-batch case)
We are ready to informally describe our theoretical result (in the full-batch special case). Assume \(E(\mathbf{\theta})\) is the loss, whose partial derivatives up to the fourth order are bounded. Let \(\{\mathbf{\theta}^{(n)}\}\) be iterations of Adam as defined in Definition 2.1. Our main result for this case is finding an ODE whose solution trajectory \(\tilde{\mathbf{\theta}}(t)\) is \(h^{2}\)-close to \(\{\mathbf{\theta}^{(n)}\}\), meaning that for any positive time horizon \(T>0\) there exists a constant \(C>0\) such that for any step size \(h\in(0,T)\) we have \(\|\tilde{\mathbf{\theta}}(nh)-\mathbf{\theta}^{(n)}\|\leq Ch^{2}\) (for \(n\) between \(0\) and \(\lfloor T/h\rfloor\)). The ODE is written the following way (up to terms that rapidly go to zero as \(n\) grows): for the component number \(j\in\{1,\dots,p\}\)
\[\dot{\tilde{\theta}}_{j}(t)=-\frac{1}{\sqrt{|\nabla_{j}E(\tilde{\mathbf{\theta}}( t))|^{2}+\varepsilon}}\big{(}\nabla_{j}E(\tilde{\mathbf{\theta}}(t))+\text{bias} \big{)} \tag{2.2}\]
with initial conditions \(\tilde{\mathbf{\theta}}_{j}(0)=\mathbf{\theta}_{j}^{(0)}\) for all \(j\), where the bias term is
\[\text{bias}:=\frac{h}{2}\Bigg{\{}\frac{1+\beta}{1-\beta}-\frac{1+\rho}{1-\rho }+\frac{1+\rho}{1-\rho}\cdot\frac{\varepsilon}{|\nabla_{j}E(\tilde{\mathbf{ \theta}}(t))|^{2}+\varepsilon}\Bigg{\}}\nabla_{j}\big{\|}\nabla E(\tilde{\mathbf{ \theta}}(t))\big{\|}_{1,\varepsilon}. \tag{2.3}\]
Depending on hyperparameter values and the training stage, the bias term can take two extreme forms, and during most of the training the reality is usually in between. The extreme cases are as follows.
* If \(\sqrt{\varepsilon}\) is **small** compared to all components of \(\nabla E(\tilde{\mathbf{\theta}}(t))\), i. e. \(\min_{j}\big{|}\nabla_{j}E(\tilde{\mathbf{\theta}}(t))\big{|}\gg\sqrt{\varepsilon}\), which is the case during the initial learning stage, then \[\text{bias}=\frac{h}{2}\bigg{\{}\frac{1+\beta}{1-\beta}-\frac{1+\rho}{1-\rho }\bigg{\}}\nabla_{j}\big{\|}\nabla E(\tilde{\mathbf{\theta}}(t))\big{\|}_{1, \varepsilon}.\] (2.4) For small \(\varepsilon\), the perturbed one-norm is indistinguishable from the usual one-norm, and for \(\beta>\rho\) it is penalized (in much the same way as the squared two-norm is implicitly penalized in the case of GD), but for \(\rho>\beta\) its decrease is actually hindered by this term (so the bias is opposite to penalization). The ODE in (2.2) can be approximately rewritten as \[\dot{\tilde{\mathbf{\theta}}}_{j}(t)=-\frac{\nabla_{j}\widetilde{E}(\tilde{\mathbf{ \theta}}(t))}{\big{|}\nabla_{j}E(\tilde{\mathbf{\theta}}(t))\big{|}},\qquad \widetilde{E}(\mathbf{\theta})=E(\mathbf{\theta})+\frac{h}{2}\bigg{\{}\frac{1+\beta}{ 1-\beta}-\frac{1+\rho}{1-\rho}\bigg{\}}\big{\|}\nabla E(\mathbf{\theta})\big{\|}_ {1}.\] (2.5)
* If \(\sqrt{\varepsilon}\) is **large** compared to all gradient components, i. e. \(\max_{j}\big{|}\nabla_{j}E(\tilde{\mathbf{\theta}}(t))\big{|}\ll\sqrt{\varepsilon}\), which may happen during the later learning stage, the fraction with \(\varepsilon\) in the numerator of (2.3) approaches one, the dependence on \(\rho\) cancels out, and \[\big{\|}\nabla E(\tilde{\mathbf{\theta}}(t))\big{\|}_{1,\varepsilon}\approx\sum_{i=1}^{p}\sqrt{\varepsilon}\Bigg{(}1+\frac{\big{|}\nabla_{i}E(\tilde{\mathbf{\theta}}(t))\big{|}^{2}}{2\varepsilon}\Bigg{)}=p\sqrt{\varepsilon}+\frac{1}{2\sqrt{\varepsilon}}\big{\|}\nabla E(\tilde{\mathbf{\theta}}(t))\big{\|}^{2}.\] (2.6) In other words, \(\|\cdot\|_{1,\varepsilon}\) becomes \(\|\cdot\|^{2}/(2\sqrt{\varepsilon})\) up to an additive constant (which is "eaten" by the gradient): \[\text{bias}=\frac{h}{4\sqrt{\varepsilon}}\frac{1+\beta}{1-\beta}\nabla_{j}\big{\|}\nabla E(\tilde{\mathbf{\theta}}(t))\big{\|}^{2}.\] The form of the ODE in this case is \[\dot{\tilde{\mathbf{\theta}}}_{j}(t)=-\nabla_{j}\widetilde{E}(\tilde{\mathbf{\theta}}(t)),\qquad\widetilde{E}(\mathbf{\theta})=\frac{1}{\sqrt{\varepsilon}}\bigg{(}E(\mathbf{\theta})+\frac{h}{4\sqrt{\varepsilon}}\frac{1+\beta}{1-\beta}\big{\|}\nabla E(\mathbf{\theta})\big{\|}^{2}\bigg{)}.\] (2.7)
These two extreme cases are summarized in Table 1. In Figure 1, we use the one-dimensional (\(p=1\)) case to illustrate what kind of term is being implicitly penalized.
This overview also applies to RMSProp by setting \(\beta=0\). See Theorem SA-3.4 in the appendix for the formal result.
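The object being differentiated in the bias term (2.3) is the perturbed one-norm of the gradient, so it can be evaluated with one extra backward pass. The PyTorch sketch below is an illustration with an arbitrary toy loss of our choosing (any smooth scalar loss works): it computes \(\big{\|}\nabla E(\mathbf{\theta})\big{\|}_{1,\varepsilon}\) together with its gradient, the factor \(\nabla_{j}\big{\|}\nabla E(\mathbf{\theta})\big{\|}_{1,\varepsilon}\) appearing in (2.3).

```python
import torch

def grad_of_perturbed_one_norm(loss_fn, theta, eps):
    """Return ||grad E(theta)||_{1,eps} and its gradient with respect to theta."""
    theta = theta.clone().requires_grad_(True)
    (g,) = torch.autograd.grad(loss_fn(theta), theta, create_graph=True)
    norm = torch.sqrt(g ** 2 + eps).sum()        # sum_i sqrt(g_i^2 + eps)
    (norm_grad,) = torch.autograd.grad(norm, theta)
    return norm.detach(), norm_grad

# Toy loss E(theta) = 0.25 * sum_i theta_i^4; the bias (2.3) is a state-dependent
# multiple of norm_grad.
E = lambda th: 0.25 * (th ** 4).sum()
norm, norm_grad = grad_of_perturbed_one_norm(E, torch.tensor([1.0, -0.5]), eps=1e-8)
print(norm, norm_grad)
```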
**Example 2.3** (Backward Error Analysis for GD with Heavy-ball Momentum).: Assume \(\varepsilon\) is very large compared to all squared gradient components during the whole training process, so that the form of the ODE is approximated by (2.7). Since Adam with a large \(\varepsilon\) and after a certain number of iterations approximates SGD with heavy-ball momentum with step size \(h\frac{1-\beta}{\sqrt{\varepsilon}}\), a linear step size change (and corresponding time change) gives exactly the equations in Theorem 4.1 of Ghosh et al. (2023). Taking \(\beta=0\) (no momentum), we get the implicit regularization of GD from Barrett and Dherin (2021).
## 3 ODE approximating mini-batch Adam trajectories: full statement
We only make one assumption, which is standard in the literature: the loss for each mini-batch is 4 times continuously differentiable, and partial derivatives up to order 4 of each mini-batch loss \(E_{k}\) are bounded by constants, i. e. there exists a positive constant \(M\) such that for \(\boldsymbol{\theta}\) in the region of interest
\[\sup_{k}\Biggl{\{}\sup_{i}|\nabla_{i}E_{k}(\boldsymbol{\theta})|\vee\sup_{i, j}|\nabla_{ij}E_{k}(\boldsymbol{\theta})|\vee\sup_{i,j,s}|\nabla_{ijs}E_{k}( \boldsymbol{\theta})|\vee\sup_{i,j,s,r}|\nabla_{ijsr}E_{k}(\boldsymbol{ \theta})|\Biggr{\}}\leq M. \tag{3.1}\]
We now state the main result for mini-batch Adam, whose proof is in the supplemental appendix (Theorem SA-5.4).
**Theorem 3.1**.: _For any sequence \(\{a_{k}\}\), let \(\mathrm{AV}_{\gamma}^{n}[a_{\cdot}]:=\frac{1}{1-\gamma^{n+1}}\sum_{k=0}^{n} \gamma^{n-k}(1-\gamma)a_{k}\) denote the exponential averaging operator. Assume (3.1) holds. Let \(\{\boldsymbol{\theta}^{(n)}\}\) be iterations of Adam as defined in Definition 2.1,
Table 1: Implicit bias of Adam: special cases. “Small” and “large” are in relation to squared gradient components (Adam in the latter case is close to GD with momentum).

* \(\beta\geqslant\rho\): for \(\varepsilon\) “small”, \(\|\nabla E(\boldsymbol{\theta})\|_{1}\) is penalized; for \(\varepsilon\) “large”, \(\|\nabla E(\boldsymbol{\theta})\|_{2}^{2}\) is penalized.
* \(\rho>\beta\): for \(\varepsilon\) “small”, \(-\|\nabla E(\boldsymbol{\theta})\|_{1}\) is penalized (i.e. the decrease of the one-norm is hindered); for \(\varepsilon\) “large”, \(\|\nabla E(\boldsymbol{\theta})\|_{2}^{2}\) is penalized.
\(\tilde{\mathbf{\theta}}(t)\) be the continuous solution to the piecewise ODE_
\[\begin{split}\dot{\tilde{\theta}}_{j}(t)&=-\frac{M_{j} ^{(n)}(\tilde{\mathbf{\theta}}(t))}{R_{j}^{(n)}(\tilde{\mathbf{\theta}}(t))}\\ &+h\Bigg{(}\frac{M_{j}^{(n)}(\tilde{\mathbf{\theta}}(t))\big{(}2P_{j} ^{(n)}(\tilde{\mathbf{\theta}}(t))+\bar{P}_{j}^{(n)}(\tilde{\mathbf{\theta}}(t))\big{)} }{2R_{j}^{(n)}(\tilde{\mathbf{\theta}}(t))^{3}}-\frac{2L_{j}^{(n)}(\tilde{\mathbf{ \theta}}(t))+\bar{L}_{j}^{(n)}(\tilde{\mathbf{\theta}}(t))}{2R_{j}^{(n)}(\tilde{ \mathbf{\theta}}(t))}\Bigg{)}.\end{split} \tag{3.2}\]
_for \(t\in[nh,(n+1)h]\) with the initial condition \(\tilde{\mathbf{\theta}}(0)=\mathbf{\theta}^{(0)}\), where_
\[\begin{split}& R_{j}^{(n)}(\mathbf{\theta}):=\sqrt{\mathrm{AV}_{\rho}^{n}[(\nabla_{j}E_{\mathbf{\cdot}}(\mathbf{\theta}))^{2}]+\varepsilon},\qquad\qquad M_{j}^{(n)}(\mathbf{\theta}):=\mathrm{AV}_{\beta}^{n}[\nabla_{j}E_{\mathbf{\cdot}}(\mathbf{\theta})],\\ & L_{j}^{(n)}(\mathbf{\theta}):=\mathrm{AV}_{\beta}^{n}\Bigg{[}\sum_{i=1}^{p}\nabla_{ij}E_{\mathbf{\cdot}}(\mathbf{\theta})\sum_{l=\ast}^{n-1}\frac{M_{i}^{(l)}(\mathbf{\theta})}{R_{i}^{(l)}(\mathbf{\theta})}\Bigg{]},\ \ \bar{L}_{j}^{(n)}(\mathbf{\theta}):=\mathrm{AV}_{\beta}^{n}\Bigg{[}\sum_{i=1}^{p}\nabla_{ij}E_{\mathbf{\cdot}}(\mathbf{\theta})\frac{M_{i}^{(n)}(\mathbf{\theta})}{R_{i}^{(n)}(\mathbf{\theta})}\Bigg{]},\\ & P_{j}^{(n)}(\mathbf{\theta}):=\mathrm{AV}_{\rho}^{n}\Bigg{[}\nabla_{j}E_{\mathbf{\cdot}}(\mathbf{\theta})\sum_{i=1}^{p}\nabla_{ij}E_{\mathbf{\cdot}}(\mathbf{\theta})\sum_{l=\ast}^{n-1}\frac{M_{i}^{(l)}(\mathbf{\theta})}{R_{i}^{(l)}(\mathbf{\theta})}\Bigg{]},\\ &\bar{P}_{j}^{(n)}(\mathbf{\theta}):=\mathrm{AV}_{\rho}^{n}\Bigg{[}\nabla_{j}E_{\mathbf{\cdot}}(\mathbf{\theta})\sum_{i=1}^{p}\nabla_{ij}E_{\mathbf{\cdot}}(\mathbf{\theta})\frac{M_{i}^{(n)}(\mathbf{\theta})}{R_{i}^{(n)}(\mathbf{\theta})}\Bigg{]}.\end{split}\]
_Then, for any fixed positive time horizon \(T>0\) there exists a constant \(C\) such that for any step size \(h\in(0,T)\) we have \(\left\|\tilde{\mathbf{\theta}}(nh)-\mathbf{\theta}^{(n)}\right\|\leq Ch^{2}\) for \(n\in\left\{0,\ldots,\left\lfloor T/h\right\rfloor\right\}\)._
## 4 Discussion
First conclusion.Recall that from Ghosh et al. (2023) the ODE approximating the dynamics of full-batch heavy-ball momentum GD is close to (1.1). The correction term regularizes the training process by penalizing the two-norm of the gradient of the loss. We can conclude that _this_ kind of regularization is typically absent in RMSProp (if \(\varepsilon\) is small) and Adam with \(\rho>\beta\) (if \(\varepsilon\) is small). This may partially explain why these algorithms generalize worse than SGD, and it may be a previously unknown perspective on why they are biased towards higher-curvature regions and find "sharper" minima.
Second conclusion.However, the bias term in (2.3) does contain a kind of "norm" which is the perturbed one-norm \(\|\mathbf{v}\|_{1,\varepsilon}=\sum_{i=1}^{p}\sqrt{v_{i}^{2}+\varepsilon}\). If \(\sqrt{\varepsilon}\) is small compared to gradient components, which is usually true except at the end of the training, we can conclude from (2.5) that it is only in the case \(\beta>\rho\) that the perturbed norm _is_ penalized, and decreasing \(\rho\) or increasing \(\beta\) moves the trajectory towards regions with lower "norm".
Third conclusion.There is currently no theory indicating that penalizing the (perturbed) one-norm of the gradient improves generalization. However, reasoning by analogy (with the case of the two-norm), we can conjecture with lower confidence that at least in some stable regimes of training increasing \(\beta\) and decreasing \(\rho\) should improve the test error.
## 5 Illustration: simple bilinear model
We now analyze the effect of the first-order term for Adam in the same model as Barrett and Dherin (2021) and Ghosh et al. (2023) have studied. Namely, assume the parameter \(\mathbf{\theta}=(\theta_{1},\theta_{2})\) is 2-dimensional, and the loss is given by \(E(\mathbf{\theta}):=1/2(y-\theta_{1}\theta_{2}x)^{2}\), where \(x=2\), \(y=3/2\) are fixed scalars. The loss is minimized on the hyperbola \(\theta_{1}\theta_{2}=y/x\). We graph the trajectories of Adam in this case: Figure 2 shows that increasing \(\beta\) forces the trajectory to the region with smaller \(\|\nabla E(\mathbf{\theta})\|_{1}\), and increasing \(\rho\) does the opposite. Figure 3 shows that increasing the learning rate moves Adam towards the region with smaller \(\left\|\nabla E(\mathbf{\theta})\right\|_{1}\) if \(\beta>\rho\) (just like in the case of gradient descent, except the norm is different if \(\varepsilon\) is small compared to gradient components), and does the opposite if \(\rho>\beta\). All these observations are exactly what Theorem 3.1 predicts.
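The qualitative behavior in Figures 2 and 3 can be reproduced with a short script. The sketch below is our own illustration (the step size, momentum values, and iteration count are arbitrary choices, not the ones used for the figures): it runs Adam on the bilinear loss from the starting point \((2.8,3.5)\) and prints where each trajectory ends up on the hyperbola of minima.

```python
import numpy as np

x, y = 2.0, 1.5
def grad(th):
    r = y - th[0] * th[1] * x                      # residual of the bilinear model
    return np.array([-r * x * th[1], -r * x * th[0]])

def adam_path(beta, rho, h=1e-2, eps=1e-8, n_steps=50_000, theta0=(2.8, 3.5)):
    th = np.array(theta0); m = np.zeros(2); nu = np.zeros(2)
    for n in range(n_steps):
        g = grad(th)
        nu = rho * nu + (1 - rho) * g ** 2
        m = beta * m + (1 - beta) * g
        th = th - h * (m / (1 - beta ** (n + 1))) / np.sqrt(nu / (1 - rho ** (n + 1)) + eps)
    return th

for beta, rho in [(0.99, 0.9), (0.9, 0.99)]:       # beta > rho versus rho > beta
    th = adam_path(beta, rho)
    # Both runs should land near the hyperbola theta_1 * theta_2 = y / x = 0.75,
    # but at different points depending on (beta, rho), cf. Figure 2.
    print(beta, rho, th, th[0] * th[1])
```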
## 6 Numerical experiments
We offer some preliminary empirical evidence of how the bias term shows up in deep neural networks.
Ma et al. (2022) divides training regimes of Adam into three categories: the spike regime when \(\rho\) is much larger than \(\beta\), in which the training loss curve contains very large spikes and the training is obviously unstable; the (stable) oscillation regime when \(\rho\) is sufficiently close to \(\beta\), in which the loss curve contains fast and small oscillations; the divergence regime when \(\beta\) is much larger than \(\rho\), in which Adam diverges. We of course exclude the last regime. Since it is very unlikely that an unstable Adam trajectory is close to the piecewise ODE emerging from backward error analysis, we exclude the spike regime as well, and confine ourselves to considering the oscillation regime (in which \(\rho\) and \(\beta\) should not be so far apart that spikes appear). This is the regime Ma et al. (2022) recommend to use in practice.
We train Resnet-50 on the CIFAR-10 dataset with full-batch Adam and investigate how the quantity \(\|\nabla E(\mathbf{\theta})\|_{1,\varepsilon}\) and the test error are affected by increasing \(\rho\) or \(\beta\). Figure 4 shows that in the stable oscillation regime increasing \(\rho\) seems to increase the perturbed one-norm (consistent with backward error analysis: the smaller \(\rho\), the more this "norm" is penalized) and decrease the test accuracy. The opposite to the latter was noticed in Cohen et al. (2022), which we think is the case in the spike regime (where the trajectory of Adam is definitely far from the piecewise ODE trajectory at later stages of training). Figure 5 shows that increasing \(\beta\) seems to decrease the perturbed one-norm (consistent with backward error analysis:
Figure 3: The setting is the same as in Figure 2. Increasing the learning rate moves the Adam trajectory towards the regions with smaller one-norm of the gradient if \(\beta\) is significantly larger than \(\rho\) and does the opposite if \(\rho\) is larger than \(\beta\).
Figure 2: Increasing \(\beta\) moves the trajectory of Adam towards the regions with smaller one-norm of the gradient (if \(\varepsilon\) is sufficiently small); increasing \(\rho\) does the opposite. The violet line is the line of global minima, and the cross denotes the limiting point of minimal one-norm of the gradient. All Adam trajectories start at \((2.8,3.5)\).
the larger \(\beta\), the more this norm is penalized) and increase the test accuracy. The picture confirms the finding in Ghosh et al. (2023) (for momentum gradient descent) that increasing the momentum parameter improves the test accuracy.
We obtain a more detailed picture of the perturbed norm's behavior by training Resnet-101 on CIFAR-10 and CIFAR-100 with full-batch Adam. Figure 6 shows the graphs of \(\left\|\nabla E\right\|_{1,\varepsilon}\) as functions of the epoch number. The "norm" decreases, then rises again, and then decreases further until it flatlines. Throughout most of the training, the larger \(\beta\) the smaller the "norm". The "hills" of the "norm" curves are higher with smaller \(\beta\) and larger \(\rho\). This is completely consistent with backward analysis because the larger \(\rho\) compared to \(\beta\), the more \(\left\|\nabla E\right\|_{1,\varepsilon}\) is prevented from falling by the bias term.
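For reference, the metric plotted in Figure 6 can be computed with a single extra gradient evaluation. The helper below is our own sketch, not the authors' code; `model`, `loss_fn`, `inputs`, and `targets` are placeholders for a network, a loss, and the full training batch.

```python
import torch

def perturbed_one_norm_of_full_gradient(model, loss_fn, inputs, targets, eps=1e-8):
    """||grad E(theta)||_{1,eps} = sum_i sqrt(g_i^2 + eps) over all parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(inputs), targets)          # full-batch loss E(theta)
    grads = torch.autograd.grad(loss, params)
    return sum(torch.sqrt(g ** 2 + eps).sum().item() for g in grads)
```

Logging this value once per epoch, for several \((\beta,\rho)\) pairs, gives curves directly comparable to Figure 6.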
## 7 Future directions
As far as we know, an assumption similar to (3.1) is explicitly or implicitly present in all previous work on backward error analysis of gradient-based machine learning algorithms. Apart from the technicality
Figure 4: Resnet-50 on CIFAR-10 trained with full-batch Adam. The test accuracy seems to fall as \(\rho\) increases (in the stable “small oscillations” regime of training). The hyperparameters are as follows: \(h=7.5\cdot 10^{-5}\), \(\varepsilon=10^{-8}\), \(\beta=0.99\). The test accuracies plotted here are maximal after more than 3600 epochs. The perturbed norms are calculated at the same epoch number 900. (It is fair to compare Adam with different parameters at one epoch since the effective learning rates are the same.)
Figure 5: Resnet-50 on CIFAR-10 trained with full-batch Adam. The perturbed one-norm seems to fall as \(\beta\) increases (in the stable oscillation regime of training), and the test accuracy seems to rise. The hyperparameters are as follows: \(h=10^{-4}\), \(\rho=0.999\), \(\varepsilon=10^{-8}\). Both metrics are calculated when the loss first drops below the threshold 0.1.
that ReLU activations cause the loss to not be differentiable everywhere (though it is very common to ignore this), there is evidence that large-batch algorithms often operate at the edge of stability (Cohen et al., 2021, 2022), in which the largest eigenvalue of the hessian can be quite large, making it unclear whether the higher-order partial derivatives can safely be assumed bounded near optimality. However, as Smith et al. (2021) point out, in the mini-batch setting backward error analysis can be more accurate. We leave a qualitative analysis of the behavior (average or otherwise) of first-order terms in Theorem 3.1 in the mini-batch case as a future direction.
Also, the constant \(C\) in Theorem 3.1 depends on \(\varepsilon\) and goes to infinity as \(\varepsilon\) goes to zero. Theoretically, our proof does not exclude the case where for very small \(\varepsilon\) the trajectory of the piecewise ODE is only close to the Adam trajectory for small, suboptimal learning rates, at least at later stages of learning. (For the initial learning period, this is not a problem.) It appears to also be true of Proposition 1 in Ma et al. (2022) (zeroth-order approximation by sign-GD). This is especially noticeable in the large-spike regime of training (see Section 6 and Ma et al. (2022)) which, despite being obviously pretty unstable, can still minimize the training loss well and lead to acceptable test errors. It would be interesting to investigate this regime in connection with Theorem 3.1 in detail.
We believe these considerations can fruitfully guide future work in this area.
#### Acknowledgments
We specially thank Boris Hanin and Sam Smith for their insightful comments and suggestions. Cattaneo gratefully acknowledges financial support from the National Science Foundation through DMS-2210561 and SES-2241575. Klusowski gratefully acknowledges financial support from the National Science Foundation through CAREER DMS-2239448, DMS-2054808, and HDR TRIPODS CCF-1934924.
|
2309.08425 | Quasi-BPS categories for symmetric quivers with potential | We study certain categories associated to symmetric quivers with potential,
called quasi-BPS categories. We construct semiorthogonal decompositions of the
categories of matrix factorizations for moduli stacks of representations of
(framed or unframed) symmetric quivers with potential, where the summands are
categorical Hall products of quasi-BPS categories. These results generalize our
previous results about the three loop quiver.
We prove several properties of quasi-BPS categories: wall-crossing
equivalence, strong generation, and categorical support lemma in the case of
tripled quivers with potential. We also introduce reduced quasi-BPS categories
for preprojective algebras, which have trivial relative Serre functor and are
indecomposable when the weight is coprime with the total dimension. In this
case, we regard the reduced quasi-BPS categories as noncommutative local
hyperkähler varieties, and as (twisted) categorical versions of crepant
resolutions of singularities of good moduli spaces of representations of
preprojective algebras.
The studied categories include the local models of quasi-BPS categories of K3
surfaces. In a follow-up paper, we establish analogous properties for quasi-BPS
categories of K3 surfaces. | Tudor Pădurariu, Yukinobu Toda | 2023-09-15T14:28:11Z | http://arxiv.org/abs/2309.08425v1 | # Quasi-BPS categories for symmetric quivers with potential
###### Abstract.
We study certain categories associated to symmetric quivers with potential, called quasi-BPS categories. We construct semiorthogonal decompositions of the categories of matrix factorizations for moduli stacks of representations of (framed or unframed) symmetric quivers with potential, where the summands are categorical Hall products of quasi-BPS categories. These results generalize our previous results about the three loop quiver.
We prove several properties of quasi-BPS categories: wall-crossing equivalence, strong generation, and categorical support lemma in the case of tripled quivers with potential. We also introduce reduced quasi-BPS categories for preprojective algebras, which have trivial relative Serre functor and are indecomposable when the weight is coprime with the total dimension. In this case, we regard the reduced quasi-BPS categories as noncommutative local hyperkahler varieties, and as (twisted) categorical versions of crepant resolutions of singularities of good moduli spaces of representations of preprojective algebras.
The studied categories include the local models of quasi-BPS categories of K3 surfaces. In a follow-up paper, we establish analogous properties for quasi-BPS categories of K3 surfaces.
## 1. Introduction
### Motivation
The BPS invariants [14, Section 2 and a half] and BPS cohomologies [15] are central objects in the study of Donaldson-Thomas (DT) theory and of (Kontsevich-Soibelman [17]) cohomological Hall algebras of a Calabi-Yau 3-fold or a quiver with potential. In this paper, we study certain subcategories of matrix factorizations associated with symmetric quivers with potential, called _quasi-BPS categories_. They were introduced by the first named author in [2] to prove a categorical version of the PBW theorem for cohomological Hall algebras [15]. As proved by the second named author [16], quivers with potential describe the local structure of moduli of sheaves on a Calabi-Yau 3-fold (CY3). Thus, the study of quasi-BPS categories for quivers with potential is expected to help in understanding the (yet to be defined) Donaldson-Thomas (DT) categories or quasi-BPS categories for global CY3 geometries.
A particular case of interest is that of _tripled quivers with potential_. A subclass of tripled quivers with potential gives a local model of the moduli stack of (Bridgeland semistable and compactly supported) sheaves on the local K3 surface
\[X=S\times\mathbb{C}, \tag{1.1}\]
where \(S\) is a K3 surface. This local description was used by Halpern-Leistner [HLa] to prove the D-equivalence conjecture for moduli spaces of stable sheaves on K3 surfaces, see [18] for its generalization. Tripled quivers with potential are also of interest in representation theory: the Hall algebras of a tripled quiver with potential are Koszul equivalent to the preprojective Hall algebras introduced by Schiffmann-Vasserot [19], Yang-Zhao [20], Varagnolo-Vasserot
[4], which are categorifications of positive halves of quantum affine algebras [21].
The tripled quiver with potential for the Jordan quiver is the quiver with one vertex and three loops \(\{X,Y,Z\}\), and with potential \(X[Y,Z]\). In our previous papers [26, 27], motivated by the search for a categorical analogue of the MacMahon formula, we studied quasi-BPS categories for the three loop quiver. In particular, we constructed semiorthogonal decompositions for the framed and unframed stacks of representations of the tripled quiver, we proved a categorical support lemma, and so on. The purpose of this paper is to generalize the results in [26, 27] to more general symmetric quivers with potential, with special attention to tripled quivers with potential. We also prove new results on quasi-BPS categories: first, we show that quasi-BPS categories are equivalent under wall-crossing; next, we introduce _reduced quasi-BPS categories_ and show that they are indecomposable when the weight is coprime with the total dimension.
In [26], we use the results of this paper to introduce and study quasi-BPS categories for (local) K3 surfaces, and discuss their relationship with (twisted) categorical crepant resolutions of singular symplectic moduli spaces of semistable sheaves on K3 surfaces.
### Quasi-BPS categories
For a symmetric quiver \(Q=(I,E)\) and a dimension vector \(d\in\mathbb{N}^{I}\), consider
\[\operatorname{Tr}W\colon\mathscr{X}(d):=R(d)/G(d)\to\mathbb{C}\]
the moduli stack of representations of \(Q\) of dimension \(d\), together with the regular function determined by the potential \(W\). Let \(M(d)_{\mathbb{R}}^{W_{d}}\) be the set of Weyl invariant real weights of the maximal torus \(T(d)\subset G(d)\). For \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\), consider the (ungraded or graded) _quasi-BPS category_:
\[\mathbb{S}^{\bullet}(d;\delta)\subset\operatorname{MF}^{\bullet}(\mathscr{X} (d),\operatorname{Tr}W)\text{ for }\bullet\in\{\emptyset,\operatorname{gr}\}, \tag{1.2}\]
which is the category of matrix factorizations with factors in
\[\mathbb{M}(d;\delta)\subset D^{b}(\mathscr{X}(d)), \tag{1.3}\]
a noncommutative resolution of the coarse space of \(\mathscr{X}(d)\) constructed by Spenko-Van den Bergh [10].
When \(d\) is primitive and \(\delta,\ell\in M(d)_{\mathbb{R}}^{W_{d}}\) are generic of weight zero with respect to the diagonal torus \(\mathbb{C}^{*}\subset G(d)\), Halpern-Leistner-Sam's magic window theorem [12] says that there is an equivalence:
\[\mathbb{M}(d;\delta)\overset{\sim}{\to}D^{b}\big{(}X(d)^{\ell\text{-ss}} \big{)}. \tag{1.4}\]
Here, \(\mathscr{X}(d)^{\ell\text{-ss}}\to X(d)^{\ell\text{-ss}}\) is the GIT quotient of the \(\ell\)-semistable locus, which is a \(\mathbb{C}^{*}\)-gerbe, and \(X(d)^{\ell\text{-ss}}\) is a smooth quasi-projective variety. However, there is no equivalence (1.4) for non-primitive \(d\). In this case, the stack \(\mathscr{X}(d)^{\ell\text{-ss}}\) contains strictly semistable representations, the morphism \(\mathscr{X}(d)^{\ell\text{-ss}}\to X(d)^{\ell\text{-ss}}\) is more complicated, and \(X(d)^{\ell\text{-ss}}\) is usually singular. Nevertheless, under some conditions on \(\delta\), we expect \(\mathbb{M}(d;\delta)\) to behave as the derived category of a smooth quasi-projective variety. Thus it is interesting to investigate the structure of \(\mathbb{M}(d;\delta)\) or \(\mathbb{S}(d;\delta)\), especially when \(d\) is non-primitive.
As its name suggests, the category \(\mathbb{S}(d;\delta)\) was introduced in [28] as a categorical version of _BPS invariants_. The BPS invariants for CY3-folds are fundamental enumerative invariants which determine other enumerative invariants of interest,
such as Donaldson-Thomas (DT) and Gromov-Witten invariants [14, Section 2 and a half]. There are BPS cohomologies whose Euler characteristics equal the BPS invariants, defined by Davison-Meinhardt [15] in the case of symmetric quivers with potential and by Davison-Hennecart-Schlegel Mejia [16] in the case of local K3 surfaces. For general CY 3-fold, up to the existence of a certain orientation data, the BPS cohomologies are defined in [13, Definition 2.11].
In [14], we make the relation between quasi-BPS categories and BPS cohomologies more precise: we describe the topological K-theory of (1.2) in terms of BPS cohomologies, and show that, under some extra condition, they are isomorphic.
### Semiorthogonal decompositions
In [15], the first named author constructed semiorthogonal decompositions of the categorical Hall algebra of \((Q,W)\) in Hall products of quasi-BPS categories for all symmetric quivers \(Q\) and all potentials \(W\). However, the (combinatorial) data which parametrizes the summands is not easy to determine, and it is not very convenient for studying explicit wall-crossing geometries.
In this paper, we construct, for certain symmetric quivers, a different semiorthogonal decomposition which is more amenable to wall-crossing. We state the result in a particular case, see Theorem 4.2 for a more general statement which applies to all tripled quivers with potential.
Before stating Theorem 1.1, we introduce some notations. For a dimension vector \(d=(d^{i})_{i\in I}\), let \(\underline{d}=\sum_{i\in I}d^{i}\) be its total length. We set \(\tau_{d}=\frac{1}{\underline{d}}\left(\sum_{i\in I}\sum_{j=1}^{d^{i}}\beta_{i} ^{j}\right)\), where \(\beta_{i}^{j}\) are weights of the standard representation of \(G(d)\). We consider the following particular examples of quasi-BPS categories (1.2):
\[\mathbb{S}^{\bullet}(d)_{v}:=\mathbb{S}^{\bullet}(d;v\tau_{d}).\]
**Theorem 1.1**.: (Theorem 4.19) _Let \((Q,W)\) be a symmetric quiver with potential such that the number of loops at each vertex is odd and the number of edges between any two different vertices is even. Let \(\bullet\in\{\emptyset,\mathrm{gr}\}\). There is a semiorthogonal decomposition_
\[\mathrm{MF}^{\bullet}(\mathscr{X}(d),\mathrm{Tr}\,W)=\left\langle\bigotimes_{i =1}^{k}\mathbb{S}^{\bullet}(d_{i})_{v_{i}}:\frac{v_{1}}{\underline{d}_{1}}< \cdots<\frac{v_{k}}{\underline{d}_{k}}\right\rangle, \tag{1.5}\]
_where \((d_{i})_{i=1}^{k}\) is a partition of \(d\) and \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\)._
As in [15], we regard the above semiorthogonal decomposition as a categorical version of the PBW theorem for cohomological Hall algebras of Davison-Meinhardt [15].
We also study semiorthogonal decompositions for the moduli spaces \(\mathscr{X}^{f}(d)^{\mathrm{ss}}\) of semistable framed representations of \(Q\), consisting of framed \(Q\)-representations generated by the image of the maps from the framed vertex. We state a particular case, see Theorem 4.1 for a general statement which includes all tripled quivers with potential:
**Theorem 1.2**.: (Theorem 4.18) _In the setting of Theorem 1.1, we further take \(\mu\in\mathbb{R}\setminus\mathbb{Q}\). Then there is a semiorthogonal decomposition_
\[\mathrm{MF}^{\bullet}\left(\mathscr{X}^{f}(d)^{\mathrm{ss}},\mathrm{Tr}\,W \right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{S}^{\bullet}(d_{i})_{v_{i}}: \mu\leqslant\frac{v_{1}}{\underline{d}_{1}}<\cdots<\frac{v_{k}}{\underline{d}_ {k}}<1+\mu\right\rangle, \tag{1.6}\]
_where \((d_{i})_{i=1}^{k}\) is a partition of \(d\) and \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\)._
We proved Theorem 1.2 for the three loop quiver in [PTa, Theorem 1.1] in order to give a categorical analogue of the MacMahon formula for Hilbert schemes of points on \(\mathbb{C}^{3}\). Theorem 1.2 gives a generalization of [PTa, Theorem 1.1]. As explained in [PTa], the semiorthogonal decomposition (1.6) is regarded as a categorical analogue of DT/BPS wall-crossing formula [Bri11, Tod10, Tod12], whose motivic/ cohomological version is due to Meinhardt-Reineke [MR19].
Theorems 1.1 and 1.2 are two of the main tools we use to further investigate quasi-BPS categories. For example, they are central in the proof of Theorem 1.4 and in the results of [PTd].
### Categorical wall-crossing of quasi-BPS categories
Let \(d\) be a primitive dimension vector and let \(\ell,\ell^{\prime}\in M(d)_{\mathbb{R}}^{W_{d}}\) be generic stability conditions. Then there is a birational map between two crepant resolutions of \(X(d)\):
\[X(d)^{\ell\text{-ss}}\dashrightarrow X(d)^{\ell^{\prime}\text{-ss}} \tag{1.7}\]
As a corollary of the magic window theorem (1.4) of Halpern-Leistner-Sam, there is a derived equivalence:
\[D^{b}\big{(}X(d)^{\ell\text{-ss}}\big{)}\simeq D^{b}\big{(}X(d)^{\ell^{\prime }\text{-ss}}\big{)},\]
which proves the D/K equivalence conjecture of Bondal-Orlov [BO], Kawamata [Kaw02] for the resolutions (1.7).
We prove an analogous result when \(d\) is not necessarily primitive, in which case there may be no stability condition such that the \(\mathbb{C}^{*}\)-rigidified moduli stack of semistable representations is a Deligne-Mumford stack. For a stability condition \(\ell\) on \(Q\), we define a quasi-BPS category
\[\mathbb{S}^{\ell}(d;\delta)\subset\operatorname{MF}\big{(}\mathscr{X}(d)^{ \ell\text{-ss}},\operatorname{Tr}W\big{)}\]
which is, locally on the good moduli space \(\mathscr{X}(d)^{\ell\text{-ss}}\to X(d)^{\ell\text{-ss}}\), modeled by a category (1.2).
**Theorem 1.3**.: (Theorem 3.14) _Let \((Q,W)\) be a symmetric quiver with potential, and let \(\ell,\ell^{\prime}\) be generic stability conditions. Then there is a dense open subset \(U\subset M(d)_{\mathbb{R}}^{W_{d}}\) such that, for \(\delta\in U\), there is an equivalence:_
\[\mathbb{S}^{\ell}(d;\delta)\simeq\mathbb{S}^{\ell^{\prime}}(d;\delta). \tag{1.8}\]
Note that BPS invariants are preserved under wall-crossing [Tod23, Lemma 4.7]. We regard Theorem 1.3 as the categorical analogue of this property.
### Categorical support lemma for quasi-BPS categories of tripled quivers
For a quiver \(Q^{\circ}=(I,E^{\circ})\), let \((Q^{\circ,d},\mathscr{I})\) be its doubled quiver with relation \(\mathscr{I}\), and let \((Q,W)\) be its tripled quiver with potential, see Subsection 2.2.6. Tripled quivers with potential form an important class of symmetric quivers with potential. Hall algebras of tripled quivers with potential are isomorphic to preprojective Hall algebras, which are themselves positive parts of quantum affine algebras [NSS]. An important ingredient in the study of these Hall algebras is Davison's support lemma [Dava, Lemma 4.1] for BPS sheaves of tripled quivers with potential, which is used to prove purity of various cohomologies [Dava, Davb].
Inspired by Davison's support lemma, we studied in [PTb, Theorem 1.1] the support of objects in quasi-BPS categories for the tripled quiver with potential of
the Jordan quiver, and we used it to obtain generators for the integral equivariant K-theory of certain quasi-BPS categories [16, Theorem 1.2].
We prove an analogous result to [16, Theorem 1.1] for (certain) tripled quivers with potential, see Theorem 1.4. The examples we study include all Ext-quivers of polystable sheaves (for a Bridgeland stability condition) on a local K3 surface as in (1.1). We use Theorem 1.4 to show relative properness of reduced quasi-BPS categories in Theorem 1.5.
Let \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) be a dimension vector and let \(\mathfrak{g}(d)\) be the Lie algebra of \(G(d)\). There is a projection map which remembers the linear map on the added loops (i.e. edges of the tripled quiver which are added to the doubled quiver):
\[\mathcal{X}(d)\to\mathfrak{g}(d)/G(d).\]
It induces a map
\[\pi\colon\operatorname{Crit}\left(\operatorname{Tr}W\right)\hookrightarrow \mathcal{X}(d)\to\mathfrak{g}(d)/G(d)\to\mathfrak{g}(d)/\!\!/G(d)=\prod_{i\in I }\operatorname{Sym}^{d^{i}}(\mathbb{C}).\]
Consider the diagonal map
\[\Delta\colon\mathbb{C}\hookrightarrow\prod_{i\in I}\operatorname{Sym}^{d^{i}} (\mathbb{C}).\]
For two vertices \(a,b\in I\) of the quiver \(Q^{\circ}\), let \(\delta_{ab}=1\) if \(a=b\) and \(\delta_{ab}=0\) otherwise, and define:
\[\alpha_{a,b}:=\sharp(a\to b\text{ in }E^{\circ})+\sharp(b\to a\text{ in }E^{\circ})-2\delta_{ab}. \tag{1.9}\]
**Theorem 1.4**.: (Theorem 5.1) _Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver such that \(\alpha_{a,b}\) is even for any \(a,b\in I\). Let \((Q,W)\) be the tripled quiver with potential of \(Q^{\circ}\). If \(\gcd(v,\underline{d})=1\), then any object of \(\mathbb{S}(d)_{v}\) is supported over \(\pi^{-1}(\Delta)\)._
### Quasi-BPS categories for reduced stacks
We now explain a modification of the categories (1.2) with better geometric properties in the case of tripled quivers with potential. We first introduce notations related to stacks of representations of doubled quivers.
Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver with stack of representations \(\mathcal{X}^{\circ}(d)=R^{\circ}(d)/G(d)\). Let \(\mathcal{P}(d):=\mu^{-1}(0)/G(d)\) be the derived moduli stack of dimension \(d\) representations of the preprojective algebra of \(Q^{\circ}\) (equivalently, of \((Q^{\circ,d},\mathcal{I})\)-representations), where \(\mu^{-1}(0)\) is the derived zero locus of the moment map
\[\mu\colon T^{*}R^{\circ}(d)\to\mathfrak{g}(d). \tag{1.10}\]
Consider the good moduli space
\[\mathcal{P}(d)^{\mathrm{cl}}\to P(d)=\mu^{-1}(0)/\!\!/G(d).\]
In many cases, \(P(d)\) is a singular symplectic variety and the study of its (geometric or non-commutative or categorical) resolutions is related to the study of hyperkahler varieties. Note that the variety \(P(d)\) may not have geometric crepant resolutions of singularities, for example if \(Q^{\circ}\) is the quiver with one vertex and \(g\geqslant 2\) loops, \(d\geqslant 2\), and \((g,d)\neq(2,2)\), see [11, Proposition 3.5, Theorem 6.2].
Under the Koszul equivalence, the graded quasi-BPS category \(\mathbb{S}^{\mathrm{gr}}(d)_{v}\) for the tripled quiver with potential of the quiver \(Q^{\circ}\) is equivalent to the preprojective quasi-BPS category:
\[\mathbb{T}(d)_{v}\subset D^{b}\left(\mathcal{P}(d)\right).\]
The stack \(\mathscr{P}(d)\) is never classical because the image of the moment map \(\mu\) lies in the Lie subalgebra \(\mathfrak{g}(d)_{0}\subset\mathfrak{g}(d)\) of traceless elements. We consider the reduced stack
\[\mathscr{P}(d)^{\operatorname{red}}:=\mu_{0}^{-1}(0)/G(d),\]
where \(\mu_{0}\colon T^{*}R^{\circ}(d)\to\mathfrak{g}(d)_{0}\). We study the reduced quasi-BPS category
\[\mathbb{T}:=\mathbb{T}(d)_{v}^{\operatorname{red}}\subset D^{b}\big{(} \mathscr{P}(d)^{\operatorname{red}}\big{)}. \tag{1.11}\]
Recall \(\alpha_{a,b}\) from (1.9) and define \(\alpha_{Q^{\circ}}:=\min\{\alpha_{a,b}\mid a,b\in I\}\). We use Theorem 1.4 to prove the following:
**Theorem 1.5**.: (Propositions 4.22 and 5.9, Theorem 5.10, Corollary 5.13) _In the setting of Theorem 1.4, suppose that \(\gcd(v,\underline{d})=1\). Then:_
_(i) If \(\alpha_{Q^{\circ}}\geqslant 2\), the category \(\mathbb{T}\) is regular, and it is proper over \(P(d)\)._
_(ii) Suppose furthermore that \(P(d)\) is Gorenstein, e.g. \(\alpha_{Q^{\circ}}\geqslant 3\). Then there exists a relative Serre functor \(\mathbb{S}_{\mathbb{T}/P(d)}\) of \(\mathbb{T}\) over \(P(d)\), and it satisfies \(\mathbb{S}_{\mathbb{T}/P(d)}\cong\operatorname{id}_{\mathbb{T}}\)._
_(iii) In the situation of (ii), \(\mathbb{T}\) does not admit any non-trivial semiorthogonal decomposition._
Inspired by the above theorem, we regard (1.11) as a noncommutative local hyperkahler variety, which is a (twisted) categorical version of a crepant resolution of singularities of \(P(d)\). It is an interesting question to see the relation with categorical crepant resolutions in the sense of Kuznetsov [15] or noncommutative crepant resolutions in the sense of Van den Bergh [21]. We plan to investigate this relation in future work.
In [19], we use Theorem 1.5 to study reduced quasi-BPS categories for a K3 surface \(S\). In particular, we show that these categories are a (twisted) categorical version of a crepant resolution of the moduli space
\[M_{S}^{H}(v)\]
of \(H\)-Gieseker semistable sheaves on \(S\), where \(H\) is a generic stability condition and \(v\) is a non-primitive Mukai vector such that \(\langle v,v\rangle\geqslant 2\), compare with [16] in the geometric case.
### Acknowledgments
We thank Tasuki Kinjo, Davesh Maulik, Yalong Cao, Junliang Shen, Georg Oberdieck, and Jorgen Rennemo for discussions related to this work. T. P. is grateful to Columbia University in New York and to Max Planck Institute for Mathematics in Bonn for their hospitality and financial support during the writing of this paper. The project of this paper started when Y. T. was visiting Columbia University in April 2023. Y. T. thanks Columbia University for their hospitality. Y. T. is supported by World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and Grant-in Aid for Scientific Research grant (No. 19H01779) from MEXT, Japan.
### Notations
We list the main notation used in the paper in Table 1.
All the spaces \(\mathscr{X}=X/G\) considered are quasi-smooth (derived) quotient stacks over \(\mathbb{C}\), where \(G\) is an algebraic group. The classical truncation of \(\mathscr{X}\) is denoted by \(\mathscr{X}^{\operatorname{cl}}=X^{\operatorname{cl}}/G\). We assume that \(X^{\operatorname{cl}}\) is a quasi-projective scheme. We denote by \(\mathbb{L}_{\mathscr{X}}\) the cotangent complex of \(\mathscr{X}\). Any dg-category considered is a \(\mathbb{C}\)-linear pre-triangulated dg-category, in particular its homotopy category is a triangulated category. We denote by \(D_{\operatorname{qc}}(\mathscr{X})\) the unbounded derived category of quasi-coherent sheaves, by \(D^{b}(\mathscr{X})\) the bounded derived category of coherent sheaves, and by \(\operatorname{Perf}(\mathscr{X})\) its subcategory of perfect complexes.
Let \(R\) be a set. Consider a set \(O\subset R\times R\) such that for any \(i,j\in R\) we have \((i,j)\in O\), or \((j,i)\in O\), or both \((i,j)\in O\) and \((j,i)\in O\). Let \(\mathbb{T}\) be a pre-triangulated dg-category. We will construct semiorthogonal decompositions
\[\mathbb{T}=\langle\mathbb{A}_{i}\mid i\in R\rangle \tag{1.12}\]
with summands pre-triangulated subcategories \(\mathbb{A}_{i}\) indexed by \(i\in R\) such that for any \(i,j\in R\) with \((i,j)\in O\) and for any objects \(\mathcal{A}_{i}\in\mathbb{A}_{i}\), \(\mathcal{A}_{j}\in\mathbb{A}_{j}\), we have \(\operatorname{Hom}_{\mathbb{T}}(\mathcal{A}_{i},\mathcal{A}_{j})=0\).
Consider a morphism \(\pi\colon\mathcal{X}\to S\). We say the semiorthogonal decomposition (1.12) is _\(S\)-linear_ if \(\mathbb{A}_{i}\otimes\pi^{*}\mathrm{Perf}(S)\subset\mathbb{A}_{i}\) for all \(i\in R\).
We use the terminology of _good moduli spaces_ of Alper, see [1, Section 8] for examples of stacks with good moduli spaces.
## 2. Preliminaries
### Matrix factorizations
We briefly review the definition of categories of matrix factorizations. For more details, see [11, Subsection 2.6].
Consider a smooth quotient stack \(\mathcal{X}=X/G\), where \(G\) is an algebraic group acting on a smooth affine scheme \(X\), with a regular function \(f\colon\mathcal{X}\to\mathbb{C}\). We denote the category of matrix factorizations by
\[\operatorname{MF}(\mathcal{X},f),\]
whose objects are tuples
\[(\alpha\colon A\leftrightarrows B\colon\beta),\ \alpha\circ\beta=\cdot f,\ \beta\circ\alpha=\cdot f, \tag{2.1}\]
where \(A,B\in\operatorname{Coh}(\mathcal{X})\). If \(\mathbb{M}\subset D^{b}(\mathcal{X})\) is a subcategory, let
\[\operatorname{MF}(\mathbb{M},f)\subset\operatorname{MF}(\mathcal{X},f) \tag{2.2}\]
be the subcategory consisting of totalizations of tuples (2.1) with \(A,B\in\mathbb{M}\), see [11, Subsection 2.6] for the precise definition. If \(\mathbb{M}\) is generated by a set of vector bundles \(\{\mathcal{V}_{i}\}_{i\in I}\) on \(\mathcal{X}\), then (2.2) is generated by matrix factorizations whose factors are direct sums of vector bundles from \(\{\mathcal{V}_{i}\}_{i\in I}\), see [11, Lemma 2.3].
Given an action of \(\mathbb{C}^{*}\) on \(\mathcal{X}\) for which \(f\) is of weight \(2\), we also consider the category of graded matrix factorizations \(\operatorname{MF}^{\mathrm{gr}}(\mathcal{X},f)\). Its objects consist of tuples (2.1) where \(A,B\) are \(\mathbb{C}^{*}\)-equivariant and \(\alpha,\beta\) are of \(\mathbb{C}^{*}\)-weight one. For a subcategory \(\mathbb{M}\subset D^{b}(\mathcal{X})\), we define \(\operatorname{MF}^{\mathrm{gr}}(\mathbb{M},f)\subset\operatorname{MF}^{ \mathrm{gr}}(\mathcal{X},f)\) similarly to (2.2).
Let \(\mathcal{Z}\subset\mathcal{X}\) be a closed substack. A matrix factorization \(F\) in \(\operatorname{MF}(\mathcal{X},f)\) has support in \(\mathcal{Z}\) if its restriction to \(\operatorname{MF}(\mathcal{X}\setminus\mathcal{Z},f)\) is zero. Every matrix factorization \(F\) has support included in \(\operatorname{Crit}(f)\subset\mathcal{X}\), so for any open substack \(\mathcal{U}\subset\mathcal{X}\) which contains \(\operatorname{Crit}(f)\), the following restriction functor is an equivalence:
\[\operatorname{MF}(\mathcal{X},f)\xrightarrow{\sim}\operatorname{MF}(\mathcal{ U},f). \tag{2.3}\]
We deduce semiorthogonal decompositions for a quiver with potential \((Q,W)\) from the case of zero potential, see for example [11, Proposition 2.5], [12, Proposition 2.1]. We extensively use the Koszul equivalence, see Theorem 2.5.
We consider either ungraded categories of matrix factorizations or graded categories which are Koszul equivalent to derived categories of bounded complexes of coherent sheaves on a quasi-smooth stack. When considering the product of two categories of matrix factorizations, as in the context of the Thom-Sebastiani theorem, we consider the product of dg-categories over \(\mathbb{C}(\beta)\) for \(\beta\) of homological
degree \(-2\) in the ungraded case, see [Pre, Theorem 4.1.3], and the product of dg-categories over \(\mathbb{C}\) in the graded case, see [1, Corollary 5.18] (alternatively in the graded case, one can use the Koszul equivalence).
### Quivers, weights, and partitions
#### 2.2.1. Basic notions
Let \(Q=(I,E)\) be a quiver, i.e. a directed graph with set of vertices \(I\) and set of edges \(E\). Let \(d=(d^{a})_{a\in I}\in\mathbb{N}^{I}\) be a dimension vector. Denote
Figure 1. Notation used in the paper
by
\[\mathscr{X}(d)=R(d)/G(d)\]
the stack of representations of \(Q\) of dimension \(d\). Here \(R(d)\), \(G(d)\) are given by
\[R(d)=\bigoplus_{(a\to b)\in E}\operatorname{Hom}(V^{a},V^{b}),\ G(d)=\prod_{a \in I}GL(V^{a}).\]
We say that \(Q\) is _symmetric_ if for any \(a,b\in I\), the number of arrows from \(a\) to \(b\) is the same as those from \(b\) to \(a\). In this case, \(R(d)\) is a self-dual \(G(d)\)-representation. We have the good moduli space morphism (or GIT quotient)
\[\pi_{X,d}=\pi_{X}\colon\mathscr{X}(d)\to X(d):=R(d)/\!\!/G(d).\]
For a quiver \(Q\), let \(\mathbb{C}[Q]\) be its path algebra. A potential \(W\) of a quiver \(Q\) is an element
\[W\in\mathbb{C}[Q]/[\mathbb{C}[Q],\mathbb{C}[Q]].\]
A pair \((Q,W)\) is called a _quiver with potential_. Given a potential \(W\), there is a regular function
\[\operatorname{Tr}W\colon\mathscr{X}(d)\to\mathbb{C}. \tag{2.4}\]
By the property of the good moduli space, the function \(\operatorname{Tr}W\) factors through the good moduli space \(\operatorname{Tr}W\colon\mathscr{X}(d)\xrightarrow{\pi_{X,d}}X(d)\to\mathbb{C}\).
We will consider the derived category \(D^{b}(\mathscr{X}(d))\) of coherent sheaves on \(\mathscr{X}(d)\) and the category of matrix factorizations \(\operatorname{MF}(\mathscr{X}(d),\operatorname{Tr}W)\). Since the diagonal torus \(\mathbb{C}^{*}\subset T(d)\) acts on \(R(d)\) trivially, there are orthogonal decompositions
\[D^{b}(\mathscr{X}(d))=\bigoplus_{w\in\mathbb{Z}}D^{b}(\mathscr{X}(d))_{w},\ \operatorname{MF}(\mathscr{X}(d),\operatorname{Tr}W)=\bigoplus_{w\in\mathbb{Z} }\operatorname{MF}(\mathscr{X}(d),\operatorname{Tr}W)_{w} \tag{2.5}\]
where each summand corresponds to the diagonal \(\mathbb{C}^{*}\)-weight \(w\)-part.
#### 2.2.2. The weight lattice
We fix a maximal torus \(T(d)\) of \(G(d)\). Let \(M(d)\) be the weight lattice of \(T(d)\). For \(a\in I\) and for \(d^{a}\in\mathbb{N}\), denote by \(\beta_{i}^{a}\) for \(1\leqslant i\leqslant d^{a}\) the weights of the standard representation of \(T(d^{a})\). We have
\[M(d)=\bigoplus_{a\in I}\bigoplus_{1\leqslant i\leqslant d^{a}}\mathbb{Z} \beta_{i}^{a}.\]
By abuse of notation, for a \(T(d)\)-representation \(U\) we also denote by \(U\in M(d)\) the sum of weights in \(U\), equivalently, the class of the character \(\det U\). A weight
\[\chi=\sum_{a\in I}\sum_{1\leqslant i\leqslant d^{a}}x_{i}^{a}\beta_{i}^{a}\]
is _dominant (or antidominant)_ if \(x_{i}^{a}\leqslant x_{i+1}^{a}\) (or \(x_{i}^{a}\geqslant x_{i+1}^{a}\)) for all \(a\in I\) and \(1\leqslant i\leqslant d^{a}\). For a dominant weight \(\chi\), we denote by \(\Gamma_{G(d)}(\chi)\) the irreducible representation of \(G(d)\) with highest weight \(\chi\). The dominant or antidominant cocharacters are also defined for elements of the cocharacter lattice \(N(d)=\operatorname{Hom}(M(d),\mathbb{Z})\). We denote by \(1_{d}\in N(d)\) the diagonal cocharacter.
We denote by \(M(d)_{0}\subset M(d)\) the hyperplane of weights with sum of coefficients equal to zero, and set
\[M(d)_{\mathbb{R}}:=M(d)\otimes_{\mathbb{Z}}\mathbb{R},\ M(d)_{0,\mathbb{R}}:= M(d)_{0}\otimes_{\mathbb{Z}}\mathbb{R}.\]
Note that \(M(d)_{0}\) is the weight lattice of the subtorus \(ST(d)\subset T(d)\) defined by
\[ST(d):=\ker(\det\colon T(d)\to\mathbb{C}^{*}),\ (g^{a})_{a\in I}\stackrel{{ \mathrm{det}}}{{\mapsto}}\prod_{a\in I}\det(g^{a}).\]
We denote by \(\langle\,,\,\rangle\colon N(d)\times M(d)\to\mathbb{Z}\) the natural pairing, and we use the same notation for its real version. If \(\lambda\) is a cocharacter of \(T(d)\) and \(V\) is a \(T(d)\)-representation, we may abuse notation and write
\[\langle\lambda,V\rangle=\langle\lambda,\det(V)\rangle\]
to ease notation.
We denote by \(W_{d}\) the Weyl group of \(G(d)\) and \(M(d)^{W_{d}}\subset M(d)\) the Weyl-invariant subset. For \(d=(d^{a})_{a\in I}\), let \(\underline{d}=\sum_{a\in I}d^{a}\) be its total length. Define the Weyl-invariant weights in \(M(d)_{\mathbb{R}}\):
\[\sigma_{d}:=\sum_{a\in I}\sum_{1\leqslant i\leqslant d^{a}}\beta_{i}^{a},\ \tau_{d}:=\frac{\sigma_{d}}{\underline{d}}.\]
We denote by \(\mathfrak{g}(d)\) the Lie algebra of \(G(d)\), and by \(\rho\) half the sum of positive roots of \(\mathfrak{g}(d)\):
\[\rho=\frac{1}{2}\sum_{a\in I}\sum_{1\leqslant i<j\leqslant d^{a}}(\beta_{j}^{a}-\beta_{i}^{a}).\]
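For instance, if \(Q\) has a single vertex \(a\) and \(d=2\), then, writing \(\beta_{i}\) for \(\beta_{i}^{a}\),
\[\sigma_{d}=\beta_{1}+\beta_{2},\qquad\tau_{d}=\frac{1}{2}(\beta_{1}+\beta_{2}),\qquad\rho=\frac{1}{2}(\beta_{2}-\beta_{1}).\]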
#### 2.2.3.
Let \((d_{i})_{i=1}^{k}\) be a partition of \(d\). There is an identification
\[\bigoplus_{i=1}^{k}M(d_{i})\cong M(d),\]
where \(\beta_{1}^{a},\dots,\beta_{d_{1}^{a}}^{a}\) correspond to the weights of the standard representation of \(GL(d_{1}^{a})\) in \(M(d_{1}^{a})\) for \(a\in I\), etc.
**Definition 2.1**.: Let \(\underline{e}=(e_{i})_{i=1}^{l}\) and \(\underline{d}=(d_{i})_{i=1}^{k}\) be two partitions of \(d\in\mathbb{N}^{I}\). We write \(\underline{e}\geqslant\underline{d}\) if there exist integers
\[a_{0}=0<a_{1}<\dots<a_{k-1}\leqslant a_{k}=l\]
such that for any \(0\leqslant j\leqslant k-1\), we have
\[\sum_{i=a_{j}+1}^{a_{j+1}}e_{i}=d_{j+1}.\]
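For example, for a quiver with a single vertex, so that dimension vectors are non-negative integers, we have \(\underline{e}=(1,2,1,3)\geqslant\underline{d}=(3,4)\), witnessed by
\[a_{0}=0<a_{1}=2\leqslant a_{2}=4,\qquad e_{1}+e_{2}=1+2=3=d_{1},\qquad e_{3}+e_{4}=1+3=4=d_{2}.\]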
We next define a tree which is useful in decomposing dominant weights of \(M(d)_{\mathbb{R}}\).
**Definition 2.2**.: We define \(\mathcal{T}\) to be the unique (oriented) tree such that:
1. each vertex is indexed by a partition \((d_{1},\dots,d_{k})\) of some \(d\in\mathbb{N}^{I}\),
2. for each \(d\in\mathbb{N}^{I}\), there is a unique vertex indexed by the partition \((d)\) of size one,
3. if \(\bullet\) is a vertex indexed by \((d_{1},\dots,d_{k})\) and \(d_{m}=(e_{1},\dots,e_{s})\) is a partition of \(d_{m}\) for some \(1\leqslant m\leqslant k\), then there is a unique vertex \(\bullet^{\prime}\) indexed by \((d_{1},\dots,d_{m-1},e_{1},\dots,e_{s},d_{m+1},\dots,d_{k})\) and with an edge from \(\bullet\) to \(\bullet^{\prime}\), and
4. all edges in \(\mathcal{T}\) are as in (3).
Note that each partition \((d_{1},\ldots,d_{k})\) of some \(d\in\mathbb{N}^{I}\) gives an index of some (not necessary unique) vertex. A subtree \(T\subset\mathcal{T}\) is called a _path of partitions_ if it is connected, contains a vertex indexed by \((d)\) for some \(d\in\mathbb{N}^{I}\) and a unique end vertex \(\bullet\). The partition \((d_{1},\ldots,d_{k})\) at the end vertex \(\bullet\) is called the associated partition of \(T\). We define the Levi group associated to \(T\) to be
\[L(T):=\times_{i=1}^{k}G(d_{i}).\]
#### 2.2.4. Framed quivers
Consider a quiver \(Q=(I,E)\). Define the _framed quiver_:
\[Q^{f}=(I^{f},E^{f})\]
with set of vertices \(I^{f}=I\sqcup\{\infty\}\) and set of edges \(E^{f}=E\sqcup\{e_{a}\mid a\in I\}\), where \(e_{a}\) is an edge from \(\infty\) to \(a\in I\). Let \(V(d)=\bigoplus_{a\in I}V^{a}\), where \(V^{a}\) is a \(\mathbb{C}\)-vector space of dimension \(d^{a}\). Denote by
\[R^{f}(d)=R(d)\oplus V(d)\]
the affine space of representations of \(Q^{f}\) of dimension \((1,d)\) and consider the moduli stack of framed representations
\[\mathcal{X}^{f}(d):=R^{f}(d)/G(d).\]
We consider GIT stability on \(Q^{f}\) given by the character \(\sigma_{\underline{d}}\). It coincides with the King stability condition on \(Q^{f}\) such that the (semi)stable representations of dimension \((1,d)\) are the representations of \(Q^{f}\) with no subrepresentations of dimension \((1,d^{\prime})\) for \(d^{\prime}\) different from \(d\), see [Toda, Lemma 5.1.9]. Consider the smooth variety obtained as a GIT quotient:
\[\mathcal{X}^{f}(d)^{\rm ss}:=R^{f}(d)^{\rm ss}/G(d).\]
#### 2.2.5. The categorical Hall product
For a cocharacter \(\lambda\colon\mathbb{C}^{*}\to T(d)\) and a \(T(d)\)-representation \(V\), let \(V^{\lambda\geqslant 0}\subset V\) be the subspace generated by weights \(\beta\) such that \(\langle\lambda,\beta\rangle\geqslant 0\), and let \(V^{\lambda}\subset V\) be the subspace generated by weights \(\beta\) such that \(\langle\lambda,\beta\rangle=0\). We denote by \(G(d)^{\lambda}\subset G(d)^{\lambda\geqslant 0}\) the associated Levi and parabolic subgroup of \(G(d)\). If \(V\) is a \(G(d)\)-representation, consider the quotient stack \(\mathcal{X}=V/G(d)\) and set:
\[\mathcal{X}^{\lambda\geqslant 0}=V^{\lambda\geqslant 0}/G(d)^{\lambda\geqslant 0 },\ \mathcal{X}^{\lambda}=V^{\lambda}/G(d)^{\lambda}.\]
The projection \(V^{\lambda\geqslant 0}\twoheadrightarrow V^{\lambda}\) and the inclusion \(V^{\lambda\geqslant 0}\hookrightarrow V\) induce maps
\[\mathcal{X}^{\lambda}\leftarrow\mathcal{X}^{\lambda\geqslant 0}\rightarrow \mathcal{X}. \tag{2.6}\]
We apply the above construction for \(V=R(d)\) to obtain maps:
\[\mathcal{X}(d)^{\lambda}=\times_{i=1}^{k}\mathcal{X}(d_{i})\stackrel{{ q_{\lambda}}}{{\leftarrow}}\mathcal{X}(d)^{\lambda\geqslant 0}\stackrel{{ p_{\lambda}}}{{\rightarrow}}\mathcal{X}(d).\]
Suppose that \(\lambda\) is antidominant with associated partition \((d_{i})_{i=1}^{k}\) of \(d\in\mathbb{N}^{I}\), meaning that
\[\mathcal{X}(d)^{\lambda}=\times_{i=1}^{k}\mathcal{X}(d_{i}).\]
The multiplication for the categorical Hall algebra of \((Q,0)\) (or of \((Q,W)\) for a potential \(W\) of \(Q\) and possibly a grading) is defined by the functors [Pad22], where \(\bullet\in\{\emptyset,\mathrm{gr}\}\):
\[m_{\lambda} :=p_{\lambda*}q_{\lambda}^{*}\colon\boxtimes_{i=1}^{k}D^{b}( \mathcal{X}(d_{i}))\to D^{b}(\mathcal{X}(d)),\] \[m_{\lambda} :=p_{\lambda*}q_{\lambda}^{*}\colon\boxtimes_{i=1}^{k}\mathrm{MF}^ {\bullet}(\mathcal{X}(d_{i}),\mathrm{Tr}\,W)\rightarrow\mathrm{MF}^{\bullet}( \mathcal{X}(d),\mathrm{Tr}\,W). \tag{2.7}\]
#### 2.2.6. Doubled quiver
Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver. Its _doubled quiver_ is the quiver
\[Q^{\circ,d}=(I,E^{\circ,d})\]
with set of edges \(E^{\circ,d}=\{e,\overline{e}\ |\ e\in E^{\circ}\}\), where \(\overline{e}\) is the edge with the opposite orientation of \(e\in E^{\circ}\). Consider the relation \(\mathcal{I}\) of \(\mathbb{C}[Q^{\circ,d}]\):
\[\mathcal{I}:=\sum_{e\in E^{\circ}}[e,\overline{e}]\in\mathbb{C}[Q^{\circ,d}]. \tag{2.8}\]
For \(d\in\mathbb{N}^{I}\), consider the stack of representations of the quiver \(Q^{\circ,d}\) of dimension \(d\):
\[\mathcal{Y}(d):=\overline{R}(d)/G(d):=T^{*}R^{\circ}(d)/G(d)\]
with good moduli space map
\[\pi_{Y,d}=\pi_{Y}\colon\mathcal{Y}(d)\to Y(d):=\overline{R}(d)/\!\!/G(d).\]
Note that
\[\overline{R}(d):=T^{*}R^{\circ}(d)=\bigoplus_{(a\to b)\in E^{\circ}}\mathrm{ Hom}(V^{a},V^{b})\oplus\mathrm{Hom}(V^{b},V^{a}).\]
For \(x\in T^{*}R^{\circ}(d)\) and an edge \(e=(a\to b)\in E^{\circ}\), consider the components \(x(e)\in\mathrm{Hom}(V^{a},V^{b})\) and \(x(\overline{e})\in\mathrm{Hom}(V^{b},V^{a})\). The relation (2.8) determines the moment map
\[\mu\colon T^{*}R^{\circ}(d)\to\mathfrak{g}(d),\ x\mapsto\sum_{e\in E^{\circ}} [x(e),x(\overline{e})]. \tag{2.9}\]
Let \(\mu^{-1}(0)\) be the derived zero locus of \(\mu\). Define the stack of representations of \(Q^{\circ,d}\) with relation \(\mathcal{I}\), alternatively of the preprojective algebra \(\Pi_{Q^{\circ}}:=\mathbb{C}[Q^{\circ,d}]/(\mathcal{I})\):
\[j\colon\mathcal{P}(d):=\mu^{-1}(0)/G(d)\hookrightarrow\mathcal{Y}(d). \tag{2.10}\]
The stack \(\mathcal{P}(d)^{\mathrm{cl}}\) has a good moduli space map:
\[\pi_{P,d}=\pi_{P}\colon\mathcal{P}(d)^{\mathrm{cl}}\to P(d):=\mu^{-1}(0)^{ \mathrm{cl}}/\!\!/G(d). \tag{2.11}\]
Let \(\lambda\) be an antidominant cocharacter of \(T(d)\) corresponding to the decomposition \((d_{i})_{i=1}^{k}\) of \(d\). Similarly to (2.7), there is a categorical Hall product [11, 12], see also (2.19):
\[m_{\lambda}\colon\boxtimes_{i=1}^{k}D^{b}(\mathcal{P}(d_{i}))\to D^{b}(\mathcal{P}(d)). \tag{2.12}\]
**Remark 2.3**.: As mentioned in the introduction, moduli stacks of representations of preprojective algebras are interesting for at least two reasons: they describe locally the moduli stack of (Bridgeland semistable) sheaves on a Calabi-Yau surface, and their K-theory (or Borel-Moore homology) can be used to construct positive halves of quantum affine algebras (or of Yangians).
#### 2.2.7. Tripled quivers with potential
Consider a quiver \(Q^{\circ}=(I,E^{\circ})\). Its _tripled quiver_
\[Q=(I,E)\]
has set of edges \(E=E^{\circ,d}\sqcup\{\omega_{a}\ |\ a\in I\}\), where \(\omega_{a}\) is a loop at the vertex \(a\in I\). The tripled potential \(W\) (which is a potential of \(Q\)) is defined as follows:
\[W:=\left(\sum_{a\in I}\omega_{a}\right)\left(\sum_{e\in E^{\circ}}[e, \overline{e}]\right)\in\mathbb{C}[Q]. \tag{2.13}\]
The quiver with potential \((Q,W)\) constructed above is called the _tripled quiver with potential_ associated to \(Q^{\circ}\).
For \(d\in\mathbb{N}^{I}\), let \(\mathscr{X}(d)\) be the stack of representations of \(Q\) of dimension \(d\). It is given by
\[\mathscr{X}(d)=\left(T^{*}R^{\circ}(d)\oplus\mathfrak{g}(d)\right)/G(d)=\left( \overline{R}(d)\oplus\mathfrak{g}(d)\right)/G(d). \tag{2.14}\]
Recall the function \(\operatorname{Tr}W\) on \(\mathscr{X}(d)\) from (2.4). The critical locus
\[\operatorname{Crit}(\operatorname{Tr}W)\subset\mathscr{X}(d)\]
is the moduli stack of \((Q,W)\)-representations, alternatively of the Jacobi algebra \(\mathbb{C}[Q]/\mathscr{J}\), where \(\mathscr{J}\) is the two-sided ideal generated by the partial derivatives \(\partial W/\partial e\) for all \(e\in E\).
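To illustrate the construction, let \(Q^{\circ}\) be the quiver with one vertex and one loop. Then the tripled quiver \(Q\) has one vertex and three loops, \(\mathscr{X}(d)=\mathfrak{g}(d)^{\oplus 3}/G(d)\), and
\[\operatorname{Tr}W(X,Y,Z)=\operatorname{Tr}\left(Z[X,Y]\right).\]
The critical locus \(\operatorname{Crit}(\operatorname{Tr}W)\) is the stack of commuting triples of endomorphisms, equivalently the stack of \(d\)-dimensional \(\mathbb{C}[x,y,z]\)-modules, that is, of zero-dimensional sheaves of length \(d\) on \(\mathbb{C}^{3}\); compare the discussion of DT categories of points on \(\mathbb{C}^{3}\) in Section 4.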
Consider the grading on \(\mathscr{X}(d)\) which is of weight \(0\) on \(\overline{R}(d)\) and of weight \(2\) on \(\mathfrak{g}(d)\). The Koszul equivalence (which we recall later, in Theorem 2.5) says that there is an equivalence:
\[\Theta\colon D^{b}\left(\mathscr{P}(d)\right)\overset{\sim}{\to} \operatorname{MF}^{\operatorname{gr}}\left(\mathscr{X}(d),\operatorname{Tr}W \right). \tag{2.15}\]
**Remark 2.4**.: Tripled quivers with potential are interesting for at least two reasons: they model the local structure of moduli stacks of semistable sheaves on an arbitrary CY3, and they can be used, in conjunction with dimensional reduction techniques (such as the Koszul equivalence (2.15)), to study (moduli of representations of) preprojective algebras.
### The Koszul equivalence
Let \(Y\) be an affine smooth scheme with an action of a reductive group \(G\). Consider the quotient stack \(\mathscr{Y}=Y/G\). Let \(\mathscr{V}\to\mathscr{Y}\) be a vector bundle and let \(s\in\Gamma(\mathscr{Y},\mathscr{V})\). Let \(\mathscr{P}=s^{-1}(0)\) be the derived zero locus of \(s\), so its structure complex is the Koszul complex
\[\mathcal{O}_{\mathscr{P}}:=\left(\operatorname{Sym}_{\mathcal{O}_{\mathscr{Y} }}(\mathscr{V}^{\vee}[1]),d_{\mathscr{P}}\right)\]
with differential \(d_{\mathscr{P}}\) induced by \(s\). Let \(\mathscr{X}=\operatorname{Tot}_{\mathscr{Y}}(\mathscr{V}^{\vee})\) and define the function \(f\) by
\[f\colon\mathscr{X}\to\mathbb{C},\ f(y,v)=\langle s(y),v\rangle\]
for \(y\in\mathscr{Y}\) and \(v\in\mathscr{V}^{\vee}|_{y}\). There are maps:
\[\mathscr{P}\overset{j}{\hookrightarrow}\mathscr{Y}\overset{\eta}{\leftarrow} \mathscr{X}\overset{f}{\to}\mathbb{C}, \tag{2.16}\]
where \(j\) is the natural inclusion and \(\eta\) is the natural projection. The following is the Koszul equivalence:
**Theorem 2.5**.: ([14, Hir17, Toda]) _There are equivalences:_
\[\Theta\colon D^{b}(\mathscr{P})\overset{\sim}{\to}\operatorname{MF}^{ \operatorname{gr}}(\mathscr{X},f),\ \Theta\colon\operatorname{Ind}D^{b}(\mathscr{P})\overset{\sim}{\to} \operatorname{MF}^{\operatorname{gr}}_{\operatorname{qc}}(\mathscr{X},f).\]
_The grading on the right hand side has weight zero on \(\mathscr{Y}\) and weight \(2\) on the fibers of \(\eta\colon\mathscr{X}\to\mathscr{Y}\)._
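For instance, taking \(\mathscr{Y}=\mathbb{A}^{1}\) with trivial group, \(\mathscr{V}=\mathcal{O}_{\mathscr{Y}}\) and \(s=y\), we have that \(\mathscr{P}=\{0\}\) (with no derived structure, as \(s\) is a regular section), \(\mathscr{X}=\mathbb{A}^{2}\) with coordinates \((y,v)\), and \(f=yv\). The theorem then gives the graded Knorrer periodicity
\[D^{b}(\operatorname{Spec}\mathbb{C})\simeq\operatorname{MF}^{\operatorname{gr}}(\mathbb{A}^{2},yv),\]
and the image of \(\mathcal{O}_{\mathscr{P}}\) is the matrix factorization with both factors \(\mathcal{O}_{\mathbb{A}^{2}}\) (up to a twist of the grading) and maps given by multiplication by \(y\) and by \(v\).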
The equivalence \(\Theta\) is given by the functor
\[\Theta(-)=(-)\otimes_{\mathcal{O}_{\mathscr{P}}}\mathcal{K}, \tag{2.17}\]
where \(\mathcal{K}\) is the Koszul factorization \(\mathcal{K}:=(\mathcal{O}_{\mathscr{P}}\otimes_{\mathcal{O}_{\mathscr{Y}}} \mathcal{O}_{\mathscr{X}},d_{\mathcal{K}})\) with \(d_{\mathcal{K}}=d_{\mathscr{P}}\otimes 1+\kappa\), where \(\kappa\in\mathcal{V}^{\vee}\otimes\mathcal{V}\) corresponds to \(\operatorname{id}\in\operatorname{Hom}(\mathcal{V},\mathcal{V})\), see [Toda, Theorem 2.3.3]. The following lemma is easily proved from the above description of \(\Theta\).
**Lemma 2.6**.: _Let \(\{V_{j}\}_{j\in J}\) be a set of \(G\)-representations and let \(\mathbb{S}\subset\operatorname{MF}^{\operatorname{gr}}(\mathscr{X},f)\) be the subcategory generated by matrix factorizations whose factors are direct sums of vector bundles \(\mathcal{O}_{\mathscr{X}}\otimes V_{j}\). Then an object \(\mathcal{E}\in D^{b}(\mathscr{P})\) satisfies \(\Theta(\mathcal{E})\in\mathbb{S}\) if and only if \(j_{*}\mathcal{E}\in D^{b}(\mathscr{Y})\) is generated by \(\mathcal{O}_{\mathscr{Y}}\otimes V_{j}\) for \(j\in J\)._
Proof.: The same argument used to prove [4, Lemma 4.5] applies here.
For later use, we will compare internal homomorphisms under the Koszul equivalence. For \(\mathcal{E}_{1},\mathcal{E}_{2}\in D^{b}(\mathscr{P})\), there exists an internal homomorphism, see [1, Remark 3.4]:
\[\mathcal{H}om(\mathcal{E}_{1},\mathcal{E}_{2})\in D_{\operatorname{qc}}( \mathscr{P}).\]
It satisfies the following tensor-Hom adjunction: for any \(A\in D_{\operatorname{qc}}(\mathscr{P})\), there is a natural isomorphism:
\[\operatorname{Hom}_{D_{\operatorname{qc}}(\mathscr{P})}(A,\mathcal{H}om( \mathcal{E}_{1},\mathcal{E}_{2}))\cong\operatorname{Hom}_{\operatorname{Ind}D ^{b}(\mathscr{P})}(A\otimes\mathcal{E}_{1},\mathcal{E}_{2}).\]
On the other side of the Koszul equivalence, consider the internal Hom of \(\mathcal{F}_{1},\mathcal{F}_{2}\in\operatorname{MF}^{\operatorname{gr}}( \mathscr{X},f)\):
\[\mathcal{H}om(\mathcal{F}_{1},\mathcal{F}_{2})\in\operatorname{MF}^{ \operatorname{gr}}(\mathscr{X},0).\]
**Lemma 2.7**.: _For \(\mathcal{E}_{1},\mathcal{E}_{2}\in D^{b}(\mathscr{P})\), the equivalence \(\Theta\) induces an isomorphism_
\[j_{*}\mathcal{H}om(\mathcal{E}_{1},\mathcal{E}_{2})\stackrel{{ \simeq}}{{\to}}\eta_{*}\mathcal{H}om(\Theta(\mathcal{E}_{1}),\Theta( \mathcal{E}_{2})) \tag{2.18}\]
_in \(D_{\operatorname{qc}}(\mathscr{Y})=\operatorname{MF}^{\operatorname{gr}}_{ \operatorname{qc}}(\mathscr{Y},0)\)._
Proof.: For \(A\in D_{\operatorname{qc}}(\mathscr{Y})\), we have
\[\operatorname{Hom}(A,j_{*}\mathcal{H}om(\mathcal{E}_{1},\mathcal{ E}_{2})) \cong\operatorname{Hom}(j^{*}A,\mathcal{H}om(\mathcal{E}_{1}, \mathcal{E}_{2}))\] \[\cong\operatorname{Hom}(j^{*}A\otimes\mathcal{E}_{1},\mathcal{ E}_{2}).\]
We also have
\[\operatorname{Hom}(A,\eta_{*}\mathcal{H}om(\Theta(\mathcal{E}_{1} ),\Theta(\mathcal{E}_{2})) \cong\operatorname{Hom}(\eta^{*}A,\mathcal{H}om(\Theta(\mathcal{E} _{1}),\Theta(\mathcal{E}_{2})))\] \[\cong\operatorname{Hom}(\eta^{*}A\otimes\Theta(\mathcal{E}_{1}), \Theta(\mathcal{E}_{2}))\] \[\cong\operatorname{Hom}(\Theta(j^{*}A\otimes\mathcal{E}_{1}), \Theta(\mathcal{E}_{2})),\]
where the last isomorphism follows from the explicit form of \(\Theta\) in (2.17). Therefore \(\Theta\) induces an isomorphism
\[\operatorname{Hom}(A,j_{*}\mathcal{H}om(\mathcal{E}_{1},\mathcal{E}_{2})) \stackrel{{\sim}}{{\to}}\operatorname{Hom}(A,\eta_{*}\mathcal{H }om(\Theta(\mathcal{E}_{1}),\Theta(\mathcal{E}_{2}))),\]
which implies the isomorphism (2.18).
Let \(\lambda\) be a cocharacter of \(G\). Consider the sections induced by \(s\):
\[s^{\lambda\geqslant 0}\in\Gamma(\mathscr{Y}^{\lambda\geqslant 0},\mathscr{V}^{ \lambda\geqslant 0}),\ s^{\lambda}\in\Gamma(\mathscr{Y}^{\lambda},\mathscr{V}^{ \lambda})\]
and their derived zero loci
\[\mathscr{P}^{\lambda\geqslant 0}:=\left(s^{\lambda\geqslant 0}\right)^{-1}(0),\ \mathscr{P}^{\lambda}:=\left(s^{\lambda}\right)^{-1}(0).\]
Similarly to (2.6), consider the maps:
\[\mathscr{P}^{\lambda}\stackrel{{ q}}{{\leftarrow}}\mathscr{P}^{ \lambda\geqslant 0}\stackrel{{ p}}{{\to}}\mathscr{P},\]
where \(q\) is quasi-smooth and \(p\) is proper. Consider the functor, which is a generalization of the categorical Hall product for preprojective algebras [11, 12]:
\[m_{\lambda}=p_{*}q^{*}\colon D^{b}(\mathscr{P}^{\lambda})\to D^{b}(\mathscr{P}). \tag{2.19}\]
We have the following compatibility of categorical Hall products under Koszul equivalence. For the proof, see [23, Proposition 3.1] or [21, Lemma 2.4.4, 2.4.7].
**Proposition 2.8**.: _The following diagram commutes:_
\[\begin{array}{ccc}D^{b}(\mathcal{P}^{\lambda})&\xrightarrow{\ m_{\lambda}\ }&D^{b}(\mathcal{P})\\ \big\downarrow{\scriptstyle\Theta^{\prime}}&&\big\downarrow{\scriptstyle\Theta}\\ \operatorname{MF}^{\operatorname{gr}}(\mathcal{X}^{\lambda},f^{\lambda})&\xrightarrow{\ m_{\lambda}\ }&\operatorname{MF}^{\operatorname{gr}}(\mathcal{X},f)\end{array}\]
_The horizontal arrows are categorical Hall products for \(\mathcal{P}\) and \(\mathcal{X}\), where \(\mathcal{X}^{\lambda}:=\operatorname{Tot}_{\mathcal{Y}^{\lambda}}\left((\mathcal{V}^{\lambda})^{\vee}\right)\) and \(f^{\lambda}\) is the function on \(\mathcal{X}^{\lambda}\) induced by \(s^{\lambda}\). We denote by \(\Theta\) the Koszul equivalence for both \(\mathcal{X}\) and \(\mathcal{X}^{\lambda}\), and the left vertical map is the functor \(\Theta^{\prime}(-):=\Theta(-)\otimes\det(\mathcal{V}^{\lambda>0})^{\vee}[ \operatorname{rank}\mathcal{V}^{\lambda>0}]\)._
### Polytopes and categories
Let \(Q=(I,E)\) be a symmetric quiver and let \(d=(d^{a})_{a\in I}\in\mathbb{N}^{I}\) be a dimension vector. Consider the multisets of \(T(d)\)-weights
\[\mathcal{A} :=\{\beta_{i}^{a}-\beta_{j}^{b}\mid a,b\in I,(a\to b)\in E,1\leqslant i \leqslant d^{a},1\leqslant j\leqslant d^{b}\},\] \[\mathcal{B} :=\{\beta_{i}^{a}\mid a\in I,1\leqslant i\leqslant d^{a}\},\] \[\mathcal{C} :=\mathcal{A}\sqcup\mathcal{B}. \tag{2.20}\]
Here, \(\mathcal{A}\) is the set of \(T(d)\)-weights of \(R(d)\) and \(\mathcal{C}\) is the set of \(T(d)\)-weights of \(R^{f}(d)\). Define the polytopes
\[\mathbf{W}(d) :=\frac{1}{2}\mathrm{sum}_{\beta\in\mathcal{A}}[0,\beta]\subset M (d)_{0,\mathbb{R}}\subset M(d)_{\mathbb{R}},\] \[\mathbf{V}(d) :=\frac{1}{2}\mathrm{sum}_{\beta\in\mathcal{C}}[0,\beta]\subset M (d)_{\mathbb{R}}, \tag{2.21}\]
where the sums above are Minkowski sums in the space of weights \(M(d)_{\mathbb{R}}\).
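For example, if \(Q\) is the quiver with one vertex and \(g\) loops and \(d=2\), then \(\mathcal{A}\) consists of \(2g\) copies of \(0\) and \(g\) copies of each of \(\pm(\beta_{1}-\beta_{2})\), so that
\[\mathbf{W}(2)=\left\{t(\beta_{1}-\beta_{2})\ \Big{|}\ |t|\leqslant\frac{g}{2}\right\}\subset M(2)_{0,\mathbb{R}},\qquad\mathbf{V}(2)=\mathbf{W}(2)+\left[0,\tfrac{\beta_{1}}{2}\right]+\left[0,\tfrac{\beta_{2}}{2}\right].\]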
**Definition 2.9**.: For a weight \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), define
\[\mathbb{M}(d;\delta_{d})\subset D^{b}(\mathcal{X}(d)) \tag{2.22}\]
to be the full subcategory of \(D^{b}(\mathcal{X}(d))\) generated by vector bundles \(\mathcal{O}_{\mathcal{X}(d)}\otimes\Gamma_{G(d)}(\chi)\), where \(\chi\in M(d)\) is a dominant weight such that
\[\chi+\rho-\delta_{d}\in\mathbf{W}(d). \tag{2.23}\]
The category (2.22) is a particular case of the noncommutative resolutions of quotient singularities introduced by Spenko-Van den Bergh [20]. We may call (2.22) a "magic category" following [10]. For \(\lambda\) a cocharacter of \(T(d)\), define
\[n_{\lambda}=\left\langle\lambda,\det\left(\mathbb{L}_{\mathcal{X}(d)}|_{0}^{ \lambda>0}\right)\right\rangle=\left\langle\lambda,\det\left(R(d)^{\vee} \right)^{\lambda\geqslant 0}\right\rangle-\left\langle\lambda,\det\left( \mathfrak{g}(d)^{\vee}\right)^{\lambda>0}\right\rangle. \tag{2.24}\]
The category (2.22) has also the following alternative description.
**Lemma 2.10**.: ([10, Lemma 2.9]) _The subcategory \(\mathbb{M}(d;\delta_{d})\) of \(D^{b}(\mathcal{X}(d))\) is generated by vector bundles \(\mathcal{O}_{\mathcal{X}(d)}\otimes\Gamma\) for a \(G(d)\)-representation \(\Gamma\) such that, for any \(T(d)\)-weight \(\chi\) of \(\Gamma\) and any cocharacter \(\lambda\) of \(T(d)\), we have that:_
\[\left\langle\lambda,\chi-\delta_{d}\right\rangle\in\left[-\frac{1}{2}n_{ \lambda},\frac{1}{2}n_{\lambda}\right]. \tag{2.25}\]
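To make the bound (2.25) concrete, consider for instance the quiver with one vertex and \(g\) loops, and let \(\lambda\) be the cocharacter of \(T(d)\) acting with weight \(-1\) on the first \(d_{1}\) coordinates and with weight \(0\) on the remaining \(d_{2}=d-d_{1}\) coordinates. The weights of \(R(d)^{\vee}\) (respectively of \(\mathfrak{g}(d)^{\vee}\)) pairing positively with \(\lambda\) are the \(g\,d_{1}d_{2}\) (respectively \(d_{1}d_{2}\)) weights \(\beta_{j}-\beta_{i}\) with \(i\leqslant d_{1}<j\), each of pairing \(1\), so (2.24) gives
\[n_{\lambda}=g\,d_{1}d_{2}-d_{1}d_{2}=(g-1)\,d_{1}d_{2},\]
and (2.25) requires \(|\langle\lambda,\chi-\delta_{d}\rangle|\leqslant\frac{1}{2}(g-1)d_{1}d_{2}\) for every weight \(\chi\) of \(\Gamma\).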
**Remark 2.11**.: The subcategory (2.22) is contained in \(D^{b}(\mathcal{X}(d))_{w}\) for \(w=\langle 1_{d},\delta_{d}\rangle\). In particular, if (2.22) is non-zero, then \(\langle 1_{d},\delta_{d}\rangle\in\mathbb{Z}\).
We also define a larger subcategory corresponding to the polytope \(\mathbf{V}(d)\). Let
\[\mathbb{D}(d;\delta_{d})\subset D^{b}(\mathscr{X}(d)) \tag{2.26}\]
be generated by vector bundles \(\mathcal{O}_{\mathscr{X}(d)}\otimes\Gamma_{G(d)}(\chi)\), where \(\chi\) is a dominant weight of \(G(d)\) such that
\[\chi+\rho-\delta_{d}\in\mathbf{V}(d).\]
The following definition will be used later in decomposing \(\mathbb{D}(d;\delta_{d})\) into categorical Hall products of \(\mathbb{M}(d;\delta_{d})\).
**Definition 2.12**.: A weight \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\) is called _good_ if for all dominant cocharacters \(\lambda\) such that \(\langle\lambda,\beta_{i}^{a}\rangle\in\{-1,0\}\) for all \(a\in I\) and \(1\leqslant i\leqslant d^{a}\), one has that \(\langle\lambda,2\delta_{d}\rangle\notin\mathbb{Z}\).
Next, for a quiver with potential \((Q,W)\), we define quasi-BPS categories:
**Definition 2.13**.: Define the quasi-BPS category to be
\[\mathbb{S}(d;\delta_{d}):=\operatorname{MF}(\mathbb{M}(d;\delta_{d}), \operatorname{Tr}W)\subset\operatorname{MF}(\mathscr{X}(d),\operatorname{Tr}W). \tag{2.27}\]
Suppose that \((Q,W)\) is a tripled quiver of a quiver \(Q^{\circ}=(I,E^{\circ})\). In this case, the graded version is similarly defined
\[\mathbb{S}^{\operatorname{gr}}(d;\delta_{d}):=\operatorname{MF}^{\operatorname {gr}}(\mathbb{M}(d;\delta_{d}),\operatorname{Tr}W)\subset\operatorname{MF}^{ \operatorname{gr}}(\mathscr{X}(d),\operatorname{Tr}W). \tag{2.28}\]
Next, we define quasi-BPS categories for preprojective algebras. Let \(Q^{\circ}\) be a quiver and consider the moduli stack \(\mathscr{P}(d)\) of dimension \(d\) representations of the preprojective algebra of \(Q^{\circ}\) and recall the closed immersion \(j\colon\mathscr{P}(d)\hookrightarrow\mathscr{Y}(d)\). The quasi-BPS category for \(\mathscr{P}(d)\) is defined as follows:
**Definition 2.14**.: Let \(\widetilde{\mathbb{T}}(d;\delta_{d})\subset D^{b}(\mathscr{Y}(d))\) be the subcategory generated by vector bundles \(\mathcal{O}_{\mathscr{Y}(d)}\otimes\Gamma_{G(d)}(\chi)\), where \(\chi\) is a dominant weight of \(G(d)\) satisfying:
\[\chi+\rho-\delta_{d}\in\frac{1}{2}\text{sum}_{\beta\in\mathscr{A}}[0,\beta],\]
where \(\mathscr{A}\) is the set of \(T(d)\)-weights of \(\overline{R}(d)\oplus\mathfrak{g}(d)\). Define the preprojective quasi-BPS category (also called quasi-BPS category of preprojective algebra):
\[\mathbb{T}(d;\delta_{d})\subset D^{b}(\mathscr{P}(d))\]
as the full subcategory of \(D^{b}(\mathscr{P}(d))\) with objects \(\mathcal{E}\) such that \(j_{*}\mathcal{E}\in\widetilde{\mathbb{T}}(d;\delta_{d})\).
By Lemma 2.6, the Koszul equivalence (2.15) restricts to the equivalence
\[\Theta\colon\mathbb{T}(d;\delta_{d})\overset{\sim}{\to}\mathbb{S}^{ \operatorname{gr}}(d;\delta_{d}). \tag{2.29}\]
For \(v\in\mathbb{Z}\) and \(\bullet\in\{\emptyset,\operatorname{gr}\}\), we will use the following shorthand notation:
\[\mathbb{M}(d)_{v}:=\mathbb{M}(d;v\tau_{d}),\,\mathbb{S}^{\bullet}(d)_{v}:= \mathbb{S}^{\bullet}(d;v\tau_{d}),\,\mathbb{T}(d)_{v}:=\mathbb{T}(d;v\tau_{d}).\]
## 3. The categorical wall-crossing equivalence
In this section, we prove a wall-crossing equivalence for quasi-BPS categories for symmetric quivers. We thus generalize the theorem of Halpern-Leistner-Sam [10, Theorem 3.2] (in the particular case of moduli stacks of representations of quivers) to the situation when there is no stability condition such that the \(\mathbb{C}^{*}\)-rigidified stack of semistable representations is Deligne-Mumford.
### Preliminaries
Let \(Q=(I,E)\) be a symmetric quiver. For \(d\in\mathbb{N}^{I}\), recall from Subsection 2.4 that \(\mathbf{W}(d)\subset M(d)_{0,\mathbb{R}}\) is the polytope given by the Minkowski sum
\[\mathbf{W}(d)=\frac{1}{2}\text{sum}_{\beta\in\mathcal{A}}[0,\beta]\subset M(d)_{ 0,\mathbb{R}},\]
where
\[\mathcal{A}:=\{\beta_{i}^{a}-\beta_{j}^{b}\mid a,b\in I,(a\to b)\in E,1 \leqslant i\leqslant d^{a},1\leqslant j\leqslant d^{b}\}.\]
We denote by
\[H_{1},\dots,H_{m}\subset\mathbf{W}(d)\]
the codimension one faces of \(\mathbf{W}(d)\), and by
\[\lambda_{1},\dots,\lambda_{m}\colon\mathbb{C}^{*}\to ST(d) \tag{3.1}\]
the cocharacters of \(ST(d)\) such that \(\lambda_{i}^{\perp}\subset M(d)_{0,\mathbb{R}}\) is parallel to \(H_{i}\) for all \(1\leqslant i\leqslant m\). Note the following lemma:
**Lemma 3.1**.: _Let \(\lambda\in\{\lambda_{1},\dots,\lambda_{m}\}\) be an antidominant cocharacter such that_
\[M(d)_{0,\mathbb{R}}^{W_{d}}\subset\lambda^{\perp}\]
_and \(\mathcal{X}(d)^{\lambda}=\times_{j=1}^{k}\mathcal{X}(d_{j})\). Then \(d_{j}\) is proportional to \(d\) for all \(1\leqslant j\leqslant k\)._
Proof.: For simplicity, suppose that \(\lambda\) corresponds to a decomposition \(d=d_{1}+d_{2}\). Since \(\lambda^{\perp}\) is spanned by a subset of weights in \(R(d)\), for fixed vertices \(a,b\in I\) we can write:
\[\frac{\beta_{1}^{a}+\dots+\beta_{d^{a}}^{a}}{d^{a}}-\frac{\beta_{1}^{b}+\dots+ \beta_{d^{b}}^{b}}{d^{b}}=\sum_{i,j,p,q}c_{ij}^{pq}(\beta_{i}^{p}-\beta_{j}^{ q})\]
for some \(c_{ij}^{pq}\in\mathbb{R}\), where the sum on the right hand side is over all \(p,q\in I\) and \(1\leqslant i\leqslant d_{1}^{p},1\leqslant j\leqslant d_{1}^{q}\) or \(d_{1}^{p}<i\leqslant d^{p},d_{1}^{q}<j\leqslant d^{q}\). An identity as above holds since the left hand side is an element of \(M(d)_{0,\mathbb{R}}^{W_{d}}\), hence an element of \(\lambda^{\perp}\) by the hypothesis.
In the right hand side, the sum of the coefficients of \(\beta_{i}^{p}\) for all \(p\in I\) and \(1\leqslant i\leqslant d_{1}^{p}\) is zero. Therefore, by taking the sum of coefficients of such \(\beta_{i}^{p}\) in the left hand side, we conclude that
\[\frac{d_{1}^{a}}{d^{a}}=\frac{d_{1}^{b}}{d^{b}}\]
for all \(a,b\in I\). Therefore \(d_{1},d_{2}\) are proportional to \(d\).
Note that there is a natural pairing (we abuse notation and use the same notation as for the pairing in Subsection 2.2.2):
\[\langle-,-\rangle\colon\mathbb{R}^{I}\times M(d)_{\mathbb{R}}^{W_{d}}\to \mathbb{R} \tag{3.2}\]
defined by
\[\langle e^{b},\det V^{a}\rangle=\delta^{ab},\]
where \(e^{b}\) is the basis element of \(\mathbb{R}^{I}\) corresponding to \(b\in I\). Note that an element \(\ell\in M(d)_{0,\mathbb{R}}^{W_{d}}\) is written as
\[\ell=\sum_{a}\ell^{a}\det V^{a},\ \langle d,\ell\rangle=\sum_{a}d^{a}\ell^{a}=0\]
i.e.\ \(\ell\) is an \(\mathbb{R}\)-character of \(G(d)\) which is trivial on the diagonal torus \(\mathbb{C}^{*}\subset G(d)\).
**Definition 3.2**.: An element \(\ell\in M(d)_{0,\mathbb{R}}^{W_{d}}\) is _a generic weight_ if the following conditions hold:
* if \(\lambda\in\{\lambda_{1},\ldots,\lambda_{m}\}\) such that \(\ell\in\lambda^{\perp}\), then \(M(d)_{0,\mathbb{R}}^{W_{d}}\subset\lambda^{\perp}\), and
* if \(d^{\prime}\in\mathbb{N}^{I}\) is a summand of a partition of \(d\) such that \(d^{\prime}\) is not proportional to \(d\), then \(\langle d^{\prime},\ell\rangle\neq 0\).
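For instance, suppose that \(Q\) has two vertices \(a,b\) and \(d=(d^{a},d^{b})\) with \(d^{a},d^{b}>0\). Then
\[\ell=d^{b}\det V^{a}-d^{a}\det V^{b}\in M(d)_{0,\mathbb{R}}^{W_{d}}\]
satisfies \(\langle d^{\prime},\ell\rangle=d^{\prime a}d^{b}-d^{\prime b}d^{a}\), which vanishes exactly when \(d^{\prime}\) is proportional to \(d\), so \(\ell\) satisfies the second condition above; the first condition depends on the cocharacters \(\lambda_{1},\dots,\lambda_{m}\), hence on the edges of \(Q\), and has to be checked separately.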
It is obvious that the set of generic weights is a dense open subset in \(M(d)_{0,\mathbb{R}}^{W_{d}}\). Let \(\ell\in M(d)_{0,\mathbb{R}}^{W_{d}}\) be generic and consider the open substack \(\mathscr{X}(d)^{\ell\text{-ss}}\subset\mathscr{X}(d)\) of \(\ell\)-semistable points. By [10], the \(\ell\)-semistable locus consists of \(Q\)-representations \(R\) such that, for any subrepresentation \(R^{\prime}\subset R\) of dimension vector \(d^{\prime}\), we have that \(\langle d^{\prime},\ell\rangle\geqslant 0\). Consider the good moduli space morphism:
\[\mathscr{X}(d)^{\ell\text{-ss}}\to X(d)^{\ell\text{-ss}}.\]
By the genericity of \(\ell\), a closed point of \(X(d)^{\ell\text{-ss}}\) corresponds to a direct sum
\[\bigoplus_{i=1}^{k}V^{(i)}\otimes R^{(i)},\]
where \(V^{(i)}\) is a finite dimensional \(\mathbb{C}\)-vector space and \(R^{(i)}\) is an \(\ell\)-stable \(Q\)-representation whose dimension vector is proportional to \(d\) for each \(1\leqslant i\leqslant k\).
### Quasi-BPS categories for semistable stacks
For each \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), recall from Definition 2.9 the quasi-BPS category:
\[\mathbb{M}(d;\delta_{d})\subset D^{b}(\mathscr{X}(d)). \tag{3.3}\]
By Lemma 2.10, it is the subcategory of objects \(P\) such that, for any cocharacter \(\lambda\colon\mathbb{C}^{*}\to T(d)\), we have that:
\[\operatorname{wt}_{\lambda}(P)\subset\left[-\frac{1}{2}n_{\lambda},\frac{1}{2 }n_{\lambda}\right]+\langle\lambda,\delta_{d}\rangle, \tag{3.4}\]
where \(n_{\lambda}:=\left\langle\lambda,\det\left(\mathbb{L}_{\mathscr{X}(d)}^{ \lambda>0}|_{0}\right)\right\rangle\), see (2.24).
Consider a complex \(A\in D^{b}(B\mathbb{C}^{*})\). Write \(A=\bigoplus_{w\in\mathbb{Z}}A_{w}\), where \(\mathbb{C}^{*}\) acts with weight \(w\) on \(A_{w}\). Define the set of weights
\[\operatorname{wt}(A):=\{w\mid A_{w}\neq 0\}\subset\mathbb{Z}. \tag{3.5}\]
Define also the integers
\[\operatorname{wt}^{\max}(A):=\max\left(\operatorname{wt}(A)\right),\, \operatorname{wt}^{\min}(A):=\min\left(\operatorname{wt}(A)\right). \tag{3.6}\]
We define a version of quasi-BPS categories for semistable loci. Let
\[\mathbb{M}^{\ell}(d;\delta_{d})\subset D^{b}(\mathscr{X}(d)^{\ell\text{-ss}}) \tag{3.7}\]
be the subcategory of objects \(P\) such that, for any map \(\nu\colon B\mathbb{C}^{*}\to\mathscr{X}(d)^{\ell\text{-ss}}\), we have:
\[\operatorname{wt}(\nu^{*}P)\subset\left[-\frac{1}{2}n_{\nu},\frac{1}{2}n_{\nu} \right]+\operatorname{wt}(\nu^{*}\delta_{d}), \tag{3.8}\]
where \(n_{\nu}:=\operatorname{wt}\left(\det\left(\left(\nu^{*}\mathbb{L}_{\mathscr{X }(d)}\right)^{>0}\right)\right)\in\mathbb{Z}\). In Corollary 3.11, we show that, for \(\ell=0\), the category (3.7) is \(\mathbb{M}(d;\delta_{d})\).
Consider the restriction functor:
\[\operatorname{res}\colon D^{b}(\mathscr{X}(d))\twoheadrightarrow D^{b}( \mathscr{X}(d)^{\ell\text{-ss}}). \tag{3.9}\]
**Lemma 3.3**.: _The functor (3.9) restricts to the functor_
\[\operatorname{res}\colon\mathbb{M}(d;\delta_{d})\to\mathbb{M}^{\ell}(d;\delta_{d}). \tag{3.10}\]
Proof.: A map \(\nu\colon B\mathbb{C}^{*}\to\mathcal{X}(d)^{\ell\text{-ss}}\) corresponds (possibly after conjugation) to a point \(x\in R(d)\) together with a cocharacter \(\lambda\colon\mathbb{C}^{*}\to T(d)\) which fixes \(x\). Then \(n_{\nu}=n_{\lambda}\), so the functor (3.9) restricts to the functor (3.10).
There is an orthogonal decomposition
\[D^{b}(\mathcal{X}(d)^{\ell\text{-ss}})=\bigoplus_{w\in\mathbb{Z}}D^{b}( \mathcal{X}(d)^{\ell\text{-ss}})_{w}\]
as in (2.5). There are analogous decompositions for \(D^{b}(\mathcal{X}(d)^{\lambda})\). The following lemma is a generalization of [10, Proposition 3.11], where a similar result was obtained when the \(\mathbb{C}^{*}\)-rigidification of \(\mathcal{X}(d)^{\ell\text{-ss}}\) is Deligne-Mumford.
**Lemma 3.4**.: _Let \(w\in\mathbb{Z}\). For \(\ell\in M(d)_{0,\mathbb{R}}^{W_{d}}\) and \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\) with \(\langle 1_{d},\delta_{d}\rangle=w\), the category \(D^{b}(\mathcal{X}(d)^{\ell\text{-ss}})_{w}\) is generated by \(\operatorname{res}\left(\mathbb{M}(d;\delta_{d})\right)\) and objects of the form \(\operatorname{res}\left(m_{\lambda}(P)\right)\), where \(P\in D^{b}(\mathcal{X}(d)^{\lambda})_{w}\) is generated by vector bundles \(\Gamma_{G(d)^{\lambda}}(\chi^{\prime\prime})\otimes\mathcal{O}_{\mathcal{X}(d)^{\lambda}}\) such that_
\[\langle\lambda,\chi^{\prime\prime}\rangle<-\frac{1}{2}n_{\lambda}+\langle \lambda,\delta_{d}\rangle, \tag{3.11}\]
_and \(\lambda\in\{\lambda_{1},\ldots,\lambda_{m}\}\) is an antidominant cocharacter of \(T(d)\) such that \(\langle\lambda,\ell\rangle=0\). The functor \(m_{\lambda}\) is the categorical Hall product for the cocharacter \(\lambda\), see (2.7)._
Proof.: We explain how to modify the proof of [10, Proposition 3.11] to obtain the desired conclusion. The vector bundles
\[\Gamma_{G(d)}(\chi)\otimes\mathcal{O}_{\mathcal{X}(d)^{\ell\text{-ss}}} \tag{3.12}\]
for \(\chi\) a dominant weight of \(G(d)\) such that \(\langle 1_{d},\chi\rangle=w\) generate the category \(D^{b}\left(\mathcal{X}(d)^{\ell\text{-ss}}\right)_{w}\). We show that (3.12) is generated by the objects in the statement using induction on the pair \((r_{\chi},p_{\chi})\) (with respect to the lexicographic order), where
\[r_{\chi}:=\min\left\{r\geqslant 0\mid\chi+\rho-\delta_{d}\in r\mathbf{W}(d)\right\}\]
and \(p_{\chi}\) is the smallest possible number of \(a_{\beta}\) equal to \(-r_{\chi}\) among all ways of writing
\[\chi+\rho-\delta_{d}=\sum_{\beta\in\mathcal{A}}a_{\beta}\beta,\]
with \(a_{\beta}\in[-r_{\chi},0]\). If \(r_{\chi}\leq 1/2\), then \(\Gamma_{G(d)}(\chi)\otimes\mathcal{O}_{\mathcal{X}(d)^{\ell\text{-ss}}}\) is an object in \(\operatorname{res}\left(\mathbb{M}(d;\delta_{d})\right)\), so we may assume that \(r_{\chi}>1/2\).
As in the argument of [10, Proposition 3.11], there is an antidominant cocharacter \(\lambda\) of \(ST(d)\) such that:
* \(\langle\lambda,\chi\rangle\leqslant\langle\lambda,\mu\rangle\) for any \(\mu\in-\rho+r\mathbf{W}(d)+\delta_{d}\), and
* \(\lambda^{\perp}\) is parallel to a face in \(\mathbf{W}(d)\).
Suppose first that \(\langle\lambda,\ell\rangle\neq 0\). Then, as in the proof of [10, Proposition 3.11], there is a complex of vector bundles, supported on the \(\ell\)-unstable locus, consisting of \(\Gamma_{G(d)}(\chi)\otimes\mathcal{O}_{\mathcal{X}(d)}\) (which appears once) and of terms \(\Gamma_{G(d)}(\chi^{\prime})\otimes\mathcal{O}_{\mathcal{X}(d)}\) with \((r_{\chi^{\prime}},p_{\chi^{\prime}})\) smaller than \((r_{\chi},p_{\chi})\). The conclusion then follows by induction.
Suppose next that \(\langle\lambda,\ell\rangle=0\) (observe that this case did not occur in [10, Proposition 3.11]). The object
\[m_{\lambda}(\Gamma_{G(d)^{\lambda}}(\chi)\otimes\mathcal{O}_{\mathcal{X}^{ \lambda}(d)})\]
is quasi-isomorphic to a complex of vector bundles (see [4, Proposition 2.1]) consisting of \(\Gamma_{G(d)}(\chi)\otimes\mathcal{O}_{\mathfrak{X}(d)}\) (which appears once) and \(\Gamma_{G(d)}(\chi^{\prime})\otimes\mathcal{O}_{\mathfrak{X}(d)}\) such that \((r_{\chi^{\prime}},p_{\chi^{\prime}})\) is smaller than \((r_{\chi},p_{\chi})\), see the last part of the proof of [10, Proposition 4.1]. Since we have
\[r_{\chi}=-\frac{\langle\lambda,\chi+\rho-\delta_{d}\rangle}{\langle\lambda,R^ {\lambda>0}(d)\rangle}>\frac{1}{2}\]
and \(\lambda\) is antidominant, the inequality (3.11) holds for \(\chi^{\prime\prime}=\chi\). The conclusion then follows using induction on \((r_{\chi},p_{\chi})\).
### The categorical wall-crossing equivalence for symmetric quivers
In this subsection, we give some sufficient conditions for the functor (3.10) to be an equivalence. As a corollary, we obtain the categorical wall-crossing equivalence for quasi-BPS categories of symmetric quivers (and potential zero). The argument is similar to [11, Theorem 3.2], with a modification due to the existence of faces of \(\mathbf{W}(d)\) parallel to hyperplanes which contain \(M(d)_{0,\mathbb{R}}^{W_{d}}\).
For a stability condition \(\ell\in M(d)_{0,\mathbb{R}}^{W_{d}}\), let
\[\mathfrak{X}(d)=\mathcal{S}_{1}\sqcup\cdots\sqcup\mathcal{S}_{N}\sqcup \mathfrak{X}(d)^{\ell\text{-ss}} \tag{3.13}\]
be the Kempf-Ness stratification of \(\mathfrak{X}(d)\) with center \(\mathcal{Z}_{i}\subset\mathcal{S}_{i}\) for \(1\leqslant i\leqslant N\). Consider the associated one parameter subgroups
\[\mu_{1},\dots,\mu_{N}\colon\mathbb{C}^{*}\to T(d), \tag{3.14}\]
see [12, Section 2] for a review of Kempf-Ness stratification.
**Proposition 3.5**.: _Suppose that \(2\langle\mu_{i},\delta_{d}\rangle\notin\mathbb{Z}\) for all \(1\leqslant i\leqslant N\). Then the functor (3.10) is fully-faithful:_
\[\operatorname{res}\colon\mathbb{M}(d;\delta_{d})\hookrightarrow\mathbb{M}^{ \ell}(d;\delta_{d}).\]
Proof.: For a choice \(k_{\bullet}=(k_{i})_{i=1}^{N}\in\mathbb{R}^{N}\), there is a "window category" \(\mathbb{W}^{\ell}_{k_{\bullet}}\subset D^{b}(\mathfrak{X}(d))\) such that the functor (3.9) restricts to the equivalence (see [12, 1]):
\[\operatorname{res}\colon\mathbb{W}^{\ell}_{k_{\bullet}}\stackrel{{ \sim}}{{\to}}D^{b}(\mathfrak{X}(d)^{\ell\text{-ss}}).\]
The subcategory \(\mathbb{W}^{\ell}_{k_{\bullet}}\) consists of objects \(P\) such that \(P|_{\mathcal{Z}_{i}}\) has \(\mu_{i}\)-weights contained in the interval \([k_{i},k_{i}+n_{\mu_{i}})\), where the width \(n_{\mu_{i}}\) is defined by (2.24) for \(\lambda=\mu_{i}\). For \(1\leqslant i\leqslant N\), let
\[k_{i}:=-n_{\mu_{i}}/2+\langle\mu_{i},\delta_{d}\rangle.\]
By the assumption on \(\delta_{d}\), we have that \(\mathbb{M}(d;\delta_{d})\subset\mathbb{W}^{\ell}_{k_{\bullet}}\). It thus follows that the functor (3.10) is fully-faithful.
**Proposition 3.6**.: _Suppose that \(\ell\in M(d)_{0,\mathbb{R}}^{W_{d}}\) satisfies the following condition: for any \(\lambda\in\{\lambda_{1},\dots,\lambda_{m}\}\) with \(\langle\lambda,\ell\rangle=0\) and associated partition \((d_{i})_{i=1}^{k}\) of \(d\), we have that \(\langle d_{i},\ell\rangle=0\) for any \(1\leqslant i\leqslant k\). Then, for any \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), the functor (3.10) is essentially surjective:_
\[\operatorname{res}\colon\mathbb{M}(d;\delta_{d})\twoheadrightarrow\mathbb{M}^{ \ell}(d;\delta_{d}).\]
Proof.: We will use Lemma 3.4. We first explain that it suffices to show that
\[\operatorname{Hom}\left(\mathbb{M}^{\ell}(d;\delta_{d}),\operatorname{res} \left(m_{\lambda}(P)\right)\right)=0, \tag{3.15}\]
where \(m_{\lambda}\) is the categorical Hall product and \(\lambda\) and \(P\) are given as in Lemma 3.4. If the above vanishing holds, then
\[\operatorname{Hom}\left(\operatorname{res}\left(\mathbb{M}(d;\delta_{d}) \right),\operatorname{res}\left(m_{\lambda}(P)\right)\right)=0,\]
hence by Lemma 3.4 there is a semiorthogonal decomposition
\[D^{b}(\mathcal{X}(d)^{\ell\text{-ss}})=\left\langle\mathcal{C},\operatorname {res}\left(\mathbb{M}(d;\delta_{d})\right)\right\rangle,\]
where \(\mathcal{C}\) is the subcategory of \(D^{b}(\mathcal{X}(d)^{\ell\text{-ss}})\) generated by objects \(\operatorname{res}\left(m_{\lambda}(P)\right)\) as above (or as in Lemma 3.4). Observe that \(\mathbb{M}^{\ell}(d;\delta_{d})\) is in the left complement of \(\mathcal{C}\) in \(D^{b}(\mathcal{X}(d)^{\ell\text{-ss}})\). Thus the vanishing (3.15) implies that indeed (3.10) is essentially surjective.
Below we show the vanishing (3.15). The stack \(\mathcal{X}^{\lambda\geqslant 0}(d)\) is the stack of filtrations
\[0=R_{0}\subset R_{1}\subset R_{2}\subset\cdots\subset R_{k}=R \tag{3.16}\]
of \(Q\)-representations such that \(R_{i}/R_{i-1}\) has dimension vector \(d_{i}\). By the assumption on \(\ell\), we have that \(\left\langle\ell,d_{i}\right\rangle=0\) for \(1\leqslant i\leqslant k\). It follows that \(R\) is \(\ell\)-semistable if and only if every \(R_{i}/R_{i-1}\) is \(\ell\)-semistable. Therefore there are Cartesian squares:
\[\begin{array}{ccccc}\left(\mathcal{X}(d)^{\lambda}\right)^{\ell\text{-ss}}&\xleftarrow{\ q^{\ell}_{\lambda}\ }&\mathcal{N}&\xrightarrow{\ p^{\ell}_{\lambda}\ }&\mathcal{X}(d)^{\ell\text{-ss}}\\ \big\downarrow{\scriptstyle j}&&\big\downarrow&&\big\downarrow\\ \mathcal{X}(d)^{\lambda}&\xleftarrow{\ q_{\lambda}\ }&\mathcal{X}(d)^{\lambda\geqslant 0}&\xrightarrow{\ p_{\lambda}\ }&\mathcal{X}(d)\end{array} \tag{3.17}\]
where each vertical arrow is an open immersion. The stack \(\mathcal{N}\) is the moduli stack of filtrations (3.16) such that each \(R_{i}/R_{i-1}\) is \(\ell\)-semistable with dimension vector \(d_{i}\). Using proper base change and adjunction, the vanishing (3.15) is equivalent to the vanishing of
\[\operatorname{Hom}(E,p_{\lambda*}^{\ell}q_{\lambda}^{\ell*}j^{*}P)= \operatorname{Hom}(p_{\lambda}^{\ell*}E,q_{\lambda}^{\ell*}j^{*}P) \tag{3.18}\]
for any \(E\in\mathbb{M}^{\ell}(d;\delta_{d})\). Let \(\pi\) be the following composition:
\[\pi\colon\mathcal{N}\to\mathcal{X}(d)^{\ell\text{-ss}}\to X(d)^{\ell\text{- ss}}.\]
By the local-to-global Hom spectral sequence, it is enough to show the vanishing of
\[\pi_{*}\mathcal{H}om(p_{\lambda}^{\ell*}E,q_{\lambda}^{\ell*}j^{*}P)=0. \tag{3.19}\]
Further, it suffices to show the vanishing (3.19) formally locally at each point \(p\in X(d)^{\ell\text{-ss}}\). We abuse notation and also denote by \(p\) the unique closed point in the fiber of \(\mathcal{X}(d)^{\ell\text{-ss}}\to X(d)^{\ell\text{-ss}}\) and we may assume that \(\lambda\) is a cocharacter of \(G_{p}:=\operatorname{Aut}(p)\). By Lemma 3.7 below, the top diagram of (3.17) is, formally locally over \(p\in X(d)^{\ell\text{-ss}}\), of the form
\[A^{\lambda}/G_{p}^{\lambda}\gets A^{\lambda\geqslant 0}/G_{p}^{\lambda \geqslant 0}\to A/G_{p}, \tag{3.20}\]
where \(A\) is a smooth affine scheme (of finite type over a complete local ring) with a \(G_{p}\)-action and good moduli space map \(\pi_{A}\colon A/G_{p}\to A/\!\!/G_{p}\). Then the vanishing (3.19) at \(p\) holds since the \(\lambda\)-weights of \(p_{\lambda}^{\ell*}E\) are strictly larger than those of \(q_{\lambda}^{\ell*}j^{*}P\) by the definition of \(\mathbb{M}^{\ell}(d;\delta_{d})\) and the inequality (3.11), see [10, Corollary 3.17, Amplification 3.18], [11, Proposition 4.2].
We have used the following lemma:
**Lemma 3.7**.: _Let \(X(d)_{p}^{\ell\text{-ss}}=\operatorname{Spec}\widehat{\mathcal{O}}_{X(d)^{\ell \text{-ss}},p}\). The top diagram of (3.17) pulled back via \(X(d)_{p}^{\ell\text{-ss}}\to X(d)^{\ell\text{-ss}}\) is of the form (3.20)._
Proof.: Consider the stack \(\Theta=\mathbb{A}^{1}/\mathbb{C}^{*}\). Since \(\mathcal{N}\) is the moduli stack of filtrations of \(\ell\)-semistable objects (3.16), the top diagram of (3.17) is a component of the diagram
\[\operatorname{Map}(B\mathbb{C}^{*},\mathcal{X}(d)^{\ell\text{-ss}})\leftarrow \operatorname{Map}(\Theta,\mathcal{X}(d)^{\ell\text{-ss}})\rightarrow\mathcal{ X}(d)^{\ell\text{-ss}}, \tag{3.21}\]
where the horizontal arrows are evaluation maps for mapping stacks from \(B\mathbb{C}^{*}\) or \(\Theta\), see [HLb]. Namely, \(\mathcal{N}\) is an open and closed substack of \(\operatorname{Map}(\Theta,\mathcal{X}(d)^{\ell\text{-ss}})\), and the top diagram in (3.17) is the restriction of (3.21) to \(\mathcal{N}\). By the Luna étale slice theorem, the pull-back of \(\mathcal{X}(d)^{\ell\text{-ss}}\to X(d)^{\ell\text{-ss}}\) via \(X(d)^{\ell\text{-ss}}_{p}\to X(d)^{\ell\text{-ss}}\) is of the form \(A/G_{p}\), where \(A\) is a smooth affine scheme of finite type over \(X(d)^{\ell\text{-ss}}_{p}\) with an action of \(G_{p}\) and good moduli space map \(\pi_{A}\colon A/G_{p}\to A/\!\!/G_{p}\). Since the mapping stacks from \(B\mathbb{C}^{*}\) or \(\Theta\) commute with pull-backs of maps to good moduli spaces, see [HLb, Corollary 1.30.1], the pull-back of the top diagram in (3.17) via \(X(d)^{\ell\text{-ss}}_{p}\to X(d)^{\ell\text{-ss}}\) consists of compatible connected components of the stacks:
\[\operatorname{Map}(B\mathbb{C}^{*},A/G_{p})\leftarrow\operatorname{Map}( \Theta,A/G_{p})\to A/G_{p}. \tag{3.22}\]
Such connected components are of the form (3.20) for some cocharacter \(\lambda\) of \(G_{p}\), see [HLb, Theorem 1.37], and thus the conclusion follows.
Propositions 3.5 and 3.6 imply the following:
**Theorem 3.8**.: _Recall the cocharacters \(\{\lambda_{1},\dots,\lambda_{m}\}\) from (3.1) and the cocharacters \(\{\mu_{1},\dots,\mu_{N}\}\) from (3.14). Assume the pair \((\ell,\delta_{d})\in M(d)^{W_{d}}_{0,\mathbb{R}}\times M(d)^{W_{d}}_{\mathbb{R}}\) satisfies the following conditions:_
1. _for each_ \(1\leqslant i\leqslant N\)_, we have that_ \(2\langle\mu_{i},\delta_{d}\rangle\notin\mathbb{Z}\)_, and_
2. _for any_ \(\lambda\in\{\lambda_{1},\dots,\lambda_{m}\}\) _with_ \(\langle\lambda,\ell\rangle=0\) _and associated partition_ \((d_{j})^{k}_{j=1}\) _of_ \(d\)_, we have that_ \(\langle d_{j},\ell\rangle=0\) _for all_ \(1\leqslant j\leqslant k\)_._
_Then the functor (3.10) is an equivalence:_
\[\operatorname{res}\colon\mathbb{M}(d;\delta_{d})\stackrel{{ \sim}}{{\rightarrow}}\mathbb{M}^{\ell}(d;\delta_{d}).\]
**Definition 3.9**.: For \(\ell\in M(d)^{W_{d}}_{0,\mathbb{R}}\), let \(U_{\ell}\subset M(d)^{W_{d}}_{\mathbb{R}}\) be the subset of weights \(\delta_{d}\) such that the pair \((\ell,\delta_{d})\) satisfies the conditions (1) and (2) from the statement of Theorem 3.8.
We mention two corollaries of Theorem 3.8:
**Corollary 3.10**.: _Let \(\ell\in M(d)^{W_{d}}_{0,\mathbb{R}}\) be a generic weight. Then \(U_{\ell}\subset M(d)^{W_{d}}_{\mathbb{R}}\) is a dense open subset and for \(\delta_{d}\in U_{\ell}\) there is an equivalence:_
\[\operatorname{res}\colon\mathbb{M}(d;\delta_{d})\stackrel{{ \sim}}{{\rightarrow}}\mathbb{M}^{\ell}(d;\delta_{d}).\]
_In particular, for generic weights \(\ell,\ell^{\prime}\in M(d)^{W_{d}}_{0,\mathbb{R}}\) and \(\delta_{d}\in U_{\ell}\cap U_{\ell^{\prime}}\), there is an equivalence:_
\[\mathbb{M}^{\ell}(d;\delta_{d})\simeq\mathbb{M}^{\ell^{\prime}}(d;\delta_{d}).\]
Proof.: Suppose that \(\ell\) is generic and let \(\lambda\in\{\lambda_{1},\dots,\lambda_{m}\}\) with \(\langle\lambda,\ell\rangle=0\). By Definition 3.2, we have that \(M(d)_{0,\mathbb{R}}^{W_{d}}\subset\lambda^{\perp}\), hence by Lemma 3.1 each term \(d_{i}\) of the associated partition is proportional to \(d\); since \(\langle d,\ell\rangle=0\), it follows that \(\langle d_{i},\ell\rangle=0\), so condition (2) in Theorem 3.8 is satisfied. The subset of \(\delta_{d}\in M(d)^{W_{d}}_{\mathbb{R}}\) satisfying (1) in Theorem 3.8 is a dense open subset. The conclusion then follows.
**Corollary 3.11**.: _For any \(\delta\in M(d)^{W_{d}}_{\mathbb{R}}\), we have \(\mathbb{M}(d;\delta_{d})=\mathbb{M}^{\ell=0}(d;\delta_{d})\)._
Proof.: For \(\ell=0\), there are no Kempf-Ness loci, so condition (1) in Theorem 3.8 is automatic. Further, we have that \(\langle d^{\prime},\ell\rangle=0\) for any \(d^{\prime}\in\mathbb{N}^{I}\), so condition (2) also holds. Therefore we obtain the corollary.
**Remark 3.12**.: The condition (1) in Theorem 3.8 is satisfied for \(\delta_{d}=\varepsilon\cdot\ell\) for \(0<\varepsilon\ll 1\) since \(\langle\ell,\mu_{i}\rangle\in\mathbb{Z}\setminus\{0\}\). Similarly, in Corollary 3.10, the weight \(\delta_{d}=\varepsilon\cdot\ell+\varepsilon^{\prime}\cdot\ell^{\prime}\) satisfies \(\delta_{d}\in U_{\ell}\cap U_{\ell^{\prime}}\) if \(0<\varepsilon,\varepsilon^{\prime}\ll 1\) and \((\varepsilon,\varepsilon^{\prime})\) are linearly independent over \(\mathbb{Q}\).
### The categorical wall-crossing equivalence for symmetric quivers with potential
Let \((Q,W)\) be a symmetric quiver with potential. Similarly to (3.7), we define a category
\[\mathbb{S}^{\ell}(d;\delta_{d})\subset\operatorname{MF}(\mathscr{X}(d)^{\ell \text{-ss}},\operatorname{Tr}W). \tag{3.23}\]
First, for an object \(A\in\operatorname{MF}(B\mathbb{C}^{*},0)\), write \(A=\bigoplus_{w\in\mathbb{Z}}A_{w}\) with \(A_{w}\in\operatorname{MF}(B\mathbb{C}^{*},0)_{w}\). Consider the set of weights:
\[\operatorname{wt}(A):=\{w\mid A_{w}\neq 0\}\subset\mathbb{Z}.\]
Then (3.23) is the subcategory of \(\operatorname{MF}(\mathscr{X}(d)^{\ell\text{-ss}},\operatorname{Tr}W)\) consisting of objects \(P\) such that, for any map \(\nu\colon B\mathbb{C}^{*}\to\mathscr{X}(d)^{\ell\text{-ss}}\) with \(\nu^{*}\operatorname{Tr}W=0\), the set \(\operatorname{wt}\left(\nu^{*}P\right)\) satisfies the condition (3.8). Note that if \(\nu^{*}\operatorname{Tr}W\neq 0\), then \(\operatorname{MF}(B\mathbb{C}^{*},\nu^{*}\operatorname{Tr}W)=0\), so the condition on the weights of \(\nu^{*}P\) is vacuous in this case.
In the graded case (of a tripled quiver), the subcategory
\[\mathbb{S}^{\operatorname{gr},\ell}(d;\delta_{d})\subset\operatorname{MF}^{ \operatorname{gr}}(\mathscr{X}(d)^{\ell\text{-ss}},\operatorname{Tr}W)\]
is defined to be the pull-back of \(\mathbb{S}^{\ell}(d;\delta_{d})\) by the forget-the-grading functor
\[\operatorname{forg}\colon\operatorname{MF}^{\operatorname{gr}}(\mathscr{X}(d )^{\ell\text{-ss}},\operatorname{Tr}W)\to\operatorname{MF}(\mathscr{X}(d)^{ \ell\text{-ss}},\operatorname{Tr}W). \tag{3.24}\]
Let \(\bullet\in\{\emptyset,\operatorname{gr}\}\). The following results are proved in the same way as Theorem 3.8, Corollary 3.10, and Corollary 3.11.
**Theorem 3.13**.: _Suppose that the pair \((\ell,\delta_{d})\in M(d)^{W_{d}}_{0,\mathbb{R}}\times M(d)^{W_{d}}_{\mathbb{R }}\) satisfies the conditions (1), (2) in Theorem 3.8. Then the restriction functor induces an equivalence:_
\[\operatorname{res}\colon\mathbb{S}^{\bullet}(d;\delta_{d})\overset{\sim}{ \to}\mathbb{S}^{\bullet,\ell}(d;\delta_{d}).\]
**Corollary 3.14**.: _Let \(\ell\in M(d)^{W_{d}}_{0,\mathbb{R}}\) be a generic weight. Then, for \(\delta_{d}\in U_{\ell}\), there is an equivalence \(\operatorname{res}\colon\mathbb{S}^{\bullet}(d;\delta_{d})\overset{\sim}{ \to}\mathbb{S}^{\bullet,\ell}(d;\delta_{d})\). In particular, for generic weights \(\ell,\ell^{\prime}\in M(d)^{W_{d}}_{0,\mathbb{R}}\) and \(\delta_{d}\in U_{\ell}\cap U_{\ell^{\prime}}\), there is an equivalence_
\[\mathbb{S}^{\bullet,\ell}(d;\delta_{d})\simeq\mathbb{S}^{\bullet,\ell^{ \prime}}(d;\delta_{d}).\]
**Corollary 3.15**.: _For any \(\delta_{d}\in M(d)^{W_{d}}_{\mathbb{R}}\), we have \(\mathbb{S}^{\bullet}(d;\delta_{d})=\mathbb{S}^{\bullet,\ell=0}(d;\delta_{d})\)._
### The categorical wall-crossing for preprojective algebras
In this subsection we will use the notations from Subsections 2.2.6 and 2.2.7. Consider a quiver \(Q^{\circ}=(I,E^{\circ})\). Let \(Q^{\circ,d}=(I,E^{\circ,d})\) be its doubled quiver. For \(\ell\in M(d)^{W_{d}}_{0,\mathbb{R}}\), the moment map \(\mu\colon T^{*}R^{\circ}(d)\to\mathfrak{g}(d)\) induces a map \(\mu^{\ell\text{-ss}}\colon(T^{*}R^{\circ}(d))^{\ell\text{-ss}}\to\mathfrak{g}(d)\). Let
\[\mathscr{P}(d)^{\ell\text{-ss}}:=\bigl{(}\mu^{\ell\text{-ss}}\bigr{)}^{-1}(0) \big{/}G(d)\subset\mathscr{P}(d)\]
be the derived open substack of \(\ell\)-semistable representations of the preprojective algebra of \(Q^{\circ}\). Consider the restriction functor
\[\operatorname{res}\colon D^{b}(\mathscr{P}(d))\twoheadrightarrow D^{b}( \mathscr{P}(d)^{\ell\text{-ss}}). \tag{3.25}\]
The closed immersion (2.10) restricts to the closed immersion
\[j\colon\mathscr{P}(d)^{\ell\text{-ss}}\hookrightarrow\mathscr{Y}(d)^{\ell \text{-ss}}. \tag{3.26}\]
For \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), define the subcategory
\[\mathbb{T}^{\ell}(d;\delta_{d})\subset D^{b}(\mathcal{P}(d)^{\ell\text{-ss}}) \tag{3.27}\]
with objects \(\mathcal{E}\) such that, for any map \(\nu\colon B\mathbb{C}^{*}\to\mathcal{P}(d)^{\ell\text{-ss}}\), we have
\[\operatorname{wt}(\nu^{*}j^{*}j_{*}\mathcal{E})\subset\left[-\frac{1}{2}n_{ \nu},\frac{1}{2}n_{\nu}\right]+\operatorname{wt}(\nu^{*}\delta_{d}). \tag{3.28}\]
Here, we let \(n_{\nu}:=\operatorname{wt}(\det(\nu^{*}\mathbb{L}_{\mathcal{X}(d)}|_{ \mathcal{P}(d)})^{\nu>0})\), where \(\mathcal{X}(d)\) is the moduli stack of representations (2.14) of the tripled quiver \(Q\) of \(Q^{\circ}\).
**Remark 3.16**.: The subcategory (3.27) is the intrinsic window subcategory for the quasi-smooth stack \(\mathcal{P}(d)^{\ell\text{-ss}}\) defined in [Toda, Definition 5.2.13].
**Proposition 3.17**.: _The Koszul equivalence (2.15) descends to an equivalence:_
\[\Theta\colon D^{b}(\mathcal{P}(d)^{\ell\text{-ss}})\overset{\sim}{\to} \operatorname{MF}^{\operatorname{gr}}(\mathcal{X}(d)^{\ell\text{-ss}}, \operatorname{Tr}W), \tag{3.29}\]
_which restricts to an equivalence:_
\[\Theta\colon\mathbb{T}^{\ell}(d;\delta_{d})\overset{\sim}{\to}\mathbb{S}^{ \operatorname{gr},\ell}(d;\delta_{d}). \tag{3.30}\]
Proof.: Let \(\eta\colon\mathcal{X}(d)\to\mathcal{Y}(d)\) be the projection. Then we have
\[\operatorname{Crit}(\operatorname{Tr}W)\cap\mathcal{X}(d)^{\ell\text{-ss}} \subset\eta^{-1}(\mathcal{Y}(d)^{\ell\text{-ss}})\subset\mathcal{X}(d)^{\ell \text{-ss}}, \tag{3.31}\]
where the first inclusion is proved in [HLa, Lemma 4.3.22] and the second inclusion is immediate from the definition of \(\ell\)-stability. We obtain equivalences
\[D^{b}(\mathcal{P}(d)^{\ell\text{-ss}})\overset{\sim}{\to}\operatorname{MF}^{ \operatorname{gr}}(\eta^{-1}(\mathcal{Y}(d)^{\ell\text{-ss}}),\operatorname{ Tr}W)\overset{\sim}{\leftarrow}\operatorname{MF}^{\operatorname{gr}}( \mathcal{X}(d)^{\ell\text{-ss}},\operatorname{Tr}W),\]
where the first equivalence is the Koszul equivalence in Theorem 2.5 and the second equivalence follows from (3.31) together with the fact that matrix factorizations are supported on critical locus, see (2.3). Therefore we obtain the equivalence (3.29).
For an object \(\mathcal{E}\in D^{b}(\mathcal{P}(d)^{\ell\text{-ss}})\), the object \(P=\Theta(\mathcal{E})\) is in \(\mathbb{S}^{\operatorname{gr},\ell}(d;\delta_{d})\) if and only if, for any map \(\nu\colon B\mathbb{C}^{*}\to\mathcal{X}(d)^{\ell\text{-ss}}\) with \(\nu^{*}\operatorname{Tr}W=0\), the set \(\operatorname{wt}\left(\nu^{*}\mathrm{forg}(P)\right)\) satisfies the weight condition (3.8). As \(P\) is supported on \(\operatorname{Crit}(\operatorname{Tr}W)\), we may assume that the image of \(\nu\) is contained in \(\operatorname{Crit}(\operatorname{Tr}W)\cap\mathcal{X}(d)^{\ell\text{-ss}}\). By the \(\mathbb{C}^{*}\)-equivariance of \(P\) for the fiberwise weight \(2\)-action on \(\eta\colon\mathcal{X}(d)\to\mathcal{Y}(d)\) and upper semicontinuity, we have that
\[\operatorname{wt}\left(\nu^{*}\mathrm{forg}(P)\right)\subset\operatorname{wt} \left(\nu^{\prime*}\mathrm{forg}(P)\right),\]
where \(\nu^{\prime}\) is the composition
\[\nu^{\prime}\colon B\mathbb{C}^{*}\overset{\nu}{\to}\mathcal{X}(d)\overset{ \eta}{\to}\mathcal{Y}(d)\overset{0}{\hookrightarrow}\mathcal{X}(d).\]
The image of \(\nu^{\prime}\) lies in \(\mathcal{P}(d)^{\ell\text{-ss}}\hookrightarrow\mathcal{Y}(d)^{\ell\text{-ss}} \overset{0}{\hookrightarrow}\mathcal{X}(d)^{\ell\text{-ss}}\). Therefore we may assume that the image of \(\nu\) is contained in \(\mathcal{P}(d)^{\ell\text{-ss}}\). Since the object \(P\) is represented by
\[P=\Theta(\mathcal{E})=(\mathcal{E}\otimes_{\mathcal{O}_{\mathcal{P}(d)}} \mathcal{O}_{\mathcal{X}(d)})|_{\mathcal{X}(d)^{\ell\text{-ss}}}=(j_{*} \mathcal{E}\otimes_{\mathcal{O}_{\mathcal{Y}(d)}}\mathcal{O}_{\mathcal{X}(d)}) |_{\mathcal{X}(d)^{\ell\text{-ss}}},\]
it follows that \(\operatorname{wt}\left(\nu^{*}\mathrm{forg}(P)\right)\) satisfies the condition (3.8) if and only if \(\operatorname{wt}\left(\nu^{*}(j_{*}\mathcal{E})\right)\) satisfies the condition (3.28). Therefore \(\Theta(\mathcal{E})\) is in \(\mathbb{S}^{\operatorname{gr},\ell}(d;\delta_{d})\) if and only if \(\mathcal{E}\) is in \(\mathbb{T}^{\ell}(d;\delta_{d})\).
By combining Proposition 3.17 with Theorem 3.13, Corollary 3.14 and Corollary 3.15, we obtain the following:
**Theorem 3.18**.: _Suppose that \((\ell,\delta_{d})\in M(d)_{0,\mathbb{R}}^{W_{d}}\times M(d)_{\mathbb{R}}^{W_{d}}\) satisfies the conditions (1), (2) in Theorem 3.8. Then the restriction functor (3.25) induces an equivalence:_
\[\operatorname{res}\colon\mathbb{T}(d;\delta_{d})\stackrel{{ \sim}}{{\to}}\mathbb{T}^{\ell}(d;\delta_{d}).\]
**Corollary 3.19**.: _Let \(\ell\in M(d)_{0,\mathbb{R}}^{W_{d}}\) be generic. Then, for \(\delta_{d}\in U_{\ell}\), there is an equivalence \(\operatorname{res}\colon\mathbb{T}(d;\delta_{d})\stackrel{{ \sim}}{{\to}}\mathbb{T}^{\ell}(d;\delta_{d})\). In particular, for generic \(\ell,\ell^{\prime}\in M(d)_{0,\mathbb{R}}^{W_{d}}\) and \(\delta_{d}\in U_{\ell}\cap U_{\ell^{\prime}}\), there is an equivalence_
\[\mathbb{T}^{\ell}(d;\delta_{d})\simeq\mathbb{T}^{\ell^{\prime}}(d;\delta_{d}).\]
**Corollary 3.20**.: _For any \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), we have \(\mathbb{T}(d;\delta_{d})=\mathbb{T}^{\ell=0}(d;\delta_{d})\)._
### Quasi-BPS categories under Knorrer periodicity
In this subsection, we apply Corollaries 3.11 and 3.15 to obtain an equivalence of quasi-BPS categories under Knorrer periodicity, which is a particular case of the Koszul equivalence. Other than the use of Corollaries 3.11 and 3.15, the current subsection is independent of the other results and constructions discussed in Section 3. We will use the results of this subsection in Section 4 and in [PTd].
We will use the notations from Subsection 2.2. For a symmetric quiver \(Q\) and \(d\in\mathbb{N}^{I}\), let \(U\) be a \(G(d)\)-representation. Consider the closed immersion
\[j\colon\mathcal{X}(d):=R(d)/G(d)\hookrightarrow\mathcal{Y}=(R(d)\oplus U)/G(d)\]
into \((R(d)\oplus\{0\})/G(d)\). We consider the quotient stack \(\mathcal{X}^{\prime}\) and the regular function \(f\):
\[\mathcal{X}^{\prime}=(R(d)\oplus U\oplus U^{\vee})/G(d)\stackrel{{ f}}{{\to}}\mathbb{C},\ f(x,u,u^{\prime})=\langle u,u^{\prime}\rangle.\]
We consider the following Cartesian diagram
\[\begin{array}{ccc}\left(R(d)\oplus U^{\vee}\right)/G(d)&\xrightarrow{\ s\ }&\mathcal{X}^{\prime}\\ \big\downarrow{\scriptstyle v}&&\big\downarrow{\scriptstyle\eta}\\ \mathcal{X}(d)&\xrightarrow{\ j\ }&\mathcal{Y},\end{array}\]
where \(\eta\) and \(v\) are the natural projections and \(s\) is the closed immersion of the locus \(\{u=0\}\). Let \(\mathbb{C}^{*}\) act with weight \(0\) on \(R(d)\oplus U\) and with weight \(2\) on \(U^{\vee}\). In this case, the Koszul equivalence in Theorem 2.5 is given by, see [Toda, Remark 2.3.5]:
\[\Theta=s_{*}v^{*}\colon D^{b}(\mathcal{X}(d))\stackrel{{\sim}}{{ \to}}\operatorname{MF}^{\operatorname{gr}}(\mathcal{X}^{\underline{1}},f). \tag{3.32}\]
Such an equivalence is also called Knorrer periodicity [11, 12]. For a weight \(\delta_{d}^{\underline{1}}\in M(d)_{\mathbb{R}}^{W_{d}}\), define the subcategory
\[\mathbb{S}^{\operatorname{gr}}(d;\delta_{d}^{\underline{1}})\subset \operatorname{MF}^{\operatorname{gr}}(\mathcal{X}^{\underline{1}},f) \tag{3.33}\]
in a way similar to (2.28). By Lemma 2.10, it consists of matrix factorizations whose factors are direct sums of vector bundles \(\mathcal{O}_{\mathcal{X}^{\underline{1}}}\otimes\Gamma\), where \(\Gamma\) is a \(G(d)\)-representations such that, for any weight \(\chi^{\prime}\) of \(\Gamma\) and any cocharacter \(\lambda\) of \(T(d)\), we have
\[\langle\lambda,\chi^{\prime}-\delta_{d}^{\underline{1}}\rangle\in\left[-\frac{ 1}{2}n_{\lambda}^{\underline{1}},\frac{1}{2}n_{\lambda}^{\underline{1}}\right]. \tag{3.34}\]
Here, we define \(n_{\lambda}^{\underline{1}}\) by:
\[n_{\lambda}^{\underline{1}}=\langle\lambda,\mathbb{L}_{\mathcal{X}^{\underline {1}}}^{\lambda>0}\rangle=n_{\lambda}+\langle\lambda,U^{\lambda>0}\rangle+ \langle\lambda,(U^{\vee})^{\lambda>0}\rangle, \tag{3.35}\]
where recall the definition of \(n_{\lambda}\) from (2.24). The following is the main result we prove in this subsection:
**Proposition 3.21**.: _Let \(\delta_{d}^{\prime}=\delta_{d}-\frac{1}{2}\det U\in M(d)_{\mathbb{R}}^{W_{d}}\). The equivalence (3.32) restricts to the equivalence:_
\[\Theta\colon\mathbb{M}(d;\delta_{d})\overset{\sim}{\to}\mathbb{S}^{\mathrm{gr }}(d;\delta_{d}^{\prime}). \tag{3.36}\]
Proof.: We first note that, by Lemma 2.6, an object \(\mathcal{E}\in D^{b}(\mathcal{X}(d))\) satisfies \(\Theta(\mathcal{E})\in\mathbb{S}^{\mathrm{gr}}(d;\delta_{d}^{\prime})\) if and only if \(j_{*}\mathcal{E}\) is generated by vector bundles \(\mathcal{O}_{\mathcal{Y}}\otimes\Gamma^{\prime}\), where any weight \(\chi^{\prime}\) of \(\Gamma^{\prime}\) satisfies (3.34).
By Lemma 2.10, the category \(\mathbb{M}(d;\delta_{d})\) is generated by vector bundles \(\mathcal{O}_{\mathcal{X}(d)}\otimes\Gamma\) such that any weight \(\chi\) of \(\Gamma\) satisfies (2.25) for any \(\lambda\). Consider the Koszul resolution
\[j_{*}(\Gamma\otimes\mathcal{O}_{\mathcal{X}(d)})=\Gamma\otimes\mathrm{Sym}_{ \mathfrak{Y}}(\mathcal{U}^{\vee}[1]), \tag{3.37}\]
where \(\mathcal{U}\to\mathcal{Y}\) is the vector bundle associated with the \(G(d)\)-representation \(U\). Therefore, the category \(j_{*}\mathbb{M}(d;\delta_{d})\) is generated by vector bundles \(\mathcal{O}_{\mathcal{Y}}\otimes\Gamma^{\prime}\) such that any weight \(\chi^{\prime}\) of \(\Gamma^{\prime}\) satisfies
\[\langle\lambda,\chi^{\prime}-\delta_{d}\rangle\in\left[-\frac{n_{\lambda}}{2}+ \langle\lambda,(U^{\vee})^{\lambda<0}\rangle,\frac{n_{\lambda}}{2}+\langle \lambda,(U^{\vee})^{\lambda>0}\rangle\right] \tag{3.38}\]
for any \(\lambda\). By (3.35), we have
\[\frac{n_{\lambda}^{\prime}}{2}=\frac{n_{\lambda}}{2}+\langle\lambda,(U^ {\vee})^{\lambda>0}\rangle+\frac{1}{2}\langle\lambda,U\rangle=\frac{n_{\lambda }}{2}-\langle\lambda,(U^{\vee})^{\lambda<0}\rangle-\frac{1}{2}\langle\lambda,U\rangle. \tag{3.39}\]
Therefore (3.38) implies (3.34) for \(\delta_{d}^{\prime}=\delta_{d}-\frac{1}{2}\det U\), hence the functor (3.32) sends \(\mathbb{M}(d;\delta_{d})\) to \(\mathbb{S}^{\mathrm{gr}}(d;\delta_{d}^{\prime})\), which shows the fully-faithfulness of (3.36).
To show essential surjectivity of (3.36), let \(\mathcal{E}\in D^{b}(\mathcal{X}(d))\) be such that \(j_{*}\mathcal{E}\) is generated by the vector bundles \(\mathcal{O}_{\mathfrak{Y}}\otimes\Gamma^{\prime}\), where any weight \(\chi^{\prime}\) of \(\Gamma^{\prime}\) satisfies (3.34). We will show that \(\mathcal{E}\in\mathbb{M}^{\ell=0}(d;\delta_{d})\), and thus that \(\mathcal{E}\in\mathbb{M}(d;\delta_{d})\) by Corollary 3.11.
Let \(\nu\colon B\mathbb{C}^{*}\to\mathcal{X}(d)\) be a map, which corresponds to a point \(x\in R(d)\) and a cocharacter \(\lambda\colon\mathbb{C}^{*}\to T(d)\) which fixes \(x\). By the condition (3.34) for weights of \(\Gamma^{\prime}\), we have
\[\mathrm{wt}^{\max}(\nu^{*}j^{*}j_{*}\mathcal{E})\leqslant\frac{1}{2}n^{ \prime}_{\lambda}+\langle\lambda,\delta^{\prime}_{d}\rangle,\]
see (3.6) for the definition of \(\mathrm{wt}^{\max}\). On the other hand, by the Koszul resolution (3.37), we have
\[\mathrm{wt}^{\max}(\nu^{*}j^{*}j_{*}\mathcal{E})=\mathrm{wt}^{\max}(\nu^{*} \mathcal{E})+\langle\lambda,(U^{\vee})^{\lambda>0}\rangle.\]
Therefore we have
\[\mathrm{wt}^{\max}(\nu^{*}\mathcal{E})\leqslant\frac{n^{\prime}_{\lambda}}{2} +\langle\lambda,\delta^{\prime}_{d}\rangle-\langle\lambda,(U^{\vee})^{\lambda> 0}\rangle=\frac{n_{\lambda}}{2}+\langle\lambda,\delta_{d}\rangle,\]
where the last equality follows from (3.39). The lower bound
\[\mathrm{wt}^{\min}(\nu^{*}\mathcal{E})\geqslant-\frac{n_{\lambda}}{2}+ \langle\lambda,\delta_{d}\rangle\]
is proved similarly. We then have that:
\[\mathrm{wt}\,(\nu^{*}\mathcal{E})\subset\left[-\frac{n_{\lambda}}{2}+\langle \lambda,\delta_{d}\rangle,\frac{n_{\lambda}}{2}+\langle\lambda,\delta_{d} \rangle\right].\]
Thus \(\mathcal{E}\in\mathbb{M}^{\ell=0}(d;\delta_{d})\), and then \(\mathcal{E}\in\mathbb{M}(d;\delta_{d})\) by Corollary 3.11.
Let \(W\) be a potential of \(Q\). By abuse of notation, we denote by \(\operatorname{Tr}W\colon\mathcal{X}^{\prime}\to\mathbb{C}\) the pull-back of \(\operatorname{Tr}W\colon\mathcal{X}(d)\to\mathbb{C}\) by the natural projection \(\mathcal{X}^{\prime}\to\mathcal{X}(d)\). There is
an equivalence similar to (3.32), also called Knorrer periodicity, see [18, Theorem 4.2], [19]:
\[\Theta=s_{*}v^{*}\colon\operatorname{MF}(\mathcal{X}(d),\operatorname{Tr}W) \stackrel{{\sim}}{{\to}}\operatorname{MF}(\mathcal{X}^{\prime},\operatorname{Tr}W+f). \tag{3.40}\]
The subcategory
\[\mathbb{S}(d;\delta_{d}^{\prime})\subset\operatorname{MF}(\mathcal{X}^{\prime}, \operatorname{Tr}W+f)\]
is defined similarly to (3.33). The following proposition is proved in the same way as Proposition 3.21, using Corollary 3.15 instead of Corollary 3.11.
**Proposition 3.22**.: _Let \(\delta_{d}^{\prime}=\delta_{d}-\frac{1}{2}\det U\in M(d)_{\mathbb{R}}^{ W_{d}}\). The equivalence (3.40) restricts to the equivalence:_
\[\Theta\colon\mathbb{S}(d;\delta_{d})\stackrel{{\sim}}{{\to}} \mathbb{S}(d;\delta_{d}^{\prime}).\]
## 4. The semiorthogonal decompositions of DT categories
In this section, we construct semiorthogonal decompositions for the moduli of (framed or unframed) representations of certain symmetric quivers (see Subsection 4.8) in terms of quasi-BPS categories, see Theorem 4.1, 4.2, 4.19 and Corollary 4.16. The results generalize the decomposition of DT categories of points on \(\mathbb{C}^{3}\) from [14, Theorem 1.1] and the decomposition of the Hall algebra of \(\mathbb{C}^{3}\) (equivalently, of the Porta-Sala Hall algebra of \(\mathbb{C}^{2}\)) from [19, Theorem 1.1], [20, Theorem 1.1].
### Semiorthogonal decompositions
The following is the main result in this section, which provides a semiorthogonal decomposition of \(D^{b}\left(\mathscr{X}^{f}(d)^{\operatorname{ss}}\right)\) into products of quasi-BPS categories of \(Q\). Recall the definition of a good weight from Definition 2.12, the category \(\mathbb{M}(d;\delta_{d})\) from Definition 2.9, and the Weyl-invariant real weights \(\tau_{d},\sigma_{d}\) from Subsection 2.2.2. Recall also the convention about the product of categories of matrix factorizations from Subsection 2.1.
**Theorem 4.1**.: _Let \(Q\) be a symmetric quiver such that the number of loops at each vertex \(i\in I\) has the same parity. Let \(d\in\mathbb{N}^{I}\), let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), and let \(\mu\in\mathbb{R}\) such that \(\delta_{d}+\mu\sigma_{d}\) is a good weight. For a partition \((d_{i})_{i=1}^{k}\) of \(d\), let \(\lambda\) be an associated antidominant cocharacter and define the weights \(\delta_{d_{i}}\in M(d_{i})_{\mathbb{R}}^{W_{d_{i}}}\), \(\theta_{i}\in\frac{1}{2}M(d_{i})^{W_{d_{i}}}\) for \(1\leqslant i\leqslant k\) by:_
\[\sum_{i=1}^{k}\delta_{d_{i}}=\delta_{d},\ \sum_{i=1}^{k}\theta_{i}=-\frac{1}{2 }R(d)^{\lambda>0}+\frac{1}{2}\mathfrak{g}(d)^{\lambda>0}. \tag{4.1}\]
_There is a semiorthogonal decomposition_
\[D^{b}\left(\mathscr{X}^{f}(d)^{\operatorname{ss}}\right)=\left\langle\bigotimes _{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}}):\mu \leqslant\frac{v_{1}}{\underline{d}_{1}}<\cdots<\frac{v_{k}}{\underline{d}_{ k}}<1+\mu\right\rangle, \tag{4.2}\]
_where the summands on the right hand side of (4.2) are indexed by the partitions \((d_{i})_{i=1}^{k}\) of \(d\) and the real numbers \((v_{i})_{i=1}^{k}\in\mathbb{R}^{k}\) such that the sum of coefficients of \(\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}}\) is an integer for all \(1\leqslant i\leqslant k\). The order on the summands is as in Subsection 4.6._
_The functor from a summand on the right hand side to \(D^{b}\left(\mathscr{X}^{f}(d)^{\operatorname{ss}}\right)\) is the composition of the Hall product with the pullback along the forget-the-framing map \(\mathscr{X}^{f}(d)^{\operatorname{ss}}\to\mathscr{X}(d)\). Further, the decomposition (4.2) is \(X(d)\)-linear for the map \(\pi_{f,d}\colon\mathscr{X}^{f}(d)^{\operatorname{ss}}\to\mathscr{X}(d)\xrightarrow{\pi_{X,d}}X(d)\)._
The same argument also applies to obtain the following similar semiorthogonal decomposition using Theorem 4.13 for unframed moduli stacks. Note that this decomposition is different from the one discussed in [10, Theorem 1.1], which we could not use to obtain a decomposition of \(D^{b}\left(\mathscr{X}^{f}(d)^{\text{ss}}\right)\) as in Theorem 4.1.
**Theorem 4.2**.: _Let \(Q\) be a symmetric quiver such that the number of loops at each vertex \(i\in I\) has the same parity. Let \(d\in\mathbb{N}^{I}\) and let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\). For a partition \((d_{i})_{i=1}^{k}\) of \(d\), define the weights \(\delta_{d_{i}}\in M(d_{i})_{\mathbb{R}}^{W_{d_{i}}}\), \(\theta_{i}\in\frac{1}{2}M(d_{i})^{W_{d_{i}}}\) for \(1\leqslant i\leqslant k\) as in (4.1). There is a semiorthogonal decomposition_
\[D^{b}\left(\mathscr{X}(d)\right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{M}(d _{i};\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}}):\frac{v_{1}}{\underline{d}_ {1}}<\cdots<\frac{v_{k}}{\underline{d}_{k}}\right\rangle, \tag{4.3}\]
_where the right hand side in (4.3) is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and real numbers \((v_{i})_{i=1}^{k}\in\mathbb{R}^{k}\) such that the sum of coefficients of \(\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}}\) is an integer for all \(1\leqslant i\leqslant k\). The order on the summands is as in Subsection 4.6._
_The functor from a summand on the right hand side to \(D^{b}\left(\mathscr{X}(d)\right)\) is given by the Hall product. The decomposition (4.3) is \(X(d)\)-linear._
The plan of proof for both Theorems 4.1 and 4.2 is as follows: we first prove them in a particular case, that of very symmetric quivers; we then use the Koszul equivalence to obtain the general statements above.
Using [10, Proposition 2.3] and [10, Proposition 2.1], one obtains versions of Theorems 4.1 and 4.2 for quivers with potential, see Subsection 4.12. Note that both theorems apply to tripled quivers, and thus to tripled quivers with potential.
### Very symmetric quivers
We introduce a class of quivers for which we can prove Theorems 4.1 and 4.2 using the arguments employed for the quiver with one vertex and three loops in [10, 11].
**Definition 4.3**.: A quiver \(Q=(I,E)\) is a _very symmetric quiver_ if there exists an integer \(A\in\mathbb{Z}_{\geqslant 1}\) such that, for any vertices \(a,b\in I\), the number of edges from \(a\) to \(b\) is \(A\).
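For instance, the quiver with one vertex and three loops is very symmetric with \(A=3\); it is the tripled quiver of the Jordan quiver and is the quiver relevant to the DT categories of points on \(\mathbb{C}^{3}\) mentioned above.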
The first step in proving Theorem 4.1 is:
**Theorem 4.4**.: _Let \(Q\) be a very symmetric quiver. Then Theorem 4.1 holds for \(Q\)._
Until Subsection 4.8, we assume the quiver \(Q\) is very symmetric.
The proof of Theorem 4.4 follows closely the proof of [10, Theorem 3.2]. As in loc. cit., the claim follows from the following semiorthogonal decomposition for subcategories of \(D^{b}(\mathscr{X}(d))\) using "window categories". Recall the definition of the categories \(\mathbb{D}(d;\delta)\) from (2.26).
**Theorem 4.5**.: _Let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\) and let \(\mu\in\mathbb{R}\) such that \(\delta_{d}+\mu\sigma_{d}\) is a good weight. Then there is a semiorthogonal decomposition_
\[\mathbb{D}(d;\delta_{d}+\mu\sigma_{d})=\left\langle\bigotimes_{i=1}^{k} \mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}})\right\rangle, \tag{4.4}\]
_where the right hand side is as in Theorem 4.1._
Proof of Theorem 4.4 assuming Theorem 4.5.: We briefly explain why Theorem 4.5 implies Theorem 4.4, for full details see [10, Proof of Theorem 3.2]. Consider the
morphisms
\[\mathcal{X}^{f}(d)^{\rm ss}\stackrel{{ j}}{{\hookrightarrow}}\mathcal{X }^{f}(d)\stackrel{{\pi}}{{\twoheadrightarrow}}\mathcal{X}(d). \tag{4.5}\]
where \(j\) is an open immersion and \(\pi\) is the natural projection. Let
\[\mathbb{E}(d;\delta_{d}+\mu\sigma_{d})\subset D^{b}(\mathcal{X}^{f}(d))\]
be the subcategory generated by the complexes \(\pi^{*}(D)\) for \(D\in\mathbb{D}(d;\delta_{d}+\mu\sigma_{d})\). If \(\delta_{d}+\mu\sigma_{d}\) is a good weight, then there is an equivalence of categories
\[j^{*}\colon\mathbb{E}(d;\delta_{d}+\mu\sigma_{d})\stackrel{{ \sim}}{{\to}}D^{b}\left(\mathcal{X}^{f}(d)^{\rm ss}\right),\]
see [4, Proof of Proposition 3.13]. The equivalence follows from the theory of "window categories" of Halpern-Leistner [10], Ballard-Favero-Katzarkov [1], and the description of "window categories" via explicit generators (due to Halpern-Leistner-Sam [12]) for the self-dual representation of \(G(d)\):
\[R^{f}(d)\oplus V(d)^{\vee}=R(d)\oplus V(d)\oplus V(d)^{\vee}.\]
Further, there is a semiorthogonal decomposition
\[D^{b}(\mathcal{X}^{f}(d))=\langle\pi^{*}D^{b}(\mathcal{X}(d))_{w}:w\in\mathbb{ Z}\rangle\]
and equivalences \(\pi^{*}\colon D^{b}(\mathcal{X}(d))_{w}\stackrel{{\sim}}{{\to}} \pi^{*}D^{b}(\mathcal{X}(d))_{w}\) for each \(w\in\mathbb{Z}\). Therefore, by Theorem 4.5, there is a semiorthogonal decomposition
\[\mathbb{E}(d;\delta_{d}+\mu\sigma_{d})=\left\langle\bigotimes_{i=1}^{k} \mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}})\right\rangle,\]
where the right hand side is as in Theorem 4.4. Therefore the theorem holds.
### Decompositions of weights
The proof of Theorem 4.5 closely follows the proof of [4, Proposition 3.9]. It uses the decomposition of categorical Hall algebras for quivers with potential in quasi-BPS categories from [2]. The summands in this semiorthogonal decomposition are indexed by decompositions of weights of \(T(d)\), which we now briefly review.
Before stating it, we introduce some notations. For \(d_{a}\) a summand of a partition of \(d\), denote by \(M(d_{a})\subset M(d)\) the subspace as in the decomposition from Subsection 2.2.3. Assume that \(\ell\) is a partition of a dimension \(d_{a}\in\mathbb{N}^{I}\), alternatively, \(\ell\) is an edge of the tree \(\mathcal{T}\) introduced in Subsection 2.2. Let \(\lambda_{\ell}\) be the corresponding antidominant cocharacter of \(T(d_{a})\). Recall the set
\[\mathcal{A}=\{(\beta_{i}^{a}-\beta_{j}^{b})^{\times A}\mid a,b\in I,1\leqslant i\leqslant d^{a},1\leqslant j\leqslant d^{b}\}\]
from (2.20). Let
\[\mathcal{A}_{\ell}\subset M(d_{a})\cap\mathcal{A}\]
be the multiset of weights \(\beta\) in \(M(d_{a})\cap\mathcal{A}\) such that \(\langle\lambda_{\ell},\beta\rangle>0\). Define \(N_{\ell}\) by
\[N_{\ell}:=\sum_{\beta\in\mathcal{A}_{\ell}}\beta.\]
**Proposition 4.6**.: _Let \(\chi\) be a dominant weight in \(M(d)_{\mathbb{R}}\), let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), and let \(w=\langle 1_{d},\chi-\delta_{d}\rangle\in\mathbb{R}\). There exists:_
1. _a path of partitions_ \(T\)_, see Subsection_ 2.2_, with decomposition_ \((d_{i})_{i=1}^{k}\) _at the end vertex,_
2. _coefficients_ \(r_{\ell}\) _for_ \(\ell\in T\) _such that_ \(r_{\ell}>1/2\) _if_ \(\ell\) _corresponds to a partition with length_ \(>1\)_, and_ \(r_{\ell}=0\) _otherwise; further, if_ \(\ell,\ell^{\prime}\in T\) _are vertices corresponding to partitions with length_ \(>1\)_, and with a path from_ \(\ell\) _to_ \(\ell^{\prime}\)_, then_ \(r_{\ell}>r_{\ell^{\prime}}>\frac{1}{2}\)_, and_
3. _dominant weights_ \(\psi_{i}\in\mathbf{W}(d_{i})\) _for_ \(1\leqslant i\leqslant k\) _such that:_ (4.6) \[\chi+\rho-\delta_{d}=-\sum_{\ell\in T}r_{\ell}N_{\ell}+\sum_{i=1}^{k}\psi_{i}+ w\tau_{d}.\]
Proof.: The above is proved in [10, Subsection 3.1.2], see also [10, Subsection 3.2.8].
We briefly explain the process of obtaining the decomposition (4.6). Choose \(r\) such that \(\chi+\rho-\delta_{d}-w\tau_{d}\) is on the boundary of \(2r\mathbf{W}(d)\) (that is, let \(r\) be the \(r\)-invariant of \(\chi+\rho-w\tau_{d}\)). The first partition \(\ell_{1}\) in \(T\) corresponds to the face of \(2r\mathbf{W}(d)\) which contains \(\chi+\rho-\delta_{d}-w\tau_{d}\) in its interior. Assume \(\ell_{1}\) corresponds to a partition \((e_{i})_{i=1}^{s}\). Then there exist weights \(\chi_{i}^{\prime}\in M(e_{i})_{0}\) for \(1\leqslant i\leqslant s\) such that
\[\chi+\rho-\delta_{d}-w\tau_{d}+r_{\ell_{1}}N_{\ell_{1}}=\sum_{i=1}^{s}\chi_{i} ^{\prime}.\]
By the choice of \(r\) and \(\ell_{1}\), the weights \(\chi_{i}^{\prime}\) are inside the polytopes \(2r\mathbf{W}(e_{i})\). One repeats the process above to decompose further the weights \(\chi_{i}^{\prime}\) until the decomposition (4.6) is obtained.
Let \(w\in\mathbb{Z}\) and assume that \(\langle 1_{d},\delta_{d}\rangle=w\). Let \(M(d)_{w}^{+}\) be the subset of \(M(d)\) of integral dominant weights \(\chi\) with \(\langle 1_{d},\chi\rangle=w\). We denote by \(L^{d}_{\delta_{d}}\) the set of all paths of partitions \(T\) with coefficients \(r_{\ell}\) for \(\ell\in T\) satisfying (2) from the statement of Proposition 4.6 for a dominant integral weight \(\chi\in M(d)_{w}^{+}\). By Proposition 4.6, there is a map:
\[\Upsilon\colon M(d)_{w}^{+}\to L^{d}_{\delta_{d}},\ \Upsilon(\chi)=(T,r_{ \ell}).\]
The set \(L^{d}_{\delta_{d}}\) was used in [10] to index summands in semiorthogonal decompositions of \(D^{b}(\mathfrak{X}(d))_{w}\). We will show in the next subsection that \(L^{d}_{\delta_{d}}\) has an explicit description.
### Partitions associated to dominant weights
We continue with the notation from Subsection 4.3. By the following proposition, the weight \(-\sum_{\ell\in T}r_{\ell}N_{\ell}\) from (4.6) is a linear combination of the weights \(\tau_{d_{i}}\) for \(1\leqslant i\leqslant k\).
**Proposition 4.7**.: _Let \(\lambda\) be an antidominant cocharacter associated to the partition \((d_{i})_{i=1}^{k}\) of \(d\). Then \(R(d)^{\lambda>0}\) is a linear combination of the weights \(\tau_{d_{i}}\) (alternatively, of \(\sigma_{d_{i}}\)) for \(1\leqslant i\leqslant k\)._
Proof.: This follows from a direct computation, for example when \(k=2\), one computes directly that
\[R(d)^{\lambda>0}=A\sum_{a,b\in I}\sum_{d_{1}^{a}<i\leqslant d^{a},1\leqslant j\leqslant d_{1}^{b}}(\beta_{j}^{b}-\beta_{i}^{a})=A\left(\underline{d}_{2}\sigma_{d_{1}}-\underline{d}_{1}\sigma_{d_{2}}\right).\]
**Remark 4.8**.: The conclusion of Proposition 4.7 is not true for a general symmetric quiver.
Consider the decomposition (4.6) and let \(\lambda\) be the antidominant cocharacter corresponding to \((d_{i})_{i=1}^{k}\). We define \(v_{i}\in\mathbb{R}\) for \(1\leqslant i\leqslant k\) by
\[\sum_{i=1}^{k}v_{i}\tau_{d_{i}}=-\sum_{\ell\in T}\left(r_{\ell}-\frac{1}{2} \right)N_{\ell}+w\tau_{d}=-\sum_{\ell\in T}r_{\ell}N_{\ell}+\frac{1}{2}R(d)^{ \lambda>0}+w\tau_{d}. \tag{4.7}\]
Here the right hand side is a linear combination of \(\tau_{d_{i}}\) by Proposition 4.7, so \(v_{i}\in\mathbb{R}\) is well-defined. We define the weights \(\theta_{i}\in M(d_{i})_{\mathbb{R}}^{W_{d_{i}}}\) by
\[\sum_{i=1}^{k}\theta_{i}=-\frac{1}{2}R(d)^{\lambda>0}+\frac{1}{2}\mathfrak{g} (d)^{\lambda>0}. \tag{4.8}\]
Then we rewrite (4.6) as
\[\chi=\sum_{i=1}^{k}\theta_{i}+\sum_{i=1}^{k}v_{i}\tau_{d_{i}}+\sum_{i=1}^{k}( \psi_{i}-\rho_{i}+\delta_{d_{i}}), \tag{4.9}\]
where \(\rho_{i}\) is half the sum of positive roots of \(\mathfrak{g}(d_{i})\). The next proposition follows as in [PTa, Proposition 3.5]:
**Proposition 4.9**.: _Let \(\chi\) be a dominant weight in \(M(d)\) and consider the weights \((v_{i})_{i=1}^{k}\) from (4.7). Then_
\[\frac{v_{1}}{\underline{d}_{1}}<\ldots<\frac{v_{k}}{\underline{d}_{k}}. \tag{4.10}\]
Let \(T_{w}^{d}\) be the set of tuples \(A=(d_{i},v_{i})_{i=1}^{k}\) such that \((d_{i})_{i=1}^{k}\) is a partition of \(d\) and the real numbers \((v_{i})_{i=1}^{k}\in\mathbb{R}^{k}\) are such that:
* \(\sum_{i=1}^{k}v_{i}=w\),
* the inequality (4.10) holds,
* for each \(1\leqslant i\leqslant k\), the sum of coefficients of \(\theta_{i}+v_{i}\tau_{d_{i}}+\delta_{d_{i}}\) is an integer.
By Proposition 4.9, there is a map
\[\varphi\colon L_{\delta_{d}}^{d}\to T_{w}^{d}. \tag{4.11}\]
**Proposition 4.10**.: _The map (4.11) is a bijection._
The above follows as in [PTa, Proposition 3.8]. The proof goes by constructing an inverse \(\varphi^{\prime}\). To construct \(\varphi^{\prime}\), one chooses \(\psi_{i}\in\mathbf{W}(d_{i})\) such that \(\chi\) defined in (4.9) is an integral weight.
Thus the summands appearing in the semiorthogonal decomposition of \(D^{b}(\mathcal{X}(d))_{w}\) from [Pad, Theorem 1.1] are labeled by elements of \(T_{w}^{d}\).
### Partitions for framed quivers
The following proposition is the analogue of [PTa, Proposition 3.7].
**Proposition 4.11**.: _Let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\) and let \(\chi\in M(d)\) be a dominant weight. Consider the decomposition (4.9) with associated partition \((d_{i})_{i=1}^{k}\) and weights \((v_{i})_{i=1}^{k}\in\mathbb{R}^{k}\). Let \(\mu\in\mathbb{R}\) and assume that_
\[\chi+\rho-\delta_{d}-\mu\sigma_{d}\in\mathbf{V}(d). \tag{4.12}\]
_Then_
\[\mu\leqslant\frac{v_{1}}{\underline{d}_{1}}<\ldots<\frac{v_{k}}{\underline{d} _{k}}\leqslant 1+\mu. \tag{4.13}\]
Proof.: Using the decomposition (4.9), we have that
\[\chi+\rho-\delta_{d}-\mu\sigma_{d}=-\frac{1}{2}R(d)^{\lambda>0}+\sum_{i=1}^{k}(v_ {i}-\mu\underline{d}_{i})\tau_{d_{i}}+\sum_{i=1}^{k}\psi_{i}. \tag{4.14}\]
Let \(\alpha_{k}\) be the (dominant) cocharacter of \(T(d)\) which acts with weight \(1\) on \(\beta_{i}^{a}\) for \(a\in I\) and \(d^{a}-d_{k}^{a}<i\leqslant d^{a}\) and with weight \(0\) on \(\beta_{i}^{a}\) for \(a\in I\) and \(d^{a}-d_{k}^{a}\geqslant i\). By (4.14), we have that
\[\langle\alpha_{k},\chi+\rho-\delta_{d}-\mu\sigma_{d}\rangle=\left\langle \alpha_{k},-\frac{1}{2}R(d)^{\lambda>0}\right\rangle+v_{k}-\mu\underline{d}_{ k}. \tag{4.15}\]
On the other hand, we have
\[\mathbf{V}(d)=\frac{1}{2}\mathrm{sum}_{\beta\in\mathcal{C}}[0,\beta],\]
where recall that
\[\mathcal{C}=\{(\beta_{i}^{a}-\beta_{j}^{b})^{\times A},\beta_{i}^{a}\ |\ a,b\in I,1 \leqslant i\leqslant d^{a},1\leqslant j\leqslant d^{b}\}.\]
Then \(-R(d)^{\lambda>0}/2+\underline{d}_{k}\tau_{d_{k}}\) has maximum \(\alpha_{k}\)-weight among weights in \(\mathbf{V}(d)\). Therefore from (4.12) we obtain that
\[\langle\alpha_{k},\chi+\rho-\delta_{d}-\mu\sigma_{d}\rangle\leqslant\left\langle \alpha_{k},-\frac{1}{2}R(d)^{\lambda>0}\right\rangle+\underline{d}_{k}. \tag{4.16}\]
By comparing (4.15) and (4.16), we conclude that \(v_{k}-\mu\underline{d}_{k}\leqslant\underline{d}_{k}\), so \(v_{k}/\underline{d}_{k}\leqslant 1+\mu\). A similar argument also shows the lower bound.
**Corollary 4.12**.: _In the setting of Proposition 4.11, consider the decomposition (4.14). Recall the sets \(\mathcal{A}\) and \(\mathcal{B}\) from (2.20) and define \(\mathcal{A}_{\lambda}:=\{\beta\in\mathcal{A}\ |\ \langle\lambda,\beta\rangle>0\}\). Then there are \(\psi_{i}\in\mathbf{W}(d_{i})\) for \(1\leqslant i\leqslant k\) such that:_
\[-\frac{1}{2}R(d)^{\lambda>0}\in\frac{1}{2}\mathrm{sum}_{\beta\in\mathcal{A}_{ \lambda}}[0,-\beta],\ \sum_{i=1}^{k}(v_{i}-\mu\underline{d}_{i})\tau_{d_{i}}\in\mathrm{sum}_{\beta \in\mathcal{B}}[0,\beta].\]
Proof.: The inclusion
\[\sum_{i=1}^{k}(v_{i}-\mu\underline{d}_{i})\tau_{d_{i}}\in\mathrm{sum}_{\beta \in\mathcal{B}}[0,\beta]\]
follows from Proposition 4.11, see [PTa, Proposition 3.8]. The rest of the decomposition is immediate.
### Comparison of partitions
In this subsection, we explain the order used in the semiorthogonal decomposition from Theorem 4.4, see also the discussion in Subsection 1.8.
Fix \(d\in\mathbb{N}^{I}\) and a weight \(\delta^{\circ}\in M(d)_{0,\mathbb{R}}^{W_{d}}\). Define \(L^{d}_{\delta^{\circ},w}:=L^{d}_{\delta^{\circ}+w\tau_{d}}\) and \(L^{d}_{\delta^{\circ}}:=\bigcup_{w\in\mathbb{Z}}L^{d}_{\delta^{\circ},w}\). We define a set
\[O\subset L^{d}_{\delta^{\circ}}\times L^{d}_{\delta^{\circ}}\]
which is used to compare summands of semiorthogonal decompositions, see Subsection 1.8.
For \(w>w^{\prime}\), let \(O_{w,w^{\prime}}:=L^{d}_{\delta^{\circ},w}\times L^{d}_{\delta^{\circ},w^{ \prime}}\). For \(w<w^{\prime}\), let \(O_{w,w^{\prime}}\) be the empty set.
We now define \(O_{w,w}\subset L^{d}_{\delta^{\circ},w}\times L^{d}_{\delta^{\circ},w}\). The general procedure for defining such a set, equivalently for comparing two partitions for an arbitrary symmetric quiver is described in [Pad, Subsection 3.3.4]. Consider the path of partitions \(T_{A}\) with
coefficients \(r_{\ell,A}\) as in (4.6) corresponding to \(A\in L^{d}_{\delta^{\circ},w}\). Order the coefficients \(r_{\ell,A}\) in decreasing order \(r^{\prime}_{1,A}>r^{\prime}_{2,A}>\cdots>r^{\prime}_{f(A),A}.\) Each \(r^{\prime}_{i,A}\) for \(1\leqslant i\leqslant f(A)\) corresponds to a partition \(\pi_{i,A}\). Similarly, consider the path of partitions \(T_{B}\) with coefficients \(r_{\ell,B}\) corresponding to \(B\in L^{d}_{\delta^{\circ},w}\). Define similarly \(r^{\prime}_{1,B}>\cdots>r^{\prime}_{f(B),B}\) and \(\pi_{i,B}\) for \(1\leqslant i\leqslant f(B)\).
Define the set \(R\subset L^{d}_{\delta^{\circ},w}\times L^{d}_{\delta^{\circ},w}\) which contains pairs \((A,B)\) such that
* there exists \(n\geqslant 1\) such that \(r^{\prime}_{n,A}>r^{\prime}_{n,B}\) and \(r^{\prime}_{i,A}=r^{\prime}_{i,B}\) for \(i<n\), or
* there exists \(n\geqslant 1\) such that \(r^{\prime}_{i,A}=r^{\prime}_{i,B}\) for \(i\leqslant n\), \(\pi_{i,A}=\pi_{i,B}\) for \(i<n\), and \(\pi_{n,B}\geqslant\pi_{n,A}\), see Subsection 2.1, or
* the pair \((A,B)\) is of the form \((A,A)\).
We then let \(O_{w,w}:=L^{d}_{\delta^{\circ},w}\times L^{d}_{\delta^{\circ},w}\setminus R\) and
\[O:=\bigcup_{w,w^{\prime}\in\mathbb{Z}}O_{w,w^{\prime}}. \tag{4.17}\]
In the current paper, we will only use the fact that such an order exists.
In order to make the above process more accessible, we explain how to compute \(r^{\prime}_{1,A}\) and \(\pi_{1,A}\). From Proposition 4.10, there is an isomorphism of sets \(L^{d}_{\delta^{\circ},w}\cong T^{d}_{w}\).
For a dominant weight \(\theta\in M(d)^{+}_{\mathbb{R}}\) with \(\langle 1_{d},\theta\rangle=w\), its \(r\)-invariant is the smallest \(r\) such that \(\theta-w\tau_{d}\in 2r\mathbf{W}(d)\). Equivalently, the \(r\)-invariant of a dominant weight \(\theta\in M(d)^{+}_{\mathbb{R}}\) is the maximum after all dominant cocharacters \(\mu\) of \(ST(d)\):
\[r(\theta)=\max_{\mu}\frac{\langle\mu,\theta\rangle}{\left\langle\mu,R(d)^{ \mu>0}\right\rangle}, \tag{4.18}\]
see [Pad, Subsection 3.1.1]. Assume that \(A=(d_{i},v_{i})_{i=1}^{k}\in T^{d}_{w}\); we also denote by \(A\) the corresponding element of \(L^{d}_{\delta^{\circ},w}\). Then one can show that
\[r^{\prime}_{1,A}=r(\chi_{A}+\rho),\]
where \(\chi_{A}:=\sum_{i=1}^{k}v_{i}\tau_{d_{i}}+\sum_{i=1}^{k}\theta_{i}\in M(d)_{ \mathbb{R}}\), see (4.8) for the definition of \(\theta_{i}\). We let \(\lambda\) be the antidominant cocharacter corresponding to the partition \((d_{i})_{i=1}^{k}\). By (4.7) and Proposition 4.7, write
\[\chi_{A}+\frac{1}{2}\mathfrak{g}(d)^{\lambda<0}=\sum_{i=1}^{k}w_{i}\tau_{d_{ i}}.\]
There is a transformation:
\[(d_{i},v_{i})_{i=1}^{k}\mapsto(d_{i},w_{i})_{i=1}^{k}.\]
We will compute \(r^{\prime}_{1,A}\) in terms of \((w_{i})_{i=1}^{k}\). Let \(\mu\) be a dominant cocharacter attaining the maximum above and assume the associated partition of \(\mu\) is \((e_{i})_{i=1}^{s}\). Then the maximum in (4.18) is also attained for a cocharacter with associated partition \(\left(\sum_{i\leqslant b}e_{i},\sum_{i>b}e_{i}\right)\) for any \(1\leqslant b<s\), see [Pad, Proposition 3.2]. We have that \((d_{i})_{i=1}^{k}\geqslant(e_{i})_{i=1}^{s}\), see Subsection 2.1 for the notation, and Proposition 4.6.
Let \(\mu_{a}\) be the cocharacter of \(T(d)\) which acts with weight \(\sum_{i\leqslant a}\underline{d}_{i}\) on the coordinates lying in the blocks corresponding to \(d_{j}\) with \(j>a\), and with weight \(-\sum_{i>a}\underline{d}_{i}\) on the coordinates lying in the blocks corresponding to \(d_{j}\) with \(j\leqslant a\). We have that
\[r(\chi_{A}+\rho)=\max_{a}\frac{\langle\mu_{a},\chi_{A}+\rho\rangle}{\left\langle \mu_{a},R(d)^{\mu_{a}>0}\right\rangle},\]
where the maximum is taken after all \(1\leqslant a<k\). We compute
\[\left\langle\mu_{a},R(d)^{\mu_{a}>0}\right\rangle=A\underline{d}\left(\sum_{i>a} \underline{d}_{i}\right)\left(\sum_{i\leqslant a}\underline{d}_{i}\right).\]
Then
\[r(\chi_{A}+\rho)=r_{1,A}^{\prime}=\frac{1}{A}\text{max}_{a}\left(\frac{\sum_{i> a}w_{i}}{\sum_{i>a}\underline{d}_{i}}-\frac{\sum_{i\leqslant a}w_{i}}{\sum_{i \leqslant a}\underline{d}_{i}}\right),\]
where the maximum is taken after \(1\leqslant a<k\). The partition \(\pi_{1,A}\) can be reconstructed from all \(1\leqslant a<k\) for which \(\mu_{a}\) attains the maximum of (4.18). Assume the set of all such \(1\leqslant a<k\) is \(1\leqslant a_{2}<\dots<a_{s}<k\). Then \(\pi_{1,A}=(e_{i})_{i=1}^{s}\) is the partition of \(d\) with terms:
\[(d_{1}+\dots+d_{a_{2}},\dots,d_{a_{s}+1}+\dots+d_{k}).\]
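As an illustration of the formula for \(r^{\prime}_{1,A}\) and of the reconstruction of \(\pi_{1,A}\) (with hypothetical data), suppose \(A=1\), \(k=3\), \(\underline{d}_{1}=\underline{d}_{2}=\underline{d}_{3}=1\), and \((w_{1},w_{2},w_{3})=(0,1,3)\). Then

\[r^{\prime}_{1,A}=\max\left(\frac{1+3}{2}-\frac{0}{1},\ \frac{3}{1}-\frac{0+1}{2}\right)=\frac{5}{2},\]

and the maximum is attained only at \(a=2\), so \(\pi_{1,A}=(d_{1}+d_{2},d_{3})\).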
### Semiorthogonal decompositions for very symmetric quivers
We now prove Theorem 4.5, and thus Theorem 4.4.
Proof of Theorem 4.5.: The same argument used to prove [PTa, Proposition 3.9] applies here. The argument in loc. cit. was organized in three steps:
1. for \((d_{i})_{i=1}^{k}\) a partition of \(d\) and \((v_{i})_{i=1}^{k}\in\mathbb{R}^{k}\) as in the statement of Theorem 4.5, the categorical Hall product \[\boxtimes_{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i }})\to D^{b}(\mathscr{X}(d))\] has image in \(\mathbb{D}(d;\delta_{d}+\mu\sigma_{d})\),
2. the categories on the right hand side of (4.4) are semiorthogonal for the ordering introduced in Subsection 4.6, see [Pad, Subsection 3.3], [PTa, Subsection 3.4],
3. the categories on the right hand side of (4.4) generate \(\mathbb{D}(d;\delta_{d}+\mu\sigma_{d})\).
The proofs of (2) and (3) are exactly as in loc. cit. and follow from the semiorthogonal decomposition of the categorical Hall algebra for a quiver from [Pad].
We explain the shifts used in defining the categories on the right hand side of (4.4). Consider the weights \(\chi_{i}\in M(d_{i})\) such that \(\sum_{i=1}^{k}\chi_{i}=\chi\). The decomposition (4.9) can be rewritten as
\[\sum_{i=1}^{k}\left(\chi_{i}+\rho_{i}-\delta_{d_{i}}\right)=\sum_{i=1}^{k} \left(\theta_{i}+v_{i}\tau_{d_{i}}+\psi_{i}\right),\]
and so
\[\chi_{i}+\rho_{i}-(\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}})=\psi_{i}\in \mathbf{W}(d_{i}).\]
The proof of (1) follows from Corollary 4.12 and an explicit resolution by vector bundles of the Hall product of generators of the categories \(\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}})\) for \(1\leqslant i\leqslant k\), see Step 1 in the proof of [PTa, Proposition 3.9] and [PTa, Proposition 2.1].
In Theorem 4.5, we obtained a semiorthogonal decomposition after decomposing the polytope \(\mathbf{V}(d)\) in translates of direct sums of the polytopes \(\mathbf{W}(e)\) for dimension vectors \(e\) which are parts of partitions of \(d\). We can also decompose the full weight lattice \(M(d)\) to prove Theorem 4.13, which follows from [Pad, Theorem 1.1] applied to the quiver \(Q\) together with Proposition 4.9, see also the argument in [Pad23, Corollary 3.3].
**Theorem 4.13**.: _Let \(Q\) be a very symmetric quiver. Then Theorem 4.2 holds for \(Q\)._
Note that Theorem 4.13 follows from the analogues (for the full weight lattice) of Steps (2) and (3) from the proof of Theorem 4.5.
### A class of symmetric quivers
In this subsection, we discuss some preliminaries before proving Theorems 4.1 and 4.2. From now on, we no longer assume that \(Q\) is a very symmetric quiver.
Let \(Q=(I,E)\) be a symmetric quiver such that the number of loops at each vertex \(a\in I\) has the same parity \(\varepsilon\in\mathbb{Z}/2\mathbb{Z}\). We construct a very symmetric quiver \(Q^{\mathfrak{I}}=(I,E^{\mathfrak{I}})\) with potential as follows. For \(a,b\in I\), let \(e_{ab}\) be the number of edges from \(a\) to \(b\) in \(Q\). Choose \(A\in\mathbb{Z}_{\geqslant 1}\) such that
\[A\geqslant\max\{e_{ab}\mid a,b\in I\}\text{ and }A\equiv\varepsilon\,(\text{ mod }2).\]
For \(a\in I\), let \(c_{a}\in\mathbb{N}\) be defined by
\[c_{a}:=\frac{A-e_{aa}}{2}.\]
Add loops \(\{\omega_{k}\mid 1\leqslant k\leqslant c_{a}\}\) and their opposites \(\{\overline{\omega}_{k}\mid 1\leqslant k\leqslant c_{a}\}\) at \(a\) and define the potential
\[W_{a}:=\sum_{k=1}^{c_{a}}\omega_{k}\overline{\omega}_{k}.\]
Fix a total ordering on \(I\). For two different vertices \(a<b\) in \(I\), let \(c_{ab}:=A-e_{ab}\). Add edges \(\{e_{k}\mid 1\leqslant k\leqslant c_{ab}\}\) from \(a\) to \(b\) and their opposites \(\{\overline{e}_{k}\mid 1\leqslant k\leqslant c_{ab}\}\) from \(b\) to \(a\). Let
\[W_{ab}:=\sum_{k=1}^{c_{ab}}e_{k}\overline{e}_{k}.\]
Consider the potential \(W^{\mathfrak{I}}\) of the quiver \(Q^{\mathfrak{I}}\):
\[W^{\mathfrak{I}}:=\sum_{a\in I}W_{a}+\sum_{a,b\in I,a<b}W_{ab}.\]
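For instance, if \(Q\) has a single vertex with a single loop (so \(\varepsilon=1\)) and we choose \(A=3\), then \(c_{a}=1\), the quiver \(Q^{\mathfrak{I}}\) has one vertex with three loops (the original loop together with \(\omega_{1},\overline{\omega}_{1}\)), and

\[W^{\mathfrak{I}}=\omega_{1}\overline{\omega}_{1}.\]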
For \(d\in\mathbb{N}^{I}\), let \(U(d)\) be the affine space of linear maps corresponding to the edges
\[\bigsqcup_{a\in I}\{\omega_{k}\mid 1\leqslant k\leqslant c_{a}\}\sqcup \bigsqcup_{a<b}\{e_{k}\mid 1\leqslant k\leqslant c_{ab}\}.\]
The stack of representations of dimension \(d\) of \(Q^{\mathfrak{I}}\) is
\[\mathfrak{X}^{\mathfrak{I}}(d):=R^{\mathfrak{I}}(d)/G(d):=\left(R(d)\oplus U(d)\oplus U(d)^{\vee}\right)/G(d).\]
Consider the action of \(\mathbb{C}^{*}\) on
\[R^{\mathfrak{I}}(d):=R(d)\oplus U(d)\oplus U(d)^{\vee}\]
of weight \((0,0,2)\). Let \(s,v\) be the maps
\[\mathfrak{X}(d)\overset{v}{\leftarrow}\left(R(d)\oplus U(d)^{\vee}\right)/G(d)\overset{s}{\rightarrow}\mathfrak{X}^{\mathfrak{I}}(d)\]
where \(s\) is the inclusion and \(v\) is the projection. The Koszul equivalence (3.32) gives an equivalence (Knörrer periodicity):
\[\Theta_{d}=s_{*}v^{*}\colon D^{b}(\mathfrak{X}(d))\overset{\sim}{\rightarrow}\operatorname{MF}^{\operatorname{gr}}(\mathfrak{X}^{\mathfrak{I}}(d),\operatorname{Tr}W^{\mathfrak{I}}). \tag{4.19}\]
We may write \(\Theta\) instead of \(\Theta_{d}\) when the dimension vector \(d\) is clear from the context.
Denote by \(\mathfrak{X}^{f}(d)^{\operatorname{ss}}\) and \(\mathfrak{X}^{\mathfrak{I}f}(d)^{\operatorname{ss}}\) the varieties of stable framed representations of \(Q\) and \(Q^{\mathfrak{I}}\), respectively.
We will prove Theorem 4.1 in the next subsection. The same argument also applies to obtain Theorem 4.2 using Theorem 4.13.
### Proof of Theorem 4.1
The order of summands in the semiorthogonal decomposition is induced from the order in Subsection 4.6 for the quiver \(Q^{\mathfrak{I}}\). Note that \(Q^{\mathfrak{I}}\) depends on the choice of a certain integer \(A\), but we do not discuss the dependence of the order on this choice and only claim that such an order exists.
For a partition \((d_{i})_{i=1}^{k}\) of \(d\) and \(\lambda\) an associated antidominant cocharacter, we define the weights \(\theta_{i}^{\mathfrak{I}}\in M(d_{i})_{\mathbb{R}}\) such that
\[\sum_{i=1}^{k}\theta_{i}^{\mathfrak{I}}=-\frac{1}{2}R^{\mathfrak{I}}(d)^{ \lambda>0}+\frac{1}{2}\mathfrak{g}(d)^{\lambda>0}.\]
For \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), denote by \(\mathbb{M}^{\mathfrak{I}}(d;\delta_{d})\subset D^{b}(\mathfrak{X}^{\mathfrak{ I}}(d))\) the magic categories (2.22) for the quiver \(Q^{\mathfrak{I}}\). Consider the quasi-BPS categories:
\[\mathbb{S}^{\mathfrak{I}\mathrm{gr}}(d;\delta_{d}):=\mathrm{MF}^{\mathrm{gr}}(\mathbb{M}^{\mathfrak{I}}(d;\delta_{d}),\mathrm{Tr}\,W^{\mathfrak{I}})\subset\mathrm{MF}^{\mathrm{gr}}(\mathfrak{X}^{\mathfrak{I}}(d),\mathrm{Tr}\,W^{\mathfrak{I}}).\]
Define
\[\delta_{d}^{\circ}:=-\frac{1}{2}\det U(d)=-\frac{1}{2}U(d),\]
where above, as elsewhere, we abuse notation and denote by \(U(d)\) the sum of weights of \(U(d)\). Further, define
\[\delta_{d}^{\mathfrak{I}}=\delta_{d}+\delta_{d}^{\circ}.\]
We will use the notations \(\delta_{d_{i}}^{\mathfrak{I}},\delta_{d_{i}}\) from (4.1). Note that \(\delta_{d}^{\mathfrak{I}}+\mu\sigma_{d}\) is a good weight if and only if \(\delta_{d}+\mu\sigma_{d}\) is a good weight because \(\langle\lambda,\delta_{d}^{\circ}\rangle\in\frac{1}{2}\mathbb{Z}\) for all \(\lambda\) as in Definition 2.12.
**Step 1**.: There is a semiorthogonal decomposition
\[\mathrm{MF}^{\mathrm{gr}}\left(\mathfrak{X}^{\mathfrak{I}f}(d)^{\mathrm{ss}},\mathrm{Tr}\,W^{\mathfrak{I}}\right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{ S}^{\mathfrak{I}\mathrm{gr}}(d_{i};\theta_{i}^{\mathfrak{I}}+\delta_{d_{i}}^{ \mathfrak{I}}+v_{i}\tau_{d_{i}}):\mu\leqslant\frac{v_{1}}{\underline{d}_{1}}< \cdots<\frac{v_{k}}{\underline{d}_{k}}<1+\mu\right\rangle,\]
where the right hand side is as in Theorem 4.4.
Proof.: The claim follows by passing to matrix factorizations with respect to the potential \(W^{\mathfrak{I}}\) (see [PTa, Proposition 2.5], [Pad22, Proposition 2.1]) in the semiorthogonal decomposition of Theorem 4.4 for the quiver \(Q^{\mathfrak{I}}\).
**Step 2**.: There is an equivalence:
\[\Theta_{d}^{f}\colon D^{b}\left(\mathfrak{X}^{f}(d)^{\mathrm{ss}}\right) \xrightarrow{\sim}\mathrm{MF}^{\mathrm{gr}}\left(\mathfrak{X}^{\mathfrak{I}f} (d)^{\mathrm{ss}},\mathrm{Tr}\,W^{\mathfrak{I}}\right).\]
Proof.: Consider the natural projection map \(\pi^{\mathfrak{I}}\colon\mathfrak{X}^{\mathfrak{I}f}(d)\to\mathfrak{X}^{f}(d).\) Then
\[(\pi^{\mathfrak{I}})^{-1}\left(\mathfrak{X}^{f}(d)^{\mathrm{ss}}\right)\subset\mathfrak{X}^{\mathfrak{I}f}(d)^{\mathrm{ss}}\]
is an inclusion of open sets and
\[(\pi^{\mathfrak{I}})^{-1}\left(\mathfrak{X}^{f}(d)^{\mathrm{ss}}\right)\cap \mathrm{Crit}(\mathrm{Tr}\,W^{\mathfrak{I}})=\mathfrak{X}^{\mathfrak{I}f}(d) ^{\mathrm{ss}}\cap\mathrm{Crit}(\mathrm{Tr}\,W^{\mathfrak{I}}). \tag{4.20}\]
We have equivalences
\[\mathrm{MF}^{\mathrm{gr}}\left(\mathfrak{X}^{\mathfrak{I}f}(d)^{\mathrm{ss}},\mathrm{Tr}\,W^{\mathfrak{I}}\right)\xrightarrow{\sim}\mathrm{MF}^{\mathrm{gr}}\left((\pi^{\mathfrak{I}})^{-1}\left(\mathfrak{X}^{f}(d)^{\mathrm{ss}}\right),\mathrm{Tr}\,W^{\mathfrak{I}}\right)\xleftarrow{\sim}D^{b}\left(\mathfrak{X}^{f}(d)^{\mathrm{ss}}\right).\]
Here the first equivalence follows from (4.20) and (2.3), and the second equivalence is an instance of the Koszul equivalence from Theorem 2.5.
The claim of Theorem 4.1 then follows from:
**Step 3.** The equivalence (4.19) restricts to the equivalence
\[\Theta_{d}\colon\bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}}+v_{i}\tau_{d_{i}})\stackrel{{\sim}}{{\to}}\bigotimes_{i=1}^{k}\mathbb{S}^{\mathfrak{I}\mathrm{gr}}(d_{i};\theta_{i}^{\mathfrak{I}}+\delta_{d_{i}}^{\mathfrak{I}}+v_{i}\tau_{d_{i}}),\]
where the tensor products on the left and right hand sides are embedded by categorical Hall products into \(D^{b}(\mathscr{X}(d))\) and \(\mathrm{MF}^{\mathrm{gr}}(\mathscr{X}^{\mathfrak{I}}(d),\mathrm{Tr}\,W^{\mathfrak{I}})\), respectively.
Proof.: By the compatibility of the Koszul equivalence with categorical Hall products in Proposition 2.8 and with quasi-BPS categories in Proposition 3.21, it is enough to check that
\[\sum_{i=1}^{k}\left(\theta_{i}+v_{i}\tau_{d_{i}}+\delta_{d_{i}}\right)-U(d)^{\lambda>0}=\sum_{i=1}^{k}\left(\theta_{i}^{\mathfrak{I}}+\delta_{d_{i}}^{\mathfrak{I}}+v_{i}\tau_{d_{i}}+\frac{1}{2}U(d_{i})\right).\]
Recall that
\[\sum_{i=1}^{k}U(d_{i})=U(d)^{\lambda},\ \sum_{i=1}^{k}\delta_{d_{i}}^{\circ}=-\frac{1}{2}U(d),\] \[\sum_{i=1}^{k}(\theta_{i}^{\mathfrak{I}}-\theta_{i})=-\frac{1}{2}\left(R^{\mathfrak{I}}(d)^{\lambda>0}-R(d)^{\lambda>0}\right).\]
It thus suffices to show that:
\[-\frac{1}{2}\left(R^{\mathfrak{I}}(d)^{\lambda>0}-R(d)^{\lambda>0}\right)-\frac{1}{2}U(d)+\frac{1}{2}U(d)^{\lambda}+U(d)^{\lambda>0}=0,\]
which can be verified by a direct computation.
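One way to carry out this computation: since \(R^{\mathfrak{I}}(d)=R(d)\oplus U(d)\oplus U(d)^{\vee}\), we have

\[R^{\mathfrak{I}}(d)^{\lambda>0}-R(d)^{\lambda>0}=U(d)^{\lambda>0}-U(d)^{\lambda<0},\qquad U(d)=U(d)^{\lambda>0}+U(d)^{\lambda}+U(d)^{\lambda<0},\]

and substituting these into the left hand side above, the coefficients of \(U(d)^{\lambda>0}\), \(U(d)^{\lambda}\), and \(U(d)^{\lambda<0}\) all vanish.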
### More classes of quivers
Note that Theorem 4.1 applies for any tripled quiver. The semiorthogonal decomposition in Theorem 4.1 is particularly simple for quivers satisfying the following assumption:
**Assumption 4.14**.: The quiver \(Q=(I,E)\) is symmetric and:
* for all \(a,b\in I\) different, the number of edges from \(a\) to \(b\) is even, and
* for all \(a\in I\), the number of loops at \(a\) is odd.
Examples of quivers satisfying Assumption 4.14 are tripled quivers \(Q\) of quivers \(Q^{\circ}=(I,E^{\circ})\) satisfying the following assumption, where recall \(\alpha_{a,b}\) from (1.9):
**Assumption 4.15**.: For all \(a,b\in I\), we have \(\alpha_{a,b}\in 2\mathbb{Z}\).
For example, Assumption 4.15 is satisfied if \(Q^{\circ}\) is symmetric. Further, the moduli stack of semistable sheaves on a K3 surface is locally described by the stack of representations of a preprojective algebra of a quiver satisfying Assumption 4.15, see [PTc].
We discuss the particular case of Theorem 4.1 for quivers satisfying Assumption 4.14.
**Corollary 4.16**.: _Let \(Q\) be a quiver satisfying Assumption 4.14. Let \(\mu\in\mathbb{R}\) be such that \(\mu\sigma_{d}\) is a good weight. Then there is an \(X(d)\)-linear semiorthogonal decomposition:_
\[D^{b}\left(\mathscr{X}^{f}(d)^{ss}\right)=\left\langle\bigotimes_{i=1}^{k} \mathbb{M}(d_{i})_{v_{i}}:\mu\leqslant\frac{v_{1}}{\underline{d}_{1}}<\dots< \frac{v_{k}}{\underline{d}_{k}}<1+\mu\right\rangle, \tag{4.21}\]
_where the right hand side is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and integers \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) satisfying the above inequality._
Proof.: We set \(\delta_{d}=0\) in Theorem 4.1. Fix \(d\in\mathbb{N}^{I}\). For \(a\in I\), let \(V^{a}\) be a \(\mathbb{C}\)-vector space of dimension \(d^{a}\). For each \(a,b\in I\), let \(V^{ab}:=\operatorname{Hom}\big{(}V^{a},V^{b}\big{)}\), and let \(e^{ab}\) denote the number of edges from \(a\) to \(b\). Then
\[\frac{1}{2}R(d)^{\lambda>0}-\frac{1}{2}\mathfrak{g}(d)^{\lambda>0}=\sum_{a\in I}\frac{e^{aa}-1}{2}(V^{aa})^{\lambda>0}+\sum_{a\neq b\in I}\frac{e^{ab}}{2}(V^{ab})^{\lambda>0}\in M(d).\]
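For instance, for the tripled quiver of the quiver with one vertex and \(g\) loops we have \(e^{aa}=2g+1\), and this weight equals \(g\,\mathfrak{g}(d)^{\lambda>0}\).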
Thus \(\theta_{i}\in M(d_{i})^{W_{d_{i}}}\), so the weights \(v_{i}\) are integers for \(1\leqslant i\leqslant k\). Moreover there is an equivalence \(\mathbb{M}(d_{i})_{v_{i}}=\mathbb{M}(d_{i};v_{i}\tau_{d_{i}})\stackrel{{ \sim}}{{\to}}\mathbb{M}(d_{i};\theta_{i}+v_{i}\tau_{d_{i}})\) by taking the tensor product with \(\theta_{i}\). The claim then follows from Theorem 4.1.
### More framed quivers
The semiorthogonal decompositions in Theorem 4.4 and Corollary 4.16 also hold for spaces of semistable representations of the quivers \(Q^{\alpha f}\), where \(Q=(I,E)\) is as in the statements of these results, \(\alpha\in\mathbb{Z}_{\geqslant 1}\), and \(Q^{\alpha f}\) has set of vertices \(I\sqcup\{\infty\}\) and set of edges \(E\) together with \(\alpha\) edges from \(\infty\) to each vertex of \(I\). For future reference, we state the version of Corollary 4.16 for the space of semistable representations \(\mathcal{X}^{\alpha f}(d)^{\operatorname{ss}}\) of the quiver \(Q^{\alpha f}\):
**Corollary 4.17**.: _Let \(Q\) be a quiver satisfying Assumption 4.15. Let \(\mu\in\mathbb{R}\) be such that \(\mu\sigma_{d}\) is a good weight and let \(\alpha\in\mathbb{N}\). Then there is an \(X(d)\)-linear semiorthogonal decomposition_
\[D^{b}\left(\mathcal{X}^{\alpha f}(d)^{\operatorname{ss}}\right)=\left\langle \bigotimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}:\mu\leqslant\frac{v_{1}}{ \underline{d}_{1}}<\dots<\frac{v_{k}}{\underline{d}_{k}}<\alpha+\mu\right\rangle,\]
_where the right hand side is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and integers \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) satisfying the above inequality._
### Semiorthogonal decompositions for general potentials
Let \(W\) be a potential of \(Q\). By [PTa, Proposition 2.5], there are semiorthogonal decompositions analogous to those in Theorems 4.1 and 4.2 (and also to those in Corollaries 4.16 and 4.17) for categories of matrix factorizations. Recall the definition of (graded or not) quasi-BPS categories from (2.27) and (2.28). We first state the version for Corollary 4.16, which we use in [PTd]:
**Theorem 4.18**.: _Let \(Q\) be a quiver satisfying Assumption 4.15 and let \(W\) be a potential of \(Q\) (and possibly a grading as in Section 2.4). Let \(\mu\in\mathbb{R}\) such that \(\mu\sigma_{d}\) is a good weight and let \(\alpha\in\mathbb{Z}_{\geqslant 1}\). There is a semiorthogonal decomposition_
\[\operatorname{MF}^{\bullet}\left(\mathcal{X}^{\alpha f}(d)^{\operatorname{ss} },\operatorname{Tr}W\right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{S}^{ \bullet}(d_{i})_{v_{i}}:\mu\leqslant\frac{v_{1}}{\underline{d}_{1}}<\dots< \frac{v_{k}}{\underline{d}_{k}}<\mu+\alpha\right\rangle,\]
_where the right hand side is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and integers \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) satisfying the above inequality, and where \(\bullet\in\{\emptyset,\operatorname{gr}\}\)._
A version for Theorem 4.2 (for quivers satisfying Assumption 4.15) is the following:
**Theorem 4.19**.: _Let \(Q\) be a quiver satisfying Assumption 4.15 and let \(W\) be a potential of \(Q\) (and possibly consider a grading as in Section 2.4). There is a
semiorthogonal decomposition_
\[\operatorname{MF}^{\bullet}\left(\mathscr{X}(d),\operatorname{Tr}W\right)=\left< \bigotimes_{i=1}^{k}\mathbb{S}^{\bullet}(d_{i};v_{i}\tau_{d_{i}}):\frac{v_{1}} {\underline{d}_{1}}<\ldots<\frac{v_{k}}{\underline{d}_{k}}\right>,\]
_where the right hand side is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and integers \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) satisfying the above inequality, and where \(\bullet\in\{\emptyset,\operatorname{gr}\}\)._
In the case of the doubled quiver, by combining Theorems 4.2 and 4.19 with the Koszul equivalence in Theorem 2.5 and the compatibility of the Koszul equivalence with the categorical Hall product in Proposition 2.8, we obtain the following:
**Theorem 4.20**.: _Let \(Q^{\circ}\) be a quiver and let \((Q^{\circ,d},\mathscr{I})\) be its doubled quiver with relation. For a partition \((d_{i})_{i=1}^{k}\) of \(d\), let \(\lambda\) be an associated antidominant cocharacter, and define \(\theta_{i}\in\frac{1}{2}M(d_{i})^{W_{d_{i}}}\) by:_
\[\sum_{i=1}^{k}\theta_{i}=-\frac{1}{2}\overline{R}(d)^{\lambda>0}+\mathfrak{g} (d)^{\lambda>0}.\]
_There is a semiorthogonal decomposition:_
\[D^{b}(\mathscr{P}(d))=\left<\bigotimes_{i=1}^{k}\mathbb{T}(d_{i},\theta_{i}+v _{i}\tau_{d_{i}})\right> \tag{4.22}\]
_where the right hand side is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and tuples \((v_{i})_{i=1}^{k}\in\mathbb{R}^{k}\) such that the sum of coefficients of \(\theta_{i}+v_{i}\tau_{d_{i}}\) is an integer for all \(1\leqslant i\leqslant k\) and satisfying the inequality:_
\[\frac{v_{1}}{\underline{d}_{1}}<\ldots<\frac{v_{k}}{\underline{d}_{k}}. \tag{4.23}\]
_Moreover, each summand is given by the image of the categorical Hall product (2.12)._
_If furthermore \(Q^{\circ}\) satisfies Assumption 4.15, then \(v_{i}\in\mathbb{Z}\) and \(\theta_{i}\in M(d_{i})^{W_{d_{i}}}\), so the right hand side of (4.22) is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and all tuples \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) satisfying (4.23)._
The following example will be used in [PTc]:
**Example 4.21**.: Let \(Q^{\circ}\) be the quiver with one vertex and \(g\geqslant 1\) loops. Then \(d\in\mathbb{N}\) and the semiorthogonal decomposition (4.22) is
\[D^{b}(\mathscr{P}(d))=\left<\bigotimes_{i=1}^{k}\mathbb{T}(d_{i})_{w_{i}}: \frac{v_{1}}{d_{1}}<\ldots<\frac{v_{k}}{d_{k}}\right>.\]
Here, \(w_{i}\in\mathbb{Z}\) for \(1\leqslant i\leqslant k\) is given by
\[w_{i}=v_{i}+(g-1)d_{i}\left(\sum_{i>j}d_{j}-\sum_{i<j}d_{j}\right).\]
Note that \(\mathbb{T}(d_{i})_{w_{i}}\cong\mathbb{T}(d_{i})_{v_{i}}\) for all \(1\leqslant i\leqslant k\).
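For instance, with the notation above, take \(g=2\), \(d=3\), the partition \((d_{1},d_{2})=(1,2)\), and \((v_{1},v_{2})=(0,1)\), so that \(v_{1}/d_{1}=0<v_{2}/d_{2}=1/2\). Then

\[w_{1}=0+1\cdot 1\cdot(0-2)=-2,\qquad w_{2}=1+1\cdot 2\cdot(1-0)=3,\]

and \(w_{i}\equiv v_{i}\ (\mathrm{mod}\ d_{i})\), in agreement with the equivalences \(\mathbb{T}(d_{i})_{w_{i}}\cong\mathbb{T}(d_{i})_{v_{i}}\).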
### Strong generation of quasi-BPS categories
We use Theorem 4.1 to prove the strong generation of the (graded or not) quasi-BPS categories
\[\mathbb{S}^{\bullet}(d;\delta_{d})\text{ for }\bullet\in\{\emptyset,\operatorname{gr}\},\]
where the grading is as in Subsection 1.8. We first recall some terminology.
Let \(\mathcal{D}\) be a pre-triangulated dg-category. For a set of objects \(\mathcal{S}\subset\mathcal{D}\), we denote by \(\langle\mathcal{S}\rangle\) the smallest subcategory which contains \(\mathcal{S}\) and is closed under shifts, finite direct sums, and direct summands. For subcategories \(\mathcal{C}_{1},\mathcal{C}_{2}\subset\mathcal{D}\), we denote by \(\mathcal{C}_{1}\star\mathcal{C}_{2}\subset\mathcal{D}\) the smallest subcategory which contains all objects \(E\) fitting into distinguished triangles \(A_{1}\to E\to A_{2}\to A_{1}[1]\) with \(A_{i}\in\mathcal{C}_{i}\) and which is closed under shifts, finite direct sums, and direct summands.
We say that \(\mathcal{D}\) is _strongly generated by \(C\in\mathcal{D}\)_ if \(\mathcal{D}=\langle C\rangle^{\star n}\) for some \(n\geqslant 1\). A dg-category \(\mathcal{D}\) is called _regular_ if it has a strong generator. It is called _smooth_ if the diagonal dg-module of \(\mathcal{D}\) is perfect. It is proved in [10, Lemma 3.5, 3.6] that if \(\mathcal{D}\) is smooth, then \(\mathcal{D}\) is regular.
**Proposition 4.22**.: _Let \(Q\) be a symmetric quiver such that the number of loops at each vertex \(i\in I\) has the same parity, let \(W\) be any potential of \(Q\), let \(d\in\mathbb{N}^{I}\), and let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\). The category \(\mathbb{S}^{\bullet}(d;\delta_{d})\) has a strong generator, thus it is regular._
Proof.: The category \(\mathbb{S}^{\bullet}(d;\delta_{d})\) is admissible in \(\operatorname{MF}^{\bullet}\left(\mathcal{X}^{f}(d)^{\operatorname{ss}},\operatorname{Tr}W\right)\) by the variant of Theorem 4.1 for an arbitrary potential (see [11, Proposition 2.5]). Let
\[\Phi\colon\operatorname{MF}^{\bullet}\left(\mathcal{X}^{f}(d)^{\operatorname{ss}},\operatorname{Tr}W\right)\to\mathbb{S}^{\bullet}(d;\delta_{d})\]
be the adjoint of the inclusion. The category \(\operatorname{MF}^{\bullet}\left(\mathcal{X}^{f}(d)^{\operatorname{ss}},\operatorname{Tr}W\right)\) is smooth, see [12, Lemma 2.11]. Therefore it is regular, so it has a strong generator \(C\). Then \(\mathbb{S}^{\bullet}(d;\delta_{d})\) has the strong generator \(\Phi(C)\).
## 5. Quasi-BPS categories for tripled quivers
In this section, we prove a categorical analogue of Davison's support lemma [11, Lemma 4.1] for tripled quivers with potential of quivers \(Q^{\circ}\) satisfying Assumption 4.15, see Theorem 5.1. We then use Theorem 5.1 to construct reduced quasi-BPS categories \(\mathbb{T}\) for preprojective algebras, which are proper over the good moduli space \(P\) of representations of the preprojective algebra, and regular. When the reduced stack of representations of the preprojective algebra is classical, we show that the relative Serre functor of \(\mathbb{T}\) over \(P\) is trivial, and further that the category \(\mathbb{T}\) is indecomposable.
Throughout this section, we consider tripled quivers with potential or preprojective algebras of quivers \(Q^{\circ}\) satisfying Assumption 4.15.
### The categorical support lemma
Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver satisfying Assumption 4.15, and consider its tripled quiver \(Q=(I,E)\) with potential \(W\), see Subsection 2.2.6. We will use the notations from Subsection 2.2.6. Recall that
\[\mathcal{X}(d)=R(d)/G(d),\ R(d)=\overline{R}(d)\oplus\mathfrak{g}(d),\]
where \(\overline{R}(d)\) is the representation space of the doubled quiver of \(Q^{\circ}\). There is thus a projection map onto the second summand (which records the linear maps corresponding to the loops in the tripled quiver not in the doubled quiver):
\[\tau\colon\mathcal{X}(d)\to\mathfrak{g}(d)/G(d).\]
We consider the good moduli space morphism
\[\pi_{\mathfrak{g}}\colon\mathfrak{g}(d)/G(d)\to\mathfrak{g}(d)/\!\!/G(d)=\prod_{a\in I}\operatorname{Sym}^{d^{a}}(\mathbb{C}). \tag{5.1}\]
The above map sends \(z\in\mathfrak{g}(d)=\bigoplus_{a\in I}\operatorname{End}(V^{a})\) to its generalized eigenvalues. Let \(\Delta\) be the diagonal
\[\Delta\colon\mathbb{C}\hookrightarrow\prod_{a\in I}\operatorname{Sym}^{d^{a}}(\mathbb{C}),\ x\mapsto\prod_{a\in I}(\overbrace{x,\dots,x}^{d^{a}})=(\overbrace{x,\dots,x}^{\underline{d}}).\]
Consider the composition
\[\pi\colon\operatorname{Crit}(\operatorname{Tr}W)\hookrightarrow\mathscr{X}(d) \xrightarrow{\tau}\mathfrak{g}(d)/G(d)\xrightarrow{\pi_{\mathfrak{g}}} \mathfrak{g}(d)/\!\!/G(d). \tag{5.2}\]
The following is the main result of this section, and a generalization of [PTb, Theorem 3.1] which discusses the case of the tripled quiver with potential of the Jordan quiver.
**Theorem 5.1**.: _Let \(v\in\mathbb{Z}\) such that \(\gcd(v,\underline{d})=1\). Then any object in \(\mathbb{S}(d)_{v}\) is supported on \(\pi^{-1}(\Delta)\)._
Before the proof of Theorem 5.1, we introduce some notations related to formal completions of fibers over \(\mathfrak{g}(d)\mathbin{/\!\!/}G(d)\). For \(p\in\mathfrak{g}(d)/\!\!/G(d)\), we denote by \(\mathscr{X}_{p}(d)\) the pull-back of the morphism
\[\pi_{\mathfrak{g}}\circ\tau\colon\mathscr{X}(d)\to\mathfrak{g}(d)/\!\!/G(d)\]
by \(\operatorname{Spec}\widehat{\mathcal{O}}_{\mathfrak{g}(d)/\!\!/G(d),p}\to\mathfrak{g}(d)/\!\!/G(d)\). We write an element \(p\in\prod_{a\in I}\operatorname{Sym}^{d^{a}}(\mathbb{C})\) as
\[p=\left(\sum_{j=1}^{l^{a}}d^{a,(j)}x^{a,(j)}\right)_{a\in I} \tag{5.3}\]
where \(x^{a,(j)}\in\mathbb{C}\) with \(x^{a,(j)}\neq x^{a,(j^{\prime})}\) for \(1\leqslant j\neq j^{\prime}\leqslant l^{a}\) and \(d^{a,(j)}\in\mathbb{Z}_{\geqslant 1}\) for \(1\leqslant j\leqslant l^{a}\) are such that \(\sum_{j=1}^{l^{a}}d^{a,(j)}=d^{a}\). There is an isomorphism
\[\mathscr{X}_{p}(d)\cong\left(\overline{R}(d)\times\prod_{a,j}\widehat{ \mathfrak{g}}^{a,(j)}\right)/G_{p} \tag{5.4}\]
where \(V^{a}=\bigoplus_{j}V^{a,(j)}\) is the decomposition into generalized eigenspaces corresponding to \(p\), \(G_{p}:=\prod_{a,j}GL(V^{a,(j)})\), and \(\mathfrak{g}^{a,(j)}:=\operatorname{End}(V^{a,(j)})\).
**Remark 5.2**.: For a point \(p\) as in (5.3), let \(J\) be the set of pairs \((a,j)\) such that \(a\in I\) and \(1\leqslant j\leqslant l^{a}\). The support of \(p\) is defined to be
\[\operatorname{supp}(p):=\{x^{a,(j)}\mid(a,j)\in J\}\subset\mathbb{C}.\]
Let \(Q_{p}^{\circ}\) be the quiver with vertex set \(J\) in which the number of arrows from \((a,j)\) to \((b,j^{\prime})\) equals the number of arrows from \(a\) to \(b\) in \(Q^{\circ}\). Since we have
\[\overline{R}(d)\oplus\bigoplus_{(a,j)\in J}\mathfrak{g}^{a,(j)}\] \[=\bigoplus_{(a\to b)\in E^{\circ},j,j^{\prime}}\operatorname{Hom }(V^{a,(j)},V^{b,(j^{\prime})})\oplus\operatorname{Hom}(V^{b,(j^{\prime})},V^ {a,(j)})\oplus\bigoplus_{(a,j)\in J}\operatorname{End}(V^{a,(j)}), \tag{5.5}\]
the space (5.5) is the representation space of the tripled quiver \(Q_{p}\) of \(Q_{p}^{\circ}\) with dimension vector \(d=\big{(}d^{a,(j)}\big{)}_{(a,j)\in J}\). Note that \(Q_{p}^{\circ}\) satisfies Assumption 4.15 since \(Q^{\circ}\) satisfies Assumption 4.15. There is a correspondence from dimension vectors of \(Q_{p}\) to dimension vectors of \(Q\):
\[\Big{(}d^{a,(j)}\Big{)}_{(a,j)\in J}\mapsto\left(d^{a}=\sum_{j}d^{a,(j)}\right) _{a\in I}.\]
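For instance, if \(Q^{\circ}\) is the Jordan quiver (one vertex with one loop) and \(p\) has \(l\) distinct eigenvalues, then \(Q_{p}^{\circ}\) has \(l\) vertices and exactly one arrow between every ordered pair of vertices (in particular, one loop at each vertex); it is thus a very symmetric quiver with \(A=1\).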
Proof of Theorem 5.1.: The proof is similar to the proof of [14, Theorem 3.1], but is simpler due to the use of Proposition 3.22. Consider an object \(\mathcal{F}\in\mathbb{S}(d)_{v}\). Let \(p\in\prod_{a\in I}\operatorname{Sym}^{d^{a}}(\mathbb{C})\) be a point which decomposes as \(p=p^{\prime}+p^{\prime\prime}\) with \(p^{\prime},p^{\prime\prime}\neq 0\) and such that the supports of \(p^{\prime}\) and \(p^{\prime\prime}\) are disjoint. Write \(p\) as in (5.3). Assume the support of \(\mathcal{F}\) intersects \(\pi^{-1}(p)\). We will reach a contradiction with the assumption \(\gcd(v,\underline{d})=1\). Define
\[\mathbb{S}_{p}(d)_{v}\subset\operatorname{MF}\left(\mathscr{X}_{p}(d), \operatorname{Tr}W\right) \tag{5.6}\]
to be the full subcategory generated by matrix factorizations whose factors are direct summands of the vector bundles \(\mathcal{O}_{\mathscr{X}_{p}(d)}\otimes\Gamma_{G_{p}}(\chi)\), where \(\chi\) is a \(G_{p}\)-dominant \(T(d)\)-weight satisfying
\[\chi+\rho_{p}-v\tau_{d}\in\mathbf{W}_{p}(d):=\frac{1}{2}\mathrm{sum}_{\beta}[0,\beta]. \tag{5.7}\]
Here, \(\rho_{p}\) is half the sum of positive roots of \(G_{p}\) and the Minkowski sum for \(\mathbf{W}_{p}(d)\) is after all weights \(\beta\) of the \(T(d)\)-representation (5.5). Define \(n_{\lambda,p}\) by
\[n_{\lambda,p}:=\left\langle\lambda,\det\left(\mathbb{L}_{\mathfrak{X}_{p}(d)}^{\lambda>0}|_{0}\right)\right\rangle=\left\langle\lambda,\det\left(\mathbb{L}_{\mathfrak{X}(d)}^{\lambda>0}|_{0}\right)\right\rangle. \tag{5.8}\]
As in Lemma 2.10, the subcategory (5.6) is generated by matrix factorizations whose factors are of the form \(\Gamma\otimes\mathcal{O}_{\mathscr{X}_{p}(d)}\), where \(\Gamma\) is a \(G_{p}\)-representation such that any \(T(d)\)-weight \(\chi\) of \(\Gamma\) satisfies
\[\left\langle\lambda,\chi-v\tau_{d}\right\rangle\in\left[-\frac{1}{2}n_{\lambda,p},\frac{1}{2}n_{\lambda,p}\right].\]
In particular, by the identity (5.8), the pullback along the natural morphism \(\iota_{p}\colon\mathscr{X}_{p}(d)\to\mathscr{X}(d)\) restricts to a functor
\[\iota_{p}^{*}\colon\mathbb{S}(d)_{v}\to\mathbb{S}_{p}(d)_{v}.\]
Therefore, by the assumption that the support of \(\mathcal{F}\) intersects \(\pi^{-1}(p)\), we have \(0\neq\iota_{p}^{*}\mathcal{F}\in\mathbb{S}_{p}(d)_{v}\), and in particular \(\mathbb{S}_{p}(d)_{v}\neq 0\). We show that, in this case, \(v\) is not coprime with \(\underline{d}\).
The decomposition \(p=p^{\prime}+p^{\prime\prime}\) corresponds to decompositions \(V^{a}=V^{\prime a}\oplus V^{\prime\prime a}\)
\[V^{\prime a}=\bigoplus_{x^{a,(j)}\in\operatorname{supp}(p^{\prime})}V^{a,(j) },\ V^{\prime\prime a}=\bigoplus_{x^{a,(j)}\in\operatorname{supp}(p^{\prime \prime})}V^{a,(j)}\]
for all \(a\in I\). Let \(d^{\prime a}=\dim V^{\prime a}\), \(d^{\prime\prime a}=\dim V^{\prime\prime a}\), \(d^{\prime}=(d^{\prime a})_{a\in I}\) and \(d^{\prime\prime}=(d^{\prime\prime a})_{a\in I}\), so \(d=d^{\prime}+d^{\prime\prime}\). By Lemma 5.3, after possibly replacing the isomorphism (5.4), the regular function \(\operatorname{Tr}W\) restricted to \(\mathscr{X}_{p}(d)\) is written as
\[\operatorname{Tr}W|_{\mathscr{X}_{p}(d)}=\operatorname{Tr}W^{\prime}\boxplus\operatorname{Tr}W^{\prime\prime}\boxplus f. \tag{5.9}\]
Here, \(\operatorname{Tr}W^{\prime}\) and \(\operatorname{Tr}W^{\prime\prime}\) are the regular functions given by \(\operatorname{Tr}W\) on \(\mathscr{X}(d^{\prime})\) and \(\mathscr{X}(d^{\prime\prime})\), respectively, restricted to \(\mathscr{X}_{p^{\prime}}(d^{\prime})\) and \(\mathscr{X}_{p^{\prime\prime}}(d^{\prime\prime})\), and \(f\) is a non-degenerate
\(G_{p}\)-invariant quadratic form on \(U\oplus U^{\vee}\) given by \(f(u,v)=\langle u,v\rangle\), where \(U\) is the \(G_{p}\)-representation
\[U:=\bigoplus_{(a\to b)\in E^{\circ}}\operatorname{Hom}(V^{\prime a},V^{\prime\prime b})\oplus\bigoplus_{(b\to a)\in E^{\circ}}\operatorname{Hom}(V^{\prime a},V^{\prime\prime b}).\]
Note that we have the decomposition as \(G_{p}\)-representations
\[\overline{R}(d)=\overline{R}(d^{\prime})\oplus\overline{R}(d^{\prime\prime}) \oplus U\oplus U^{\vee}. \tag{5.10}\]
We have the following diagram
\[\mathcal{X}_{p^{\prime}}(d^{\prime})\times\mathcal{X}_{p^{\prime\prime}}(d^{\prime\prime})\xleftarrow{\ q\ }\mathcal{U}\xrightarrow{\ i\ }\mathcal{U}\oplus\mathcal{U}^{\vee}\xleftarrow{\ j\ }\mathcal{X}_{p}(d),\]
where \(\mathcal{U}\) is the vector bundle on \(\mathcal{X}_{p^{\prime}}(d^{\prime})\times\mathcal{X}_{p^{\prime\prime}}(d^{\prime\prime})\) determined by the \(G_{p}\)-representation \(U\), \(q\) is the bundle projection, \(i\) is the closed immersion \(x\mapsto(x,0)\), and \(j\) is the natural morphism induced by the formal completion which induces the isomorphism on critical loci of \(\operatorname{Tr}W\). Consider the functor
\[\Psi:=j^{*}i_{*}q^{*}\colon\operatorname{MF}(\mathcal{X}_{p^{ \prime}}(d^{\prime}),\operatorname{Tr}W^{\prime}) \boxtimes\operatorname{MF}(\mathcal{X}_{p^{\prime\prime}}(d^{ \prime\prime}),\operatorname{Tr}W^{\prime\prime})\\ \stackrel{{\sim}}{{\to}}\operatorname{MF}(\mathcal{U }\oplus\mathcal{U}^{\vee},\operatorname{Tr}W)\stackrel{{ j^{*}}}{{\hookrightarrow}} \operatorname{MF}(\mathcal{X}_{p}(d),\operatorname{Tr}W), \tag{5.11}\]
where the first arrow is an equivalence by Knörrer periodicity (3.40) and the second arrow is fully-faithful with dense image, see [Todb, Lemma 6.4]. Let \(v^{\prime},v^{\prime\prime}\in\mathbb{Q}\) be such that
\[v\tau_{d}=v^{\prime}\tau_{d^{\prime}}+v^{\prime\prime}\tau_{d^{\prime\prime}}, \tag{5.12}\]
and let \(\delta^{\prime}\in M(d^{\prime})_{\mathbb{R}}\) and \(\delta^{\prime\prime}\in M(d^{\prime\prime})_{\mathbb{R}}\) be such that
\[\delta^{\prime}+\delta^{\prime\prime}=\frac{1}{2}U. \tag{5.13}\]
The quiver \(Q^{\circ}\) satisfies Assumption 4.15, and thus \(\delta^{\prime}\in M(d^{\prime})^{W_{d^{\prime}}}\) and \(\delta^{\prime\prime}\in M(d^{\prime\prime})^{W_{d^{\prime\prime}}}\). By Proposition 3.22, the functor (5.11) restricts to the fully-faithful functor with dense image:
\[\mathbb{S}_{p^{\prime}}(d^{\prime};\delta^{\prime}+v^{\prime}\tau_{d^{\prime} })\boxtimes\mathbb{S}_{p^{\prime\prime}}(d^{\prime\prime};\delta^{\prime\prime }+v^{\prime\prime}\tau_{d^{\prime\prime}})\to\mathbb{S}_{p}(d)_{v}. \tag{5.14}\]
Then we have
\[\mathbb{S}_{p^{\prime}}(d^{\prime})_{v^{\prime}}\simeq\mathbb{S}_ {p^{\prime}}(d^{\prime};\delta^{\prime}+v^{\prime}\tau_{d^{\prime}})\neq 0,\] \[\mathbb{S}_{p^{\prime\prime}}(d^{\prime\prime})_{v^{\prime\prime} }\simeq\mathbb{S}_{p^{\prime\prime}}(d^{\prime\prime};\delta^{\prime\prime}+v ^{\prime\prime}\tau_{d^{\prime\prime}})\neq 0.\]
In particular, we have \(v^{\prime}\in\mathbb{Z}\), \(v^{\prime\prime}\in\mathbb{Z}\) by Remark 2.11. By (5.12), we further have that
\[\frac{v^{\prime}}{\underline{d}^{\prime}}=\frac{v^{\prime\prime}}{\underline{ d}^{\prime\prime}}=\frac{v}{\underline{d}},\]
which contradicts the assumption that \(\gcd(v,\underline{d})=1\): indeed, \(v^{\prime}\underline{d}=v\underline{d}^{\prime}\) with \(v^{\prime}\in\mathbb{Z}\) and \(\gcd(v,\underline{d})=1\) would imply \(\underline{d}\mid\underline{d}^{\prime}\), which is impossible since \(0<\underline{d}^{\prime}<\underline{d}\).
We have postponed the proof of the following:
**Lemma 5.3**.: _By replacing the isomorphism (5.4) if necessary, the identity (5.9) holds._
Proof.: (cf. [PTb, Lemma 3.3]) For \(p\in\prod_{a\in I}\operatorname{Sym}^{d^{a}}(\mathbb{C})\) as in (5.3), let \(u\in\mathfrak{g}(d)/G(d)\) be the unique closed point over \(p\). Note that \(u\) is represented by a diagonal matrix with eigenvalues \(x^{a,(j)}\). In particular, we can assume that
\[u\in\bigoplus_{(a,j)\in J}\mathfrak{g}^{a,(j)}\subset\mathfrak{g}(d).\]
We construct a map
\[\nu\colon\left(\overline{R}(d)\oplus\bigoplus_{(a,j)\in J}\mathfrak{g}^{a,(j )}\right)/G_{p}\to\mathcal{X}(d)\]
given by \((\alpha,\beta=\beta^{a,(j)})\mapsto(\alpha,\beta+u)\). The morphism \(\nu\) is étale at \(\nu(0)\). Indeed, the tangent complex of \(\mathcal{X}(d)\) at \(\nu(0)\) is
\[\mathbb{T}_{\mathcal{X}(d)}|_{\nu(0)}=\left(\mathfrak{g}(d)\to\overline{R}(d) \oplus\mathfrak{g}(d)\right),\ \gamma\mapsto(0,[\gamma,u]).\]
The kernel of the above map is \(\bigoplus_{(a,j)\in J}\mathfrak{g}^{a,(j)}\) and the cokernel is \(\overline{R}(d)\oplus\bigoplus_{(a,j)\in J}\mathfrak{g}^{a,(j)}\), so the morphism \(\nu\) induces a quasi-isomorphism on tangent complexes at \(\nu(0)\).
For \(x\in\overline{R}(d)\), a vertex \(a\in I\) and an edge \(e=(a\to b)\) in \(E^{\circ}\), write its corresponding maps as \(x(e)\colon V^{a}\to V^{b}\) and \(x(\overline{e})\colon V^{b}\to V^{a}\). For \(\theta\in\mathfrak{g}(d)\), write its corresponding map as \(\theta(a)\colon V^{a}\to V^{a}\). The function \(\operatorname{Tr}W\) is given by
\[\operatorname{Tr}W(x,\theta)=\operatorname{Tr}\left(\sum_{e\in E^{\circ}}[x(e ),x(\overline{e})]\right)\left(\sum_{a\in I}\theta(a)\right).\]
We set \(\mathfrak{g}^{\prime}\) and \(\mathfrak{g}^{\prime\prime}\) to be
\[\mathfrak{g}^{\prime}=\bigoplus_{x^{a,(j)}\in\operatorname{supp}(p^{\prime}) }\mathfrak{g}^{a,(j)},\ \mathfrak{g}^{\prime\prime}=\bigoplus_{x^{a,(j)}\in \operatorname{supp}(p^{\prime\prime})}\mathfrak{g}^{a,(j)},\]
and write an element \(\gamma\in\bigoplus_{(a,j)\in J}\mathfrak{g}^{a,(j)}\) as \(\gamma=\gamma^{\prime}+\gamma^{\prime\prime}\) for \(\gamma^{\prime}\in\mathfrak{g}^{\prime}\), \(\gamma^{\prime\prime}\in\mathfrak{g}^{\prime\prime}\). Note that there are isomorphisms
\[\mathcal{X}_{p^{\prime}}(d^{\prime})\cong\left(\overline{R}(d^{\prime})\times\widehat{\mathfrak{g}}^{\prime}\right)/G_{p^{\prime}},\ \mathcal{X}_{p^{\prime\prime}}(d^{\prime\prime})\cong\left(\overline{R}(d^{\prime\prime})\times\widehat{\mathfrak{g}}^{\prime\prime}\right)/G_{p^{\prime\prime}}.\]
For \(x\in\overline{R}(d)\), we write
\[x=x^{\prime}+x^{\prime\prime}+x_{U}+x_{U^{\vee}}\]
for \(x^{\prime}\in\overline{R}(d^{\prime})\), \(x^{\prime\prime}\in\overline{R}(d^{\prime\prime})\), \(x_{U}\in U\) and \(x_{U^{\vee}}\in U^{\vee}\). Then \(\nu^{*}\operatorname{Tr}W\) is calculated as
\[\nu^{*}\operatorname{Tr}W(x,\gamma) =\operatorname{Tr}\left(\sum_{e\in E^{\circ}}[x^{\prime}(e),x^{ \prime}(\overline{e})]\right)\left(\sum_{a\in I}\gamma^{\prime}(a)\right)\] \[+\operatorname{Tr}\left(\sum_{e\in E^{\circ}}[x^{\prime\prime}(e ),x^{\prime\prime}(\overline{e})]\right)\left(\sum_{a\in I}\gamma^{\prime \prime}(a)\right)\] \[+\operatorname{Tr}\left(\sum_{e\in E^{\circ}}x_{U^{\vee}}(e) \left(x_{U}(\overline{e})\gamma^{\prime}+x_{U}(\overline{e})u^{\prime}-\gamma ^{\prime\prime}x_{U}(\overline{e})-u^{\prime\prime}x_{U}(\overline{e}) \right)\right)\] \[+\operatorname{Tr}\left(\sum_{e\in E^{\circ}}x_{U}(e)\left(x_{U^{ \vee}}(\overline{e})\gamma^{\prime\prime}+x_{U^{\vee}}(\overline{e})u^{\prime \prime}-\gamma^{\prime}x_{U^{\vee}}(\overline{e})-u^{\prime}x_{U^{\vee}}( \overline{e})\right)\right).\]
We take the following \(G_{p}\)-equivariant variable change
\[x_{U}(\overline{e})\mapsto x_{U}(\overline{e})\gamma^{\prime}+x_{U} (\overline{e})u^{\prime}-\gamma^{\prime\prime}x_{U}(\overline{e})-u^{\prime \prime}x_{U}(\overline{e}),\] \[x_{U^{\vee}}(\overline{e})\mapsto x_{U^{\vee}}(\overline{e}) \gamma^{\prime\prime}+x_{U^{\vee}}(\overline{e})u^{\prime\prime}-\gamma^{ \prime}x_{U^{\vee}}(\overline{e})-u^{\prime}x_{U^{\vee}}(\overline{e}).\]
Since \(u^{\prime}\) and \(u^{\prime\prime}\) are diagonal matrices with disjoint sets of eigenvalues, the above variable change determines an automorphism of \(\mathcal{X}_{p}(d)\). Therefore we obtain the desired identity (5.9).
### Reduced stacks
We use the notations introduced in Subsections 2.2.6, 2.2.7. Let \(\mathfrak{g}(d)_{0}\subset\mathfrak{g}(d)\) be the traceless subalgebra, i.e. the kernel of the map
\[\mathfrak{g}(d)=\bigoplus_{a\in I}\operatorname{Hom}(V^{a},V^{a})\to\mathbb{C },\ (g^{a})_{a\in I}\mapsto\sum_{a\in I}\operatorname{Tr}(g^{a}).\]
The moment map \(\mu\) in (2.9) factors through the map
\[\mu_{0}\colon\overline{R}(d)\to\mathfrak{g}(d)_{0}.\]
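The reason \(\mu\) lands in the traceless subalgebra is elementary: each component of the moment map is a sum of commutators, whose trace vanishes identically. The following minimal numerical sketch illustrates this for a one-vertex quiver with \(g\) loops; the values of \(g\), \(d\) and the random matrices are illustrative assumptions, not data from the text.

```python
import numpy as np

# Moment map of the doubled quiver of a one-vertex quiver with g loops:
# mu(x) = sum_e [x(e), x(ebar)].  Its trace vanishes identically, which is
# why mu factors through the traceless subalgebra g(d)_0.
rng = np.random.default_rng(0)
g, d = 3, 4  # illustrative choices
x    = rng.standard_normal((g, d, d)) + 1j * rng.standard_normal((g, d, d))
xbar = rng.standard_normal((g, d, d)) + 1j * rng.standard_normal((g, d, d))

mu = sum(x[e] @ xbar[e] - xbar[e] @ x[e] for e in range(g))
print(abs(np.trace(mu)))  # numerically zero: mu(x) lies in g(d)_0
```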
We define the following reduced derived stack:
\[\mathcal{P}(d)^{\operatorname{red}}:=\mu_{0}^{-1}(0)/G(d). \tag{5.15}\]
Note that we have the commutative diagram
where the horizontal arrows are closed immersions and \(\pi_{P}=\pi_{P,d}\), \(\pi_{Y}=\pi_{Y,d}\) are the good moduli space morphisms.
Further, consider the stack
\[\mathcal{X}_{0}(d):=\left(\overline{R}(d)\oplus\mathfrak{g}(d)_{0}\right)/G(d)\]
and the regular function:
\[\operatorname{Tr}W_{0}=\operatorname{Tr}W|_{\mathcal{X}_{0}(d)}\colon\mathcal{ X}_{0}(d)\to\mathbb{C}.\]
Let \((Q,W)\) be the tripled quiver with potential associated to \(Q^{\circ}\), see Subsection 2.2.6. Denote by \((\partial W)\) the two-sided ideal of \(\mathbb{C}[Q]\) generated by \(\partial W/\partial e\) for \(e\in E\). Consider the Jacobi algebra \(J(Q,\partial W):=\mathbb{C}[Q]/(\partial W)\). Let \(w_{a}\in J(Q,\partial W)\) be the image of the element corresponding to the loop \(\omega_{a}\). The critical locus
\[\operatorname{Crit}(\operatorname{Tr}W_{0})\subset\mathcal{X}_{0}(d)\]
is the moduli stack of \((Q,W)\)-representations such that the action of \(\theta\) has trace zero, where \(\theta\) is the element
\[\theta:=\sum_{a\in I}w_{a}\in J(Q,\partial W). \tag{5.16}\]
The element \(\theta\) is a central element in \(J(Q,\partial W)\) from the definition of the potential \(W\), see (2.13). We have the following diagram
(5.17)
Here, recall the good moduli space map \(\pi_{X}=\pi_{X,d}\), \(p\) and \(\eta\) are projections, the morphism \(0\colon\mathcal{Y}(d)\to\mathcal{X}_{0}(d)\) is the zero-section of \(\eta\colon\mathcal{X}_{0}(d)\to\mathcal{Y}(d)\), the middle vertical arrows are the induced maps on good moduli spaces, and \(\bigl{(}\prod_{a\in I}\operatorname{Sym}^{d^{a}}(\mathbb{C})\bigr{)}_{0}\) is the fiber at \(0\in\mathbb{C}\) of the addition map
\[\prod_{a\in I}\operatorname{Sym}^{d^{a}}(\mathbb{C})\to\mathbb{C}.\]
Let \(\overline{\operatorname{Crit}}(\operatorname{Tr}W_{0})\hookrightarrow X(d)\) be the good moduli space of \(\operatorname{Crit}(\operatorname{Tr}W_{0})\). The above diagram restricts to the diagram
(5.18)
where the arrows \(\pi_{C}\) and \(\pi_{P}\) are good moduli space morphisms. We abuse notation and denote by
\[0:=\overbrace{(0,\dots,0)}^{\underline{d}}\in\left(\prod_{a\in I} \operatorname{Sym}^{d^{a}}(\mathbb{C})\right)_{0}.\]
Define
\[\mathcal{N}_{\operatorname{nil}}:=\pi_{C}^{-1}\overline{p}_{C}^{-1}(0)\subset \operatorname{Crit}(\operatorname{Tr}W_{0}). \tag{5.19}\]
The substack \(\mathcal{N}_{\operatorname{nil}}\subset\operatorname{Crit}(\operatorname{Tr}W_{0})\) corresponds to \((Q,W)\)-representations such that the action of \(\theta\) is nilpotent. Alternatively, it can be described as follows:
**Lemma 5.4**.: _We have \(\overline{p}_{C}^{-1}(0)=\operatorname{Im}(\overline{0}_{C})\) in the diagram (5.18). Hence \(\mathcal{N}_{\operatorname{nil}}=\pi_{C}^{-1}(\operatorname{Im}(\overline{0} _{C}))\)._
Proof.: The inclusion \(\operatorname{Im}(\overline{0}_{C})\subset\overline{p}_{C}^{-1}(0)\) is obvious. Below we show that \(\overline{p}_{C}^{-1}(0)\subset\operatorname{Im}(\overline{0}_{C})\).
Let \(Q^{\circ,d}\) be the doubled quiver of \(Q^{\circ}=(I,E^{\circ})\) and let \((\mathcal{I})\subset\mathbb{C}[Q^{\circ,d}]\) be the two-sided ideal generated by the relation \(\mathcal{I}:=\sum_{e\in E^{\circ}}[e,\overline{e}]\). Since \(\sum_{e\in E^{\circ}}[e,\overline{e}]\in(\partial W)\), we have the functor
\[\eta_{*}\colon J(Q,\partial W)\text{-mod}\to\mathbb{C}[Q^{\circ,d}]/(\mathcal{ I})\text{-mod}\]
which forgets the action of \(\theta\), where \(\theta\in J(Q,\partial W)\) is defined in (5.16).
For a simple \(J(Q,\partial W)\)-module \(R\), we show that \(\eta_{*}R\) is a simple module over \(\mathbb{C}[Q^{\circ,d}]/(\mathcal{I})\). We first note that the action of \(\theta\) on \(R\) has equal generalized eigenvalues. Indeed, \(\theta\in J(Q,\partial W)\) is central, so the generalized eigenspaces for different
eigenvalues would give a direct sum decomposition of \(R\), which contradicts that \(R\) is simple. Moreover, if \(R^{\prime}\subset R\) is an eigenspace for \(\theta\), then \(R^{\prime}\) is preserved by the \(J(Q,\partial W)\)-action, so \(R^{\prime}=R\) as \(R\) is simple. It follows that the action of \(\theta\) on \(R\) is multiplication by \(\lambda\) for some \(\lambda\in\mathbb{C}\). It follows that any submodule of \(\eta_{*}R\) is preserved by the action of \(\theta\), so \(\eta_{*}R\) is also simple.
Let \(\bigoplus_{i=1}^{k}R_{i}^{\oplus n_{i}}\) be a semisimple \(J(Q,\partial W)\)-module. Then the morphism \(\overline{\eta}_{C}\) in the diagram (5.18) sends it to the semisimple \(\mathbb{C}[Q^{\circ,d}]/(\mathfrak{I})\)-module \(\bigoplus_{i=1}^{k}\eta_{*}R_{i}^{\oplus n_{i}}\), as \(\eta_{*}R_{i}\) is simple. Thus, given a semisimple \(\mathbb{C}[Q^{\circ,d}]/(\mathfrak{I})\)-module \(T=\bigoplus_{i=1}^{k}T_{i}^{\oplus n_{i}}\) corresponding to a point \(r\in P(d)\), the set of points of the fiber of \(\overline{\eta}_{C}\) at \(r\) consists of choices of \(\lambda_{ij}\in\mathbb{C}\) for \(1\leqslant i\leqslant k\), \(1\leqslant j\leqslant n_{i}\), such that \(\theta\) acts on the \(j\)th copy of \(T_{i}\) in \(T\) by multiplication by \(\lambda_{ij}\). If it lies in \(\overline{\eta}_{C}^{-1}(0)\), we must have \(\lambda_{ij}=0\) for all \(1\leqslant i\leqslant k\), \(1\leqslant j\leqslant n_{i}\). Therefore \(\overline{p}_{C}^{-1}(0)\subset\operatorname{Im}(\overline{0}_{C})\) holds.
### Quasi-BPS categories for reduced stacks
We abuse notation and also denote by \(j\) the closed immersion
\[j^{r}\colon\mathcal{P}(d)^{\operatorname{red}}\hookrightarrow\mathcal{Y}(d):= \overline{R}(d)/G(d).\]
We define the subcategory
\[\mathbb{T}(d)_{v}^{\operatorname{red}}\subset D^{b}(\mathcal{P}(d)^{ \operatorname{red}}) \tag{5.20}\]
to consist of objects \(\mathcal{E}\) such that \(j_{*}^{r}\mathcal{E}\) is generated by \(\mathcal{O}_{\mathcal{Y}(d)}\otimes\Gamma_{G(d)}(\chi)\) for \(\chi\) a dominant weight satisfying (2.23) for \(\delta_{d}=v\tau_{d}\), i.e.
\[\chi+\rho-v\tau_{d}\in\mathbf{W}(d), \tag{5.21}\]
where \(\mathbf{W}(d)\) is the polytope defined by (2.21) for the tripled quiver \(Q\) of \(Q^{\circ}\).
The Koszul equivalence in Theorem 2.5 gives an equivalence
\[\Theta_{0}\colon D^{b}(\mathcal{P}(d)^{\operatorname{red}})\stackrel{{ \sim}}{{\to}}\operatorname{MF}^{\operatorname{gr}}(\mathcal{X}_{0}(d), \operatorname{Tr}W_{0}). \tag{5.22}\]
Define the reduced quasi-BPS category
\[\mathbb{S}^{\operatorname{gr}}(d)_{v}^{\operatorname{red}}\subset \operatorname{MF}^{\operatorname{gr}}(\mathcal{X}_{0}(d),\operatorname{Tr}W _{0})\]
as in (2.28), that is,
\[\mathbb{S}^{\operatorname{gr}}(d)_{v}^{\operatorname{red}}:= \operatorname{MF}^{\operatorname{gr}}\left(\mathbb{M}(d)_{v}^{\operatorname{ red}},\operatorname{Tr}W_{0}\right), \tag{5.23}\]
where \(\mathbb{M}(d)_{v}^{\operatorname{red}}\) is the full subcategory of \(D^{b}(\mathcal{X}_{0}(d))\) generated by the vector bundles \(\mathcal{O}_{\mathcal{X}_{0}(d)}\otimes\Gamma_{G(d)}(\chi)\) for \(\chi\) a dominant weight satisfying (5.21). The Koszul equivalence (5.22) restricts to the equivalence, see Lemma 2.6:
\[\Theta_{0}\colon\mathbb{T}(d)_{v}^{\operatorname{red}}\stackrel{{ \sim}}{{\to}}\mathbb{S}^{\operatorname{gr}}(d)_{v}^{\operatorname{red}}.\]
We use Theorem 5.1 to study the singular support of sheaves [1] in reduced quasi-BPS categories:
**Corollary 5.5**.: _Let \(v\in\mathbb{Z}\) such that \(\gcd(\underline{d},v)=1\). Then any object \(\mathcal{E}\in\mathbb{T}(d)_{v}^{\operatorname{red}}\) has singular support_
\[\operatorname{Supp}^{\operatorname{sg}}(\mathcal{E})\subset\mathcal{N}_{ \operatorname{nil}}. \tag{5.24}\]
Proof.: The singular support of \(\mathcal{E}\) equals to the support of \(\Theta_{0}(\mathcal{E})\), see [21, Proposition 2.3.9]. Then the corollary follows from Theorem 5.1 together with
\[\Delta\cap\left(\prod_{a\in I}\operatorname{Sym}^{d^{a}}(\mathbb{C})\right)_{ 0}=\{0\}.\]
**Remark 5.6**.: The closed substack \(\operatorname{Im}(0_{C})\subset\mathcal{N}_{\operatorname{nil}}\) is given by the equation \(\theta=0\), and the condition \(\operatorname{Supp}^{\operatorname{sg}}(\mathcal{E})\subset\operatorname{Im}(0_ {C})\) is equivalent to \(\mathcal{E}\) being perfect, see [1, Theorem 4.2.6]. The condition (5.24) is weaker than \(\mathcal{E}\) being perfect, and indeed there may exist objects in \(\mathbb{T}(d)_{v}^{\operatorname{red}}\) which are not perfect.
### Relative properness of quasi-BPS categories
We continue to assume that the quiver \(Q^{\circ}=(I,E^{\circ})\) satisfies Assumption 4.15. We will make a further assumption on the quiver \(Q^{\circ}\) that guarantees, for example, that the reduced stack \(\mathcal{P}(d)^{\operatorname{red}}\) is classical. Recall \(\alpha_{a,b}\) defined in (1.9) and let
\[\alpha_{Q^{\circ}}:=\min\{\alpha_{a,b}\mid a,b\in I\}.\]
Recall the good moduli spaces \(X(d)\), \(Y(d)\), \(P(d)\) from Subsections 2.2.1 and 2.2.6.
**Lemma 5.7**.: _(i) If \(\alpha_{Q^{\circ}}\geqslant 1\), then \(X(d)\) and \(Y(d)\) are Gorenstein with trivial canonical module._
_(ii) If \(\alpha_{Q^{\circ}}\geqslant 2\), then \(\mathcal{P}(d)^{\operatorname{red}}\) is a classical stack and \(P(d)\) is a normal irreducible variety with_
\[\dim P(d)=2+\sum_{a,b\in I}d^{a}d^{b}\alpha_{a,b}. \tag{5.25}\]
_If furthermore \(P(d)\) is Gorenstein, then its canonical module is trivial._
_(iii) If \(\alpha_{Q^{\circ}}\geqslant 3\), or if \(\alpha_{Q^{\circ}}=2\) and \(\underline{d}\geqslant 3\), then \(P(d)\) is Gorenstein._
Proof.: (i) We only prove the case of \(Y(d)\), as the case of \(X(d)\) is similar. Let \(\mathcal{Y}(d)^{\operatorname{s}}\subset\mathcal{Y}(d)\) be the open substack corresponding to simple representations. By [13, Corollary 2], it is enough to show that the codimension of \(\mathcal{Y}(d)\setminus\mathcal{Y}(d)^{\operatorname{s}}\) is at least two. Let \(\lambda\) be a cocharacter corresponding to \(d=d_{1}+d_{2}\) such that \(d_{1}\) and \(d_{2}\) are non-zero. A simple calculation shows that
\[\dim\mathcal{Y}(d)-\dim\mathcal{Y}(d)^{\lambda\geqslant 0}=\sum_{a,b\in I}d_{1 }^{a}d_{2}^{b}\alpha_{a,b}+\sum_{a\in I}d_{1}^{a}d_{2}^{a}\geqslant 2\underline{d }_{1}\underline{d}_{2}\alpha_{Q^{\circ}}\geqslant 2.\]
Therefore \(\mathcal{Y}(d)\setminus\mathcal{Y}(d)^{\operatorname{s}}\) is at least of codimension two in \(\mathcal{Y}(d)\).
(ii) If \(\alpha_{Q^{\circ}}\geqslant 2\), then \(\mu_{0}^{-1}(0)\subset\overline{R}(d)\) is an irreducible variety of dimension \(\dim\overline{R}(d)-\dim\mathfrak{g}(d)_{0}\) by [10, Proposition 3.6]. In particular \(\mathcal{P}(d)^{\operatorname{red}}\) is classical. Moreover, in the proof of [10, Proposition 3.6], it is also proved that \(\mu_{0}^{-1}(0)\) contains a dense open subset of points with trivial stabilizer groups in \(G(d)/\mathbb{C}^{*}\). Therefore the dimension (5.25) is easily calculated from
\[\dim P(d)=1+\dim\mathcal{P}(d)^{\operatorname{red}}=2+\dim\overline{R}(d)-2 \dim\mathfrak{g}(d).\]
The last statement holds since the canonical module of \(\mu_{0}^{-1}(0)\) is a \(G(d)\)-equivariantly trivial line bundle.
(iii) By [11] and [16], it is enough to show that the codimension of the singular locus of \(P(d)\) is at least \(4\), see [14, Proposition 1.1, Theorem 1.2]. The singular locus of \(P(d)\) is contained in the union of images of
\[\oplus\colon P(d_{1})\times P(d_{2})\to P(d)\]
for \(d=d_{1}+d_{2}\) with \(d_{i}\neq 0\). The codimension of the image of the above map is at least
\[\dim P(d)-\dim P(d_{1})-\dim P(d_{2}) =-2+\sum_{a,b\in I}(d_{1}^{a}d_{2}^{b}+d_{1}^{b}d_{2}^{a})\alpha_{a,b}\] \[\geqslant 2\underline{d}_{1}\underline{d}_{2}\alpha_{Q^{\circ}}-2 \geqslant 4.\]
Here the identity follows from (5.25), the first inequality follows from the definition of \(\alpha_{Q^{\circ}}\), and the last inequality follows from the assumption on \(\alpha_{Q^{\circ}}\) and \(d\). Therefore (iii) holds.
**Example 5.8**.: As in Example 4.21, let \(Q^{\circ}\) be a quiver with one vertex and \(g\)-loops. Then \(\alpha_{Q^{\circ}}=2g-2\). By Lemma 5.7 the stack \(\mathcal{P}(d)^{\mathrm{red}}\) is classical if \(g\geqslant 2\), and \(P(d)\) is Gorenstein if \(g\geqslant 3\) or \(g=2\), \(d\geqslant 3\). When \(g=d=2\), the singular locus of \(P(d)\) is of codimension two, but nevertheless it is also Gorenstein because it admits a symplectic resolution of singularities [11].
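For the \(g\)-loop quiver the numbers entering Lemma 5.7 can be checked directly: \(\dim P(d)=2+d^{2}(2g-2)\), and the singular locus is contained in the images of \(P(d_{1})\times P(d_{2})\to P(d)\), of codimension \(-2+2d_{1}d_{2}(2g-2)\). A short sketch of this arithmetic (the chosen values of \(g\) and \(d\) are illustrative only) is:

```python
# Dimension counts from Lemma 5.7 / Example 5.8 for the one-vertex quiver
# with g loops, where alpha_{Q} = 2g - 2 and dim P(d) = 2 + d^2 (2g - 2).
def dim_P(d, g):
    return 2 + d * d * (2 * g - 2)

def min_codim_singular(d, g):
    # codimension of the image of P(d1) x P(d2) -> P(d), minimized over splittings
    return min(dim_P(d, g) - dim_P(d1, g) - dim_P(d - d1, g) for d1 in range(1, d))

for g in (2, 3):
    for d in (2, 3, 4):
        print(g, d, dim_P(d, g), min_codim_singular(d, g))
# g = d = 2 gives codimension 2 (the exceptional case above); the remaining
# printed cases give codimension >= 4, as used in the proof of Lemma 5.7(iii).
```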
Below we assume that \(\mathcal{P}(d)^{\mathrm{red}}\) is classical, e.g. \(\alpha_{Q^{\circ}}\geqslant 2\). Then we have the good moduli space morphism
\[\pi_{P}=\pi_{P,d}\colon\mathcal{P}(d)^{\mathrm{red}}\to P(d).\]
In particular for \(\mathcal{E}_{1},\mathcal{E}_{2}\in D^{b}(\mathcal{P}(d)^{\mathrm{red}})\), the \(\mathrm{Hom}\) space \(\mathrm{Hom}(\mathcal{E}_{1},\mathcal{E}_{2})\) is a module over \(\mathcal{O}_{P(d)}\). The categorical support condition in Corollary 5.5 implies the relative properness of quasi-BPS categories, which is a non-trivial statement as objects in \(\mathbb{T}(d)^{\mathrm{red}}_{v}\) may not be perfect, see Remark 5.6:
**Proposition 5.9**.: _Suppose that the stack \(\mathcal{P}(d)^{\mathrm{red}}\) is classical, e.g. \(\alpha_{Q^{\circ}}\geqslant 2\). For \(v\in\mathbb{Z}\) such that \(\gcd(\underline{d},v)=1\) and objects \(\mathcal{E}_{i}\in\mathbb{T}(d)^{\mathrm{red}}_{v}\) for \(i=1,2\), the \(\mathcal{O}_{P(d)}\)-module_
\[\bigoplus_{i\in\mathbb{Z}}\mathrm{Hom}^{i}(\mathcal{E}_{1},\mathcal{E}_{2})\]
_is finitely generated, i.e. the category \(\mathbb{T}(d)^{\mathrm{red}}_{v}\) is proper over \(P(d)\). In particular we have \(\mathrm{Hom}^{i}(\mathcal{E}_{1},\mathcal{E}_{2})=0\) for \(|i|\gg 0\)._
Proof.: Let \(F_{i}\) be defined by
\[F_{i}=\mathrm{forg}\circ\Theta_{0}(\mathcal{E}_{i})\in\mathrm{MF}(\mathcal{X} _{0}(d),\mathrm{Tr}\,W_{0})\]
where \(\Theta_{0}\) is the equivalence (5.22) and \(\mathrm{forg}\) is the forget-the-grading functor. Consider its internal Hom:
\[\mathcal{H}om(F_{1},F_{2})\in\mathrm{MF}(\mathcal{X}_{0}(d),0)=D^{\mathbb{Z} /2}(\mathcal{X}_{0}(d)).\]
Here \(D^{\mathbb{Z}/2}(-)\) is the \(\mathbb{Z}/2\)-graded derived category of coherent sheaves. The above object is supported over \(\pi_{C}^{-1}(\mathrm{Im}(\overline{0}_{C}))\) by Corollary 5.5 and Lemma 5.4. As \(\pi_{X}\) is a good moduli space morphism, \(\pi_{X*}\) sends a coherent sheaf to a bounded complex of coherent sheaves. Therefore we have
\[\pi_{X*}\mathcal{H}om(F_{1},F_{2})\in D^{\mathbb{Z}/2}(X(d)) \tag{5.26}\]
and, further, that \(\pi_{X*}\mathcal{H}om(F_{1},F_{2})\) is supported over \(\mathrm{Im}(\overline{0}_{C})\subset\mathrm{Im}(\overline{0})\), see the diagrams (5.17), (5.18). Since \(\overline{0}\) is a section of \(\overline{p}\), the restriction of \(\overline{\eta}\) to \(\mathrm{Im}(\overline{0})\) is an isomorphism, \(\overline{\eta}\colon\mathrm{Im}(\overline{0})\xrightarrow{\cong}Y(d)\). In particular, (5.26) has a proper support over \(Y(d)\). Therefore, we have that
\[\overline{\eta}_{*}\pi_{X*}\mathcal{H}om(F_{1},F_{2})\in D^{\mathbb{Z}/2}(Y(d)),\]
so \(\operatorname{Hom}^{*}(F_{1},F_{2})\) is finitely generated over \(Y(d)\), hence over \(P(d)\) as \(P(d)\hookrightarrow Y(d)\) is a closed subscheme. Now, for \(i\in\mathbb{Z}/2\), we have
\[\operatorname{Hom}^{i}(F_{1},F_{2})=\bigoplus_{k\in\mathbb{Z}}\operatorname{ Hom}^{2k+i}(\mathcal{E}_{1},\mathcal{E}_{2}),\]
hence the finite generation over \(P(d)\) of the left hand side implies the proposition.
### Relative Serre functor on quasi-BPS categories
We keep the situation of Proposition 5.9. The category \(\mathbb{T}=\mathbb{T}(d)_{v}\) is a module over \(\operatorname{Perf}(P(d))\). We have the associated internal homomorphism, see Subsection 2.3:
\[\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2})\in D_{\operatorname {qc}}(\mathcal{P}(d)^{\operatorname{red}}),\ \mathcal{E}_{i}\in\mathbb{T}(d)^{\operatorname{red}}_{v}.\]
Then Proposition 5.9 implies that
\[\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2})\in D^{b}(P (d)).\]
A functor \(S_{\mathbb{T}/P}\colon\mathbb{T}\to\mathbb{T}\) is called _a relative Serre functor_ if it satisfies the isomorphism
\[\operatorname{Hom}_{P(d)}(\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1}, \mathcal{E}_{2}),\mathcal{O}_{P(d)})\cong\operatorname{Hom}_{\mathbb{T}}( \mathcal{E}_{2},S_{\mathbb{T}/P}(\mathcal{E}_{1})).\]
The following result shows that \(\mathbb{T}\) is strongly crepant in the sense of [VdB, Section 2.2].
**Theorem 5.10**.: _Suppose that \(\alpha_{Q^{\circ}}\geqslant 2\) and \(P(d)\) is Gorenstein, e.g. \(\alpha_{Q^{\circ}}\geqslant 3\). Let \(v\in\mathbb{Z}\) such that \(\gcd(\underline{d},v)=1\). Then the relative Serre functor \(S_{\mathbb{T}/P}\) exists and satisfies \(S_{\mathbb{T}/P}\cong\operatorname{id}_{\mathbb{T}}\)._
Proof.: We have the following commutative diagram
(5.27)
Let \(\Theta_{0}\) be the Koszul duality equivalence in (5.22). Then by Lemma 2.7, we have
\[j^{r}_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2})=\eta_{*} \mathcal{Q},\]
where \(\mathcal{Q}\) is the internal homomorphism of matrix factorizations
\[\mathcal{Q}=\mathcal{H}om_{\operatorname{MF}}(\Theta_{0}(\mathcal{E}_{1}), \Theta_{0}(\mathcal{E}_{2}))\in\operatorname{MF}^{\operatorname{gr}}( \mathcal{X}_{0}(d),0).\]
Since the left vertical arrows \(\pi_{P}\), \(\pi_{Y}\) have relative dimension \(-1\) and the codimension of the closed substack \(\mathcal{P}(d)^{\operatorname{red}}\) in \(\mathcal{Y}(d)\) is \(\dim\mathfrak{g}(d)_{0}\), the codimension of \(P(d)\) in \(Y(d)\) is \(\dim\mathfrak{g}(d)_{0}\). By Lemma 5.7, we have the following descriptions of dualizing complexes
\[\omega_{Y(d)}=\mathcal{O}_{Y(d)}[\dim Y(d)],\ \omega_{P(d)}=\mathcal{O}_{P(d)}[ \dim Y(d)-\dim\mathfrak{g}(d)_{0}]. \tag{5.28}\]
As \(\overline{j}^{!}\omega_{Y(d)}=\omega_{P(d)}\), we have
\[\overline{j}^{!}\mathcal{O}_{Y(d)}=\mathcal{O}_{P(d)}[-\dim\mathfrak{g}(d)_{0}].\]
Then we have the isomorphisms
\[\operatorname{Hom}_{P(d)}(\pi_{P*}\mathcal{H}om_{\mathbb{T}}(\mathcal{ E}_{1},\mathcal{E}_{2}),\mathcal{O}_{P(d)})\] \[\cong\operatorname{Hom}_{Y(d)}(\overline{\jmath}_{*}\pi_{P*} \mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2}),\mathcal{O}_{Y(d)} [\dim\mathfrak{g}(d)_{0}])\] \[\cong\operatorname{Hom}_{Y(d)}(\pi_{Y*}j_{*}^{r}\mathcal{H}om_{ \mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2}),\mathcal{O}_{Y(d)}[\dim\mathfrak{ g}(d)_{0}])\] \[\cong\operatorname{Hom}_{Y(d)}(\pi_{Y*}\eta_{*}\mathcal{Q}, \mathcal{O}_{Y(d)}[\dim\mathfrak{g}(d)_{0}])\] \[\cong\operatorname{Hom}_{Y(d)}(\overline{\eta}_{*}\pi_{X*} \mathcal{Q},\mathcal{O}_{Y(d)}[\dim\mathfrak{g}(d)_{0}]).\]
By Lemma 5.7 (i), we have
\[\omega_{X(d)}=\mathcal{O}_{X(d)}[\dim X(d)](-2\dim\mathfrak{g}(d)_{0})= \mathcal{O}_{X(d)}[\dim Y(d)-\dim\mathfrak{g}(d)_{0}].\]
Here, we denote by (1) the grade shift with respect to the \(\mathbb{C}^{*}\)-action on \(X(d)\), induced by the fiberwise weight two \(\mathbb{C}^{*}\)-action on \(\mathcal{X}_{0}(d)\to\mathcal{Y}(d)\), which is isomorphic to [1] in \(\operatorname{MF}^{\operatorname{gr}}(X(d),0)\). As in the proof of Proposition 5.9, the complex \(\pi_{X*}\mathcal{Q}\) has proper support over \(Y(d)\). Therefore, from \(\overline{\eta}^{!}\omega_{Y(d)}=\omega_{X(d)}\) and (5.28), we have
\[\operatorname{Hom}_{Y(d)}(\overline{\eta}_{*}\pi_{X*}\mathcal{Q},\mathcal{O}_{Y(d)}[\dim\mathfrak{g}(d)_{0}])\] \[\cong\operatorname{Hom}_{X(d)}(\pi_{X*}\mathcal{Q},\overline{\eta}^{!}\mathcal{O}_{Y(d)}[\dim\mathfrak{g}(d)_{0}])\] \[\cong\operatorname{Hom}_{X(d)}(\pi_{X*}\mathcal{Q},\mathcal{O}_{X(d)}).\]
Note that \(\overline{\eta}\) is not proper, but the first isomorphism above holds because \(\pi_{X*}\mathcal{Q}\) has proper support over \(Y(d)\). In the above, \(\operatorname{Hom}_{X(d)}(-,-)\) denotes the space of homomorphisms in the category \(\operatorname{MF}^{\operatorname{gr}}(X(d),0)\); note that \(X(d)\) may not be smooth, but its definition and the definition of the functor \(\overline{\eta}_{*}\) on the subcategory of matrix factorizations with proper support over \(Y(d)\) are as in the smooth case [11, 12].
By the definition of \(\mathbb{S}^{\operatorname{gr}}(d)_{v}\), the object \(\mathcal{Q}\) is represented by
\[(\mathcal{Q}^{0}\leftrightarrows\mathcal{Q}^{1}),\]
where \(\mathcal{Q}^{0}\) and \(\mathcal{Q}^{1}\) are direct sums of vector bundles of the form \(\Gamma_{G(d)}(\chi_{1})^{\vee}\otimes\Gamma_{G(d)}(\chi_{2})\otimes\mathcal{O }_{\mathcal{X}_{0}(d)}\) such that the weights \(\chi_{i}\) satisfy, for \(i\in\{1,2\}\):
\[\chi_{i}+\rho-v\tau_{d}\in\mathbf{W}:=\frac{1}{2}\text{sum}[0,\beta], \tag{5.29}\]
where the above Minkowski sum is over all weights \(\beta\) in \(\overline{R}(d)\oplus\mathfrak{g}(d)_{0}\). By Lemma 5.11 below, the weight \(\chi_{2}-\chi_{1}\) lies in the interior of the polytope \(-2\rho+2\mathbf{W}\). Therefore applying [10, Proposition 4.4.4] for the \(PG(d):=G(d)/\mathbb{C}^{*}\)-action on \(\overline{R}(d)\oplus\mathfrak{g}(d)_{0}\), the sheaf \(\pi_{X*}\mathcal{Q}^{i}\) is a maximal Cohen-Macaulay sheaf on \(X(d)\). It follows that
\[\mathcal{H}om_{X(d)}(\pi_{X*}\mathcal{Q}^{i},\mathcal{O}_{X(d)})=\pi_{X*}( \mathcal{Q}^{i})^{\vee}.\]
The morphism \(\pi_{X*}\mathcal{Q}^{i}\to\pi_{X*}\mathcal{Q}^{i+1}\) is uniquely determined by its restriction to the smooth locus of \(X(d)\), and so
\[\mathcal{H}om_{X(d)}(\pi_{X*}\mathcal{Q},\mathcal{O}_{X(d)})\cong\pi_{X*}( \mathcal{Q}^{\vee}).\]
Therefore we have the isomorphisms
\[\operatorname{Hom}_{X(d)}(\pi_{X*}\mathcal{Q},\mathcal{O}_{X(d)}) \cong\operatorname{Hom}_{X(d)}(\mathcal{O}_{X(d)},\pi_{X*}( \mathcal{Q}^{\vee}))\] \[\cong\operatorname{Hom}_{\operatorname{MF}(\mathcal{X}(d), \operatorname{Tr}W)}(\Theta_{0}(\mathcal{E}_{2}),\Theta_{0}(\mathcal{E}_{1}))\] \[\cong\operatorname{Hom}_{\mathcal{P}(d)}(\mathcal{E}_{2}, \mathcal{E}_{1})\] \[\cong\operatorname{Hom}_{\mathbb{T}}(\mathcal{E}_{2},\mathcal{E}_{ 1}),\]
and the conclusion follows.
We are left with proving the following:
**Lemma 5.11**.: _In the setting of (5.29), let \(\chi\in M(d)\) be a dominant weight such that \(\chi+\rho-v\tau_{d}\in\mathbf{W}\). If \(\gcd(\underline{d},v)=1\), then \(\chi+\rho-v\tau_{d}\) is not contained in the boundary of \(\mathbf{W}\)._
Proof.: Suppose that \(\chi+\rho-v\tau_{d}\) is on the boundary of \(\mathbf{W}\). Use the decomposition (4.6) for \(\delta_{d}=0\) to obtain the decomposition
\[\chi+\rho=-\frac{1}{2}R(d)^{\lambda>0}+\sum_{i=1}^{k}\psi_{i}+v\tau_{d} \tag{5.30}\]
for some antidominant cocharacter \(\lambda\) such that, if \(d=d_{1}+\cdots+d_{k}\) for \(k\geqslant 2\) is the decomposition corresponding to \(\lambda\), then \(\langle 1_{d_{i}},\psi_{i}\rangle=0\). We write
\[v\tau_{d}=\sum_{i=1}^{k}v_{i}\tau_{d_{i}},\ \frac{v}{\underline{d}}=\frac{v_{i }}{\underline{d}_{i}} \tag{5.31}\]
for \(v_{i}\in\mathbb{Q}\). Then the identity (5.30) is written as
\[\chi=-\frac{1}{2}\overline{R}(d)^{\lambda>0}+\sum_{i=1}^{k}v_{i}\tau_{d_{i}}+ \sum_{i=1}^{k}(\psi_{i}-\rho_{i}). \tag{5.32}\]
By Assumption 4.15, we have \(\frac{1}{2}\overline{R}(d)^{\lambda>0}\in M(d)^{W_{d}}\). Therefore the identity (5.32) implies that
\[v_{i}=\langle 1_{d_{i}},\chi\rangle+\left\langle 1_{d_{i}},\frac{1}{2} \overline{R}(d)^{\lambda>0}\right\rangle\in\mathbb{Z}\]
for all \(1\leqslant i\leqslant k\). However, as \(v/\underline{d}=v_{i}/\underline{d}_{i}\), we obtain a contradiction with the assumption that \(\gcd(\underline{d},v)=1\).
### Indecomposability of quasi-BPS categories
Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver and let \(d\in\mathbb{N}^{I}\). Recall the good moduli space map
\[\pi_{P}\colon\mathcal{P}(d)^{\mathrm{red}}=\mu_{0}^{-1}(0)/G(d)\to P(d)\]
from the reduced stack of dimension \(d\) representations of the preprojective algebra of \(Q^{\circ}\).
**Proposition 5.12**.: _Let \(Q^{\circ}\) be a quiver and let \(d\in\mathbb{N}^{I}\) such that \(\mathcal{P}(d)^{\mathrm{red}}\) is a classical stack, e.g. \(\alpha_{Q^{\circ}}\geqslant 2\). Let \(v\in\mathbb{Z}\). Then \(\mathbb{T}(d)^{\mathrm{red}}_{v}\) does not have a non-trivial orthogonal decomposition._
We note the following corollary:
**Corollary 5.13**.: _Let \(Q^{\circ}\) be a quiver satisfying Assumption 4.15 and let \(d\in\mathbb{N}^{I}\) such that \(\mathcal{P}(d)^{\mathrm{red}}\) is a classical stack. Let \(v\in\mathbb{Z}\) such that \(\gcd(v,\underline{d})=1\). Then the category \(\mathbb{T}(d)^{\mathrm{red}}_{v}\) does not have any non-trivial semiorthogonal decompositions._
Proof.: Assume there is a semiorthogonal decomposition \(\mathbb{T}(d)^{\mathrm{red}}_{v}=\langle\mathbb{A},\mathbb{B}\rangle\). By Theorem 5.10, there is an orthogonal decomposition of \(\mathbb{T}(d)^{\mathrm{red}}_{v}\) into \(\mathbb{A}\) and \(\mathbb{B}\). By Proposition 5.12, one of the categories \(\mathbb{A}\) and \(\mathbb{B}\) is zero.
Before we begin the proof of Proposition 5.12, we note a few preliminary results. We assume in the remainder of this subsection that \(\mathcal{P}(d)^{\mathrm{red}}\) is a classical stack. We say a representation \(V\) of \(G(d)\) has weight \(v\in\mathbb{Z}\) if \(1_{d}\) acts on \(V\) with weight \(v\).
**Proposition 5.14**.: _Let \(V\) be a representation of \(G(d)\) of weight zero. Then the sheaf \(\pi_{*}\left(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V\right)\) is non-zero and torsion free._
Proof.: The module \(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V\) is a non-zero torsion free \(\mathcal{O}_{\mu_{0}^{-1}(0)}\)-module of weight zero, thus it is also a torsion free \(\mathcal{O}_{P(d)}=\mathcal{O}_{\mu_{0}^{-1}(0)}^{G(d)}\subset\mathcal{O}_{ \mu_{0}^{-1}(0)}\)-module.
The category \(\mathbb{T}(d)_{v}^{\mathrm{red}}\) is admissible in \(D^{b}\left(\mathcal{P}(d)^{\mathrm{red}}\right)_{v}\) by an immediate modification of the argument for the admissibility of \(\mathbb{T}(d)_{v}\) in \(D^{b}\left(\mathcal{P}(d)\right)_{v}\), which follows from the Koszul equivalence (2.15) and [13]. Then there exists a left adjoint of the inclusion \(\mathbb{T}(d)_{v}^{\mathrm{red}}\hookrightarrow D^{b}\left(\mathcal{P}(d)^{ \mathrm{red}}\right)_{v}\), which we denote by
\[\Phi\colon D^{b}\left(\mathcal{P}(d)^{\mathrm{red}}\right)_{v}\to\mathbb{T}(d )_{v}^{\mathrm{red}}.\]
**Proposition 5.15**.: _Let \(V\) be a representation of \(G(d)\) of weight \(v\) and let \(\mathcal{A}\) be a direct summand of \(\Phi(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V)\). Then \(\pi_{*}(\mathcal{A})\) has support \(P(d)\)._
Proof.: Write \(\Phi(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V)=\mathcal{A}\oplus \mathcal{B}\) for \(\mathcal{A},\mathcal{B}\in D^{b}\left(\mathcal{P}(d)^{\mathrm{red}}\right)_{v}\). There exists a representation \(V^{\prime}\) of \(G(d)\) of weight \(v\) such that \(\mathrm{Hom}\left(\mathcal{A},\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}} \otimes V^{\prime}\right)\neq 0\). Then \(\mathrm{Hom}\left(\mathcal{A},\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}} \otimes V^{\prime}\right)\) is a direct summand of
\[\mathrm{Hom}\left(\Phi(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}} \otimes V),\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V^{\prime}\right) =\mathrm{Hom}\left(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}} \otimes V,\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V^{\prime}\right)\] \[=\pi_{*}\left(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}} \otimes V^{\prime}\otimes V^{\vee}\right),\]
which is non-zero and torsion free over \(P(d)\) by Proposition 5.14, and thus the \(P(d)\)-sheaf \(\mathrm{Hom}\left(\mathcal{A},\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}} \otimes V^{\prime}\right)\) has support \(P(d)\). Then also \(\pi_{*}(\mathcal{A})\) has support \(P(d)\).
**Proposition 5.16**.: _For \(\mathcal{A},\mathcal{A}^{\prime}\in D^{b}\left(\mathcal{P}(d)^{\mathrm{red}} \right)_{v}\), suppose that \(\pi_{*}\mathcal{A}\) and \(\pi_{*}\mathcal{A}^{\prime}\) have support \(P(d)\). Then \(\mathrm{Hom}^{i}(\mathcal{A},\mathcal{A}^{\prime})\neq 0\) for some \(i\in\mathbb{Z}\)._
Proof.: The object \(\pi_{P*}\mathcal{H}om(\mathcal{A},\mathcal{A}^{\prime})\in D_{\mathrm{qc}}(P(d))\) is non-zero because it is non-zero over a generic point. Then \(R\,\mathrm{Hom}(\mathcal{A},\mathcal{A}^{\prime})=R\Gamma(\pi_{P*}\mathcal{H}om (\mathcal{A},\mathcal{A}^{\prime}))\) is also non-zero as \(P(d)\) is affine, and the conclusion follows.
Proof of Proposition 5.12.: Assume \(\mathbb{T}(d)_{v}^{\mathrm{red}}\) has an orthogonal decomposition in categories \(\mathbb{A}\) and \(\mathbb{B}\). By Propositions 5.15 and 5.16, all summands of \(\Phi(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V)\) are in the same category, say \(\mathbb{A}\), for all representations \(V\) of \(G(d)\) of weight \(v\). Then the complexes \(\Phi(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V)\) are in \(\mathbb{A}\) for all representations \(V\) of \(G(d)\) of weight \(v\).
Let \(\mathcal{A}\in\mathbb{T}(d)_{v}^{\mathrm{red}}\) be non-zero and indecomposable. Then there exists \(V\) such that \(\mathrm{Hom}(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V,\mathcal{A} )\neq 0\), and so \(\mathrm{Hom}\left(\Phi(\mathcal{O}_{\mathcal{P}(d)^{\mathrm{red}}}\otimes V), \mathcal{A}\right)\neq 0\). The complex \(\mathcal{A}\) is indecomposable, so \(\mathcal{A}\in\mathbb{A}\), and thus \(\mathbb{B}=0\).
|
2309.08343 | On Higgs+jet production at next-to-leading power accuracy | We present computation of the next-to-leading power corrections for Higgs
plus one jet production in a hadron collider via gluon fusion channel. Shifting
of spinors in the helicity amplitudes without additional radiation captures the
leading next-to-soft radiative behaviour and makes the calculation tractable.
We establish the connection between the shifted dipole spinors and the colour
ordered radiative amplitudes. We find that next-to-maximal helicity violating
amplitudes do not play a role in this correction. Compact analytic expressions
of next-to-leading power leading logarithms coming from different helicity
configurations are shown. | Sourav Pal, Satyajit Seth | 2023-09-15T11:57:03Z | http://arxiv.org/abs/2309.08343v2 | # On H+jet production at NLP accuracy
###### Abstract
We present computation of the next-to-leading power corrections for Higgs plus one jet production in a hadron collider via gluon fusion channel. Shifting of spinors in the helicity amplitudes without additional radiation captures the leading next-to-soft radiative behaviour and makes the calculation tractable. We find that next-to-maximal helicity violating amplitudes do not play a role in this correction. Compact analytic expressions of next-to-leading power leading logarithms coming from different helicity configurations are shown.
Next-to-leading power, Next-to-soft, Leading logarithms
## 1 Introduction
Precise experimental data from the Large Hadron Collider and the lack of any persuasive new physics signature demand improvement in the understanding of the Standard Model. Typically in collider environments the strong force dominates over other interactions and that makes the study of theory of Quantum Chromodynamics (QCD) most important. Fixed order corrections by taking into account higher order perturbative terms in the strong coupling constant, and resummation including certain enhanced logarithms to all orders in the perturbation series are the two ways to ameliorate the theoretical accuracy. For all collider processes, one can define a threshold variable that vanishes in the threshold limit. In terms of a generic threshold variable (\(\xi\)) the differential cross-section takes the following form:
\[\frac{d\sigma}{d\xi}\,\approx\,\sum_{n=0}^{\infty}\alpha_{s}^{n}\left\{\sum_{m =0}^{2n-1}C_{nm}\left(\frac{\log^{m}\xi}{\xi}\right)_{+}+d_{n}\delta(\xi)+ \sum_{m=0}^{2n-1}C_{nm}\,\log^{m}\xi\right\}\,. \tag{1}\]
The first set of logarithms and the delta function are associated with the leading power (LP) approximations, whereas the second set of logarithms appear due to the next-to-leading power (NLP) approximation. The LP terms are well known to originate from the emission of soft and collinear radiation. The seminal works of refs. [1; 2; 3; 4; 5; 6; 7; 8] based on
diagrammatics helped in devising methods of LP resummation. Later, several alternative methods of LP resummation based on Wilson lines [9; 10], renormalisation group (RG) [11] and Soft Collinear Effective Theory (SCET) [12; 13; 14; 15] were developed. A comparative study of different approaches can be found in refs. [16; 17; 18].
Despite substantial progress made towards understanding the infrared behaviour of the NLP logarithms during the past decade, the universality of such terms is yet to be established. The numerical impacts of NLP logarithms are shown in refs. [19; 20; 21; 22; 23; 24; 25; 26]. Realising the importance of these numerical impacts, several methods to resum NLP logarithms have been formulated over the years [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79]. It is essential to investigate NLP logarithms of several processes to better understand the universal nature of the next-to-soft radiation and to come up with a global resummation formulae. The universality of NLP logarithms is already established in case of colour singlet production [42], however for coloured particles in the final state there exists no unique resummation formula.
A prescription has been developed in [42] for colourless particles and then further extended to final state coloured particles in [43], in which appropriate shifting of pairs of momenta in the squared non-radiative (_i.e.,_ without additional radiation) amplitude captures the next-to-soft radiation effects. The expression of the squared non-radiative amplitude does not always have a compact analytical form and therefore shifting the momenta may not always give a simple result. For example, the calculation of NLP terms using the squared amplitudes appears to be very intricate for coloured particles in the final state and due to this reason, only two processes with a single coloured particle in the final state are studied so far at NLP accuracy - (_i_) prompt photon plus jet production [43], (_ii_) W plus jet production [48; 80; 81]. The scarcity of results for coloured final state particles due to the complexity in such calculations clearly demands an improvement on the existing technique.
In this endeavour, we study the effect of next-to-soft gluon radiation on Higgs production via gluon fusion in association with a final state hard jet by crafting spinor helicity amplitudes. We consider the heavy top mass limit throughout. Instead of shifting momenta in the squared amplitude, we shift the spinors of the non-radiative helicity amplitudes to capture next-to-soft radiation effects and that essentially makes the calculation lucid and tractable. We start with the soft and next-to-soft theorems developed in [82; 83] and show that pairwise shifting of spinors at the non-radiative amplitudes can be realised as next-to-soft emissions from those amplitudes. We find that colour dipoles with shifted spinors directly correspond to the colour ordered radiative amplitudes in the next-to-soft limit. The next-to-soft amplitudes thus obtained are compact in nature. In addition, it reveals that the next-to-maximal helicity violating (NMHV) amplitudes never contribute at NLP
accuracy for the case at hand. In order to obtain NLP logarithms, we integrate squared helicity amplitudes over the unresolved parton phase space and present the analytic results for different helicity configurations. Singularities that arise at the LP and NLP stages get exactly cancelled while contributions from the virtual emission and mass factorisation are included.
The structure of our paper is as follows. In section 2, we review the soft and next-to-soft theorems in terms of spinor shifts. After detailing the shifts, in section 3 we apply them to calculate different colour ordered helicity amplitudes. Squaring these amplitudes, we perform the phase space integration over the unresolved phase space in section 4 to obtain the NLP logarithms. Finally, we summarise our findings with an outlook in section 5. Throughout this study, we have used a combination of in-house routines based on QGRAF[84], FORM[85] and Mathematica[86] to calculate all helicity amplitudes and to perform the phase space integration.
## 2 Soft and next-to-soft corrections
In this section, we briefly review the soft and next-to-soft theorems in terms of colour ordered scattering amplitudes. Any colour ordered scattering amplitude involving \(n\) particles (quarks and gluons) with specific helicities can be represented as,
\[\mathcal{A}\,=\,\mathcal{A}_{n}\left(\{|1\rangle,|1\rangle\}\,,\ldots,\{|n \rangle,|n]\}\right)\,, \tag{2}\]
where \(|i\rangle\) and \(|i]\) denote the holomorphic and anti-holomorphic spinors associated with the particle \(i\) carrying momentum \(p_{i}\). Let us now consider that a gluon \(s\) with momenta \(p_{s}\) and helicity '\(+\)' is being emitted from this scattering process. Scaling the momentum of the radiated gluon \(p_{s}\to\lambda\,p_{s}\), the scattering amplitude for \(n+1\) particle can be expressed in powers of \(\lambda\) as written here under [82; 83],
\[\mathcal{A}_{n+1}\!\left(\,\{|s\rangle,|s]\}\,,\{|1\rangle,|1]\}\,,\ldots,\{| n\rangle,|n]\}\,\right)=\left(S^{(0)}+S^{(1)}\right)\mathcal{A}_{n}\!\left(\,\{|1 \rangle,|1]\}\,,\ldots,\{|n\rangle,|n]\}\,\right)\,. \tag{3}\]
Here \(S^{(0)}\) and \(S^{(1)}\) denote LP and NLP terms that are of \(\mathcal{O}(1/\lambda^{2})\) and \(\mathcal{O}(1/\lambda)\) respectively, and are given by,
\[S^{(0)} = \frac{\langle n1\rangle}{\langle s1\rangle\,\langle ns\rangle}\,,\] \[S^{(1)} = \frac{1}{\langle s1\rangle}|s]\frac{\partial}{\partial|1]}-\frac{ 1}{\langle sn\rangle}|s]\frac{\partial}{\partial|n]}\,. \tag{4}\]
In order to obtain the above formulae, holomorphic soft limit [82; 83] is being used _i.e.,_
\[|s\rangle\to\lambda\,|s\rangle\,,\quad|s]\to|s]\,, \tag{5}\]
under the BCFW [87; 88] deformation of \(s\) and \(n\) pair, while particles 1 and \(s\) always form a three particle amplitude involving the on-shell cut propagator that carries complex momentum. With the help of eq. (4), the colour ordered amplitude of eq. (3) can be rewritten as,
\[\mathcal{A}_{n+1}^{\text{LP+NLP}}\bigg{(}\left\{\lambda|s\rangle,|s |\right\},\left\{|1\rangle,|1\rangle\right\},\ldots,\left\{|n\rangle,|n]\right\} \bigg{)} = \frac{1}{\lambda^{2}}\frac{\langle 1n\rangle}{\langle 1s\rangle \langle ns\rangle}\times \tag{6}\] \[\mathcal{A}_{n}\bigg{(}\left\{|1\rangle,|1^{\prime}]\right\}, \ldots,\left\{|n\rangle,|n^{\prime}]\right\}\bigg{)}\,,\]
where
\[|1^{\prime}] = |1]+\Delta_{s}^{(1,n)}|s]\,,\] \[|n^{\prime}] = |n]+\Delta_{s}^{(n,1)}|s]\,, \tag{7}\]
and,
\[\Delta_{s}^{(i,j)}=\lambda\frac{\langle js\rangle}{\langle ji\rangle}\,. \tag{8}\]
This form of eq. (6) signifies that the leading and subleading behaviour of the amplitude can be obtained in terms of simple shifts in the spinors of tree amplitudes. Note that the emitted soft gluon is placed in between the 1 and \(n\) particles in the colour ordered amplitudes and forms a colour dipole \(\mathcal{D}_{1n}\). Such colour dipole structures play an important role in understanding the IR singularities of scattering amplitudes [89; 90; 91].
Emission of soft gluon with '\(-\)' helicity can be treated analogously by taking anti-holomorphic soft gluon limit and interchanging angle and square spinors. Equipped with these formulae, we now move on to calculate the LP and NLP amplitudes for Higgs plus one jet production in the gluon fusion channel.
## 3 LP and NLP amplitudes for \(gg\to Hg\)
The most dominant mechanism for Higgs boson production at the LHC is via the gluon fusion channel. In this section we first reproduce all independent helicity amplitudes for Higgs plus one jet production via gluon fusion with(out) one extra gluon emission. Then we obtain NLP amplitudes by - (_i_) taking soft gluon limit on \(gg\to Hgg\) amplitudes, (_ii_) shifting spinors in \(gg\to Hg\) amplitudes. Both ways lead to the exactly same results. Finally we discuss that for Higgs plus one jet production NMHV amplitudes do not contribute to the NLP threshold corrections.
### Higgs-gluon amplitudes
The Standard Model of particle physics forbids gluons to interact with Higgs at the tree level, however they can interact via a massive quark loop. As the top quark is the heaviest
among massive quarks, the coupling of Higgs with gluons is dominated via a top quark loop. In the large top mass limit \(m_{t}\to\infty\), we can integrate out the heavy top quark effect to obtain an effective Lagrangian as follows [92; 93],
\[\mathcal{L}_{\,\rm eff}\,=\,-\frac{1}{4}\,G\,\,H\,{\rm Tr}(F^{a}_{ \mu\nu}F^{\mu\nu,a})\,, \tag{9}\]
where \(F^{a}_{\mu\nu}\) is the QCD field strength tensor. The effective coupling is given at lowest order by \(G=\alpha_{s}/3\pi v\), where \(v\) is the vacuum expectation value of the Higgs field and \(\alpha_{s}\) is the strong coupling constant. The general form of an amplitude consisting of one Higgs boson and \(n\)-gluons can be represented as,
\[\mathcal{A}_{n}(p_{i},h_{i},c_{i})\,=\,i\,\left(\frac{\alpha_{s}} {6\pi v}\right)g_{s}^{n-2}\sum_{\sigma\in\mathcal{S}_{n^{\prime}}}{\rm Tr}\,( {\bf T}^{c_{1}}{\bf T}^{c_{2}}\ldots{\bf T}^{c_{n}})\,\mathcal{A}_{n}^{\{c_{i }\}}\left(h_{1}\,h_{2}\,h_{3}\ldots\,h_{n};H\right)\,. \tag{10}\]
Here \(\mathcal{S}_{n^{\prime}}\) represents the set of all \((n-1)!\) non-cycling permutations of \(1,2,\ldots,n\). \({\bf T}^{c_{i}}\) denote the SU(3) colour matrix in the fundamental representation and the are normalized as, \({\rm Tr}({\bf T}^{c_{1}},{\bf T}^{c_{2}})=\delta^{c_{1}c_{2}}\). For brevity, we avoid writing \(H\) explicitly in \(\mathcal{A}_{n}^{\{c_{i}\}}\) in the rest of this paper.
The leading order process for Higgs plus one gluon production can be written as,
\[g(p_{1})+g(p_{2})\to H(-p_{3})+g(-p_{4})\,. \tag{11}\]
There are two independent colour ordered helicity amplitudes for this process as given below,
\[\mathcal{A}_{+++}^{124}=\frac{m_{H}^{4}}{\left\langle 12\right\rangle \left\langle 24\right\rangle\left\langle 41\right\rangle}\,,\qquad\mathcal{A}_{-++}^{124 }=\frac{[24]^{3}}{[12][14]}\,, \tag{12}\]
and amplitudes for all other helicity configurations can be constructed using these two.
Now, we consider that a gluon with momenta \(p_{5}\) is being emitted from the leading order process, _i.e.,_
\[g(p_{1})+g(p_{2})\to H(-p_{3})+g(-p_{4})+g(-p_{5})\,. \tag{13}\]
For this process, there are only three independent helicity amplitudes and remaining helicity configurations can be obtained by switching external momenta and spinors. These three independent helicity amplitudes containing Higgs plus four gluons are given by,
\[\mathcal{A}_{++++}^{1245} =\frac{m_{H}^{4}}{\left\langle 12\right\rangle\left\langle 24 \right\rangle\left\langle 45\right\rangle\left\langle 51\right\rangle}\,,\] \[\mathcal{A}_{-+++}^{1245} =\frac{\left\langle 1|4+5|2\right\rangle^{3}}{\left\langle 4| 1|2\right\rangle\left\langle 15\right\rangle\left\langle 45\right\rangle s_{145}}+ \frac{[25][45]\langle 1|4+5|2]^{2}}{\left\langle 4|1|2\right\rangle s_{15}s_{145}}+ \frac{[24]\langle 1|2+4|5]^{2}}{\left\langle 24\right\rangle s_{12}s_{124}}\] \[\quad+\frac{[25]\langle 1|2+4|5]^{2}}{\left\langle 14\right\rangle \left\langle 24\right\rangle[15]s_{12}}-\frac{[25]^{2}\langle 1|2+5|4]^{2}}{s_{12}s_{15}s_{125}}\,,\]
\[\mathcal{A}^{1245}_{---+} =-\frac{\langle 12\rangle^{4}}{\langle 12\rangle\langle 24 \rangle\langle 45\rangle\langle 51\rangle}-\frac{[45]^{4}}{[12][24][45][51]}\,. \tag{14}\]
Here \(s_{ij}=(p_{i}+p_{j})^{2}\) and \(s_{ijk}=(p_{i}+p_{j}+p_{k})^{2}\). These amplitudes were calculated for the first time in [94]. Following eq. (10), we can write the full amplitude for a given helicity configuration as,
\[\mathcal{A}(\{p_{i},h_{i},c_{i}\})\] \[=\,i\,\left(\frac{\alpha_{s}}{6\pi v}\right)\,g_{s}^{2}\bigg{[} \left\{\mathrm{Tr}\left(\mathbf{T}^{c_{1}}\mathbf{T}^{c_{2}}\mathbf{T}^{c_{4} }\mathbf{T}^{c_{5}}\right)+\left(\mathbf{T}^{c_{1}}\mathbf{T}^{c_{5}}\mathbf{ T}^{c_{4}}\mathbf{T}^{c_{2}}\right)\right\}\mathcal{A}^{1245}_{h_{1}h_{2}h_{4}h_{5}}\] \[+\left\{\mathrm{Tr}\left(\mathbf{T}^{c_{1}}\mathbf{T}^{c_{4}} \mathbf{T}^{c_{5}}\mathbf{T}^{c_{2}}\right)+\left(\mathbf{T}^{c_{1}}\mathbf{ T}^{c_{2}}\mathbf{T}^{c_{5}}\mathbf{T}^{c_{4}}\right)\right\}\mathcal{A}^{1452}_{h_{1}h_{2} h_{4}h_{5}}\] \[+\left\{\mathrm{Tr}\left(\mathbf{T}^{c_{1}}\mathbf{T}^{c_{5}} \mathbf{T}^{c_{2}}\mathbf{T}^{c_{4}}\right)+\left(\mathbf{T}^{c_{1}}\mathbf{ T}^{c_{4}}\mathbf{T}^{c_{2}}\mathbf{T}^{c_{5}}\right)\right\}\mathcal{A}^{1524}_{h_{1}h_{2} h_{4}h_{5}}\bigg{]}\,. \tag{15}\]
Squaring the above equation and summing over colours, we obtain the expression of squared amplitude as,
\[\sum_{\mathrm{colours}}|\mathcal{A}(\{p_{i},h_{i},c_{i}\})|^{2} =\,\bigg{[}\left(\frac{\alpha_{s}}{6\pi v}\right)\,g_{s}^{2} \bigg{]}^{2}(N^{2}-1)\bigg{\{}2\,N^{2}\left(|\mathcal{A}^{1245}|^{2}+|\mathcal{ A}^{1452}|^{2}+|\mathcal{A}^{1524}|^{2}\right)\] \[-4\frac{(N^{2}-3)}{N^{2}}|\mathcal{A}^{1245}+\mathcal{A}^{1452}+ \mathcal{A}^{1524}|^{2}\bigg{\}}\,. \tag{16}\]
Here, for simplicity, we have suppressed the labels that represent helicity configurations. Due to the dual Ward identity [94; 95], the term in the second line of the above equation vanishes and we are left with only the first term.
### Spinor shifts and colour dipoles
In order to obtain NLP amplitudes for the Higgs plus two gluon production process, one needs to expand the \(gg\to Hgg\) helicity amplitudes in the powers of the soft momentum keeping the sub-leading contributions. In parallel, following the arguments presented in section 2, we can get NLP amplitudes using the shifts in the spinors of \(gg\to Hg\) amplitudes. We start our calculation by noting the fact that the gluon with momentum \(p_{5}\) be emitted from any of the three gluons present at the leading order and as discussed in the previous section, the emission of a soft gluon always engenders shifts in two adjacent spinors present in the colour ordered non-radiative Born amplitudes.
In case of emission of a next-to-soft gluon from Higgs plus \(n\) gluon amplitudes, a total \({}^{n}C_{2}=n(n-1)/2\) number of colour dipoles can be formed. Therefore, for amplitudes consisting of Higgs plus three gluons, three dipoles are generated due to the emission of a next-to-soft gluon and NLP amplitudes can be realised by shifting appropriate spinors
depending on the helicity of the emitted gluon. For a '\(+\)' gluon emisison from the dipole \(\mathcal{D}_{14}\) made up of momenta \(p_{1}\) and \(p_{4}\), the LP+NLP amplitude can be expressed as,
\[\mathcal{A}_{h_{1}h_{2}h_{4}+}^{1245} = \frac{\langle 14\rangle}{\langle 15\rangle\,\langle 45\rangle} \mathcal{A}_{h_{1}h_{2}h_{4}}^{1^{\prime}2\,4^{\prime}}\,, \tag{17}\]
where \(\mathcal{A}_{h_{1}h_{2}h_{4}}^{1^{\prime}2\,4^{\prime}}\) denotes that the \(\left|1\right|\) and \(\left|4\right|\) spinors are shifted in the colour ordered leading amplitude obeying eq. (7). Similar contributions coming from the dipoles \(\mathcal{D}_{24}\) and \(\mathcal{D}_{12}\) can be written as,
\[\mathcal{A}_{h_{1}h_{2}h_{4}+}^{1452} = \frac{\langle 24\rangle}{\langle 25\rangle\,\langle 54 \rangle}\mathcal{A}_{h_{1}h_{2}h_{4}}^{1^{\prime}2\,4^{\prime}}\,, \tag{18}\]
and
\[\mathcal{A}_{h_{1}h_{2}h_{4}+}^{1524} = \frac{\langle 12\rangle}{\langle 15\rangle\,\langle 52 \rangle}\mathcal{A}_{h_{1}h_{2}h_{4}}^{1^{\prime}2^{\prime}4}\,. \tag{19}\]
So the full amplitude of eq. (15) can now be rewritten using eqs. (17) - (19) as,
\[\mathcal{A}_{h_{1}h_{2}h_{4}+}|_{\text{LP+NLP}} = i\,\left(\frac{\alpha_{s}}{6\pi v}\right)\,g^{2}\] \[\times\Bigg{[}\left\{\text{Tr}\left(\mathbf{T}^{C_{1}}\mathbf{T} ^{C_{2}}\mathbf{T}^{C_{4}}\mathbf{T}^{C_{5}}\right)+\left(\mathbf{T}^{C_{1}} \mathbf{T}^{C_{5}}\mathbf{T}^{C_{4}}\mathbf{T}^{C_{2}}\right)\right\}\frac{ \langle 14\rangle}{\langle 15\rangle\,\langle 45\rangle}\mathcal{A}_{h_{1}h_{2}h_{4}}^{1^{ \prime}2\,4^{\prime}}\] \[-\left\{\text{Tr}\left(\mathbf{T}^{C_{1}}\mathbf{T}^{C_{4}} \mathbf{T}^{C_{5}}\mathbf{T}^{C_{2}}\right)+\left(\mathbf{T}^{C_{1}}\mathbf{ T}^{C_{2}}\mathbf{T}^{C_{5}}\mathbf{T}^{C_{4}}\right)\right\}\frac{\langle 24 \rangle}{\langle 25\rangle\,\langle 45\rangle}\mathcal{A}_{h_{1}h_{2}h_{4}}^{1^{ \prime}2\,4^{\prime}}\] \[-\left\{\text{Tr}\left(\mathbf{T}^{C_{1}}\mathbf{T}^{C_{5}} \mathbf{T}^{C_{2}}\mathbf{T}^{C_{4}}\right)+\left(\mathbf{T}^{C_{1}}\mathbf{ T}^{C_{4}}\mathbf{T}^{C_{2}}\mathbf{T}^{C_{5}}\right)\right\}\frac{\langle 12 \rangle}{\langle 15\rangle\,\langle 25\rangle}\mathcal{A}_{h_{1}h_{2}h_{4}}^{1^{ \prime}2^{\prime}4}\Bigg{]}\,. \tag{20}\]
To derive the above equation, we have used the reflection identity [95] that applies for Higgs plus \(n\)-gluon amplitudes. This equation is one of the central results of this paper which identifies the direct correspondence of colour ordered amplitudes in the next-to-soft limit to the non-radiative colour ordered Born amplitudes with shifted spinors. Shift in each non-radiative spinor pair represents one colour ordered radiative amplitude. The validity of this formula relies only on the cyclic and antisymmetric properties of Higgs plus gluon amplitudes. Thus, this formula is applicable to any process that satisfy such properties, namely pure gluon amplitudes in Yang-Mills theories or gluons with a quark-antiquark pair in QCD.
### NLP Amplitudes: Absence of NMHV contribution
As evident from the discussion in the previous section, colour ordered LP amplitudes always appear as a product of Born amplitudes and the corresponding Eikonal factors such as,
\[\mathcal{A}_{h_{1}h_{2}h_{4}+}^{1245}\Big{|}_{\text{LP}} = \frac{\langle 14\rangle}{\langle 15\rangle\,\langle 45\rangle}\mathcal{A}_{h_{1}h _{2}h_{4}}^{124}\,,\]
\[{\cal A}_{h_{1}h_{2}h_{4}-}^{1245}\Big{|}_{\rm LP} = \frac{[14]}{[15][45]}{\cal A}_{h_{1}h_{2}h_{4}}^{124}\,. \tag{21}\]
In this section we provide the details of NLP amplitudes for different helicity configurations. For Higgs plus four gluon amplitudes, there are altogether \(2^{4}=16\) helicity configurations possible. Out of these sixteen helicity amplitudes, one needs to calculate only eight, as the remaining conjugate configurations can easily be obtained by flipping the helicity of all the external gluons. As discussed earlier, NLP amplitudes can be calculated considering emission of both '\(+\)' and '\(-\)' helicity gluons from all possible Born amplitudes. In doing so, we find that the NMHV amplitudes do not add to the NLP contribution. We illustrate this by a simple example. Let us consider emission of a '\(+\)' helicity gluon out of the \({\cal A}_{+--}^{124}\) amplitude, which following eq. (12) can be presented as,
\[{\cal A}_{+--}^{124}=-\frac{\left\langle 24\right\rangle^{3}}{\left\langle 12 \right\rangle\left\langle 14\right\rangle}\,. \tag{22}\]
We have already seen in the previous subsection that emission of a '\(+\)' helicity gluon always demands anti-holomorphic spinors to be shifted. However there are no square spinors present in the above equation and therefore \({\cal A}_{+--+}^{1245}|_{\rm NLP}\) vanishes. It is also straight forward to check that applying \(S^{(1)}\) of eq. (4) on \({\cal A}_{+--}^{124}\) gives zero. The reason behind this vanishing of NLP amplitude for NHMV amplitudes can furthermore be argued by invoking the soft Higgs limit. Due to the momentum conservation, one can choose not to bring Higgs momentum explicitly in the expressions of NLP amplitudes and in the soft Higgs limit these amplitudes essentially behave as pure gluon NMHV amplitudes which were shown to be non-contributing to NLP in [83]. Among sixteen Higgs plus four gluon helicity amplitudes, six NMHV amplitudes vanish and we are left with ten non-zero helicity amplitudes at NLP. Out of these ten, we need to calculate only five, as the remaining five helicity configurations can readily be obtained by flipping helicities of all external gluons. Table 1 shows emissions from the Born amplitudes and lists down those five different non-zero amplitudes. Their expressions including three different colour orderings for each of them are given below.
## 1 \(++++\)
\[{\cal A}_{++++}^{1245}\Big{|}_{\rm NLP} = \frac{\left\langle 14\right\rangle}{\left\langle 15\right\rangle \left\langle 45\right\rangle}\,\frac{2\left(s_{15}+s_{25}+s_{45}\right)}{\left(s_{12}+s _{14}+s_{24}\right)}\,{\cal A}_{++++}^{124}\,,\] \[{\cal A}_{++++}^{1524}\Big{|}_{\rm NLP} = -\frac{\left\langle 12\right\rangle}{\left\langle 15\right\rangle \left\langle 25\right\rangle}\,\frac{2\left(s_{15}+s_{25}+s_{45}\right)}{\left(s_{12}+s _{14}+s_{24}\right)}\,{\cal A}_{+++}^{124}\,,\] \[{\cal A}_{++++}^{1452}\Big{|}_{\rm NLP} = -\frac{\left\langle 24\right\rangle}{\left\langle 45\right\rangle \left\langle 25\right\rangle}\frac{2\left(s_{15}+s_{25}+s_{45}\right)}{\left(s_{12}+s _{14}+s_{24}\right)}\,{\cal A}_{+++}^{124}\,. \tag{23}\]
## 2 \(-+++\)
## 3 \(++-+\)
\[\left.\mathcal{A}_{++-+}^{1245}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1245}\right|_{\rm NLP}\,\left\{1\leftrightarrow 4 \right\},\] \[\left.\mathcal{A}_{-+++}^{1524}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1452}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 4\right\},\] \[\left.\mathcal{A}_{++-+}^{1452}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1524}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 4\right\}. \tag{25}\]
## 4 \(+-++\)
\[\left.\mathcal{A}_{+-++}^{1245}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1452}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\},\] \[\left.\mathcal{A}_{+-++}^{1524}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1524}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\},\] \[\left.\mathcal{A}_{+-++}^{1452}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1245}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\}. \tag{26}\]
## 5 \(+++-\)
\[\left.\mathcal{A}_{+++-}^{1245}\right|_{\rm NLP} = \left.\mathcal{A}_{+-++}^{1452}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\},\] \[\left.\mathcal{A}_{+-++}^{1524}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1524}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\}.\] \[\left.\mathcal{A}_{+-++}^{1452}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1245}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\}. \tag{27}\]
## 6 \(+++-\)
\[\left.\mathcal{A}_{+++-}^{1245}\right|_{\rm NLP} = \left.\mathcal{A}_{+-++}^{1452}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\},\] \[\left.\mathcal{A}_{+-++}^{1524}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1524}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\},\] \[\left.\mathcal{A}_{+-++}^{1452}\right|_{\rm NLP} = \left.\mathcal{A}_{-+++}^{1245}\right|_{\rm NLP}\,\left\{1 \leftrightarrow 2\right\}. \tag{28}\]
## 7 \(++-\)
\begin{table}
\begin{tabular}{|l|c|c|} \hline Born & Helicity of extra emission & NLP \\ \hline \(\mathcal{A}_{+++}\) & \(+\) & \(\mathcal{A}_{+++++}|_{\rm NLP}\) \\ & - & \(\mathcal{A}_{+++-}|_{\rm NLP}\) \\ \hline \(\mathcal{A}_{-++}\) & \(+\) & \(\mathcal{A}_{-+++}|_{\rm NLP}\) \\ & - & \(0\) \\ \hline \(\mathcal{A}_{+--}\) & \(+\) & \(0\) \\ & - & \(\mathcal{A}_{+---}|_{\rm NLP}\) \\ \hline \(\mathcal{A}_{++-}\) & \(+\) & \(\mathcal{A}_{++-+}|_{\rm NLP}\) \\ & - & \(0\) \\ \hline \end{tabular}
\end{table}
Table 1: A set of eight NLP amplitudes constructed from the Born amplitudes is given. Flipping helicities of all the particles provide the remaining eight amplitudes among which three NMHV configurations become zero again. We do not mention any explicit colour ordering here as this feature stands true irrespective of that choice.
\[\mathcal{A}^{2}_{+++-}|_{\rm NLP} = -\frac{2\left(s_{15}+s_{25}+s_{45}\right)}{\left(s_{12}+s_{14}+s_{24 }\right)}\mathcal{A}^{124}_{+++}\,,\] \[\mathcal{A}^{1524}_{+++-}\Big{|}_{\rm NLP} = -\frac{[12]}{[15][25]}\left(\frac{\langle 45\rangle\,[15]}{ \langle 24\rangle\,[12]}-\frac{\langle 45\rangle\,[25]}{\langle 14\rangle\,[12]}- \frac{s_{15}}{s_{12}}-\frac{s_{25}}{s_{12}}\right.\] \[\left.\qquad\qquad+\frac{2\left(s_{15}+s_{25}+s_{45}\right)}{ \left(s_{12}+s_{14}+s_{24}\right)}\right)\mathcal{A}^{124}_{+++}\,,\] \[\mathcal{A}^{1452}_{+++-}\Big{|}_{\rm NLP} = -\frac{[24]}{[25][45]}\Big{(}\frac{\langle 15\rangle\,[45]}{ \langle 12\rangle\,[24]}-\frac{\langle 15\rangle\,[25]}{\langle 14\rangle\,[24]}- \frac{s_{25}}{s_{24}}-\frac{s_{45}}{s_{24}} \tag{27}\] \[\qquad\qquad+\frac{2\left(s_{15}+s_{25}+s_{45}\right)}{\left(s_{1 2}+s_{14}+s_{24}\right)}\Big{)}\,\mathcal{A}^{124}_{+++}\,.\]
## 4 NLP logarithms
The obvious next step to obtain the NLP threshold logarithms is to perform phase-space integrations over the squared amplitudes at NLP and we discuss that in the following two subsections.
### Squared amplitudes at NLP
The amplitude for a process carrying soft gluon radiation can be written as a sum of LP and NLP amplitudes,
\[\mathcal{A} = \mathcal{A}_{\rm LP}+\mathcal{A}_{\rm NLP}\,. \tag{28}\]
Squaring the amplitude gives,
\[\mathcal{A}^{2} = \mathcal{A}^{2}_{\rm LP}+2{\rm Re}\left(\mathcal{A}_{\rm NLP} \mathcal{A}^{\dagger}_{\rm LP}\right)\,, \tag{29}\]
where the term \(\mathcal{A}^{2}_{\rm NLP}\) is neglected as it starts contributing at the next-to-next-to-leading power. The first and the second terms represent the LP and NLP contributions respectively, and we denote the NLP contribution as \([\mathcal{A}^{2}]\,|_{\rm NLP}\) hereafter. Using eq. (16) we obtain the squared NLP amplitude for a fixed helicity as,
\[\sum_{\rm colours}[\mathcal{A}^{2}]\,|_{\rm NLP} = \Big{[}\left(\frac{\alpha_{s}}{6\pi v}\right)\,g^{2}\Big{]}^{2} 2\,N^{2}(N^{2}-1)\Big{\{}[\mathcal{A}^{2}]^{1245}|_{\rm NLP}+[\mathcal{A}^{2}]^{1452}|_{\rm NLP}+[\mathcal{A}^{2}]^{1524}|_{\rm NLP}\Big{\}}\,, \tag{30}\]
where \(N\) is the dimensionality of the SU(\(N\)) colour and it takes the value \(N=3\) for QCD.
Using eqs. (23) - (27) we get the squared NLP amplitudes of the following five helicity configurations,
1. \(++++\) \[[\mathcal{A}^{2}]^{1245}_{++++}|_{\rm NLP} = 4\left(\frac{s_{14}s_{25}}{s_{15}s_{45}}+\frac{s_{14}}{s_{15}}+ \frac{s_{14}}{s_{45}}\right)\frac{1}{\left(s_{12}+s_{14}+s_{24}\right)^{2}} \,\mathcal{A}^{2}_{+++}\,,\]
\[[\mathcal{A}^{2}]^{1524}_{++++}|_{\rm NLP} = \left[\mathcal{A}^{2}\right]^{1245}_{++++}|_{\rm NLP}\left\{2\leftrightarrow 4\right\},\] \[[\mathcal{A}^{2}]^{1452}_{++++}|_{\rm NLP} = \left[\mathcal{A}^{2}\right]^{1245}_{++++}|_{\rm NLP}\left\{1\leftrightarrow 2\right\}. \tag{31}\]
2. \(-+++\) \[[\mathcal{A}^{2}]^{1245}_{-+++}|_{\rm NLP} = \left(-\,\frac{3\,s_{12}}{s_{15}s_{24}}-\frac{3}{s_{15}}+\frac{1}{s_{45}}+\frac{s_{24}}{s_{12}s_{45}}-\frac{s_{14}s_{25}}{s_{12}s_{15}s_{45}}+\frac{3\,s_{14}s_{25}}{s_{15}s_{24}s_{45}}\right)\mathcal{A}^{2}_{-++}\,,\] \[[\mathcal{A}^{2}]^{1524}_{-+++}|_{\rm NLP} = [\mathcal{A}^{2}]^{1245}_{-+++}|_{\rm NLP}\left\{2\leftrightarrow 4\right\},\] \[[\mathcal{A}^{2}]^{1452}_{-+++}|_{\rm NLP} = \left(\frac{s_{12}}{s_{14}s_{25}}+\frac{5}{s_{25}}+\frac{5}{s_{45}}+\frac{s_{14}}{s_{12}s_{45}}-\frac{s_{15}s_{24}}{s_{12}s_{25}s_{45}}-\frac{s_{15}s_{24}}{s_{14}s_{25}s_{45}}\right)\mathcal{A}^{2}_{-++}\,.\] (32)
3. \(++-+\) \[[\mathcal{A}^{2}]^{1245}_{++-+}|_{\rm NLP} = [\mathcal{A}^{2}]^{1245}_{-+++}|_{\rm NLP}\left\{1\leftrightarrow 4 \right\},\] \[[\mathcal{A}^{2}]^{1524}_{++-+}|_{\rm NLP} = [\mathcal{A}^{2}]^{1452}_{-+++}|_{\rm NLP}\left\{1 \leftrightarrow 4\right\},\] \[[\mathcal{A}^{2}]^{1452}_{++-+}|_{\rm NLP} = [\mathcal{A}^{2}]^{1524}_{-+++}|_{\rm NLP}\left\{1 \leftrightarrow 4\right\}.\] (33)
4. \(+-++\) \[[\mathcal{A}^{2}]^{1245}_{+-++}|_{\rm NLP} = [\mathcal{A}^{2}]^{1452}_{-+++}|_{\rm NLP}\left\{1\leftrightarrow 2\right\},\] \[[\mathcal{A}^{2}]^{1524}_{+-++}|_{\rm NLP} = [\mathcal{A}^{2}]^{1524}_{-+++}|_{\rm NLP}\left\{1\leftrightarrow 2\right\},\] \[[\mathcal{A}^{2}]^{1452}_{+-++}|_{\rm NLP} = [\mathcal{A}^{2}]^{1245}_{-+++}|_{\rm NLP}\left\{1\leftrightarrow 2\right\}.\] (34)
5. \(+++-\) \[[\mathcal{A}^{2}]^{1245}_{+++-}|_{\rm NLP} = \left(\frac{s_{12}}{s_{15}s_{24}}-\frac{3}{s_{15}}-\frac{3}{s_{45}}+\frac{s_{24}}{s_{12}s_{45}}-\frac{s_{14}s_{25}}{s_{12}s_{15}s_{45}}-\frac{s_{14}s_{25}}{s_{15}s_{24}s_{45}}\right)\mathcal{A}^{2}_{+++}\] \[\quad+[\mathcal{A}^{2}]^{1245}_{++++}|_{\rm NLP},\] \[[\mathcal{A}^{2}]^{1524}_{+++-}|_{\rm NLP} = [\mathcal{A}^{2}]^{1245}_{+++-}|_{\rm NLP}\left\{2\leftrightarrow 4\right\},\] \[[\mathcal{A}^{2}]^{1452}_{+++-}|_{\rm NLP} = [\mathcal{A}^{2}]^{1245}_{+++-}|_{\rm NLP}\left\{1\leftrightarrow 2\right\}.\] (35)
Note that the colour ordering of the non-radiative squared amplitude, suppressed here and in the rest of the paper, is to be considered as \(\{124\}\), _i.e._, \(\mathcal{A}^{2}_{h_{1}h_{2}h_{4}}=[\mathcal{A}^{2}]^{124}_{h_{1}h_{2}h_{4}}\). Each of the remaining five non-NMHV squared amplitudes resembles one of the above results, as their helicity amplitudes are obtained by flipping the helicities of all the external particles.
### Phase Space Integration
We are now ready to integrate the squared amplitudes over the unobserved parton phase space in the rest frame of the \(p_{4}\) and \(p_{5}\) momenta to obtain the differential cross-section. Following the usual method, we factorize the three-body phase space into two two-body phase spaces: (_i_) one containing two gluons with momenta \(p_{4}\) and \(p_{5}\), (_ii_) the other one containing the Higgs and the collective contribution of the two gluons mentioned in (_i_). We choose the phase space parametrisation in \(d=(4-2\epsilon)\) dimensions [96; 97] as,
\[p_{1} = \left(E_{1},0,\cdots,0,E_{1}\right),\] \[p_{2} = \left(E_{2},0,\cdots,0,p_{3}\sin\psi,p_{3}\cos\psi-E_{1}\right),\] \[p_{3} = -(E_{3},0,\cdots,0,p_{3}\sin\psi,p_{3}\cos\psi)\,,\] \[p_{4} = -\frac{\sqrt{s_{45}}}{2}(1,0,\cdots,0,\sin\theta_{1}\sin\theta_{2 },\sin\theta_{1}\cos\theta_{2},\cos\theta_{1})\,,\] \[p_{5} = -\frac{\sqrt{s_{45}}}{2}(1,0,\cdots,0,-\sin\theta_{1}\sin\theta_ {2},-\sin\theta_{1}\cos\theta_{2},-\cos\theta_{1})\,. \tag{36}\]
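As a quick sanity check of this parametrization, the following sketch evaluates its four-dimensional components for arbitrary numerical values of \(s_{45}\), \(\theta_{1}\) and \(\theta_{2}\), and verifies that \(p_{4}\) and \(p_{5}\) are massless and reproduce the invariant mass \(s_{45}\); the chosen numbers are illustrative only.

```python
import numpy as np

def mdot(p, q):
    """Minkowski product with signature (+,-,-,-)."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

s45, th1, th2 = 7.3, 0.9, 2.4          # arbitrary test values
r = np.sqrt(s45) / 2
n = np.array([np.sin(th1) * np.sin(th2), np.sin(th1) * np.cos(th2), np.cos(th1)])

# four-dimensional components of p4 and p5 in eq. (36): back to back in their rest frame
p4 = -r * np.array([1.0, *n])
p5 = -r * np.array([1.0, *(-n)])

print(mdot(p4, p4), mdot(p5, p5))      # both vanish: the gluons are massless
print(mdot(p4 + p5, p4 + p5), s45)     # the pair reproduces the invariant mass s45
```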
The differential cross-section at NLP is then given by,
\[s_{12}^{2}\frac{d^{2}\sigma}{ds_{13}ds_{23}}\bigg{|}_{\rm NLP} = \left.\mathcal{F}\left(\frac{s_{45}}{\bar{\mu}^{2}}\right)^{-\epsilon} \overline{\mathcal{A}_{\rm NLP}^{2}}\,,\right. \tag{37}\]
where
\[\mathcal{F} = \frac{1}{2}K_{gg}\,G^{2}\left(\frac{\alpha_{s}(\bar{\mu}^{2})}{4 \pi}\right)^{2}\,\left(\frac{s_{13}\,s_{23}-m_{H}^{2}\,s_{45}}{\bar{\mu}^{2}\, s_{12}}\right)^{-\epsilon}\,,\] \[K_{gg} = \frac{N^{2}}{2(N^{2}-1)},\quad\bar{\mu}^{2}=4\pi e^{-\gamma_{E} \epsilon}\mu_{r}^{2}\,, \tag{38}\]
and
\[\overline{\mathcal{A}_{\rm NLP}^{2}} = \int_{0}^{\pi}d\theta_{1}\,(\sin\theta_{1})^{1-2\epsilon}\int_{0 }^{\pi}d\theta_{2}\,(\sin\theta_{2})^{-2\epsilon}[\mathcal{A}^{2}]\,|_{\rm NLP}. \tag{39}\]
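Before quoting the results, it may help to see a prototype of the \(d\)-dimensional angular integrations entering eq. (39). The sketch below checks one such integral, with a single \((1-\cos\theta_{1})\) denominator, against its Beta-function closed form at an arbitrary (negative) value of \(\epsilon\); the specific integrand is chosen purely for illustration and is not claimed to be one of the integrals used in the text.

```python
import mpmath as mp

mp.mp.dps = 15
eps = mp.mpf('-0.2')   # eps < 0 keeps the theta_1 -> 0 endpoint integrable

# prototype angular integral with a single (1 - cos theta_1) denominator
integrand = lambda t1, t2: mp.sin(t1) ** (1 - 2 * eps) * mp.sin(t2) ** (-2 * eps) / (1 - mp.cos(t1))
numeric = mp.quad(integrand, [0, mp.pi], [0, mp.pi])

# Beta-function closed form: 2^(-2 eps) sqrt(pi) Gamma(-eps) Gamma(1/2 - eps) / Gamma(1 - 2 eps)
closed = 2 ** (-2 * eps) * mp.sqrt(mp.pi) * mp.gamma(-eps) * mp.gamma(mp.mpf(1) / 2 - eps) / mp.gamma(1 - 2 * eps)

print(numeric, closed)   # the two values agree
```

Expanding such closed forms around \(\epsilon=0\) is what generates the explicit poles and the threshold logarithms quoted below.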
We can now use eqs. (31) - (35) and formulae given in [98] to perform the angular integrations which give us the NLP threshold logarithms. We have checked that the singular terms produced after these integrations due to the hard collinear emissions get cancelled, once the effects of mass factorization using helicity dependent Altarelli-Parisi splitting functions [99; 100] are taken into account. The helicity driven NLP leading logarithms that contribute to the differential cross-sections are given by,
1. \(++++\) \[s_{12}^{2}\frac{d^{2}\sigma_{++++}}{ds_{13}ds_{23}}\bigg{|}_{\rm NLP-LL} = \mathcal{F}\left\{16\pi\left(s_{12}\left(\frac{1}{s_{13}}+\frac{1}{s_{23}}\right)+2\right)\log\left(\frac{s_{45}}{\bar{\mu}^{2}}\right)\right.\] (40) \[\left.\phantom{s_{12}^{2}\frac{d^{2}\sigma_{++++}}{ds_{13}ds_{23}}}+16\pi\log\left(\frac{s_{12}s_{45}}{s_{13}s_{23}}\right)\right\}\times\frac{1}{m_{H}^{2}}\mathcal{A}_{+++}^{2}\,.\]
2. \(-+++\) \[s_{12}^{2}\frac{d^{2}\sigma_{-+++}}{ds_{13}ds_{23}}\bigg{|}_{\text{NLP- LL}} = \mathcal{F}\left\{16\pi\left(\frac{1}{s_{13}}-\frac{1}{s_{23}}\right) \log\left(\frac{s_{45}}{\bar{\mu}^{2}}\right)\right.\] (41) \[\qquad\left.+\,4\pi\left(\frac{3}{s_{13}}-\frac{1}{s_{23}}\right) \log\left(\frac{s_{12}s_{45}}{s_{13}s_{23}}\right)\right\}\mathcal{A}_{-++}^{2}\,.\]
3. \(++-+\) \[s_{12}^{2}\frac{d^{2}\sigma_{++-+}}{ds_{13}ds_{23}}\bigg{|}_{\text{NLP- LL}} = \mathcal{F}\bigg{\{}16\pi\left(\frac{1}{s_{13}}+\frac{1}{s_{23}} \right)\log\left(\frac{s_{45}}{\bar{\mu}^{2}}\right)\] (42) \[\qquad\left.-\,4\pi\left(\frac{1}{s_{13}}+\frac{1}{s_{23}}\right) \log\left(\frac{s_{12}s_{45}}{s_{13}s_{23}}\right)\right\}\mathcal{A}_{++-}^{2}\,.\]
4. \(+-++\) \[s_{12}^{2}\frac{d^{2}\sigma_{+-++}}{ds_{13}ds_{23}}\bigg{|}_{\text{NLP-LL}} = \mathcal{F}\bigg{\{}16\pi\left(\frac{1}{s_{23}}-\frac{1}{s_{13}}\right)\log\left(\frac{s_{45}}{\bar{\mu}^{2}}\right)\] (43) \[\qquad+4\pi\left(\frac{3}{s_{23}}-\frac{1}{s_{13}}\right)\log\left(\frac{s_{12}s_{45}}{s_{13}s_{23}}\right)\bigg{\}}\,\mathcal{A}_{+-+}^{2}\,.\]
5. \(+++-\) \[s_{12}^{2}\frac{d^{2}\sigma_{+++-}}{ds_{13}ds_{23}}\bigg{|}_{\text{NLP-LL}} = \mathcal{F}\bigg{\{}-16\pi\bigg{(}\frac{1}{s_{13}}+\frac{1}{s_{23}}\bigg{)}\log\left(\frac{s_{45}}{\bar{\mu}^{2}}\right)\] (44) \[\qquad-4\pi\left(\frac{1}{s_{13}}+\frac{1}{s_{23}}\right)\log\left(\frac{s_{12}s_{45}}{s_{13}s_{23}}\right)\bigg{\}}\,\mathcal{A}_{+++}^{2}\] \[\qquad+\left.\,s_{12}^{2}\frac{d^{2}\sigma_{++++}}{ds_{13}ds_{23}}\right|_{\text{NLP-LL}}\,.\]
Flipping all helicities together in each of the above equations does not alter the result. Therefore, the complete result can be obtained by adding eqs. (40) - (44) and then multiplying by a factor of 2.
## 5 Summary and Outlook
The avalanche of high-accuracy data at the LHC demands perturbative QCD predictions to be extremely precise. From a theoretical point of view, both all-order resummation and fixed-order calculations are important to reach the desired precision. NLP corrections can leave a numerically sizeable impact on the differential distributions of cross sections in the threshold limit. Although there exists a method to calculate NLP corrections using momentum shifts at the squared amplitude level, the rarity of results clearly demands improvements in the methods for such calculations.
We have considered the effect of next-to-soft radiation on Higgs plus one jet production through gluon fusion. We have shifted the spinors in the non-radiative helicity amplitudes, which essentially generates the helicity amplitudes in the case of an extra gluon emission in the next-to-soft limit. The squared amplitudes thus obtained are compact in nature, and it turns out that the NMHV amplitudes do not play a role in the calculation of threshold logarithms. We have performed the phase space integration over the unobserved parton phase space to obtain the NLP threshold logarithms and listed the results for each helicity configuration. A systematic method to calculate NLP leading logarithms is presented in this paper, and we believe that the simplicity and easy applicability of this approach would help in bringing out more such results for several other processes.
###### Acknowledgments.
We thank Keith Ellis and Eric Laenen for their useful comments on the manuscript. SS is supported in part by the SERB-MATRICS under Grant No. MTR/2022/000135.
|
2309.17263 | Overcoming Traditional No-Go Theorems: Quantum Advantage in Multiple
Access Channels | Extension of point-to-point communication model to the realm of multi-node
configurations finds a plethora of applications in internet and
telecommunication networks. Here, we establish a novel advantage of quantum
communication in a commonly encountered network configuration known as the
Multiple Access Channel (MAC). A MAC consists of multiple distant senders
aiming to send their respective messages to a common receiver. Unlike the
quantum superdense coding protocol, the advantage reported here is realized
without invoking entanglement between the senders and the receiver. Notably,
such an advantage is unattainable in traditional point-to-point communication
involving one sender and one receiver, where the limitations imposed by the
Holevo and Frankel Weiner no-go theorems come into play. Within the MAC setup,
this distinctive advantage materializes through the receiver's unique ability
to simultaneously decode the quantum systems received from multiple senders.
Intriguingly, some of our MAC designs draw inspiration from various other
constructs in quantum foundations, such as the Pusey-Barrett-Rudolph theorem
and the concept of `nonlocality without entanglement', originally explored for
entirely different purposes. Beyond its immediate applications in network
communication, the presented quantum advantage hints at a profound connection
with the concept of `quantum nonlocality without inputs' and holds the
potential for semi-device-independent certification of entangled measurements. | Ananya Chakraborty, Sahil Gopalkrishna Naik, Edwin Peter Lobo, Ram Krishna Patra, Samrat Sen, Mir Alimuddin, Amit Mukherjee, Manik Banik | 2023-09-29T14:15:35Z | http://arxiv.org/abs/2309.17263v2 | # Advantage of Qubit Communication Over The C-bit in Multiple Access Channel
###### Abstract
The celebrated no-go theorem by Holevo limits the information capacity of an individual quantum system when no entanglement is shared between the sender and receiver. A recently extended version of this theorem by Frenkel & Weiner imposes even a stricter embargo on communication utilities of a quantum system. Specifically, in point-to-point information transmission scenario, it proves that any input-output correlation achievable with an n-level quantum system can also be achieved with an n-state classical object provided the communication lines are assisted with classical correlations only. In this work, we show that such a no-go result does not hold true in network communication scenario involving multiple access channel (MAC), where several independent senders aim to transmit messages to a single receiver. We present various instances of MAC simulation tasks wherein communicating quantum systems prove to be advantageous over their classical counterparts, even when classical channels are augmented with unlimited shared randomness across different configurations. We also identify the foundational linchpins underlying the quantum advantages, which paves the way for several other quantum benefits in network communication scenarios.
_Introduction.-_ Quantum advantages are elusive, difficult to establish, and often constrained by fundamental no-go theorems. For instance, the set of functions computable using quantum mechanics is precisely equivalent to what can be computed using classical physics, although quantum computing can provide speedup over the classical computers for a range of problems [1, 2, 3, 4, 5, 6]. On the other hand, the seminal superdense coding protocol establishes a nontrivial advantage of quantum resources in point-to-point transmission of classical information [8]. Importantly, quantum entanglement apriori shared between the sender and receiver plays a crucial role in enhancement of classical capacity of a quantum channel [9]. In fact, the _no-go_ theorem of Holevo curtails the communication advantage of individual quantum systems in absence of this entanglement [10]. In particular, it proves that without any pre-shared entanglement, the classical capacity of a quantum channel cannot exceed that of its classical counterpart. Lately, Frenkel and Weiner extended this no-go result by demonstrating that any input/output correlation achievable with an \(n\)-level quantum system can also be attained using an \(n\)-state classical system, provided that only classical correlations are allowed to be shared between the sender and receiver [11]. As a consequence, the'signaling dimension' of quantum and classical systems turns out to be identical [12] (see also [13, 14, 15]).
In the realm of network scenarios, a typical communication system involves multiple distant parties seeking to transmit information among themselves [16]. One very common network configuration is the multiple access channel (MAC), where several distant senders aim to transmit their individual messages to a single receiver [17, 18, 19], similar to the up-link from multiple mobile phones to a central base station (see Fig. 1). Mathematically, a MAC can be represented by a stochastic matrix, with its elements denoting the conditional probabilities of the receiver's outputs given the inputs to the senders. In simulation a given MAC the goal is to mimic its action using the least possible communication from the senders to the receiver. Since the senders are independent, they generally lack any communication channel among themselves. However, the communication lines from the senders to the receiver can be augmented with additional side resources, such as pre-shared classical correlations (also called shared classical randomness). This scenario can be seen as a generalization of the one sender-one receiver setup studied by Frenkel and Weiner [11]. It is, therefore, interesting to explore the implications of the Frenkel-Weiner kind of no-go theorem in this generic setup. Surprisingly, in this work we show that such a no-go theorem does not hold true in this generic context. We demonstrate this by presenting examples of MAC simulation tasks that can be accomplished with limited quantum communication from the senders to the receiver, but otherwise cannot be carried through with the corresponding classical resources.
Our first construction considers a MAC involving just two senders and one receiver. As we show, this MAC can be simulated by qubit communication from each sender to the receiver, whereas simulation becomes impossible if qubit channels are replaced with classical bit (c-bit) channels. Remarkably, this classical impossibility persists even when an unlimited amount of classical shared randomness is allowed between each sender and the receiver. Nevertheless, a classical simulation strategy emerges when global classical correlations are shared among the senders and the receiver. We then construct a family of two-sender MACs where c-bit communication cannot replace the qubit line even in the presence of unlimited global classical correlations shared among the senders and the receiver, thus establishing a stronger quantum advantage. Notably, the quantum advantage hinges on leveraging the power of entangled basis measurement during the decoding step. Next, we provide an example of a MAC with three senders, where the quantum advantage does not
necessitate entanglement basis measurement during decoding. Instead, the three-qubit 'SHIFT' basis, depicting the intriguing phenomenon of 'quantum nonlocality without entanglement' (QNWE) [20], serves the purpose there. Finally, we demonstrate that in simulating the aforementioned MACs, c-bit communication can fulfill the role of qubits if the classical lines are assisted with the side resource of quantum entanglement shared between each sender and the receiver. The reported quantum advantages, along with their foundational underpinnings, point towards the potential for several other quantum advantages in network communication scenarios.
_Multiple Access Channel.-_ A MAC, with \(K\) distant senders \(\{S_{i}\}_{i=1}^{K}\) and one receiver \(R\), can be represented as a stochastic matrix \(\mathcal{N}^{K\to 1}\equiv\{p(a|x_{1},\cdots,x_{K})\mid a\in A,\;x_{i}\in X_{i}\}\), where \(X_{i}\) is the message/input set of the \(i^{th}\) sender \(S_{i}\), \(A\) is the output set of the receiver \(R\), and \(p(a|x_{1},\cdots,x_{K})\) denotes the probability of obtaining the outcome \(a\in A\) given the inputs \(x_{i}\in X_{i}\); clearly \(\sum_{a\in A}p(a|x_{1},\cdots,x_{K})=1,\;\forall\;\vec{x}\in\times_{i}X_{i}\). Consider a scenario where each of the senders can communicate only \(1\)-bit of classical information to the receiver for simulating a given MAC. Without the assistance of any kind of local or shared randomness, the parties can employ only classical deterministic strategies.
**Definition 1**.: _A classical deterministic strategy with 1-bit communication from each of the senders to the receiver is an ordered tuple \((\mathrm{E}_{1},\cdots,\mathrm{E}_{K},\mathrm{D})\in\times_{i=1}^{K}\mathcal{ E}_{i}\times\mathcal{D}\), where \(E_{i}\) denotes a deterministic encoding of \(i^{th}\) party's message set \(X_{i}\) into \(1\)-bit, i.e., \(\mathrm{E}_{i}:X_{i}\mapsto\{0,1\}\) for \(i\in\{1,\cdots,K\}\), and \(D\) denotes a deterministic function from \(K\)-bit communication string into the output set \(A\), i.e., \(\mathrm{D}:\{0,1\}^{\times K}\mapsto A\)._
Calligraphic letters, in Definition 1, symbolize sets of all possible deterministic encodings and decodings for the respective parties. The number of such strategies is finite whenever the \(X_{i}\)'s and \(A\) are of finite cardinalities, and the collection of such strategies will be denoted as \(\mathbf{C}_{ds}^{K\to 1}\). The parties, randomizing their respective deterministic strategies locally, can implement a classical local strategy, which is an ordered tuple \((P(\mathcal{E}_{1}),\cdots,P(\mathcal{E}_{K}),P(\mathcal{D}))\) of probability distributions; the \(P(\mathcal{E}_{i})\)'s are distributions over encoding functions and \(P(\mathcal{D})\) is over decoding functions. The set of all such strategies will be denoted as \(\mathbf{C}_{ls}^{K\to 1}\).
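To make Definition 1 concrete, the following sketch enumerates the deterministic strategies for \(K=2\) senders with two-bit messages and a two-bit output, and counts how many distinct deterministic MACs they induce. The integer labelling of bit-strings is an implementation choice made purely for illustration.

```python
from itertools import product

X = range(4)                                   # four messages per sender (two bits, labelled 0..3)
A = range(4)                                   # four outputs for the receiver (two bits)

encodings = list(product([0, 1], repeat=4))    # maps E: X -> {0,1}; 2^4 = 16 of them per sender
decodings = list(product(A, repeat=4))         # maps D: {0,1}^2 -> A; 4^4 = 256 of them

channels = set()
for E1, E2, D in product(encodings, encodings, decodings):
    # the deterministic strategy (E1, E2, D) induces the channel a = D(E1(x), E2(y))
    table = tuple(D[2 * E1[x] + E2[y]] for x in X for y in X)
    channels.add(table)

print(len(encodings) ** 2 * len(decodings))    # 65536 deterministic strategies in total
print(len(channels))                           # distinct deterministic MACs they induce
```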
Our first construction deals with a MAC which involves two senders. The senders are given independent two-bit strings \(\mathbf{x}\in\{0,1\}^{\times 2}\) and \(\mathbf{y}\in\{0,1\}^{\times 2}\), respectively, and the receiver produces a two-bit output string \(\mathbf{a}\in\{0,1\}^{\times 2}\). A quantum strategy reproducing the MAC is as follows: the senders respectively employ the encodings
\[\mathrm{E}_{1}^{q}:\{00\mapsto\left|0\right\rangle_{S_{1}},01 \mapsto\left|1\right\rangle_{S_{1}},10\mapsto\left|+\right\rangle_{S_{1}},11 \mapsto\left|-\right\rangle_{S_{1}}\},\] \[\mathrm{E}_{2}^{q}:\{00\mapsto\left|0\right\rangle_{S_{2}},01 \mapsto\left|1\right\rangle_{S_{2}},10\mapsto\left|-\right\rangle_{S_{2}},11 \mapsto\left|+\right\rangle_{S_{2}}\},\]
where \(\{\left|0\right\rangle,\left|1\right\rangle\}\) is the computational basis of \(\mathbb{C}^{2}\), and \(\left|\pm\right\rangle:=(\left|0\right\rangle\pm\left|1\right\rangle)/\sqrt{2}\). Receiver performs a two-qubit maximally entangled basis measurement on the qubits received from the senders and decodes the outcomes as follows:
Notably, this quantum strategy draws inspiration from the renowned Pusey-Barrett-Rudolph (PBR) theorem in quantum foundations [25], which in turn suggests the name \(\mathcal{N}_{PBR}^{2\to 1}\) for the resulting MAC. By construction, \(\mathcal{N}_{PBR}^{2\to 1}\) allows a simulation strategy in \(\mathbf{Q}_{ds}^{2\to 1}\). We now proceed to establish an impossibility result regarding simulation of this MAC with qubit communication replaced by its classical counterpart.
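Before turning to the impossibility result, a small numerical sketch may help illustrate how such an encode-and-measure strategy induces a MAC. The encodings below are the ones stated above, while the particular maximally entangled basis (the standard Bell basis) and the implied outcome labelling are placeholders, since the actual decoding table of \(\mathcal{N}_{PBR}^{2\to 1}\) is not reproduced here.

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

# encodings of the two-bit messages, as stated in the text
E1 = {'00': ket0, '01': ket1, '10': plus, '11': minus}
E2 = {'00': ket0, '01': ket1, '10': minus, '11': plus}

# a two-qubit maximally entangled measurement; the Bell basis is used here only as a
# placeholder, since the actual basis/outcome labelling is not reproduced above
bell = [(np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2),
        (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),
        (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
        (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)]

def outcome_distribution(x, y):
    """p(a|x,y) induced by the product encoding and the entangled measurement."""
    psi = np.kron(E1[x], E2[y])
    return np.array([abs(b @ psi) ** 2 for b in bell])

for x in E1:
    for y in E2:
        assert np.isclose(outcome_distribution(x, y).sum(), 1.0)
print(outcome_distribution('10', '11'))   # e.g. the outcome distribution for x=10, y=11
```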
**Proposition 1**.: \(\mathcal{N}_{PBR}^{2\to 1}\) _cannot be simulated with \(1\)-bit communication from each sender to the receiver, even when the communication lines are augmented with the resource \(\cup_{i=1}^{2}\$_{RS_{i}}\)._
Proof.: (Outline) Note that a few of the conditional probabilities in \(\mathcal{N}_{PBR}^{2\to 1}\equiv\{p(\mathbf{a}|\mathbf{x},\mathbf{y})\}\) are zero. We first identify the classical deterministic strategies that satisfy these zero conditions. As it turns out, only \(48\) deterministic strategies in \(\mathbf{C}_{ds}^{2\to 1}\) are compatible with these zero requirements. Then we show that any strategy obtained through convex mixing of these \(48\) deterministic strategies and reproducing \(\mathcal{N}_{PBR}^{2\to 1}\) demands that all three parties share global randomness \(\$_{G}\) among themselves. This completes the proof of our claim, with detailed calculations provided in Appendix-A.
Proposition 1 highlights the advantage of qubit communication over the c-bit in a network communication setup, which is prohibited in point-to-point communication scenarios involving only one sender and one receiver [11]. Notably, the quantum advantage is limited in a sense. Although the c-bit channels augmented with the side resource \(\cup_{i=1}^{2}\$_{RS_{i}}\) cannot simulate the MAC \(\mathcal{N}_{PBR}^{2\to 1}\), a classical strategy is possible if the resource \(\$_{G}\) (_i.e._ global SR among the three parties) is allowed (see Appendix-A). We now introduce a class of two-sender MACs which exhibit a stronger quantum advantage - classical strategies become impossible even with the side resource \(\$_{G}\). The senders get inputs \(x\) and \(y\) from the set \(\{0,\cdots,m-1\}\), while the receiver produces a binary output \(a\in\{0,1\}\). The probabilities \(\{p(a=0|x,y)\}\) uniquely determine the MAC since the other values are fixed by normalization. Denoting \(p(a=0|x,y)\) as \(p(x,y)\), the MAC \(\mathcal{N}_{m}^{2\to 1}\equiv\{p(x,y)\}\) is defined as \(p(x,y):=\mathrm{Tr}\left[\left|\phi^{+}\right\rangle\left\langle\phi^{+}\right|\rho_{x}\otimes\rho_{y}\right]\), where \(\left|\phi^{+}\right\rangle:=(\left|00\right\rangle+\left|11\right\rangle)/\sqrt{2}\) and \(\rho_{u}=\frac{1}{2}\left(\mathbf{I}+\cos(2\pi u/m)\sigma_{Z}+\sin(2\pi u/m)\sigma_{X}\right)\) for \(u\in\{x,y\}\). By construction, all these MACs can be simulated by a quantum strategy with qubit encoding. As the encoding states lie on the vertices of an \(m\)-sided polygon in the \(xz\)-plane of the Bloch sphere, we refer to them as the polygon-MACs. Our next result establishes a stronger quantum advantage in simulating a family of these polygon-MACs.
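A minimal numerical sketch of these polygon-MAC entries is given below, assuming the \(2\pi u/m\) Bloch-sphere angle written above; the choice \(m=5\) is only an example taken from the range covered by the next proposition.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
proj = np.outer(phi_plus, phi_plus.conj())

def rho(u, m):
    """Encoding state on the vertices of an m-sided polygon in the xz-plane (angle 2*pi*u/m)."""
    t = 2 * np.pi * u / m
    return 0.5 * (I2 + np.cos(t) * sz + np.sin(t) * sx)

def p(x, y, m):
    """Polygon-MAC entry p(a=0|x,y) = Tr[ |phi+><phi+| rho_x (x) rho_y ]."""
    return np.real(np.trace(proj @ np.kron(rho(x, m), rho(y, m))))

m = 5
table = np.array([[p(x, y, m) for y in range(m)] for x in range(m)])
print(np.round(table, 3))
```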
**Proposition 2**.: _For \(m\in\{5,\cdots,9\}\), the polygon-MACs \(\mathcal{N}_{m}^{2\to 1}\) cannot be simulated using the strategies \(\mathbf{C}_{cs}^{2\to 1}\)._
Proof.: (Outline) Since the set of strategies \(\mathbf{C}_{cs}^{2\to 1}\) forms a convex and compact set, we can employ the power of the classic Minkowski-Hahn-Banach hyperplane separation theorem [26]. This theorem empowers us to construct an appropriate witness operator (or hyperplane) to determine whether a given MAC is simulable with such a strategy or not. We construct the linear witness operator \(\mathbf{w}_{m}\) and determine the upper bound that restricts the values \(\mathbf{w}_{m}[\mathbf{p}_{c}]\) for any strategy \(\mathbf{p}_{c}\in\mathbf{C}_{cs}^{2\to 1}\), and then show that there exist qubit strategies violating these bounds. The explicit forms of the witness operators along with the optimal classical values and the corresponding quantum violations are provided in Appendix-B. This concludes the proof.
Importantly, the nature of the quantum advantages established in Propositions 1 & 2 are distinct from the advantage known in the communication complexity scenario [27] (also see [28, 29, 30, 31]). In communication complexity, the objective is to compute the value of a specific function whose inputs are distributed between two remote parties. The parties aim to achieve the goal using the least possible amount of communication between them. Canonical instances of such problems where quantum systems exhibit advantages over the classical counterparts are the task of quantum random access codes [32, 33, 34]. Importantly, the quantum advantage in those problems relies on the exploitation of non-classical features of quantum systems during the encoding step as well as the decoding step. The sender prepares the quantum system in superposed states based on the inputs she receives, and the receiver, based on his inputs, selects a decoding measurement from a set of incompatible measurements. However, the scope for utilizing non-classical effects at the decoding step does not arise in the Frenkel-Weiner setup, as no input is provided to the receiver in this case. This effectively renders the information capacity of a quantum system identical to its classical counterpart [11, 12]. Interestingly, when considering a MAC - a generalization of the Frenkel-Weiner setup - a new opportunity emerges at the decoding step where quantum effects can play a non-trivial role. The receiver can perform a global measurement, such as an entangled basis measurement, on the quantum systems received from different senders. This precisely happens while establishing the quantum advantage outlined in Propositions 1 & 2.
Naturally, the question arises: is an entanglement basis measurement necessary to achieve such an advantage? Interestingly, we will now demonstrate that this is not the case in general. To illustrate this, we consider a MAC involving three senders and one receiver. Senders are provided independent two-bit strings as inputs, _i.e._, \(X=Y=Z\equiv\{0,1\}^{\times 2}\), while the receiver produces three-bit string outputs, _i.e._, \(A\equiv\{0,1\}^{\times 3}\). Here also we take the reverse engineering approach to introduce the quantum strategy in \(\mathbf{Q}_{ds}^{3\to 1}\) that leads us to the desired MAC (see Fig. 2). Since the receiver employs a decoding measurement in a product basis known as the SHIFT ensemble, we will refer to the resulting MAC as \(\mathcal{N}_{shift}^{3\to 1}\). Structure of this MAC is analyzed in Appendix-B. Notably, the SHIFT measurement exhibits the phenomenon of 'quantum nonlocality without entanglement' (QNWE) [20] (see also [35]), and implementation of this measurement necessitates a global interaction among the three qubits [36]. What follows next is a no-go result regarding simulability of the resulting MAC \(\mathcal{N}_{shift}^{3\to 1}\) using classical strategies.
**Proposition 3**.: _The MAC \(\mathcal{N}_{shift}^{3\to 1}\) cannot be simulated using any strategy from the set \(\mathbf{C}_{cs}^{3\to 1}\)._
Proof.: (Outline) The proof follows a reasoning similar to that of Proposition 2. We construct a linear witness operator that yields the payoff value \(10(5\sqrt{2}-6)\approx 10.71\) for the MAC \(\mathcal{N}_{shift}^{3\to 1}\), while the corresponding values for strategies in \(\mathbf{C}_{cs}^{3\to 1}\) are upper bounded by \(8\). This concludes the proof, with detailed calculation provided in Appendix-C.
Proposition 3 is remarkable from another perspective as well. As highlighted by Bennett and Shor [37], for a quantum channel, four basic types of classical capacities can be defined, corresponding to the utilization of either product or entangled states at the input, and product or entangled measurements at the output. Although these capacities become identical for a perfect quantum channel, in the presence of noise, leveraging entanglement in encoding and decoding can provide a higher capacity compared to using only product encoding-decoding [38; 39; 40]. However, for product decoding, such an advantage is absent even when entangled states are employed in the encoding step [41]. Nevertheless, Proposition 3 reveals that even a product basis measurement, which exhibits QNWE, can prove beneficial for simulating a MAC.
So far we have established quantum advantages by providing examples of MACs that cannot be simulated using \(1\)-bit classical communication from each sender to the receiver, combined with arbitrary shared randomness across the sender-receiver divide (as shown in Proposition 1), and in certain cases even with global shared randomness (as shown in Propositions 2 & 3). Interestingly, we will now show that the situation changes if entanglement assistance is available to the c-bit channel.
**Proposition 4**.: _The MACs \(\mathcal{N}_{PBR}^{2\to 1}\), \(\mathcal{N}_{shift}^{3\to 1}\), and \(\mathcal{N}_{m}^{2\to 1}\) can all be simulated using \(1\)-cbit communication from each sender to the receiver, provided each communication line is assisted by a two-qubit maximally entangled state._
The claim simply follows from the familiar 'remote state preparation' protocol [42; 43; 44], since in all the cases the encoding states are chosen from great circles of the Bloch sphere. The decoding step proceeds as in the qubit-based protocols. Proposition 4 establishes nontrivial usage of quantum entanglement in the network communication scenario. The results in [45; 46; 47; 48] are worth mentioning at this point, which depict similar advantages of entanglement in point-to-point communication involving one sender and one receiver.
_Discussion.-_ In the context of simulating multiple-sender-to-one-receiver channels, our study uncovers a novel advantage of qubit communication over the c-bit. Notably, the present work is distinct from other recent studies on MAC [49; 50; 51], where it has been shown that nonlocal correlations shared among the distant senders can lead to higher channel capacities. As previously highlighted, our construction in Proposition 1 draws inspiration from the renowned PBR theorem [25]. This theorem establishes quantum wave-functions to be \(\psi\)_-ontic_, implying a direct correspondence with reality [53]. It would be, therefore, intriguing to explore the possible connection between the \(\psi\)-onticity of quantum wave-functions and the quantum advantage reported here. It is noteworthy that the quantum advantages in Propositions 1 & 2 do not rely on incompatible measurements at the decoding step; rather, measurements involving entangled projectors are utilized. Such measurements are known to play a pivotal role in the phenomenon of 'quantum nonlocality without inputs' [52]. An interesting question to ponder is the potential connection between the reported quantum advantage and the concept of network nonlocality [54]. On the other hand, our construction also provides a way for semi-device-independent certification of entangled measurements [55; 56; 57].
The MAC presented in Proposition 3 underscores the intricate role of QNWE in establishing the qubit advantage over the c-bit in network communication scenarios. Numerous other product bases are documented in the literature that exhibit the QNWE phenomenon, with recent studies even introducing various variants of this phenomenon [58; 59; 60].
Figure 2: \(\mathbf{Q}_{ds}^{\text{3}\to 1}\) strategy simulating the MAC \(\mathcal{N}_{shift}^{\text{3}\to 1}\). While the encoding states are symmetrically chosen from the \(xz\)-plane of the Bloch sphere, the receiver employs the decoding measurement in SHIFT basis [20].
Exploring these constructions to establish quantum advantages in the MAC scenario would be highly intriguing. Lastly, Proposition 4 demonstrates that all the reported advantages of the qubit channel over the c-bit vanish when the latter is assisted with entanglement. It would be quite interesting to formulate a scenario where the qubit channel maintains an advantage over an entanglement-assisted c-bit channel.
SGN acknowledges support from the CSIR project 09/0575(15951)/2022-EMR-I. EPL acknowledges support from the FWO through the BeQuNet SBO project S008323N. MA and MB acknowledge funding from the National Mission in Interdisciplinary Cyber-Physical systems from the Department of Science and Technology through the I-HUB Quantum Technology Foundation (Grant no: I-HUB/PDF/2021-22/008). MB acknowledges support through the research grant of INSPIRE Faculty fellowship from the Department of Science and Technology, Government of India, and the start-up research grant from SERB, Department of Science and Technology (Grant no: SRG/2021/000267). EPL is grateful to Stefano Pironio for discussions on numerical methods.
|
2309.11612 | Brief Architectural Survey of Biopotential Recording Front-Ends since
the 1970s | Measuring the bioelectric signals is one of the key functions in wearable
healthcare devices and implantable medical devices. The use of wearable
healthcare devices has made continuous and immediate monitoring of personal
health status possible. Implantable medical devices have played an important
role throughout the fields of neuroscience, brain-machine (or brain-computer)
interface, and rehabilitation technology. Over the last five decades, the
bioelectric signals have been observed through a variety of biopotential
recording front-ends, along with advances in semiconductor technology scaling
and circuit techniques. Also, for reliable and continuous signal acquisition,
the front-end architectures have evolved while maintaining low power and low
noise performance. In this article, the architecture history of the
biopotential recording front-ends developed since the 1970s is surveyed, and
overall key circuit techniques are discussed. Depending on the bioelectric
signals being measured, appropriate front-end architecture needs to be chosen,
and the characteristics and challenges of each architecture are also covered in
this article. | Taeju Lee, Minkyu Je | 2023-09-20T19:57:32Z | http://arxiv.org/abs/2309.11612v1 | # Brief Architectural Survey of Biopotential Recording Front-Ends since the 1970s
###### Abstract
Measuring the bioelectric signals is one of the key functions in wearable healthcare devices and implantable medical devices. The use of wearable healthcare devices has made continuous and immediate monitoring of personal health status possible. Implantable medical devices have played an important role throughout the fields of neuroscience, brain-machine (or brain-computer) interface, and rehabilitation technology. Over the last five decades, the bioelectric signals have been observed through a variety of biopotential recording front-ends, along with advances in semiconductor technology scaling and circuit techniques. Also, for reliable and continuous signal acquisition, the front-end architectures have evolved while maintaining low power and low noise performance. In this article, the architecture history of the biopotential recording front-ends developed since the 1970s is surveyed, and overall key circuit techniques are discussed. Depending on the bioelectric signals being measured, appropriate front-end architecture needs to be chosen, and the characteristics and challenges of each architecture are also covered in this article.
Analog front-end, amplifier, bioelectric signal, biopotential, biomedical engineering, neurotechnology, healthcare, wearable healthcare device (WHD), implantable medical device (IMD), CMOS technology scaling.
## I Introduction
Since the 1970s, the biopotential recording front-ends have evolved based on various circuit architectures to measure bioelectric signals, leading to significant advancements in wearable healthcare devices (WHDs) and implantable medical devices (IMDs). The WHDs have been widely used for daily personal healthcare and continuous patient monitoring. The IMDs have been used in patients for a variety of purposes, such as deep brain stimulation for Parkinson's, artificial pulse generation for heart failure, stimulation of auditory nerve for hearing loss, etc. The recording front-ends have made a significant achievement in the brain-machine interface, revolutionizing the motor function restoration of patients with difficulties in their physical activities. In addition, the recording front-ends have significantly advanced the field of neuroscience by enabling _in-vivo_ and _in-vitro_ neural activity monitoring.
In this article, the recording front-ends, developed since the 1970s, are surveyed regarding their front-end architectures and key circuit techniques. Note that the front-ends that record bioelectric signals invasively and non-invasively have been surveyed, and the bioelectric signals include action potentials (APs), local field potentials (LFPs), electrocorticogram, electroencephalogram, electrocardiogram, electromyogram, and electrooculogram.
Over the last decades, the complementary metal-oxide semiconductor (CMOS) devices have been gradually scaled down, and the recording front-ends also have evolved along with device scaling, as shown in Fig. 1. The front-ends have had a significant impact across a variety of fields, from personal healthcare to cutting-edge neuroscience research, and the development of the front-end is actively underway to improve the performance such as spatial density, power consumption, noise, etc. The biopotential recording front-ends have been developed based on various circuit architectures, and each architecture shows different characteristics in its frequency response, DC offset cancellation, input-referred noise, etc. Also, the circuit technique can be combined with the circuit architecture to improve a specific
performance, e.g., the input-referred noise, input impedance, and dynamic range. However, the circuit techniques may introduce performance trade-offs. Therefore, choosing appropriate architecture and circuit techniques depending on the goal is important to minimize performance limitations and trade-offs. All front-ends surveyed in this article are listed in the paper [1].
Section II of this article provides a chronological explanation of circuit architectures that emerged as technology scaled. Section III presents the key front-end architectures used over the past decades. Section IV concludes this article by outlining future directions.
## II Architectural Progress & Technology Scaling
### A. 1970s to 1980s
Advances in microfabrication technology triggered the miniaturization of front-ends, which led to the development of implantable devices. In the early development period, the recording front-ends were designed as a continuous-time buffer (CT-Buf) using the source follower [2, 3, 4, 5]. In 1986, a front-end architecture based on continuous-time open-loop amplification (CT-OL-Amp) was designed using a 6-\(\mu\)m NMOS process [6]. In 1987, a front-end using current balancing amplification was developed in a 3-\(\mu\)m CMOS process [7]. Note that Fig. 1 shows the architectural progress as technology has scaled down since the 1970s, which is explored using the technology scaling data in [1].
### B. 1990s
Although many recording front-ends were not developed in the 1990s, the CT-OL-Amp-based front-ends were developed using the 6-\(\mu\)m and 3-\(\mu\)m nodes in 1991 and 1992, respectively, as shown in Fig. 1. Also, a current balancing-based front-end was developed using a 2.4-\(\mu\)m CMOS process [8], and a resistive-feedback-based channel was developed using a 2-\(\mu\)m CMOS process [9]. Most front-ends implemented from the 1970s to the 1990s were developed to detect neural activities such as APs and LFPs. Also, most front-ends developed in that period were designed as active neural probes that placed the front-end close to the electrode, thereby minimizing the form factor of the recording system and improving the observed signal quality.
Fig. 1: Architecture trends of biopotential recording front-ends depending on technology scaling.
### C. 2000s
Compared to the period from the 1970s to the 1990s, numerous front-ends were developed using various technology nodes ranging from 1.5 \(\mu\)m to 0.18 \(\mu\)m in the 2000s, as shown in Fig. 1. Especially, after the emergence of the capacitive-feedback-based architecture that efficiently filters out input DC offsets and exhibits excellent noise performance [10], many front-ends were developed by employing that architecture. Although resistive-feedback-based front-ends were also widely employed in the early 2000s, the capacitive-feedback architecture was overwhelmingly used in front-end design throughout that period. During this period, the front-ends based on CT-OL-Amp architectures were rarely developed compared to the continuous-time closed-loop amplification (CT-CL-Amp) architectures.
### D. 2010s
Even into the 2010s, among the closed-loop front-end architectures employing resistive feedback and capacitive feedback, the capacitive-feedback-based one was dominantly adopted in front-end designs. Note that 1) the CT-Buf-based front-end means that the front-end is designed as a source follower or a unity-gain buffer; 2) the CT-OL-Amp-based front-end means that the front-end is implemented using one of the current balancing amplifier, operational transconductance amplifier (OTA), and common-source stage; and 3) the CT-CL-Amp-based front-end means that the front-end is designed based on resistive feedback or capacitive feedback.
Fig. 2: Overall front-end architectures used over the past decades.
The architectures based on CT-OL-Amp and CT-CL-Amp require an analog-to-digital converter (ADC) following the analog front-end stage to digitize the signal. These front-end architectures could induce performance degradation in their dynamic range, spatial density, etc. In the early 2010s, a direct conversion (D-Conv) architecture, which directly generates the digitized data from the input, emerged [11, 12, 13]. The D-Conv architecture began to be widely used to compensate for the drawbacks of previous architectures employing CT-OL-Amp and CT-CL-Amp. Also, the advent of a discrete-time amplification (DT-Amp) architecture led to the improvement of the noise efficiency factor (NEF) of the front-end compared to the previous front-end using an OTA [14]. During this period, the front-ends were developed using the technology nodes ranging from 0.6 \(\mu\)m to 40 nm, as shown in Fig. 1.
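The NEF mentioned here, together with the power efficiency factor (PEF) discussed below, is the standard figure of merit for comparing recording front-ends. As a rough illustration only, the following sketch evaluates both from a made-up set of specifications; the noise, current, bandwidth, and supply values are invented for illustration and do not correspond to any cited design.

```python
import math

k_B, T, q = 1.380649e-23, 300.0, 1.602176634e-19
U_T = k_B * T / q                          # thermal voltage, ~25.9 mV at 300 K

def nef(v_noise_rms, i_total, bandwidth):
    """Noise efficiency factor from input-referred rms noise, total supply current, and bandwidth."""
    return v_noise_rms * math.sqrt(2.0 * i_total / (math.pi * U_T * 4.0 * k_B * T * bandwidth))

def pef(nef_value, v_supply):
    """Power efficiency factor: PEF = NEF^2 * VDD."""
    return nef_value ** 2 * v_supply

# illustrative, invented specification: 3 uVrms noise over a 10 kHz bandwidth,
# 2 uA total supply current, 1 V supply
n = nef(3e-6, 2e-6, 10e3)
print(round(n, 2), round(pef(n, 1.0), 2))
```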
### E. 2020s
In the 2020s, the architectures based on D-Conv and CT-CL-Amp using capacitive feedback have been more widely used for front-end design than other topologies. Also, the NEF and power efficiency factor (PEF) have been further improved in the DT-Amp architecture [15]. The front-ends have been developed using increasingly advanced technology nodes, as shown in Fig. 1. The technology nodes used have ranged from 0.35 \(\mu\)m to 22 nm. Considering that the D-Conv architecture can be designed in a digitally intensive manner compared to CT-OL-Amp and CT-CL-Amp architectures, hopefully, the front-end performance, such as power and area consumption, could be continuously improved in the future, especially when using more advanced technology nodes.
## III Front-End Architecture
Fig. 2 summarizes the front-end architectures for biopotential recording developed over the last five decades. First, the front-ends are categorized into three groups: continuous-time amplification (CT-Amp), direct conversion (D-Conv), and discrete-time amplification (DT-Amp). Second, the CT-Amp architectures are categorized into two groups: continuous-time closed-loop amplification (CT-CL-Amp) and continuous-time open-loop amplification (CT-OL-Amp). Third, the CT-CL-Amp architectures are again divided into two groups according to their feedback types: resistive feedback (Figs. 2(g)-(i)) and capacitive feedback (Figs. 2(j)-(l)). Fourth, the CT-OL-Amp architectures are divided into three groups according to their amplifier types: the current balancing amplifier (Fig. 2(c)), OTA (Figs. 2(d) and (e)), and common-source stage (Fig. 2(f)). Fifth, the D-Conv architectures are categorized according to whether their nonlinearities are compensated for by using feedback or calibration (Figs. 2(m)-(o)). Finally, the CT-Buf architectures are divided into two types using the source follower and unity-gain buffer, shown in Figs. 2(a) and (b), respectively. Fig. 3 visualizes the architectures used over the last decades chronologically, revealing how frequently each architecture has been used over time.
Fig. 3: Front-end architectures used in chronological order.
## IV Conclusion
This article presents a history of biopotential recording front-ends. The recording front-ends are chronologically surveyed from the viewpoints of technology scaling and architectural development. As CMOS technology scales down, the front-ends have also evolved, using more advanced nodes to improve spatial density and power efficiency. To record from a significant portion of the brain network, which could be the most challenging task, scalable front-ends with higher spatial density and energy efficiency must be realized without compromising other performance parameters such as noise and dynamic range. To achieve this goal, innovation needs to continue in device scaling, front-end architectures, circuit techniques, and neural probes.
|
2309.04592 | Looping the loops: a tale of elliptic dual Feynman integrals | In this talk, we review a loop-by-loop approach used to generate differential
equations for multi-scale (dual) Feynman integrals. We illustrate the method on
a well-established example: the unequal mass elliptic sunrise. | Mathieu Giroux, Andrzej Pokraka, Franziska Porkert, Yoann Sohnle | 2023-09-08T20:56:10Z | http://arxiv.org/abs/2309.04592v1 | # BoNN-TH-2023-07
###### Abstract:
In this talk, we review a loop-by-loop approach used to generate differential equations for multi-scale (dual) Feynman integrals. We illustrate the method on a well-established example: the unequal mass elliptic sunrise.
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote †: Speaker
+
Footnote † †: Speaker
+
Footnote † †: Speaker
+
Footnote †: Speaker
+
Footnote † †: Speaker
+
Footnote † †: Speaker
+
Footnote † †: Speaker
+
Footnote † †: Speaker
+
Footnote † †: Speaker
+
Footnote † †: Speaker
+
Footnote † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † †: Speaker
+
Footnote † † † †: Speaker
+
Footnote † † † †: Speaker
+
Footnote † † † †: Speaker
+
Footnote † † † † †: Speaker
+
Footnote
## 1 Introduction
It is a long-standing fact, reminiscent of the golden age of string theory, that (hyper-)elliptic curves and Calabi-Yau manifolds play a central role in various corners of mathematics and physics. Over the past few decades, a heroic effort has been made by various authors to demonstrate this in the context of particle physics (see [1, 2, 3, 4, 5, 6, 7, 8] and references therein.) In this context, Feynman diagrams are used to compute scattering amplitudes, which yields observables such as cross-sections measurable in collider experiments or waveforms detected in gravitational wave observatories.
For diagrams with loops, the corresponding _Feynman integrals_ over the off-shell loop momenta determine the probabilities of various particle interactions. Beyond the 1-loop level, Feynman integrals depending on multiple kinematic scales (like energies, angles, and masses) often involve complex underlying geometric structures. Fig. 1 shows a few phenomenologically relevant 2-loop examples that involve one or more elliptic curves. As our understanding of these integrals deepened, it became evident that many could be totally understood and even evaluated through inherent properties of these rich geometries.
An effective approach employed to accomplish this is the method of _differential equations_. However, a significant hurdle in using this approach is the path to a so-called _canonical basis_. Such a basis, denoted by \(\vec{I}\), satisfies a differential equation that we can solve order-by-order in the dimensional regularization parameter \(\varepsilon\in\mathbb{C}\)[9]. It typically takes the following form:
\[\mathrm{d}\vec{I}=\varepsilon\ \mathbf{\Omega}\cdot\vec{I}\,, \tag{1}\]
where \(\mathbf{\Omega}\) is a matrix of kinematic one-forms independent of \(\varepsilon\). For many integrals (e.g., those without internal masses), a canonical basis is systematically obtained by normalizing with _leading singularities_ (in the polylogarithmic case, one can think of leading singularities simply as maximal residues or as powers of Gram determinants in the external kinematics).
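As a purely illustrative sketch of what "solving order-by-order in \(\varepsilon\)" means in practice, the toy system below integrates a two-component equation of the form (1) with an invented dlog connection and compares it with the first term of the iterated-integral (path-ordered) expansion; the matrices, boundary point, and kinematic variable are arbitrary choices, not related to any specific Feynman integral.

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy 2x2 system dI/dx = eps * (A/x + B/(x-1)) . I with an invented dlog connection
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
eps, x0, x1 = 1e-3, 0.5, 0.7
I0 = np.array([1.0, 1.0])                       # boundary value at x0

def rhs(x, I):
    return eps * (A / x + B / (x - 1.0)) @ I

numeric = solve_ivp(rhs, (x0, x1), I0, rtol=1e-12, atol=1e-14).y[:, -1]

# first term of the path-ordered (iterated-integral) expansion in eps:
# I(x) = [1 + eps * (A log(x/x0) + B log((1-x)/(1-x0))) + O(eps^2)] . I0
first_order = (np.eye(2) + eps * (A * np.log(x1 / x0) + B * np.log((1 - x1) / (1 - x0)))) @ I0

print(numeric)
print(first_order)                              # agrees with the numerical solution up to O(eps^2)
```

Iterating the same step produces the higher orders in \(\varepsilon\) as nested (Chen) iterated integrals.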
As soon as we turn on some internal masses, the story often changes. Here, we argue that this is because the precise definition of leading singularities in some of these examples is somewhat blurry (i.e., not uniquely defined), and so are "systematic methods" used to achieve a canonical form. In that sense, one of the most pressing problems in this program is how to construct such a basis \(\vec{I}\) from first principles, i.e., without relying too heavily on _ad-hoc_ ansatz-based techniques. Interesting progress on this issue was made recently in [10].
While we do not claim to solve this problem fully, we believe that we can make significant progress using a concise set of analytical tools: unitarity and geometry. These tools are intrinsically integrated into the _dual forms_ framework of [11, 12]. For this reason, dual forms seem to be the appropriate objects to leverage the intuitive, yet subtle, notion that a multi-loop problem is simply a bunch of coupled 1-loop problems [13]. Furthermore, when working with elliptic Feynman integrals, it is to our advantage to foreground a key symmetry: the _modular symmetry_. When combined into one toolbox, the above provides streamlined instructions to break down the construction of a canonical basis into a minimal sequence of algebraic steps. Below, we will demonstrate this approach using a proof-of-concept example: the unequal mass elliptic sunrise.
Figure 1: Examples of graphs in which one (or more) elliptic curves is (are) lurking. These include self-energy kite integrals, electro-weak form-factors and Bhabha scatterings.
## 2 From Feynman integrals to the dual paradigm
To set the stage, let us revisit the definition of Feynman integrals within dimensional regularization. At the most basic level, these are the integrals we obtain by applying Feynman rules on loop diagrams. However, as is the case with any other function, there isn't a unique way to represent them as integrals. Over the years, various representations have been introduced, each tailored for a specific purpose (for an extensive pedagogical account, see [14]).
\[I=\int_{X}u\ \phi\,, \tag{2}\]
where the _twist_\(u\) typically represents a multi-valued function through its dependence on the (non-integer) spacetime dimension. Meanwhile, \(X\) denotes an integration domain that incorporates the Feynman \(i0\) prescription. Both \(u\) and \(X\) are common to all Feynman integrals in a given family, whereas the algebraic differential \(n\)-form \(\phi\) is not shared among all integrals, and its form depends on the propagator structure of each graph. Thus, the purpose behind expressing Feynman integrals as shown in (2) is to highlight the distinction between quantities such as \(u\) and \(X\) that are universal to a family of integrals, and those tied to a particular diagram such as \(\phi\).
An important property of Feynman integrals is that they can consistently be represented as a _finite_ combination of simpler integrals [15]. This finite collection of integrals is commonly referred to as the set of _master integrals_. It follows that any family of Feynman integrals forms a vector space. This vector space also possesses the additional property of being closed under differentiation with respect to the kinematic variables [16].
Now, one can invoke a standard fact from linear algebra: for any given finite-dimensional vector space, there exists a _dual vector space_, from which the original space can be studied. Our next objective is to understand how the dual space for Feynman integrals is defined and its practical use in computations.
**Dual forms.** In the most direct manner, the integrand \(\widetilde{\phi}\) dual to the Feynman integrand \(\phi\) (as given in (2)) is defined such that the _intersection pairing_:
\[\langle\widetilde{\phi}|\phi\rangle\propto\int_{\Gamma}\ (\widetilde{u}\times u )\ \widetilde{\phi}\wedge\phi\,, \tag{3}\]
makes sense.1 Here, \(\Gamma\equiv\mathbb{C}^{n}\setminus\{u=0\}\cup\{\text{propagators}=0\}\) and \(\widetilde{u}\equiv u|_{\varepsilon\rightarrow-\varepsilon}\) denotes the _dual twist_.
Footnote 1: See [12] for the exact constant of proportionality.
Let us unpack this further. For the intersection number (3) to be meaningful, it must be unique. This uniqueness requires that the product of the dual twist and the twist is an algebraic function (not necessarily one!) Furthermore, we need this unique number to be finite. This condition is met when the zeros of the dual form are paired with the poles of the Feynman integrands, which are the propagators. In other words, \(\widetilde{\phi}\) is supported only away from \(\phi\)'s _unregulated_ (_non-twisted_) poles.
In practice, we implement this condition by attaching strings of \(\delta\)-functions to dual forms, one for each propagator on the Feynman side: the form \(\widetilde{\phi}\) dual to the Feynman form \(\phi\) is proportional to a wedge product of \(\mathrm{d}\theta(x)=\delta(x)\mathrm{d}x\)'s (in the sense of distributions), each of which accounts for a propagator that can be put on-shell. The overall function multiplying this wedge product varies depending on the situation, as we will see later. For a complete treatment of dual forms, see [11, 12, 13].
## 3 Looping the loops and differential equations
Now that the basics of dual forms are established, let us describe in detail the method used below to generate differential equations for dual Feynman integrals. This method draws from the naive idea that a multi-loop problem can be broken down into a collection of simpler, yet coupled, 1-loop problems. We will see below how this can be used to construct differential equations for (dual) Feynman integrals, one loop at a time. This approach is interesting from the standpoint of multi-scale problems because it comes with the immediate reward of having to deal with fewer variables at a time.
**Setup.** To turn a multi-loop problem into a series of more manageable 1-loop problems, it is essential to get a grip on the fibre bundle structure of the multi-loop integrals we want to study. Given that a 2-loop integrand effectively illustrates this concept and that the pattern is evident for (\(L>2\))-loop integrands, we focus on 2-loop integrals.
Denoting by \(\ell_{1}\) and \(\ell_{2}\) the loop momenta, one may rewrite any 2-loop integrand as the wedge product of some (fibered) 1-loop basis:
\[\underbrace{\vec{\varphi}_{a}^{\text{(2-loop)}}(\ell_{1},\ell_{2},\{p_{ij}^{ 2}\},\{m_{j}\})}_{\text{2-loop integrand (total space)}}=\underbrace{\vec{\varphi}_{b}^{\text{(1-loop)}}(\ell_{2},\{q_{ij}^{ 2}\},\{m_{j}\})}_{\text{First loop (fibre)}}\wedge\underbrace{\vec{\varphi}_{ba}^{\text{( left-over)}}(\ell_{1},\{p_{ij}^{2}\},\{m_{j}\})}_{\text{Second loop (base)}}, \tag{4}\]
with \(p_{ij}=p_{i}+p_{j}\), where the \(p\)'s denote the external momenta in the 2-loop integrand (total space). Similarly, \(q_{ij}=q_{i}+q_{j}\), where the \(q\)'s label the external momenta in the first loop integrand (fibre), which can include some of the \(p\)'s as well as \(\ell_{1}\).
The splitting given in (4) imposes constraints on the left-over pieces (base) and guides our selection of "good" \(\vec{\varphi}_{ba}^{\text{(1-loop)}}\). In addition, once the differential equation \(\mathbf{\Omega}^{\text{(1-loop)}}\) for the 1-loop basis is known, we can commute \(\vec{\nabla}\equiv\mathrm{d}+\mathrm{d}\log\vec{u}^{\text{(1-loop)}}\wedge\) across the 1-loop basis to get a new covariant derivative acting on the left-over part:
\[\vec{\nabla}\left(\vec{\varphi}_{b}^{\text{(1-loop)}}\wedge\vec{\varphi}_{ba }^{\text{(left-over)}}\right)\simeq\vec{\varphi}_{b}^{\text{(1-loop)}}\wedge \vec{\nabla}_{bc}^{\text{(new)}}\left(\vec{\varphi}_{ca}^{\text{(left-over)} }\right). \tag{5}\]
Above,"\(\simeq\)" stands for "equal mod total derivatives." Moreover, the knowing \(\mathbf{\Omega}^{\text{(1-loop)}}\) gives us the covariant derivative \(\vec{\nabla}_{ab}^{\text{(new)}}\) for the left-over part. Indeed, after the dust settles, one finds from (5)
\[\vec{\nabla}_{ab}^{\text{(new)}}=\delta_{ab}\left(\mathrm{d}+\mathrm{d}\log\vec{u}^{\text{(left-over)}}\right)+\Omega_{ab}^{\text{(1-loop)}}\wedge\;\equiv\;\delta_{ab}\,\mathrm{d}+\widetilde{\omega}_{ab}\wedge\,. \tag{6}\]
We expect that it should be easier to obtain a "good" (but not necessarily \(\varepsilon\)-form) 2-loop differential equation from the action of \(\vec{\nabla}_{bc}^{\text{(new)}}\) on \(\vec{\varphi}_{ca}^{\text{(left-over)}}\) than from the action of \(\vec{\nabla}\) on \(\vec{\varphi}^{\text{(2-loop)}}\):
\[\text{\emph{Computing }}\vec{\nabla}\vec{\varphi}_{a}^{\text{(2-loop)}}\simeq\vec{ \varphi}_{b}^{\text{(2-loop)}}\wedge\Omega_{ba}^{\text{(2-loop)}}\text{ is harder than }\vec{\nabla}_{bc}^{\text{(new)}}\vec{\varphi}_{ca}^{\text{(left-over)}}\simeq \vec{\varphi}_{bc}^{\text{(left-over)}}\wedge\Omega_{ca}^{\text{(2-loop)}}\,. \tag{7}\]
The rationale for this expectation is that this approach yields two strong constraints on the fibered dual bases, such that there are not too many options to start the problem with. We refer to these constraints below as _loop-by-loop constraints_. The first imposes that the fibre basis is normalized such that \(\widetilde{\omega}\) in (6) factorizes linearly in \(\varepsilon\), i.e., that \(\widetilde{\omega}\propto\varepsilon\). The second requires the wedge product in (4) to be single-valued (algebraic). Note that when this constraint is applied, the fibre forms have _already_ been fixed by the first constraint. Therefore, only the base form basis is affected by this constraint.
While the expectation given in (7) is quite general, there is an important technical caveat we would like to address here. Indeed, although the (dual) IBP reduction resulting in \(\mathbf{\Omega}^{\text{(2-loop)}}\) works well using a _global_ set of coordinates in some of the examples we examined (e.g., Sec. 4), we found instances (e.g., the 5-mass kite integral family) with obstructions to this global approach [17]. One way out of this is to use multiple local coordinate systems, which allow for a reduction in each sector, instead of relying solely on a global one. The results can then be pulled back to a global set of coordinates after the reduction. More optimistically, in view of the recent progress discussed in [18], one might soon be able to bypass the cumbersome use of local coordinates and IBPs by directly employing cutting-edge intersection-number algorithms.
## 4 The 3-mass sunrise
Let us now exemplify the discussion above and consider the scalar 2-loop 3-mass elliptic sunrise. From the Feynman rules, we find (up to an overall normalization):
\[I^{\text{sunrise}}=\int\frac{\delta^{\text{D}}(\ell_{3}{-}\ell_{1}{+}\ell_{2}{-}p)}{(\ell_{1}^{2}{+}m_{1}^{2}{-}i0)(\ell_{2}^{2}{+}m_{2}^{2}{-}i0)(\ell_{3}^{2}{+}m_{3}^{2}{-}i0)}\prod_{a=1}^{3}\frac{\text{d}^{\text{D}}\ell_{a}}{i\pi^{\text{D}/2}}\qquad(\text{D}=4{-}2\varepsilon)\,. \tag{8}\]
**The elliptic sunrise.** From now on, it will be convenient to work directly in momentum space and, in particular, to consider the cylindrical-like parameterization for the internal loop momenta:
\[\ell_{i}=\ell_{i\parallel}+\ell_{i\perp}\,,\quad\ell_{i\parallel}\cdot\ell_{ i\perp}=0\,,\quad\ell_{1\parallel}=x\ p\,,\quad\ell_{2\parallel}=y\ (\ell_{1}+p)\,,\qquad\text{where}\ i=1,2\,. \tag{9}\]
In these variables, the propagators (boundaries) are given by
\[\mathsf{D}_{1}=\ell_{1\perp}^{2}+x^{2}p^{2}+m_{1}^{2}\,,\qquad \mathsf{D}_{2}=\ell_{2\perp}^{2}+y^{2}\ (\ell_{1\perp}^{2}+(1+x)^{2}p^{2})+m_{2}^{2}\,, \tag{10}\] \[\mathsf{D}_{3}=\ell_{2\perp}^{2}+(y+1)^{2}\,\ell_{1\perp}^{2}+(1+ y)^{2}\ (1+x)^{2}\,p^{2}+m_{3}^{2}\,.\]
We are now in a good position to understand why the sunrise integral is referred to as "elliptic." If we compute the maximal cut \((\mathsf{D}_{i}^{-1}{\to}2\pi i\delta(\mathsf{D}_{i})\ \ \forall\ i)\) of (8) in the critical dimension (\(\varepsilon=0\)), we obtain a 1-fold integral over the square root \(Y(x)\) of the irreducible quartic polynomial:
\[E(\mathbb{C}):Y^{2}-(x-r_{1})(x-r_{2})\,(x-r_{3})(x-r_{4})=0\,, \tag{11}\]
where the roots \(r_{i}\) are explicitly recorded in [13, Eq. (4.32)]. In mathematics, the object \(E(\mathbb{C})\) given in (11) is known as an _elliptic curve_ and has a rich and well-documented geometric structure. In particular, for a detailed discussion on its relationship with tori, refer to [13, §4.3].
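As a purely numerical toy illustration of this geometry (the roots below are invented and are not those of [13, Eq. (4.32)]), the following mpmath sketch computes the two periods of \(\mathrm{d}x/Y\) by integrating between consecutive real roots of the quartic: one period is real, the other purely imaginary, and their ratio plays the role of the modular parameter of the associated torus.

```python
# Periods of dx/Y for Y^2 = (x - r1)(x - r2)(x - r3)(x - r4), toy real roots.
from mpmath import mp, mpf, mpc, sqrt, quad

mp.dps = 30
r1, r2, r3, r4 = mpf(-2), mpf(-1), mpf(1), mpf(3)   # hypothetical ordered roots

def P(x):
    return (x - r1) * (x - r2) * (x - r3) * (x - r4)

psi1 = 2 * quad(lambda x: 1 / sqrt(P(x)), [r2, r3])       # P > 0 here: real period
psi2 = 2 * quad(lambda x: 1 / sqrt(mpc(P(x))), [r1, r2])  # P < 0 here: imaginary period

print("psi1 =", psi1)
print("psi2 =", psi2)
print("tau  =", psi2 / psi1)   # purely imaginary for this root configuration
```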
### Looping the sunrise's loops: dual bases and differential equation
The next step involves constructing an explicit 2-loop basis based on the loop-by-loop splitting presented in (4). This procedure is schematically summarized in Fig. 2. We've divided the discussion into two steps, and further details can be found in [13].
**Step I: fixing the 1-loop (fibre) basis.** Examining the middle panel of Fig. 2, we notice that the fibre basis is essentially a dual bubble basis (up to normalization). As established in [11], a canonical basis for the dual bubble, in the sense of [9], is
\[\widetilde{\varphi}_{1}^{\text{\tiny(1-loop)}}=N\ \frac{2\varepsilon\ \text{d}\theta_{2}\wedge\text{d}\ell_{2\parallel}}{\sqrt{(p +\ell_{1})^{2}}\ \ell_{2\perp}^{2}}\,\ \widetilde{\varphi}_{2}^{\text{\tiny(1-loop)}}=N\frac{2 \varepsilon\ \text{d}\theta_{3}\wedge\text{d}\ell_{2\parallel}}{\sqrt{(p+\ell_{1})^{2}}\ \ell_{2\perp}^{2}}\,\ \widetilde{ \varphi}_{3}^{\text{\tiny(1-loop)}}=N\frac{\text{d}\theta_{2}\wedge\text{d} \theta_{3}}{\sqrt{(p+\ell_{1})^{2}\ \ell_{2\perp}^{2}}\big{|}_{23}}\,, \tag{12}\]
where \(N\) is a constant. Here, we have abbreviated \(\theta(\text{D}_{i})\) as \(\theta_{i}\), with \(\text{D}_{i}\) specified in (10).
Subsequently, the \(1^{\text{st}}\) loop-by-loop constraint instructs us to promote \(N\) to a function of the kinematics. This constraint fixes \(N\) uniquely in terms of the Gram determinant of the left-over loop momentum, namely \(N=(\ell_{1\perp}^{2})^{-1/2}\). Consequently, when localized on the boundary where all propagators are on-shell, the bubble form denominator becomes proportional to \(Y\) as given in (11). This renders the fibre basis non-algebraic. As a result, through the \(2^{\text{nd}}\) loop-by-loop constraint, further restrictions are imposed on the remaining (base) basis.
**Step II: fixing the left-over (base) basis.** In addition to the \(2^{\text{nd}}\) loop-by-loop constraint, one might want the base basis to exhibit certain properties that feel natural. For instance, we may ask the base basis to be as close as possible to being uniformly transcendental (readers with exposure to polylogarithmic Feynman integrals will likely find this assumption legitimate.) Furthermore, we can narrow our attention to bases satisfying a differential equation with a specific modular transformation rule: it must be independent of the modular parameters \(a\) and \(b\) under a modular transformation (defined in [13, Eq. (4.52)].) Such a condition is crucial if one aims to rewrite/pull back the differential equation, initially written in terms of Mandelstam invariants and masses, into a modular form spanned by modular and Kronecker forms (a comprehensive review aimed at physicists can be found in [14, §13].)
Figure 2: The splitting in (4) is schematically shown for the 2-loop sunrise. **Top:** The 2-loop (total space) dual basis, where both \(\ell_{1}\) and \(\ell_{2}\) are active integration variables. **Middle:** The 1-loop (fibre) basis, where only \(\ell_{2}\) is active. **Bottom:** The left-over (base) basis, where \(\ell_{2}\) is integrated out already and \(\ell_{1}\) is active.
The simplest basis that satisfies all these assumptions is presented in [13, Eqs. (5.2-5.3)]. Notably, the elliptic sector is both surprisingly compact and natural:
\[\left\{\tilde{\varphi}_{ij}^{\text{\tiny(def.conv)}}\right\}_{j=4}^{7}=\left\{ \frac{\psi_{1}^{2}}{\pi\ \varepsilon\ W_{X}}\tilde{\nabla}_{X}^{\text{\tiny(new)}},\frac{(x-r_{1}) \psi_{1}}{\pi},\frac{Y(c)\psi_{1}}{\pi(x-c)},1\right\}\,\frac{\pi\mathrm{d} \theta_{1}\wedge\mathrm{d}x}{m_{1}^{4\varepsilon}\psi_{1}Y}\begin{pmatrix}0 \\ 1\end{pmatrix}. \tag{13}\]
This basis is considered natural because the first and last forms span the cohomology of the bare (unpunctured) elliptic curve. The other two meromorphic forms have poles at twisted poles (\(x=\infty\) and \(x=c\), respectively) in \(\mathrm{D}=4\), which accurately accounts for the two punctures present on the given elliptic curve, as illustrated in [13, Fig. 3]. In the basis above, \(X=\frac{p^{2}}{m_{2}^{2}}\), \(\psi_{1}\) denotes one period of the elliptic curve, and \(W_{X}\) stands for a Wronskian. The various factors of \(\pi\) and \(\varepsilon\) ensure the basis is as close as possible to being of uniform transcendental weight.
Interestingly, this basis possesses another property that we did not explicitly enforce: its differential equation is linear and strictly lower-triangular. In other words, it takes the form:
\[\mathbf{\Omega}^{\text{\tiny(2-loop)}}=\mathbf{\Omega}^{\text{\tiny(2-loop) }}_{(0)}+\varepsilon\ \mathbf{\Omega}^{\text{\tiny(2-loop)}}_{(1)}\,,\quad\text{with}\ \mathbf{\Omega}^{\text{\tiny(2-loop)}}_{(1)}\ \text{lower-triangular}. \tag{14}\]
Notice that \(\mathbf{\Omega}^{\text{\tiny(2-loop)}}\) is _not_ in \(\varepsilon\)-form. To achieve this form, a gauge transformation \(\mathbf{U}\) must be performed on (13) such that the new differential equation reads
\[\varepsilon\ \tilde{\mathbf{\Omega}}^{\text{\tiny(2-loop)}}_{(1)}=(\mathbf{U} \cdot\mathbf{\Omega}^{\text{\tiny(2-loop)}}+\mathrm{d}\mathbf{U})\cdot\mathbf{U}^{-1}. \tag{15}\]
As highlighted in [13] and further verified with more general examples in [17], the matrix \(\mathbf{U}\) is entirely determined by modular symmetry. This implies that the differential equation
\[\mathbf{U}\cdot\mathbf{\Omega}^{\text{\tiny(2-loop)}}_{(0)}+\mathrm{d}\mathbf{U}=0\,, \tag{16}\]
can be solved analytically for \(\mathbf{U}\) without performing any integration (!), simply by asking that (16) is modular covariant. The matrix \(\mathbf{U}\) specific to the sunrise example is recorded in [13, App. E]. The corresponding differential equation is provided in [13, Eq. (5.81)] and is demonstrated to have a simple relation to that of Feynman integrals considered in [19].
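To make the mechanism of (14)-(16) concrete, the following sympy toy sketch (a hypothetical \(2\times 2\) system, not the sunrise itself) shows that a strictly lower-triangular \(\varepsilon^{0}\) piece is removed by a unipotent gauge transformation \(\mathbf{U}\) solving the analogue of (16), after which the transformed connection is manifestly proportional to \(\varepsilon\).

```python
# Hypothetical 2x2 connection Omega = Omega0 + eps*Omega1 (entries invented).
import sympy as sp

x, eps, a, b, c = sp.symbols('x eps a b c')

B = 1 / x                                  # invented eps^0 entry
Omega0 = sp.Matrix([[0, 0], [B, 0]])       # strictly lower triangular
Omega1 = sp.Matrix([[a, 0], [c, b]]) / x   # invented dlog-type eps^1 piece
Omega = Omega0 + eps * Omega1              # dI = Omega * I dx

u = -sp.integrate(B, x)                    # solves u' + B = 0 (here u = -log x)
U = sp.Matrix([[1, 0], [u, 1]])            # unipotent gauge transformation

new_Omega = sp.simplify((U * Omega + U.diff(x)) * U.inv())
print(new_Omega)                                              # overall factor of eps
print(sp.simplify(new_Omega - eps * U * Omega1 * U.inv()))    # the zero matrix
```

In the sunrise case the analogous matrix \(\mathbf{U}\), recorded in [13, App. E], is fixed by modular covariance rather than by a single logarithmic integral, but the algebraic structure of the transformation is the same.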
## 5 Conclusion
Driven by the idea that multi-loop Feynman integrals are an iteration over simpler 1-loop problems, we developed a loop-by-loop method for the computation of multi-loop dual Feynman integrands' differential equations. Through the intersection number, dual Feynman integrands are in one-to-one correspondence with "conventional" Feynman integrands. These dual integrands, supported on generalized unitarity cuts, are inherently simpler.
In this talk, the primary benefit of working with dual integrands was the ability to localize onto generalized unitarity cuts, which later informed an optimal choice of basis. The geometry tied to a Feynman integrand is often concealed within its cuts, and dual integrands bring this to the fore. Moreover, because dual integrands aren't constrained to "look like" traditional Feynman integrands, we considered a loop-by-loop basis that is entirely motivated by the underlying elliptic geometry.
As a simple yet non-trivial example, we constructed an \(\varepsilon\)-form dual basis for the three-mass 2-loop elliptic sunrise family and the associated differential equation. In doing so, we saw that breaking up the problem into simpler 1-loop problems yields several advantages. First, we can reuse the known \(\varepsilon\)-form basis and differential equation at 1-loop to construct the 2-loop basis and differential equation. Second, at each step only a small subset of variables is active on the fibre, simplifying algebraic manipulations as well as integration-by-parts reductions.
While the starting base basis (13) was geometrically and physically well motivated, it did not have an \(\varepsilon\)-form differential equation. A gauge transformation was needed to bring the system into \(\varepsilon\)-form, and we saw that it was entirely fixed by modular covariance. We anticipate that this idea will prove useful for future cutting-edge computations of differential equations for topologies with elliptic-like geometries, whether one uses the dual form framework or not.
**Acknowledgments.** This work was co-funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) (MG), the Simons Investigator Award #376208 (AP), and the European Union (ERC Consolidator Grant LoCoMotive 101043686 (FP)).
|
2301.13814 | Platinum contacts for 9-atom-wide armchair graphene nanoribbons | Creating a good contact between electrodes and graphene nanoribbons (GNRs)
has been a longstanding challenge in searching for the next GNR-based
nanoelectronics. This quest requires the controlled fabrication of sub-20 nm
metallic gaps, a clean GNR transfer minimizing damage and organic contamination
during the device fabrication, as well as work function matching to minimize
the contact resistance. Here, we transfer 9-atom-wide armchair-edged GNRs
(9-AGNRs) grown on Au(111)/mica substrates to pre-patterned platinum
electrodes, yielding polymer-free 9-AGNR field-effect transistor devices. Our
devices have a resistance in the range of $10^6$ to $10^8$ $\Omega$ in the
low-bias regime, which is 2 to 4 orders of magnitude lower than previous
reports. Density functional theory (DFT) calculations combined with the
non-equilibrium Green's function method (NEGF) explain the observed p-type
electrical characteristics and further demonstrate that platinum gives strong
coupling and higher transmission in comparison to other materials such as
graphene. | Chunwei Hsu, Michael Rohde, Gabriela Borin Barin, Guido Gandus, Daniele Passerone, Mathieu Luisier, Pascal Ruffieux, Roman Fasel, Herre S. J. van der Zant, Maria El Abbassi | 2023-01-31T18:01:23Z | http://arxiv.org/abs/2301.13814v1 | # Platinum contacts for 9-atom-wide armchair graphene nanoribbons
###### Abstract
Creating a good contact between electrodes and graphene nanoribbons (GNRs) has been a longstanding challenge in searching for the next GNR-based nanoelectronics. This quest requires the controlled fabrication of sub-20 nm metallic gaps, a clean GNR transfer minimizing damage and organic contamination during the device fabrication, as well as work function matching to minimize the contact resistance. Here, we transfer 9-atom-wide armchair-edged GNRs (9-AGNRs) grown on Au(111)/mica substrates to pre-patterned platinum electrodes, yielding polymer-free 9-AGNR field-effect transistor devices. Our devices have a resistance in the range of \(10^{6}\) to \(10^{8}\)\(\Omega\) in the low-bias regime, which is 2 to 4 orders of magnitude lower than previous reports. Density functional theory (DFT) calculations combined with the non-equilibrium Green's function method (NEGF) explain the observed p-type electrical characteristics and further demonstrate that platinum gives strong coupling and higher transmission in comparison to other materials such as graphene.
Atomically precise graphene nanoribbons (GNRs) are a family of graphene-based quantum materials which have been predicted to host exotic physical properties and potential electronic applications [1]. Depending on their sizes and terminations, they can manifest magnetically ordered edges [2; 3; 4], tunable band-gaps [5; 6; 7] or high charge mobility [8]. Properties such as bandgap tunability, topological properties as well as edge magnetism [9; 10; 11] and others are intrinsic to GNRs and only appear when atomic precision in the synthesis is achieved. To translate these properties into devices, the ribbons must be transferred to an appropriate substrate and a good electrical contact must be created between the GNRs and the electrodes.
Two main contact approaches have been investigated so far in the literature. One is achieved by the direct deposition of electrodes on top of GNRs with lithographic tools [12; 13; 14; 15; 16]. GNR devices fabricated with this top-contact approach show highly resistive, non-Ohmic contacts, and in some cases, the current is limited by the contact resistance [12; 13; 15]. This indicates a poor contact to the GNRs, with the possible presence of a large Schottky barrier. Additionally, these top-contact GNR devices can suffer from resist contamination and heating during metal evaporation in the lithography process. This is particularly destructive for GNRs with reactive edges, such as spin-polarized edges and topologically protected edges [11; 17; 18].
Another approach for contacting GNRs is by transferring them onto pre-patterned electrodes. For GNRs grown on Au-mica substrates, a polymer-free transfer of 9-AGNRs has been optimized and used in the previous reports of Refs. [14; 19] and Ref. [20] with Pd and graphene electrodes, respectively. However, in the case of Pd nanogaps, large Schottky barriers limited the transport through these devices. Likewise, graphene electrodes did not solve the issue of contact resistance and introduced more uncertainties related to the fabrication, i.e., not well-defined gap sizes and lithography-related PMMA residues on the electrodes, the latter being a known concern for graphene devices [21].
In this letter, we study 9-AGNR junctions with pre-patterned Pt electrodes forming nanogaps ranging from 20 to 100 nm in width and 1 \(\upmu\)m in length. We transfer the 9-AGNRs after the nanogap fabrication and show GNR devices with low-bias (200 mV) Ohmic resistance in the range of \(10^{6}\) to \(10^{8}\)\(\Omega\), orders of magnitude lower than the previous reports [12; 13; 14; 15; 20]. Approximately 100% device yield and low resistance are realized as a result of a cleaner device fabrication process compared to the previous top-contact approach, where GNRs are subjected to polymer contamination and high process temperatures. With the field-effect transistor geometry, we
Figure 1: Device schematics: a, 9-AGNR field effect transistor device with Pt electrodes and SiO\({}_{2}\) as the back gate oxide. b, Atomic structure of 9-AGNR. The armchair termination is indicated by the red lines in the structure.
further demonstrate the high transmission in the Pt-GNR-Pt junctions with p-type transport properties, concluded from gate-dependent measurements. These observations are rationalized by density functional theory and non-equilibrium Green's function formalism (DFT+NEGF) calculations, establishing platinum as an excellent material for contacting 9-AGNRs.
We employ a field-effect transistor geometry to electrically characterize 9-AGNRs in a vacuum probe station. A schematic device lay-out is illustrated in Fig. 1a. With this geometry we measure the current-voltage (\(IV\)) characteristics of 9-AGNRs as well as their gate dependence (\(IV_{\text{g}}\)). The GNRs are transferred onto pre-patterned Pt gaps on a SiO\({}_{2}\)/Si substrate, where the Si wafer is used as a global back-gate electrode. The 9-AGNRs form transport channels by bridging the pre-patterned lithographically defined Pt nanogaps. The atomic structure of 9-AGNRs is also shown in Fig. 1b, where the four sides of the GNRs are armchair-terminated.
To form a clean 9-AGNR-electrode interface, we pattern Pt electrodes prior to introducing the GNRs, thus avoiding organic contamination of the junction. The device fabrication steps are shown in Fig. 2. We use a SiO\({}_{2}\)/Si substrate with a thermal oxide thickness of 285 nm. The substrates are first cleaned with acetone and isopropyl alcohol (IPA) for 5 minutes each to remove organic residues on the surface. Subsequently, the substrate is cleaned with an oxygen plasma at a power of 300 W for 3 minutes. After the cleaning, the substrate is spin-coated with PMMA 950K A2 (MicroChem) at 3000 rpm and baked at 180 \({}^{\circ}\)C for 3 min on a hot plate. This gives a resist thickness of about 80 nm.
The nanogaps with various widths (20-100 nm by design) are patterned by EBPG5000+ (Raith) with an acceleration voltage of 100 kV. To form well-defined nanogaps, a high dose of 2100 \(\upmu\)C/cm\({}^{2}\) is chosen together with a cold development technique. The nanogap structure is developed in IPA:MIBK (3:1) at a temperature of -20\({}^{\circ}\)C for 3 min. Afterwards, the electrodes are made by electron-beam evaporation of 3 nm of Ti at a rate of 0.5 Å/s and 17 nm of Pt at a rate of 1 nm/s, followed by a lift-off process in hot acetone at 50 \({}^{\circ}\)C. With this thin PMMA resist layer and the cold development method, we achieve Pt nanogaps as small as 20 nm with an aspect ratio of more than 100. This opens up the possibility of measuring end-to-end connected GNRs, as their lengths are a few tens of nanometers (see Fig. S1a for a STM image of 9-AGNRs on Au(111)). Moreover, the large aspect ratio provides a high device yield, as several 9-AGNRs can be connected in the same junction. Similar nanogaps with large aspect ratios can also be achieved with other methods but require a more elaborate technique such as a chromium oxide mask [22].
To transfer the 9-AGNRs onto the pre-patterned substrate, we follow the polymer-free transfer process described elsewhere[23, 15, 24]. In short, an Au film containing 9-AGNRs delaminates itself from its mica substrate when placed onto an aqueous HCl solution (Fig. 2e). Afterwards, a pre-patterned substrate is used to pull out the free-standing Au film from the diluted HCl solution. To remove the Au film from the 9-AGNRs, the substrate with Au film is covered with a gold etchant for 5 min., as shown in Fig. 2f, and subsequently rinsed with deionized water and cleaned with acetone and IPA. This transfer process preserves the 9-AGNR quality as no peak shift in the Raman spectra of 9-AGNRs was observed, before and after the transfer (Fig. S1b). Afterwards, the sample is mounted in a vacuum probe station at room temperature for electrical characterization.
Figure 3a shows a scanning electron microscopy (SEM) image of a typical Pt nanogap with a feature size of 20 nm. 9-AGNRs with an average length of 45 nm and maximum lengths up to 100 nm are transferred onto these nanogaps, forming Pt-9-AGNR-Pt junctions.
Figure 2: Fabrication steps: a-d, The pre-patterning of the Pt nanogap. e-g, Polymer-free transfer of 9-AGNR. h, Electrical measurements in a probe station. The blue lines represent the 9-AGNRs.
Here, we present electrical measurements of four different substrates, each containing multiple devices. Figure 3b summarizes the resistance of these junctions at 50 mV for all nanogap sizes. The junctions have a most probable low-bias resistance around \(10^{7}\)\(\Omega\), orders of magnitude lower than that in previous reports [12; 13; 14; 15; 20]. We also compare the low-bias resistance between junctions made with the polymer-free transfer and the PMMA-assisted transfer techniques on the same substrate (see Fig. S2). Consistently, the PMMA-assisted transferred junctions show a resistance that is 1 to 2 orders of magnitude higher, showing a clear influence of the use of PMMA on the transport properties, possibly due to different doping levels and contamination at the interfaces.
It is worth noting that the yield of electrically conducting junctions, i.e., \(R<10\) G\(\Omega\) after GNR transfer, is nearly 100%. All devices were verified to be insulating prior to the transfer. The large 9-AGNR/Pt contact area (1 \(\upmu\)m in the vertical direction of Fig. 3a) may be of importance in this observation. A high yield benefiting from a large junction contact area was also observed previously, even in devices with \(\upmu\)m-size gaps [25]. This implies that the Pt-9-AGNR junctions comprise a network of GNRs, where a distance-dependent resistance is expected [25]. However, with the large spread in the resistance distribution, we do not observe a significant difference in the low-bias resistance for devices with different nanogap sizes ranging from 20 nm to 100 nm (see Fig. S3).
To gain more insight into charge transport in the 9-AGNR devices, we show the \(IV\) characteristics of sample 1 in Fig. 4 (see Fig. S4 for other samples). The current shows a linear dependence on the bias voltage within the range of \(\pm 200\) mV. To probe the linearity of the \(IV\) characteristics, we have applied a bias voltage up to 1 V, shown in Fig. S5 for sample 2. This bias limit is chosen to prevent the possible creation of filamentary paths in silicon oxide, which can occur at a few Volts applied across thin oxide layers [26]. In this case, we observe small nonlinearity taking place typically around a few hundred mV. The low resistance and nearly Ohmic \(IV\)s of these pre-patterned Pt-9-AGNR-Pt junctions can be a result of a better work function matching in comparison to previously investigated electrode materials [12; 13; 14; 15; 20].
Figure 4b shows the gate-dependent current at a bias voltage of 100 mV. From the ratio of the maximum and minimum current for gate voltages ranging from -20 V to 20 V, the on-off ratio is determined: \(R_{\text{on-off}}=I_{\text{max}}/I_{\text{min}}\). We obtain a small \(R_{\text{on-off}}<10\) for sample 1. The highest on-off ratio of 25 is observed in sample 3, shown in Fig. S6d. Additional gate traces at a higher bias voltage of 1 V for sample 2 are shown in Fig. S7, where a maximum \(R_{\text{on-off}}\) of 30 is observed. A crucial observation in the gate traces is the consistent p-type transport, which is in line with previous observations in 9-AGNR-based transistors [12; 13; 14; 15; 20; 25].
The small gate dependence and on-off ratio of the current illustrate the poor gate efficiency of the 9-AGNR junctions. This poor efficiency may come from three contributions: (i) The electric field is screened by the metallic electrodes between the gate and GNR, and screened between GNR and GNR. These screening effects were demonstrated by simulations previously in ref. [16], where the electrostatic potential was completely screened in a densely packed GNR film with a GNR separation of 1.5 nm. It was also shown that a 20 nm nanogap is screened for more than 50% with an electrode thickness of 4 nm without the presence of GNRs [16]. This suggests that the densely packed 9-AGNR devices with a metal electrode height of 20 nm can be screened efficiently, leading to the observed poor gate efficiency. (ii) The low gate coupling is partially due to the low dielectric constant of the silicon oxide and the thick oxide thickness. An improvement for future experiments can be made by using a thin, high-\(\kappa\) gate oxide such as HfO\({}_{2}\). (iii) The transport mechanism can be dominated by a hopping-like mechanism with an intrinsic low gate coupling. A temperature dependent measurement is necessary to elucidate the transport mechanism in these GNR devices, which is a subject of study for the future.
To describe the low resistance and nearly linear IV characteristics of the 9-AGNR junctions, we employed
Figure 4: a, \(IV\) characteristics of 36 junctions in sample 1 within a bias voltage range of \(\pm\) 200 mV. The inset shows the normalized IV characteristics. b, Gate traces of 4 different junctions in sample 1, taken at a bias voltage of 100 mV.
Figure 3: a, Scanning electron microscopy image of Pt nanogap, with a feature size of 20 nm. b, Resistance determined at a bias voltage of 50 mV for four different samples with several devices on each of them.
the density functional theory + non-equilibrium Green's function (DFT+NEGF) method to unveil the origin of this behavior in our 9-AGNR-Pt coupling. Details about the simulations can be found in SM.2.
We consider the atomistic models in Figs. 5a and b. They are representative of a 9-AGNR contacted with platinum and graphene, respectively, with the goal of comparing both configurations. The length of the GNR is set to 11 nm and the contacts are separated by an 8-nm gap. For ribbons with atomically precise edges, transport is expected to be ballistic and the conductance should not vary with the distance between the contacts [27]. To assess the quality of the contacts we compute the hybridization strength of the GNR with the underlying electrodes and report the result in Figs. 5c-d. For platinum contacts, the hybridization is large below the Fermi level, i.e., in the valence band (VB) of the GNR, but decreases above this energy. This finding is evidenced by the density of states projected onto the GNR (PDOS), which evolves from a continuum of states in the VB to a series of discrete and narrow peaks above it, directly linkable to the molecular orbital states (MOs) of the uncoupled GNR (vertical dashed lines).
For graphene contacts on the other hand, the hybridization remains small throughout the entire energy window relevant for transport and the PDOS strongly resembles the discrete spectrum of the dissociated GNR. This indicates that, upon contact with graphene, the electronic states in the channel remain mostly bound. The hybridization strength is reflected in the electronic transmission (see Fig. 5e), which approaches the ideal constant value of 1 of a perfectly contacted GNR in the regions of large hybridization and reduces to sharp peaks typical of resonant transport otherwise. The current calculated by applying a symmetric bias across the electrodes shows a linear dependence for platinum contacts within the range of 200 mV, in agreement with experiments (see Fig. 5f and 4a). In our calculations, the linear dependence stems from the \(p\)-type character of the GNR - any bias window finds some open channels available for transport. In contrast, the current for graphene contacts shows a non-linear, step-like behaviour typical of molecular junctions in the resonant transport regime, in which the central molecule is loosely coupled to the leads.
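For readers who want to reproduce this type of post-processing, the sketch below integrates a model transmission over a symmetric bias window using the standard zero-temperature, spin-degenerate Landauer expression \(I(V)=(2e/h)\int T(E)\,\mathrm{d}E\). The two model transmission functions are invented for illustration only and are not the DFT+NEGF results of Fig. 5.

```python
# Zero-temperature Landauer current from a model transmission T(E); E in eV.
import numpy as np

e = 1.602176634e-19      # elementary charge (C)
h = 6.62607015e-34       # Planck constant (J s)
G0 = 2 * e**2 / h        # spin-degenerate conductance quantum (S)

def T_broad(E):          # strongly hybridised, Pt-like: open channel below E_F
    return 0.8 / (1.0 + np.exp(E / 0.05))

def T_resonant(E):       # weakly coupled, graphene-like: narrow resonances
    g = 0.01
    return g**2 / ((E + 0.45)**2 + g**2) + g**2 / ((E + 0.15)**2 + g**2)

def current(T, V, EF=0.0, npts=4001):
    """I(V) = (2e/h) * integral of T(E) over the window [EF - V/2, EF + V/2]."""
    E = np.linspace(EF - V / 2, EF + V / 2, npts)
    y = T(E)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))   # trapezoid rule (eV)
    return G0 * integral                                      # amperes

for V in (0.05, 0.1, 0.2, 0.5, 1.0):
    print(f"V = {V:4.2f} V   I(Pt-like) = {current(T_broad, V):.2e} A   "
          f"I(resonant) = {current(T_resonant, V):.2e} A")
```

The broad, open channel gives a current that grows roughly linearly with the bias window, whereas the resonant model only contributes appreciably once a peak enters the window, mimicking the step-like behaviour discussed above.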
Our simulation results thus indicate that, upon contact with platinum, the GNR MOs strongly hybridize with the underlying material and broaden into a continuous density of channels available for transport. This in turn yields devices with low contact resistance and nearly linear IV characteristics.
We fabricate 9-AGNR field-effect transistor devices with Pt contacts by employing a polymer-free transfer technique subsequent to the deposition of electrical contacts. The GNR devices, ranging from 20 nm to 100 nm in gap width, consistently show a low-bias resistance value, \(R\approx 10^{7}\)\(\Omega\), orders of magnitude lower than
Figure 5: a-b, Atomic models for 9-AGNR nanogap junctions with platinum and graphene contacts, respectively. c-d, Hybridization function between the GNR and the metallic contacts as a function of energy. The hybridization with platinum is orders of magnitude larger than with graphene, which is reflected in the "broad" PDOS below the Fermi level. For graphene, the PDOS is characterized by sharp peaks directly linkable to the MOs of the dissociated GNR, indicating that the molecule is only loosely coupled to the contacts. e, Electronic transmission as a function of energy. It is characterized by narrow peaks in regions of small hybridization and approaches the ideal constant value of 1 when the hybridization increases. f, Current obtained by integrating the electronic transmission in a bias window centered around the Fermi level. The current for platinum contacts shows the linear behaviour observed in experiments.
previous reports. Together with the nearly Ohmic IV characteristics, the better device performance indicates that Pt electrodes combined with a polymer-free transfer are ideal for contacting 9-AGNRs. DFT+NEGF calculations demonstrate that a Pt contact leads to a higher transmission than other materials such as graphene. This not only explains the nearly linear \(IV\) characteristics and \(p\)-type transport observed in the experiments, but also points to Pt as a better contact material for a transparent contact interface.
## Author contributions
**Chunwei Hsu**: Conceptualization (equal); Methodology (equal); Validation (equal); Visualization (lead); Writing - original draft (lead); Writing - review & editing (lead). **Michael Rohde**: Formal Analysis (lead); Validation (equal); Visualization (equal); Writing - review & editing (equal). **Gabriela Borin Barin**: Resources (lead); Writing - original draft (equal); Writing - review & editing (equal). **Guido Gandus**: Software (lead); Formal Analysis (equal); Methodology (equal); Writing - original draft (equal); Writing - review & editing (equal). **Daniele Passerone**: Supervision (equal); Writing - review & editing (equal). **Mathieu Luisier**: Supervision (equal); Writing - review & editing (equal). **Pascal Ruffieux**: Funding Acquisition (equal); Supervision (equal); Writing - review & editing (equal). **Roman Fasel**: Funding Acquisition (equal); Supervision (equal); Writing - review & editing (equal). **Herre S. J. van der Zant**: Funding Acquisition (equal); Supervision (equal); Validation (equal); Writing - review & editing (equal). **Maria El Abbassi**: Supervision (lead); Methodology (equal); Validation (lead); Writing - original draft (equal); Writing - review & editing (equal).
## Acknowledgement
This study was supported by the EU and FET open project QuIET (number 767187). C. H. and H. S. J. v.d.Z. acknowledge The Netherlands Organization for Scientific Research (Natuurkunde Vrije Programma's: 680.90.18.01). We acknowledge funding by the Swiss National Science Foundation under grant no. 200020-182015, the European Union Horizon 2020 research and innovation program under grant agreement no. 881603 (Graphene Flagship Core 3), and the Office of Naval Research BRC Program under the grant N00014-18-1-2708. We furthermore greatly appreciate the financial support from the Werner Siemens Foundation (Carbo Quant). DP, ML and GG acknowledge the NCCR MARVEL funded by the Swiss National Science Foundation (grant no. 51NF40-205602).
## Data availability statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.00092 | Irredundant bases for the symmetric group | An irredundant base of a group $G$ acting faithfully on a finite set $\Gamma$
is a sequence of points in $\Gamma$ that produces a strictly descending chain
of pointwise stabiliser subgroups in $G$, terminating at the trivial subgroup.
Suppose that $G$ is $\operatorname{S}_n$ or $\operatorname{A}_n$ acting
primitively on $\Gamma$, and that the point stabiliser is primitive in its
natural action on $n$ points. We prove that the maximum size of an irredundant
base of $G$ is $O\left(\sqrt{n}\right)$, and in most cases $O\left((\log
n)^2\right)$. We also show that these bounds are best possible. | Colva M. Roney-Dougal, Peiran Wu | 2023-08-31T19:20:40Z | http://arxiv.org/abs/2309.00092v2 | # Irredundant bases for the symmetric group
###### Abstract
An irredundant base of a group \(G\) acting faithfully on a finite set \(\Gamma\) is a sequence of points in \(\Gamma\) that produces a strictly descending chain of pointwise stabiliser subgroups in \(G\), terminating at the trivial subgroup. Suppose that \(G\) is \(\mathrm{S}_{n}\) or \(\mathrm{A}_{n}\) acting primitively on \(\Gamma\), and that the point stabiliser is primitive in its natural action on \(n\) points. We prove that the maximum size of an irredundant base of \(G\) is \(O\left(\sqrt{n}\right)\), and in most cases \(O\left((\log n)^{2}\right)\). We also show that these bounds are best possible.
**Keywords** irredundant base, symmetric group **MSC2020** 20B15; 20D06, 20E15
## 1 Introduction
Let \(G\) be a finite group that acts faithfully and transitively on a set \(\Gamma\) with point stabiliser \(H\). A sequence \((\gamma_{1},\ldots,\gamma_{l})\) of points of \(\Gamma\) is an _irredundant base_ for the action of \(G\) on \(\Gamma\) if
\[G>G_{\gamma_{1}}>G_{\gamma_{1},\gamma_{2}}>\cdots>G_{\gamma_{1},\ldots,\gamma_ {l}}=1. \tag{1}\]
Let \(\mathrm{b}(G,H)\) and \(\mathrm{I}(G,H)\) denote the minimum and the maximum sizes of an irredundant base in \(\Gamma\) for \(G\) respectively.
Recently, Gill & Liebeck showed in [7] that if \(G\) is an almost simple group of Lie type of rank \(r\) over the field \(\mathbb{F}_{p^{f}}\) of characteristic \(p\) and \(G\) is acting primitively, then
\[\mathrm{I}(G,H)\leqslant 177r^{8}+\Omega(f),\]
where \(\Omega(f)\) is the number of prime factors of \(f\), counted with multiplicity.
Suppose now that \(G\) is the symmetric group \(\mathrm{S}_{n}\) or the alternating group \(\mathrm{A}_{n}\). An upper bound for \(\mathrm{I}(G,H)\) is the maximum length of a strictly descending chain of subgroups in \(G\), known as the _length_, \(\ell(G)\), of \(G\). Define \(\varepsilon(G)\coloneqq\ell(G/\operatorname{soc}G)\). Cameron, Solomon, and Turull proved in [4] that
\[\ell(G)=\left\lfloor\frac{3n-3}{2}\right\rfloor-b_{n}+\varepsilon(G),\]
where \(b_{n}\) denotes the number of \(1\)s in the binary representation of \(n\). For \(n\geqslant 2\), this gives
\[\ell(G)\leqslant\frac{3}{2}n-3+\varepsilon(G). \tag{2}\]
This type of upper bound is best possible for such \(G\) in general, in that for the natural action of \(\mathrm{S}_{n}\) or \(\mathrm{A}_{n}\) on \(n\) points, the maximum irredundant base size is \(n-2+\varepsilon(G)\). A recent paper [8] by Gill & Loda determined the exact values of \(\mathrm{I}(G,H)\) when \(H\) is maximal and intransitive in its natural action on \(n\) points, and in each case \(\mathrm{I}(G,H)\geqslant n-3+\varepsilon(G)\).
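The value \(n-2+\varepsilon(G)\) for the natural action can be checked by brute force for very small \(n\). The sympy sketch below (illustrative only, and feasible only for tiny degrees) enumerates all orderings of the \(n\) points, extracts the irredundant base that each ordering yields, and records the minimum and maximum sizes; here \(\varepsilon(\mathrm{S}_{5})=1\) and \(\varepsilon(\mathrm{A}_{5})=0\).

```python
# Brute-force check of the natural action of S_5 and A_5 on 5 points.
from itertools import permutations
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

def irredundant_size(G, ordering):
    """Size of the irredundant base obtained by keeping a point only if it
    strictly shrinks the current pointwise stabiliser."""
    H, size = G, 0
    for point in ordering:
        S = H.stabilizer(point)
        if S.order() < H.order():
            H, size = S, size + 1
        if H.order() == 1:
            break
    return size

n = 5
for name, G, eps in (("S_5", SymmetricGroup(n), 1), ("A_5", AlternatingGroup(n), 0)):
    sizes = [irredundant_size(G, o) for o in permutations(range(n))]
    print(name, "min =", min(sizes), "max =", max(sizes), "expected max =", n - 2 + eps)
```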
In this article, we present improved upper bounds for \(\mathrm{I}(G,H)\) in the case where \(H\) is primitive. Note that whenever we refer to the "primitivity" of a subgroup of \(G\), we do so with respect to the natural action of \(G\) on \(n\) points. We say that a primitive subgroup \(H\) of \(G\) is _large_ if there are integers \(m\) and \(k\) such that \(H\) is \((\mathrm{S}_{m}\wr\mathrm{S}_{k})\cap G\) in product action or there are integers \(m\) and \(r\) such that \(H\) is \(\mathrm{S}_{m}\cap G\) acting on the \(r\)-subsets of a set of size \(m\). Logarithms are taken to the base \(2\).
**Theorem 1**.: _Suppose \(G\) is \(\mathrm{S}_{n}\) or \(\mathrm{A}_{n}\) (\(n\geqslant 7\)) and \(H\neq\mathrm{A}_{n}\) is a primitive maximal subgroup of \(G\)._
1. _Either_ \(\mathrm{I}(G,H)<(\log n)^{2}+\log n+1\)_, or_ \(H\) _is large and_ \(\mathrm{I}(G,H)<3\sqrt{n}-1\)_._
2. _There are infinitely many such pairs_ \((G,H)\) _for which_ \(\mathrm{I}(G,H)\geqslant\sqrt{n}\)_._
3. _There are infinitely many such pairs_ \((G,H)\) _for which_ \(\mathrm{I}(G,H)>\left(\log n\right)^{2}/(2(\log 3)^{2})+\log n/(2\log 3)\) _and_ \(H\) _is not large._
We also state upper bounds for \(\mathrm{I}(G,H)\) in terms of the degree \(t\) of the action of \(G\). It is easy to show that \(\mathrm{I}(G,H)\leqslant\mathrm{b}(G,H)\log t\): each subgroup in the chain (1) has index at least \(2\) in its predecessor, so \(\mathrm{I}(G,H)\leqslant\log|G|\), while a base of size \(\mathrm{b}(G,H)\) yields an injection of \(G\) into \(\Gamma^{\mathrm{b}(G,H)}\), whence \(|G|\leqslant t^{\mathrm{b}(G,H)}\). Burness, Guralnick, and Saxl showed in [3] that if \(G\) and \(H\) are as in Theorem 1, then with a finite number of exceptions, \(\mathrm{b}(G,H)=2\), from which it follows that
\[\mathrm{I}(G,H)\leqslant 2\log t.\]
Similar \(O(\log t)\) upper bounds on the maximum irredundant base size were recently shown to hold for all non-large-base subgroups [9, 10], raising the question of whether such bounds are best possible in our case. Using Theorem 1, we shall obtain better bounds in terms of \(t\).
**Corollary 2**.:
1. _There exist constants_ \(c_{1},c_{2}\in\mathbb{R}_{>0}\) _such that, if_ \(G\) _is_ \(\mathrm{S}_{n}\) _or_ \(\mathrm{A}_{n}\) _(_\(n\geqslant 7\)_) and_ \(H\neq\mathrm{A}_{n}\) _is a primitive maximal subgroup of_ \(G\) _of index_ \(t\)_, then either_ \(\mathrm{I}(G,H)<c_{1}(\log\log t)^{2}\)_, or_ \(H\) _is large and_ \(\mathrm{I}(G,H)<c_{2}\left(\log t/\log\log t\right)^{1/2}\)_._
2. _There is a constant_ \(c_{3}\in\mathbb{R}_{>0}\) _and infinitely many such pairs_ \((G,H)\) _for which_ \(\mathrm{I}(G,H)>c_{3}\left(\log t/\log\log t\right)^{1/2}\)_._
3. _There is a constant_ \(c_{4}\in\mathbb{R}_{>0}\) _and infinitely many such pairs_ \((G,H)\) _for which_ \(\mathrm{I}(G,H)>c_{4}(\log\log t)^{2}\) _and_ \(H\) _is not large._
**Remark 3**.: We may take \(c_{1}=3.5\), \(c_{2}=6.1\), \(c_{3}=1\), \(c_{4}=0.097\). If we assume \(n>100\), then \(c_{1}=1.2\) and \(c_{2}=4.4\) suffice.
A sequence \(\mathcal{B}\) of points in \(\Gamma\) is _independent_ if no proper subsequence \(\mathcal{B}^{\prime}\) satisfies \(G_{(\mathcal{B}^{\prime})}=G_{(\mathcal{B})}\). The maximum size of an independent sequence for the action of \(G\) on \(\Gamma\) is denoted \(\mathrm{H}(G,H)\). It can be shown that \(\mathrm{b}(G,H)\leqslant\mathrm{H}(G,H)\leqslant\mathrm{I}(G,H)\). Another closely related property of the action is the _relational complexity_, denoted \(\mathrm{RC}(G,H)\), a concept which originally arose in model theory. Cherlin, Martin, and Saracino defined \(\mathrm{RC}(G,H)\) in [5] under the name "arity" and showed that \(\mathrm{RC}(G,H)\leqslant\mathrm{H}(G,H)+1\).
**Corollary 4**.: _Suppose \(G\) is \(\mathrm{S}_{n}\) or \(\mathrm{A}_{n}\) (\(n\geqslant 7\)) and \(H\neq\mathrm{A}_{n}\) is a primitive maximal subgroup of \(G\). Then either \(\mathrm{RC}(G,H)<(\log n)^{2}+\log n+2\), or \(H\) is large and \(\mathrm{RC}(G,H)<3\sqrt{n}\)._
The maximal subgroups of the symmetric and alternating groups were classified in [1, 11]. In order to prove statements (i) and (ii) of Theorem 1, we examine two families of maximal subgroups in more detail and determine lower bounds on the maximum irredundant base size, given in the next two results.
**Theorem 5**.: _Let \(p\) be an odd prime number and \(d\) a positive integer such that \(p^{d}\geqslant 7\) and let \(n=p^{d}\). Suppose \(G\) is \(\mathrm{S}_{n}\) or \(\mathrm{A}_{n}\) and \(H\) is \(\mathrm{AGL}_{d}(p)\cap G\). If \(d=1\), then_
\[\mathrm{I}(G,H)=1+\Omega(p-1)+\varepsilon(G).\]
_If \(d\geqslant 2\) and \(p=3,5\), then_
\[\frac{d(d+1)}{2}+d-1+\varepsilon(G)\leqslant\mathrm{I}(G,H)<\frac{d(d+1)}{2}( 1+\log p)+\varepsilon(G).\]
_If \(d\geqslant 2\) and \(p\geqslant 7\), then_
\[\frac{d(d+1)}{2}+d\,\Omega(p-1)-1+\varepsilon(G)\leqslant\mathrm{I}(G,H)< \frac{d(d+1)}{2}(1+\log p)+\varepsilon(G).\]
**Theorem 6**.: _Let \(m\geqslant 5\) and \(k\geqslant 2\) be integers and let \(n=m^{k}\). Suppose \(G\) is \(\mathrm{S}_{n}\) or \(\mathrm{A}_{n}\) and \(H\) is \((\mathrm{S}_{m}\wr\mathrm{S}_{k})\cap G\) in product action. Then_
\[1+(m-1)(k-1)+\varepsilon(G)\leqslant\mathrm{I}(G,H)\leqslant\frac{3}{2}mk- \frac{1}{2}k-1.\]
After laying out some preliminary results in SS 2, we shall prove Theorems 5 and 6 in SS 3 and SS 4 respectively, before proving Theorem 1 and Corollary 2 in SS 5.
## 2 The maximum irredundant base size
In this section, we collect two general lemmas. Let \(G\) be a finite group acting faithfully and transitively on a set \(\Gamma\) with point stabiliser \(H\). If \((\gamma_{1},\ldots,\gamma_{l})\) is an irredundant base of \(G\), then it satisfies (1). The tail of the chain in (1) is a strictly descending chain of subgroups in \(G_{\gamma_{1}}\), which is conjugate to \(H\). Therefore,
\[\mathrm{I}(G,H)\leqslant\ell(H)+1\leqslant\Omega(|H|)+1.\]
To obtain a lower bound for \(\mathrm{I}(G,H)\), one approach is to look for a large explicit irredundant base. The following lemma says it suffices to find a long chain of subgroups in \(G\) such that every subgroup in the chain is a pointwise stabiliser of some subset in \(\Gamma\).
**Lemma 2.1**.: _Let \(l\) be the largest natural number such that there are subsets \(\Delta_{0},\Delta_{1},\ldots,\Delta_{l}\subseteq\Gamma\) satisfying_
\[G_{(\Delta_{0})}>G_{(\Delta_{1})}>\cdots>G_{(\Delta_{l})}.\]
_Then \(\mathrm{I}(G,H)=l\)._
Proof.: Since \(l\) is maximal, we may assume that \(\Delta_{0}=\emptyset\) and \(\Delta_{l}=\Gamma\) and that \(\Delta_{i-1}\subseteq\Delta_{i}\), replacing \(\Delta_{i}\) with \(\Delta_{1}\cup\cdots\cup\Delta_{i}\) if necessary. For each \(i\in\{1,\ldots,l\}\), write \(\Delta_{i}\setminus\Delta_{i-1}=\{\gamma_{i,1},\ldots,\gamma_{i,m_{i}}\}\). Then \((\gamma_{1,1},\ldots,\gamma_{1,m_{1}},\gamma_{2,1},\ldots,\gamma_{2,m_{2}}, \ldots,\gamma_{l,1},\ldots,\gamma_{l,m_{l}})\) is a base for \(G\) and every subgroup \(G_{(\Delta_{i})}\) appears in the corresponding chain of point stabilisers. Therefore, by removing all redundant points, we obtain an irredundant base of size at least \(l\), so \(\mathrm{I}(G,H)\geqslant l\).
On the other hand, given any irredundant base \((\gamma_{1},\ldots,\gamma_{m})\) of \(G\), we can take \(\Delta_{i}\coloneqq\{\gamma_{1},\ldots,\gamma_{i}\}\). Therefore, \(\mathrm{I}(G,H)=l\).
Once we have an upper or lower bound for \(\mathrm{I}(G,H)\), we can easily obtain a corresponding bound for the maximum irredundant base size of various subgroups of \(G\).
**Lemma 2.2**.: _Suppose \(M\) is a subgroup of \(\mathrm{S}_{n}\) with \(M\nleqslant\mathrm{A}_{n}\). Then_
\[\mathrm{I}(\mathrm{S}_{n},M)-1\leqslant\mathrm{I}(\mathrm{A}_{n},M\cap\mathrm{ A}_{n})\leqslant\mathrm{I}(\mathrm{S}_{n},M).\]
Proof.: This follows immediately from [9, Lemma 2.8] and [10, Lemma 2.3].
## 3 The affine case
In this section, we prove Theorem 5. The upper bounds will follow easily from examinations of group orders. Therefore, we focus most of our efforts on the construction of an irredundant base, leading to the lower bounds.
Let \(p\) be a prime number and \(d\) be an integer such that \(p^{d}\geqslant 7\) and let \(V\) be a \(d\)-dimensional vector space over the field \(\mathbb{F}_{p}\). Let \(G\) be \(\mathrm{Sym}(V)\) or \(\mathrm{Alt}(V)\). Consider the affine group \(\mathrm{AGL}(V)\), the group of all invertible affine transformations of \(V\), and let \(H\coloneqq\mathrm{AGL}(V)\cap G\).
**Theorem 3.1** ([11]).: _The subgroup \(H\) is maximal in \(G\) (with \(p^{d}\geqslant 7\)) if and only if one of the following holds:_
1. \(d\geqslant 2\) _and_ \(p\geqslant 3\)_;_
2. \(G=\operatorname{Sym}(V)\)_,_ \(d=1\) _and_ \(p\geqslant 7\)_;_
3. \(G=\operatorname{Alt}(V)\)_,_ \(d\geqslant 3\) _and_ \(p=2\)_;_
4. \(G=\operatorname{Alt}(V)\)_,_ \(d=1\)_, and_ \(p=13,19\) _or_ \(p\geqslant 29\)_._
In this section, we only consider the case where \(p\) is odd. Owing to Lemma 2.2, we shall assume \(G=\operatorname{Sym}(V)\) and \(H=\operatorname{AGL}(V)\) for now. In the light of Lemma 2.1, we introduce a subgroup \(T\) of diagonal matrices and look for groups containing \(T\) that are intersections of \(G\)-conjugates of \(H\) (SS 3.1) and subgroups of \(T\) that are such intersections (SS 3.2), before finally proving Theorem 5 (SS 3.3).
### Subspace stabilisers and the diagonal subgroup
Let \(T\) be the subgroup of all diagonal matrices in \(\operatorname{GL}(V)\) with respect to a basis \(\mathbf{b}_{1},\ldots,\mathbf{b}_{d}\). Let \(\mu\) be a primitive element of \(\mathbb{F}_{p}\). We now find a strictly descending chain of groups from \(\operatorname{Sym}(V)\) to \(T\) consisting of intersections of \(G\)-conjugates of \(H\). We treat the cases \(d=1\) and \(d\geqslant 2\) separately.
**Lemma 3.2**.: _Suppose \(d=1\) and \(G=\operatorname{Sym}(V)\). Then there exists \(x\in G\) such that \(H\cap H^{x}=T\)._
Proof.: Since \(V\) is \(1\)-dimensional, \(\operatorname{GL}(V)=T\) is generated by the scalar multiplication \(m_{\mu}\) by \(\mu\). Let \(\mathbf{u}\in V\setminus\{\mathbf{0}\}\) and let \(t_{\mathbf{u}}\) be the translation by \(\mathbf{u}\). Then \(H=\langle t_{\mathbf{u}}\rangle\rtimes\langle m_{\mu}\rangle\) is the normaliser of \(\langle t_{\mathbf{u}}\rangle\) in \(G\) and \(\langle t_{\mathbf{u}}\rangle\) is a characteristic subgroup of \(H\). Hence \(H\) is self-normalising in \(G\). Define
\[x\coloneqq(\mathbf{u}\ \ \mu^{-1}\mathbf{u})(\mu\mathbf{u}\ \ \mu^{-2}\mathbf{u}) \cdots(\mu^{\frac{p-3}{2}}\mathbf{u}\ \ \mu^{-\frac{p-1}{2}}\mathbf{u})\in G.\]
Then \(x\notin H\) and so \(x\) does not normalise \(H\). But \(x\) normalises \(\langle m_{\mu}\rangle\), as \({m_{\mu}}^{x}={m_{\mu}}^{-1}\). Therefore,
\[T=\langle m_{\mu}\rangle\leqslant H\cap H^{x}<H.\]
Since the index \(|H:T|=p\) is prime, \(H\cap H^{x}=T\).
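The statement of Lemma 3.2 can also be verified computationally for small primes. The sketch below does so for \(p=7\), identifying \(V\) with \(\{0,\ldots,6\}\) and choosing \(\mathbf{u}=1\) and the primitive element \(\mu=3\); it confirms that \(H\cap H^{x}\) is exactly \(T=\langle m_{\mu}\rangle\).

```python
# Illustrative check of Lemma 3.2 for p = 7, u = 1, mu = 3.
from sympy.combinatorics import Permutation, PermutationGroup

p, mu, u = 7, 3, 1
t_u  = Permutation([(v + u) % p for v in range(p)])      # translation by u
m_mu = Permutation([(mu * v) % p for v in range(p)])     # multiplication by mu
H = PermutationGroup([t_u, m_mu])                        # AGL_1(7), order 42

# x = (u, mu^{-1}u)(mu u, mu^{-2}u)...(mu^{(p-3)/2}u, mu^{-(p-1)/2}u), an involution
mu_inv = pow(mu, p - 2, p)
images = list(range(p))
for i in range((p - 1) // 2):
    a = (pow(mu, i, p) * u) % p
    b = (pow(mu_inv, i + 1, p) * u) % p
    images[a], images[b] = b, a
x = Permutation(images)

Hx = PermutationGroup([x * h * x for h in H.generators])  # H^x (x is an involution)
intersection = {h for h in H.elements if h in Hx}
T = {m_mu ** i for i in range(p - 1)}
print(len(intersection) == p - 1 and intersection == T)   # True
```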
The following two lemmas concern the case \(d\geqslant 2\). An affine subspace of \(V\) is a subset of the form \(\mathbf{v}+W\), where \(\mathbf{v}\in V\) and \(W\) is a vector subspace of \(V\). The (affine) dimension of \(\mathbf{v}+W\) is the linear dimension of \(W\). For an affine transformation \(h=gt_{\mathbf{u}}\) with \(g\in\operatorname{GL}(V)\) and \(t_{\mathbf{u}}\) denoting the translation by some \(\mathbf{u}\in V\), if \(\operatorname{fix}(h)\) is non-empty, then \(\operatorname{fix}(h)\) is an affine subspace of \(V\), since \(\operatorname{fix}(h)=\mathbf{v}+\ker(g-\operatorname{id}_{V})\) for any \(\mathbf{v}\in\operatorname{fix}(h)\).
**Lemma 3.3**.: _Suppose \(d\geqslant 2\), \(p\geqslant 3\), and \(G=\operatorname{Sym}(V)\). Let \(W\) be a proper, non-trivial subspace of \(V\) and let \(K<\operatorname{GL}(V)\) be the setwise stabiliser of \(W\). Then there exists \(x\in G\) such that \(H\cap H^{x}=K\)._
Proof.: Let \(\lambda\in\mathbb{F}_{p}^{\times}\setminus\{1\}\) and define \(x\in\operatorname{Sym}(V)\) by setting
\[\mathbf{v}^{x}\coloneqq\begin{cases}\lambda\mathbf{v},&\text{if }\mathbf{v}\in W, \\ \mathbf{v},&\text{otherwise.}\end{cases}\]
We first show that \(K=\operatorname{C}_{H}(x)\) and then that \(H\cap H^{x}=\operatorname{C}_{H}(x)\).
Firstly, let \(g\in K\). For all \(\mathbf{v}\in W\), we calculate that \(\mathbf{v}^{g^{x}}=(\lambda^{-1}\mathbf{v})^{gx}=(\lambda^{-1}\mathbf{v}^{g} )^{x}=\mathbf{v}^{g}\). For all \(\mathbf{v}\in V\setminus W\), we see that \(\mathbf{v}^{g^{x}}=\mathbf{v}^{gx}=\mathbf{v}^{g}\). Hence \(g^{x}=g\), and so \(K\leqslant\operatorname{C}_{H}(x)\). Now, let \(h\) be an element of \(\operatorname{C}_{H}(x)\) and write \(h=gt_{\mathbf{u}}\) with \(g\in\operatorname{GL}(V)\) and \(\mathbf{u}\in V\), so that \(h^{-1}=t_{-\mathbf{u}}g^{-1}\). Suppose for a contradiction that there exists \(\mathbf{v}\in W\setminus\{\mathbf{0}\}\) with \(\lambda\mathbf{v}^{g}+\mathbf{u}\notin W\). Then
\[\mathbf{v}=\mathbf{v}^{xhx^{-1}h^{-1}}=(\lambda\mathbf{v})^{hx^{-1}h^{-1}}=( \lambda\mathbf{v}^{g}+\mathbf{u})^{x^{-1}h^{-1}}=(\lambda\mathbf{v}^{g}+ \mathbf{u})^{h^{-1}}=\lambda\mathbf{v}.\]
Since \(\lambda\neq 1\), this is a contradiction and so for all \(\mathbf{v}\in W\),
\[\mathbf{v}=(\lambda\mathbf{v}^{g}+\mathbf{u})^{x^{-1}h^{-1}}=(\mathbf{v}^{g}+ \lambda^{-1}\mathbf{u})^{h^{-1}}=\mathbf{v}+(\lambda^{-1}-1)\mathbf{u}^{g^{-1}}.\]
Hence \(\mathbf{u}=\mathbf{0}\) and \(\mathbf{v}^{g}\in W\). Therefore, \(h=gt_{\mathbf{0}}\) stabilises \(W\), whence \(h\in K\). Thus, \(\mathrm{C}_{H}(x)=K\).
Since \(\mathrm{C}_{H}(x)\leqslant H\cap H^{x}\), it remains to show that \(H\cap H^{x}\leqslant\mathrm{C}_{H}(x)\). Suppose otherwise. Then there is some \(h\in H\cap H^{x}\) such that \(h^{\prime}\coloneqq xhx^{-1}h^{-1}\neq 1\). The set \(\mathrm{fix}(h^{\prime})\) is either empty or an affine subspace of dimension at most \(d-1\). Moreover, for any \(\mathbf{v}\in V\), if \(\mathbf{v}\notin(W\setminus\{\mathbf{0}\})\cup W^{h^{-1}}\), then \(x\) fixes both \(\mathbf{v}\) and \(\mathbf{v}^{h}\), and \(\mathbf{v}^{h^{\prime}}=\mathbf{v}^{hx^{-1}h^{-1}}=\mathbf{v}^{hh^{-1}}= \mathbf{v}\), whence \(\mathbf{v}\in\mathrm{fix}(h^{\prime})\). Therefore,
\[V=(W\setminus\{\mathbf{0}\})\cup W^{h^{-1}}\cup\mathrm{fix}(h^{\prime}).\]
Then
\[p^{d}=|V|\leqslant|W\setminus\{\mathbf{0}\}|+\left|W^{h^{-1}}\right|+\left| \mathrm{fix}(h^{\prime})\right|\leqslant(p^{d-1}-1)+p^{d-1}+p^{d-1}=3p^{d-1}-1.\]
This is a contradiction as \(p\geqslant 3\), and so \(H\cap H^{x}=\mathrm{C}_{H}(x)=K\).
We now construct a long chain of subgroups of \(G\) by intersecting subspace stabilisers.
**Lemma 3.4**.: _Suppose \(d\geqslant 2\) and \(G=\mathrm{Sym}(V)\). Let \(l_{1}\coloneqq d(d+1)/2-1\). Then there exist subspace stabilisers \(K_{1},\dots,K_{l_{1}}\) such that_
\[G>H>K_{1}>K_{1}\cap K_{2}>\dots>\bigcap_{i=1}^{l_{1}}K_{i}=T. \tag{3}\]
Proof.: Let \(\mathcal{I}\coloneqq\{(i,j)\ |\ i,j\in\{1,\dots,d\},i\leqslant j\}\setminus\{(1,d)\}\) be ordered lexicographically. Note that \(|\mathcal{I}|=l_{1}\). For each \((i,j)\in\mathcal{I}\), let \(K_{i,j}\) be the stabiliser in \(\mathrm{GL}(V)\) of \(\langle\mathbf{b}_{i},\mathbf{b}_{i+1}\dots,\mathbf{b}_{j}\rangle\) and define \(\mathcal{I}_{i,j}\coloneqq\{(k,l)\in\mathcal{I}\ |\ (k,l)\leqslant(i,j)\}.\) Since \(T\leqslant K_{i,j}\) for all \(i,j\), we see that
\[T\leqslant\bigcap_{(i,j)\in\mathcal{I}}K_{i,j}\leqslant\bigcap_{i=1}^{d}K_{i, i}=T.\]
Hence equality holds, proving the final equality in (3).
We now show that, for all \((i,j)\in\mathcal{I}\),
\[\bigcap_{(k,l)\in\mathcal{I}_{i,j}\setminus\{(i,j)\}}K_{k,l}>\bigcap_{(k,l)\in\mathcal{I}_{i,j}}K_{k,l}.\]
For \(1\leqslant j<d\), let \(g_{1,j}\) be the linear map that sends \(\mathbf{b}_{j}\) to \(\mathbf{b}_{j}+\mathbf{b}_{j+1}\) and fixes \(\mathbf{b}_{k}\) for \(k\neq j\). Then \(g_{1,j}\) stabilises \(\langle\mathbf{b}_{1}\rangle\,,\dots,\langle\mathbf{b}_{j-1}\rangle\) and any sum of these subspaces, but not \(\langle\mathbf{b}_{1},\dots,\mathbf{b}_{j}\rangle\). Hence \(g_{1,j}\in K_{1,l}\) for all \(l<j\) but \(g_{1,j}\notin K_{1,j}\). For \(2\leqslant i\leqslant j\leqslant d\), let \(g_{i,j}\) be the linear map that sends \(\mathbf{b}_{j}\) to \(\mathbf{b}_{i-1}+\mathbf{b}_{j}\) and fixes \(\mathbf{b}_{k}\) for \(k\neq j\). Then \(g_{i,j}\) stabilises \(\langle\mathbf{b}_{1}\rangle\,,\dots,\langle\mathbf{b}_{j-1}\rangle\,,\langle \mathbf{b}_{j},\mathbf{b}_{i-1}\rangle\,,\langle\mathbf{b}_{j+1}\rangle\,, \dots,\langle\mathbf{b}_{d}\rangle\) and any sum of these subspaces, but not \(\langle\mathbf{b}_{i},\dots,\mathbf{b}_{j}\rangle\). Hence \(g_{i,j}\in K_{k,l}\) for all \((k,l)<(i,j)\) but \(g_{i,j}\notin K_{i,j}\).
Therefore, the \(K_{i,j}\)'s, ordered lexicographically by the subscripts, are as required.
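As an illustration of the construction (not part of the proof), consider the smallest case \(d=2\): then \(l_{1}=2\), \(\mathcal{I}=\{(1,1),(2,2)\}\), and the chain (3) reads
\[G>H>K_{1,1}>K_{1,1}\cap K_{2,2}=T,\]
since the elements of \(\mathrm{GL}(V)\) stabilising both \(\langle\mathbf{b}_{1}\rangle\) and \(\langle\mathbf{b}_{2}\rangle\) are precisely the invertible diagonal matrices, which form \(T\).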
We have now found the initial segment of an irredundant base of \(\mathrm{Sym}(V)\). The next subsection extends this to a base.
### Subgroups of the diagonal subgroup
We now show that, with certain constraints on \(p\), every subgroup of \(T\) is an intersection of \(G\)-conjugates of \(T\), and hence, by Lemma 3.4, an intersection of \(G\)-conjugates of \(H\). We first prove a useful result about subgroups of the symmetric group generated by a \(k\)-cycle.
**Lemma 3.5**.: _Let \(s\in\mathrm{S}_{m}\) be a cycle of length \(k<m\) and let \(a\) be a divisor of \(k\). Suppose that \((k,a)\neq(4,2)\). Then there exists \(x\in\mathrm{S}_{m}\) such that_
\[\left\langle s\right\rangle\cap\left\langle s\right\rangle^{x}=\left\langle s ^{a}\right\rangle.\]
Proof.: Without loss of generality, assume \(s=(1\ 2\ \cdots\ k)\); moreover, we may assume \(a>1\), since for \(a=1\) it suffices to take \(x=1\). If \(a=k\), then take \(x\coloneqq(1\ m)\), so that \(\left\langle s\right\rangle\cap\left\langle s\right\rangle^{x}=1\), as \(m\notin\mathrm{supp}(s^{i})\) and \(m\in\mathrm{supp}((s^{i})^{x})\) for all \(1\leqslant i<k\). Hence we may assume \(a<k\), and then \(k\neq 4\) since \((k,a)\neq(4,2)\). We find that
\[s^{a}=(1\ \ a+1\ \ \cdots\ \ k-a+1)(2\ \ a+2\ \ \cdots\ \ k-a+2)\cdots(a\ \ 2a\ \ \cdots\ \ k).\]
Let
\[x\coloneqq(1\ \ 2\ \ \cdots\ \ a)(a+1\ \ a+2\ \ \cdots\ \ 2a)\cdots(k-a+1\ \ k-a+2\ \ \cdots\ \ k).\]
Then \((s^{a})^{x}=s^{a}\). Hence \(\left\langle s^{a}\right\rangle=\left\langle s^{a}\right\rangle^{x}\leqslant \left\langle s\right\rangle\cap\left\langle s\right\rangle^{x}\).
To prove that equality holds, suppose \(\left\langle s^{a}\right\rangle<\left\langle s\right\rangle\cap\left\langle s \right\rangle^{x}\). Then there exists \(b\in\{1,\ldots,a-1\}\) such that \((s^{b})^{x}=s^{c}\) for some \(c\) not divisible by \(a\). We compute
\[1^{s^{c}}=1^{x^{-1}s^{b}x}=a^{s^{b}x}=(a+b)^{x}=a+b+1=1^{s^{a+b}}.\]
Therefore,
\[2^{s^{c}}=2^{s^{a+b}}=\begin{cases}a+b+2,&\text{if $b\neq a-1$ or $k>2a$,}\\ 1,&\text{if $b=a-1$ and $k=2a$.}\end{cases} \tag{4}\]
On the other hand,
\[2^{x^{-1}s^{b}x}=1^{s^{b}x}=(b+1)^{x}=\begin{cases}b+2,&\text{if $b\neq a-1$,}\\ 1,&\text{if $b=a-1$.}\end{cases} \tag{5}\]
Comparing (4) and (5), we see that \(b=a-1\) and \(k=2a\). Hence \(a^{s^{c}}=a^{s^{a+b}}=a-1\), whereas
\[a^{x^{-1}s^{b}x}=(a-1)^{s^{b}x}=(2a-2)^{x}=2a-1\]
(\(x\) sends \(2a-2\) to \(2a-1\) as \(k\neq 4\)), a contradiction. The result follows.
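As a concrete illustration of the proof, take \(k=6\) and \(a=2\), so that \(s=(1\ 2\ 3\ 4\ 5\ 6)\), \(s^{2}=(1\ 3\ 5)(2\ 4\ 6)\) and \(x=(1\ 2)(3\ 4)(5\ 6)\). One checks that \((s^{2})^{x}=s^{2}\), while
\[s^{x}=(1\ 4\ 3\ 6\ 5\ 2),\qquad(s^{3})^{x}=(1\ 6)(2\ 3)(4\ 5),\qquad(s^{5})^{x}=(1\ 2\ 5\ 6\ 3\ 4)\]
are not powers of \(s\), so that indeed \(\left\langle s\right\rangle\cap\left\langle s\right\rangle^{x}=\left\langle s^{2}\right\rangle\).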
Recall from § 3.1 the subgroup \(T\) of \(\mathrm{GL}(V)\) and the primitive element \(\mu\) of \(\mathbb{F}_{p}\). For each \(i\in\{1,\ldots,d\}\), let \(g_{i}\in\mathrm{GL}(V)\) send \(\mathbf{b}_{i}\) to \(\mu\mathbf{b}_{i}\) and fix \(\mathbf{b}_{j}\) for \(j\neq i\). Then \(T=\left\langle g_{1},\ldots,g_{d}\right\rangle\).
**Lemma 3.6**.: _Suppose \(d\geqslant 1\), \(p\geqslant 3\), and \(G=\mathrm{Sym}(V)\). Let \(i\in\{1,\ldots,d\}\) and let \(a\) be a divisor of \((p-1)\) with \((p,a)\neq(5,2)\). Then there exists \(x\in G\) such that_
\[T\cap T^{x}=\left\langle g_{1},\ldots,g_{i-1},{g_{i}}^{a},g_{i+1},\ldots,g_{d} \right\rangle.\]
Proof.: Up to a change of basis, \(i=1\). The map \(g_{1}\in\mathrm{GL}(V)<G\) has a cycle \(s=(\mathbf{b}_{1}\ \mu\mathbf{b}_{1}\ \mu^{2}\mathbf{b}_{1}\ \cdots\ \mu^{p-2} \mathbf{b}_{1})\). Treating \(s\) as a permutation on the subspace \(\left\langle\mathbf{b}_{1}\right\rangle\), we see that, for all \(\mathbf{u}\in\left\langle\mathbf{b}_{1}\right\rangle\) and \(\mathbf{w}\in\left\langle\mathbf{b}_{2},\ldots,\mathbf{b}_{d}\right\rangle\) (if \(d=1\), then consider \(\mathbf{w}=\mathbf{0}\)),
\[(\mathbf{u}+\mathbf{w})^{g_{1}}=\mathbf{u}^{g_{1}}+\mathbf{w}=\mathbf{u}^{s}+ \mathbf{w}.\]
By Lemma 3.5, since \(s\) is a \((p-1)\)-cycle and \((p-1,a)\neq(4,2)\), there exists \(x\in\operatorname{Sym}(\langle\mathbf{b}_{1}\rangle)\) such that \(\langle s\rangle\cap\langle s\rangle^{x}=\langle s^{a}\rangle\). Define \(\tilde{x}\in G\) by setting
\[(\mathbf{u}+\mathbf{w})^{\tilde{x}}\coloneqq\mathbf{u}^{x}+\mathbf{w}\]
for all \(\mathbf{u}\in\langle\mathbf{b}_{1}\rangle\) and \(\mathbf{w}\in\langle\mathbf{b}_{2},\ldots,\mathbf{b}_{d}\rangle\). Let \(g\) be any element of \(T\) and write \(g=g_{1}^{c}g^{\prime}\) with \(c\in\{1,\ldots,p-1\}\) and \(g^{\prime}\in\langle g_{2},\ldots,g_{d}\rangle\). Then, with \(\mathbf{u},\mathbf{w}\) as above,
\[(\mathbf{u}+\mathbf{w})^{g}=\mathbf{u}^{g_{1}^{c}}+\mathbf{w}^{g^{\prime}}= \mathbf{u}^{s^{c}}+\mathbf{w}^{g^{\prime}}\]
and similarly
\[(\mathbf{u}+\mathbf{w})^{g^{\tilde{x}}}=\mathbf{u}^{(s^{c})^{x}}+\mathbf{w}^ {g^{\prime}}.\]
Hence \(g^{\tilde{x}}\in T\) if and only if \((s^{c})^{x}\in\langle s\rangle\), which holds if and only if \(a\mid c\). Therefore, \(T\cap T^{\tilde{x}}=\langle g_{1}^{a},g_{2},\ldots,g_{d}\rangle\,,\) as required.
**Lemma 3.7**.: _Suppose \(d\geqslant 1\), \(p\geqslant 3\), and \(G=\operatorname{Sym}(V)\). Let \(l_{2}\coloneqq d\) if \(p=3,5\), and \(l_{2}\coloneqq d\,\Omega(p-1)\) otherwise. Then there are subsets \(Y_{1},\ldots,Y_{l_{2}}\subseteq G\) such that_
\[T>\bigcap_{x\in Y_{1}}T^{x}>\bigcap_{x\in Y_{2}}T^{x}>\cdots>\bigcap_{x\in Y _{l_{2}}}T^{x}=1.\]
Proof.: First, suppose \(p=3\) or \(p=5\). For all \(i\in\{1,\ldots,d\}\), by Lemma 3.6, there exists \(y_{i}\in G\) such that
\[T\cap T^{y_{i}}=\langle g_{1},\ldots,g_{i-1},g_{i+1},\ldots,g_{d}\rangle\,;\]
setting \(Y_{i}\coloneqq\{y_{1},\ldots,y_{i}\}\) gives
\[\bigcap_{x\in Y_{i}}T^{x}=\langle g_{i+1},\ldots,g_{d}\rangle\,.\]
Therefore, \(Y_{1},\ldots,Y_{d}\) are as required.
Now, suppose \(p\geqslant 7\). Let \(a_{1},\ldots,a_{\Omega(p-1)}\) be divisors of \(p-1\) such that \(a_{i}\) properly divides \(a_{i+1}\) for all \(i\) and \(a_{\Omega(p-1)}=p-1\) (so that \(a_{1}\) is prime and each ratio \(a_{i+1}/a_{i}\) is prime). Let \(\mathcal{I}\coloneqq\{1,\ldots,d\}\times\{1,\ldots,\Omega(p-1)\}\) be ordered lexicographically. For each pair \((i,j)\in\mathcal{I}\), by Lemma 3.6, there exists \(y_{i,j}\in G\) such that
\[T\cap T^{y_{i,j}}=\langle g_{1},\ldots,g_{i-1},g_{i}{}^{a_{j}},g_{i+1},\ldots, g_{d}\rangle\,;\]
setting \(Y_{i,j}\coloneqq\{y_{i^{\prime},j^{\prime}}\mid(i^{\prime},j^{\prime})\in \mathcal{I},(i^{\prime},j^{\prime})<(i,j)\}\) gives
\[\bigcap_{x\in Y_{i,j}}T^{x}\,=\langle g_{i}{}^{a_{j}},g_{i+1},\ldots,g_{d} \rangle\,.\]
Therefore, the \(Y_{i,j}\)'s, ordered lexicographically by the subscripts, are as required.
This completes our preparations for the proof of Theorem 5.
### Proof of Theorem 5
Recall the assumption that \(G\) is \(\mathrm{S}_{p^{d}}\) or \(\mathrm{A}_{p^{d}}\) (\(p\) is an odd prime and \(p^{d}\geqslant 7\)), which we identify here with \(\operatorname{Sym}(V)\) or \(\operatorname{Alt}(V)\), and \(H=\operatorname{AGL}_{d}(p)\cap G\), which we identify with \(\operatorname{AGL}(V)\cap G\).
Proof of Theorem 5.: First, suppose \(d\geqslant 2\), \(p\geqslant 3\), and \(G=\operatorname{Sym}(V)\). Let \(K_{1},\ldots,K_{l_{1}}\) be as in Lemma 3.4. For each \(i\in\{1,\ldots,l_{1}\}\), by Lemma 3.3, there exists \(x_{i}\in G\) such that \(H\cap H^{x_{i}}=K_{i}\). Define \(X_{i}\coloneqq\{1\}\cup\{x_{j}\mid 1\leqslant j<i\}\subseteq G\) for \(i\in\{1,\ldots,l_{1}+1\}\). Then by Lemma 3.4,
\[G>H=\bigcap_{x\in X_{1}}H^{x}>\bigcap_{x\in X_{2}}H^{x}>\cdots>\bigcap_{x\in X _{l_{1}+1}}H^{x}=T. \tag{6}\]
Let \(Y_{1},\ldots,Y_{l_{2}}\subseteq G\) be as in Lemma 3.7. For each \(i\in\{1,\ldots,l_{2}\}\), let \(Z_{i}\coloneqq\{xy\mid x\in X_{l_{1}+1},y\in Y_{i}\}\), so that
\[\bigcap_{z\in Z_{i}}H^{z}=\bigcap_{y\in Y_{i}}\left(\bigcap_{x\in X _{l_{1}+1}}H^{x}\right)^{y}=\bigcap_{y\in Y_{i}}T^{y}.\]
Then Lemma 3.7 gives
\[T>\bigcap_{z\in Z_{1}}H^{z}>\bigcap_{z\in Z_{2}}H^{z}>\cdots> \bigcap_{z\in Z_{l_{2}}}H^{z}=1. \tag{7}\]
Concatenating the chains (6) and (7), we obtain a chain of length \(l_{1}+l_{2}+1\).
Now, suppose \(d\geqslant 2\), \(p\geqslant 3\), and \(G\) is \(\operatorname{Sym}(V)\) or \(\operatorname{Alt}(V)\). By Lemma 2.1 and Lemma 2.2, since \(\operatorname{AGL}(V)\nleq\operatorname{Alt}(V)\), the lower bounds in the theorem hold. For the upper bound on \(\operatorname{I}(G,H)\), simply compute
\[\operatorname{I}(G,H) \leqslant 1+\Omega(|H|)\leqslant\Omega(p^{d}(p^{d}-1)(p^{d}-p) \cdots(p^{d}-p^{d-1}))+\varepsilon(G)\] \[<\frac{d(d+1)}{2}+\log((p^{d}-1)(p^{d-1}-1)\cdots(p-1))+ \varepsilon(G)\] \[<\frac{d(d+1)}{2}(1+\log p)+\varepsilon(G).\]
Finally, suppose \(d=1\) and \(p\geqslant 7\). Using Lemma 3.7, we obtain the chain (7) again. Concatenating the chain \(G>H>T\) with (7) and applying Lemma 2.1 and Lemma 2.2, we see that \(\operatorname{I}(G,H)\geqslant 1+\Omega(p-1)+\varepsilon(G)\). In fact, equality holds, as \(\operatorname{I}(G,H)\leqslant 1+\Omega(|H|)=1+\Omega(p-1)+\varepsilon(G)\).
## 4 The product action case
In this section, we prove Theorem 6. Once again, most work goes into the explicit construction of an irredundant base in order to prove the lower bounds, while the upper bounds will be obtained easily from the length of \(\operatorname{S}_{n}\).
Throughout this section, let \(m\geqslant 5\) and \(k\geqslant 2\) be integers, and let \(G\) be \(\operatorname{S}_{m^{k}}\) or \(\operatorname{A}_{m^{k}}\). Let \(M\coloneqq\operatorname{S}_{m}\wr\operatorname{S}_{k}\) act in product action on \(\Delta\coloneqq\{(a_{1},\ldots,a_{k})\mid a_{1},\ldots,a_{k}\in\{1,\ldots,m\}\}\) and identify \(M\) with a subgroup of \(\operatorname{S}_{m^{k}}\).
**Theorem 4.1** ([11]).: _The group \(M\cap G\) is a maximal subgroup of \(G\) if and only if one of the following holds:_
1. \(m\equiv 1\pmod{2}\)_;_
2. \(G=\operatorname{S}_{m^{k}}\)_,_ \(m\equiv 2\pmod{4}\) _and_ \(k=2\)_;_
3. \(G=\operatorname{A}_{m^{k}}\)_,_ \(m\equiv 0\pmod{4}\) _and_ \(k=2\)_;_
4. \(G=\operatorname{A}_{m^{k}}\)_,_ \(m\equiv 0\pmod{2}\) _and_ \(k\geqslant 3\)_._
The strategy for proving the lower bound in Theorem 6 is once again to find suitable two-point stabilisers from which a long chain of subgroups can be built.
For each pair of points \(\alpha,\beta\in\Delta\), let \(d(\alpha,\beta)\) denote the Hamming distance between \(\alpha\) and \(\beta\), namely the number of coordinates that differ.
**Lemma 4.2**.: _Let \(x\in M\). Then for all \(\alpha,\beta\in\Delta\),_
\[d(\alpha^{x},\beta^{x})=d(\alpha,\beta).\]
Proof.: Write \(x\) as \((v_{1},\ldots,v_{k})w\) with \(v_{1},\ldots,v_{k}\in\mathrm{S}_{m}\) and \(w\in\mathrm{S}_{k}\). Let \(\alpha=(a_{1},\ldots,a_{k})\) and \(\beta=(b_{1},\ldots,b_{k})\). Write \(\alpha^{x}=(a^{\prime}_{1},\ldots,a^{\prime}_{k})\) and \(\beta^{x}=(b^{\prime}_{1},\ldots,b^{\prime}_{k})\). Then for each \(i\in\{1,\ldots,k\}\),
\[a_{i}=b_{i}\Longleftrightarrow{a_{i}}^{v_{i}}={b_{i}}^{v_{i}}\Longleftrightarrow a^{\prime}_{i^{w}}=b^{\prime}_{i^{w}}.\]
Since \(w\) is a permutation of \(\{1,\ldots,k\}\), the result holds.
Define \(u\in\mathrm{S}_{m}\) to be \((1\ 2\ \cdots\ m)\) if \(m\) is odd, and \((1\ 2\ \cdots\ m-1)\) if \(m\) is even, so that \(u\) is an even permutation. Let \(U\coloneqq\langle u\rangle\leqslant\mathrm{S}_{m}\) and note that \(\mathrm{C}_{\mathrm{S}_{m}}(u)=U\). The group \(U\) will play a central role in the next lemma.
**Lemma 4.3**.: _Let \(i\in\{2,\ldots,k\}\) and \(r\in\{1,\ldots,m\}\). Let \(T_{r}\) be the stabiliser of \(r\) in \(\mathrm{S}_{m}\) and let \(W_{i}\) be the pointwise stabiliser of \(1\) and \(i\) in \(\mathrm{S}_{k}\). Then there exists \(x_{i,r}\in\mathrm{A}_{m^{k}}\) such that_
\[M\cap M^{x_{i,r}}=\left(U\times\left(\mathrm{S}_{m}\right)^{i-2}\times T_{r} \times\left(\mathrm{S}_{m}\right)^{k-i}\right)\rtimes W_{i}.\]
Proof.: Without loss of generality, assume \(i=2\). Define \(x=x_{2,r}\in\mathrm{Sym}(\Delta)\) by
\[(a_{1},a_{2},\ldots,a_{k})^{x}=\begin{cases}({a_{1}}^{u},a_{2},\ldots,a_{k})& \text{if $a_{2}=r$},\\ (a_{1},a_{2},\ldots,a_{k})&\text{otherwise}.\end{cases}\]
The permutation \(x\) is a product of \(m^{k-2}\) disjoint \(|u|\)-cycles and is therefore even.
Let \(K\coloneqq\left(U\times T_{r}\times\left(\mathrm{S}_{m}\right)^{k-2}\right) \rtimes W_{2}\). We show first that \(K\leqslant M\cap M^{x}\). Let \(h=(v_{1},\ldots,v_{k})w^{-1}\) be an element of \(K\). Then \(v_{1}\in U\), \(v_{2}\) fixes \(r\), and \(w\) fixes \(1\) and \(2\). Therefore, for all \(\alpha=(a_{1},a_{2},\ldots,a_{k})\in\Delta\), if \(a_{2}=r\), then
\[\alpha^{hx} =({a_{1}}^{v_{1}},a_{2},{a_{3}}^{v_{3^{w}}},\ldots,{a_{k}}^{v_{k^{w}}})^{x}=({a_{1}}^{v_{1}u},a_{2},{a_{3}}^{v_{3^{w}}},\ldots,{a_{k}}^{v_{k^{w}}})\] \[=({a_{1}}^{uv_{1}},a_{2},{a_{3}}^{v_{3^{w}}},\ldots,{a_{k}}^{v_{k^{w}}})=({a_{1}}^{u},a_{2},a_{3},\ldots,a_{k})^{h}=\alpha^{xh};\]
and if \(a_{2}\neq r\), then
\[\alpha^{hx} =({a_{1}}^{v_{1}},{a_{2}}^{v_{2}},{a_{3}}^{v_{3^{w}}},\ldots,{a_{k}}^{v_{k^{w}}})^{x}=({a_{1}}^{v_{1}},{a_{2}}^{v_{2}},{a_{3}}^{v_{3^{w}}},\ldots,{a_{k}}^{v_{k^{w}}})\] \[=({a_{1}},{a_{2}},{a_{3}},\ldots,{a_{k}})^{h}=\alpha^{xh}.\]
Therefore, \(x\) and \(h\) commute. Since \(h\) is arbitrary, \(K=K\cap K^{x}\leqslant M\cap M^{x}\).
Let \(B\) be the base group \(\left(\mathrm{S}_{m}\right)^{k}\) of \(M\). Since \(K\leqslant M\cap M^{x}\), we find that \(B\cap K\leqslant B\cap M^{x}\). We now show that \(B\cap M^{x}\leqslant B\cap K\), so let \(h_{1}=(v_{1},\ldots,v_{k})\in B\cap M^{x}\). Then \({h_{1}}^{x^{-1}}\in M\). We show that \(v_{1}\in U\) and \(v_{2}\) fixes \(r\), so that \(h_{1}\in K\). By letting \(g_{1}\coloneqq(1,1,v_{3},\ldots,v_{k})\in K\) and replacing \(h_{1}\) with \(g_{1}^{-1}h_{1}\), we may assume \(v_{3}=\cdots=v_{k}=1\). Let \(h_{2}\coloneqq xh_{1}x^{-1}h_{1}^{-1}={h_{1}}^{x^{-1}}h_{1}^{-1}\in M\), and let \(\alpha\coloneqq(a,b,c,\ldots,c)\) and \(\beta\coloneqq(a,r,c,\ldots,c)\) be elements of \(\Delta\) with \(a\neq m\) and \(b\notin\{r,r^{v_{2}^{-1}}\}\). Then \(\alpha\) and \(\alpha^{h_{1}}\) are both fixed by \(x\), and so \(\alpha^{h_{2}}=\alpha\). On the other hand,
\[\beta^{h_{2}}=\begin{cases}({a^{uv_{1}u^{-1}v_{1}^{-1}}},r,c,\ldots,c),&\text{if $r^{v_{2}}=r$},\\ ({a^{u}},r,c,\ldots,c),&\text{otherwise}.\end{cases}\]
Since \(d(\alpha^{h_{2}},\beta^{h_{2}})=d(\alpha,\beta)=1\) by Lemma 4.2 and \(a^{u}\neq a\), it must be the case that \(r^{v_{2}}=r\) and \(a^{uv_{1}u^{-1}v_{1}^{-1}}=a\). Therefore, \(v_{2}\in T_{r}\) and, as \(a\) is arbitrary in \(\{1,\ldots,m-1\}\), we deduce that \(v_{1}\in\mathrm{C}_{\mathrm{S}_{m}}(u)=U\) and hence \(h_{1}\in K\). Thus, \(B\cap M^{x}\leqslant B\cap K\) and so \(B\cap M^{x}=B\cap K\).
To show that \(M\cap M^{x}\leqslant K\), let \(h_{3}\in M\cap M^{x}\). Now, \(B\unlhd M\) and so \(B\cap K=B\cap M^{x}\unlhd M\cap M^{x}\). Therefore,
\[h_{3}\in\mathrm{N}_{M}(B\cap K)=\left(\mathrm{N}_{\mathrm{S}_{m}}(U)\times T_{r} \times\left(\mathrm{S}_{m}\right)^{k-2}\right)\rtimes W_{2}.\]
The equality uses the fact that \(\mathrm{N}_{\mathrm{S}_{m}}(U)\neq T_{r}\) (as \(m\geqslant 5\)). Through left multiplication by an element of \(K\), we may assume \(h_{3}\in\mathrm{N}_{\mathrm{S}_{m}}(U)\times\left(\mathrm{1}_{\mathrm{S}_{m}} \right)^{k-1}\). Then \(h_{3}\in B\cap M^{x}\leqslant K\). Since \(h_{3}\) is arbitrary, \(M\cap M^{x}\leqslant K\). Therefore, \(K=M\cap M^{x}\), as required.
We are now ready to prove the main result for the product action case. Recall the assumption that \(G\) is \(\mathrm{S}_{m^{k}}\) or \(\mathrm{A}_{m^{k}}\) and \(H=M\cap G\).
Proof of Theorem 6.: Firstly, suppose that \(H=M\). Let \(\mathcal{I}\coloneqq\{2,\ldots,k\}\times\{1,\ldots,m-1\}\), ordered lexicographically. For each \((i,r)\in\mathcal{I}\), let \(x_{i,r}\in\mathrm{A}_{m^{k}}\leqslant G\) be as in Lemma 4.3, and define
\[X_{i,r}\coloneqq\{1\}\cup\{x_{i^{\prime},r^{\prime}}\mid(i^{\prime},r^{\prime })\in\mathcal{I},(i^{\prime},r^{\prime})\leqslant(i,r)\}\subseteq G.\]
Then for all \((i,r)\in\mathcal{I}\),
\[B\cap\bigcap_{x\in X_{i,r}}M^{x}=U\times(1_{\mathrm{S}_{m}})^{i-2}\times( \mathrm{S}_{m})_{1,\ldots,r}\times(\mathrm{S}_{m})^{k-i}.\]
Hence, for all \((i,r),(j,s)\in\mathcal{I}\) with \((i,r)<(j,s)\), \(\bigcap_{x\in X_{i,r}}M^{x}>\bigcap_{x\in X_{j,s}}M^{x}.\) This results in the following chain of stabiliser subgroups, of length \((m-1)(k-1)+2\):
\[G>M>\bigcap_{x\in X_{2,1}}M^{x}>\cdots>\bigcap_{x\in X_{2,m-1}}M^{x}>\bigcap _{x\in X_{3,1}}M^{x}>\cdots>\bigcap_{x\in X_{k,m-1}}M^{x}>1.\]
Therefore, by Lemma 2.1, \(\mathrm{I}(G,H)=\mathrm{I}(G,M)\geqslant(m-1)(k-1)+2\).
Now, if \(H\neq M\), then \(G=\mathrm{A}_{m^{k}}\), and \(\mathrm{I}(G,H)\geqslant\mathrm{I}(\mathrm{S}_{m^{k}},M)-1\geqslant(m-1)(k-1)+1\) by Lemma 2.2.
Finally, for the upper bound on \(\mathrm{I}(G,H)\), we use (2) and [4, Lemma 2.1] to compute
\[\mathrm{I}(G,H) \leqslant 1+\ell(H)\leqslant 1+\ell(M)\leqslant 1+k\,\ell( \mathrm{S}_{m})+\ell(\mathrm{S}_{k})\] \[\leqslant 1+k\left(\frac{3}{2}m-2\right)+\left(\frac{3}{2}k-2 \right)\leqslant\frac{3}{2}mk-\frac{1}{2}k-1.\qed\]
## 5 Proof of Theorem 1
In this final section, we zoom out for the general case and prove Theorem 1 by considering the order of \(H\) and assembling results from previous sections.
Recall that \(G\) is \(\mathrm{S}_{n}\) or \(\mathrm{A}_{n}\) (\(n\geqslant 7\)) and \(H\neq\mathrm{A}_{n}\) is a primitive maximal subgroup of \(G\). Maróti proved in [12] several useful upper bounds on the order of a primitive subgroup of the symmetric group.
**Lemma 5.1**.:
* (i) \(|H|<50n^{\sqrt{n}}\)_._
* (ii) _At least one of the following holds:_
	* (a) \(H=S_{m}\cap G\) _acting on_ \(r\)_-subsets of_ \(\{1,\ldots,m\}\) _with_ \(n=\binom{m}{r}\) _for some integers_ \(m,r\) _with_ \(m>2r\geqslant 4\)_;_
	* (b) \(H=(\mathrm{S}_{m}\wr\mathrm{S}_{k})\cap G\) _with_ \(n=m^{k}\) _for some_ \(m\geqslant 5\) _and_ \(k\geqslant 2\)_;_
	* (c) \(|H|<n^{1+\lfloor\log n\rfloor}\)_;_
	* (d) \(H\) _is one of the Mathieu groups_ \(M_{11},M_{12},M_{23},M_{24}\) _acting_ \(4\)_-transitively._
Proof.: (i) follows immediately from [12, Corollary 1.1]. (ii) follows from [12, Theorem 1.1] and the description of the maximal subgroups of \(\mathrm{S}_{n}\) and \(\mathrm{A}_{n}\) in [11].
Equipped with these results as well as Theorems 5 and 6, we are ready to prove Theorem 1.
Proof of Theorem 1.: If \(H\) is as in case (a) of Lemma 5.1(ii), then \(n=\binom{m}{r}\geqslant\binom{m}{2}=\frac{m(m-1)}{2}\). Hence \(m<2\sqrt{n}\) and, by (2),
\[\mathrm{I}(G,H)\leqslant 1+\ell(H)\leqslant 1+\ell(S_{m})<3\sqrt{n}-1.\]
If \(H\) is as in case (b) of Lemma 5.1(ii), then \(n=m^{k}\). By Theorem 6, \(\mathrm{I}(G,H)\leqslant\frac{3}{2}mk-\frac{1}{2}k-1\). If \(k=2\), then
\[\mathrm{I}(G,H)\leqslant 3m-2<3\sqrt{n}-1.\]
If \(k\geqslant 3\), then
\[\mathrm{I}(G,H)<\frac{3}{2}m\frac{\log n}{\log m}\leqslant\frac{3}{2}\sqrt[3]{n }\frac{\log n}{\log 5}<3\sqrt{n}-1.\]
If \(H\) is as in case (c) of Lemma 5.1(ii), then
\[\mathrm{I}(G,H)\leqslant 1+\ell(H)\leqslant 1+\log|H|<1+\log\left(n^{1+\log n} \right)=(\log n)^{2}+\log n+1.\]
Using the lists of maximal subgroups in [6], one can check that \(\ell(M_{11})=7\), \(\ell(M_{12})=8\), \(\ell(M_{23})=11\), and \(\ell(M_{24})=14\). It is thus easy to verify that \(\mathrm{I}(G,H)\leqslant 1+\ell(H)<(\log n)^{2}\) in case (d) of Lemma 5.1(ii). Therefore, part (i) of the theorem holds.
We now prove parts (ii) and (iii). By Theorem 3.1, if \(n=3^{d}\) for some integer \(d\geqslant 2\), then \(H=\mathrm{AGL}_{d}(3)\cap G\) is a maximal subgroup of \(G\). Theorem 5 now gives
\[\mathrm{I}(G,H)>\frac{d^{2}}{2}+\frac{d}{2}=\frac{(\log n)^{2}}{2(\log 3)^{2 }}+\frac{\log n}{2\log 3},\]
as required.
By Theorem 4.1, if \(n=m^{2}\) for some odd integer \(m\geqslant 5\), then \(H=(\mathrm{S}_{m}\wr\mathrm{S}_{2})\cap G\) is a maximal subgroup of \(G\). Theorem 6 now gives \(\mathrm{I}(G,H)\geqslant m=\sqrt{n}\), as required.
Finally, we prove an additional lemma.
**Lemma 5.2**.: _Let \(t\) be the index of \(H\) in \(G\). There exist constants \(c_{5},c_{6},c_{7},c_{8}\in\mathbb{R}_{>0}\) such that_
1. \(c_{5}\log t/\log\log t<n<c_{6}\log t/\log\log t\)_._
2. \(c_{7}\log\log t<\log n<c_{8}\log\log t\)_._
Proof.: It suffices to prove that such constants exist for \(n\) sufficiently large, so we may assume \(n>100\). We first note that \(\log t<\log|G|\leqslant n\log n\), from which we obtain
\[\log\log t<\log n+\log\log n<\log n+(\log n)\frac{\log\log 100}{\log 100}<1.412 \log n.\]
Hence we may take \(c_{7}=1/1.412>0.708\) for \(n>100\). By Lemma 5.1(i),
\[\log t =\log|G:H|=\log|G|-\log|H|>\log\frac{n!}{2}-\log\left(50n^{\sqrt{ n}}\right)\] \[>(n\log n-n\log e-1)-\left(\sqrt{n}\log n+\log 50\right)=n\log n-n \log e-\sqrt{n}\log n-\log 100\] \[>n\log n-n(\log e)\frac{\log n}{\log 100}-\sqrt{n}(\log n) \frac{\sqrt{n}}{\sqrt{100}}-(\log 100)\frac{n\log n}{100\log 100}\] \[>0.672\,n\log n,\]
where the second inequality follows from Stirling's approximation and the last inequality follows from the fact that \(\log e/\log 100<0.218\). We deduce further that \(\log\log t>\log n\) and hence take \(c_{8}=1\) for \(n>100\).
Finally, \(\log t/\log\log t<n\log n/\log n=n\) and \(\log t/\log\log t>0.672\,n\log n/1.412\log n=0.672\,n/1.412\). Therefore, for \(n>100\), we may take \(c_{5}=1\), \(c_{6}=1.412/0.672<2.11\).
Corollary 2 now follows by combining Theorem 1 and Lemma 5.2.
**Remark 5.3**.: Verifying all cases with \(7\leqslant n\leqslant 100\) by enumerating primitive maximal subgroups of \(\mathrm{S}_{n}\) and \(\mathrm{A}_{n}\) in Magma [2], we may take \(c_{5}=1\), \(c_{6}=4.03\), \(c_{7}=0.70\), and \(c_{8}=1.53\) in the statement of Lemma 5.2. With these values of the constants and those in the proof of Lemma 5.2, it is straightforward to obtain the values of the constants \(c_{2},c_{3},c_{4}\) given in Remark 3. For the values of \(c_{1}\), we use in addition the fact that, for any \(n_{0}\), if \(n\geqslant n_{0}\), then \((\log n)^{2}+(\log n)+1=(\log n)^{2}\left(1+1/\log n+1/(\log n)^{2}\right)<c_{8 }^{2}\left(1+1/\log n_{0}+1/(\log n_{0})^{2}\right)(\log\log t)^{2}\).
Acknowledgement. The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for its support and hospitality during the programme _Groups, representations and applications: new perspectives_, when work on this article was undertaken. This work was supported by EPSRC grant no. EP/R014604/1, and also partially supported by a grant from the Simons Foundation.
|
2309.16828 | Insight from the Kullback--Leibler divergence into adaptive importance
sampling schemes for rare event analysis in high dimension | We study two adaptive importance sampling schemes for estimating the
probability of a rare event in the high-dimensional regime $d \to \infty$ with
$d$ the dimension. The first scheme, motivated by recent results, seeks to use
as auxiliary distribution a projection of the optimal auxiliary distribution
(optimal among Gaussian distributions, and in the sense of the
Kullback--Leibler divergence); the second scheme is the prominent cross-entropy
method. In these schemes, two samples are used: the first one to learn the
auxiliary distribution and the second one, drawn according to the learnt
distribution, to perform the final probability estimation. Contrary to the
common belief that the sample size needs to grow exponentially in the dimension
to make the estimator consistent and avoid the weight degeneracy phenomenon, we
find that a polynomial sample size in the first learning step is enough. We
prove this result assuming that the sought probability is bounded away from
$0$. For the first scheme, we show that the sample size only needs to grow like
$rd$ with $r$ the effective dimension of the projection, while for
cross-entropy, the polynomial growth rate remains implicit although insight on
its value is provided. In addition to proving consistency, we also prove that
in the regimes studied, the importance sampling weights do not degenerate. | Jason Beh, Yonatan Shadmi, Florian Simatos | 2023-09-28T20:19:29Z | http://arxiv.org/abs/2309.16828v1 | Insight from the Kullback-Leibler divergence into adaptive importance sampling schemes for rare event analysis in high dimension
###### Abstract
We study two adaptive importance sampling schemes for estimating the probability of a rare event in the high-dimensional regime \(d\to\infty\) with \(d\) the dimension. The first scheme, motivated by recent results, seeks to use as auxiliary distribution a projection of the optimal auxiliary distribution (optimal among Gaussian distributions, and in the sense of the Kullback-Leibler divergence); the second scheme is the prominent cross-entropy method. In these schemes, two samples are used: the first one to learn the auxiliary distribution and the second one, drawn according to the learnt distribution, to perform the final probability estimation. Contrary to the common belief that the sample size needs to grow exponentially in the dimension to make the estimator consistent and avoid the weight degeneracy phenomenon, we find that a polynomial sample size in the first learning step is enough. We prove this result assuming that the sought probability is bounded away from \(0\). For the first scheme, we show that the sample size only needs to grow like \(rd\) with \(r\) the effective dimension of the projection, while for cross-entropy, the polynomial growth rate remains implicit although insight on its value is provided. In addition to proving consistency, we also prove that in the regimes studied, the importance sampling weights do not degenerate.
###### Contents
* 1 Introduction
* 1.1 Avoiding the curse-of-dimensionality for adaptive importance sampling schemes
* 1.2 Main results
* 1.2.1 Minimal notation
* 1.2.2 High-dimensional efficiency of target densities
* 1.2.3 High-dimensional efficiency of estimations of target densities
* 1.3 Discussion of the assumption \(\inf_{d}p_{f}(A)>0\)
* 1.4 Literature overview
## 1 Introduction

### 1.1 Avoiding the curse-of-dimensionality for adaptive importance sampling schemes
It is commonly believed that one needs to take \(n_{p}\gg e^{\alpha d}\) for some \(\alpha>0\), with \(d\) the dimension and \(n_{p}\) the IS sample size, in order to make the IS estimator \(\hat{p}_{f}(A)\) of \(p_{f}(A)\) consistent (meaning for instance that \(\hat{p}_{f}(A)/p_{f}(A)\to 1\) in some suitable sense).
In this paper, we study adaptive importance sampling schemes where the auxiliary density is estimated: we have a target auxiliary density \(g_{\mathrm{tar}}\) which is estimated by \(\hat{g}_{\mathrm{tar}}\) using a sample of size \(n_{g}\). We stress the distinction between the two sample sizes \(n_{p}\) and \(n_{g}\): \(n_{g}\) is the sample size used to estimate the target auxiliary density \(g_{\mathrm{tar}}\), whereas \(n_{p}\) refers to the IS sample size used to estimate \(p_{f}(A)\) with \(g_{\mathrm{tar}}\). We restrict the analysis to a Gaussian setting, but this restriction is actually quite general since it is well-known that under fairly general conditions, a random variable \(X\) can be written as \(\Phi(Y)\) with \(Y\) Gaussian [39, 38]: then, we have \(\mathbb{P}(X\in A)=\mathbb{P}(Y\in A^{\prime})\) with \(A^{\prime}=\Phi^{-1}(A)\), and so our results apply with \(\Phi^{-1}(A)\) instead of \(A\).
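For instance, in the simple case where the coordinates of \(X\) are independent with continuous cumulative distribution functions \(F_{1},\ldots,F_{d}\), one may take
\[\Phi(y)=\left(F_{1}^{-1}(\Phi_{0}(y_{1})),\ldots,F_{d}^{-1}(\Phi_{0}(y_{d}))\right),\ y\in\mathbb{R}^{d},\]
with \(\Phi_{0}\) the standard normal cumulative distribution function: then \(\Phi(Y)\) has the same distribution as \(X\) when \(Y\sim N(0,I)\).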
Our main result is that the curse-of-dimensionality can be avoided provided \(n_{g}\) only grows polynomially in \(d\). More precisely, we study three different families of target auxiliary densities \(g_{\mathrm{tar}}\) and for each, we show that if \(n_{g}\) grows polynomially in \(d\) (with the exponent depending on the given target auxiliary density), then \(n_{p}\) does not need to grow exponentially in \(d\): actually, any sequence \(n_{p}\to\infty\) growing to infinity makes \(\hat{p}_{f}(A)\) consistent. Said otherwise, our results show that (within the particular assumptions made) the curse-of-dimensionality can be avoided in adaptive importance sampling schemes, provided the sample size of the adaptation step grows polynomially, and not exponentially, in the dimension.
Our results also shed light on the weight-degeneracy phenomenon, which states that, as the dimension increases, the largest importance sampling weight takes all the mass. One way to formulate the weight-degeneracy phenomenon is that, as \(d\to\infty\), we have
\[\frac{\max_{i=1,\ldots,n_{p}}(f/g)(Y_{i})}{\sum_{i=1,\ldots,n_{p}}(f/g)(Y_{i}) }\Rightarrow 1\]
with \(\Rightarrow\) denoting convergence in distribution. Such a behavior clearly prevents importance sampling estimators from converging, and this is why a large literature has been devoted to avoiding this phenomenon (see the literature overview in Section 1.4). Moreover, Chatterjee and Diaconis have recently proposed to use this ratio for testing for convergence [17, Section 2]. Our results show at the same time that the importance sampling estimator \(\hat{p}_{f}(A)\) is consistent, and that weight degeneracy is avoided. To capture this, we will use the following terminology. In the following definition, the distribution \(g\) may be random: then \(\mathbb{E}(\phi(Y)\mid g)\) with \(Y\sim g\) is a notation for
\[\mathbb{E}\left(\phi(Y)\mid g\right)=\int_{\mathbb{R}^{d}}\phi(y)g(y)\,\mathrm{d}y.\]
Actually, \(g\) will be a Gaussian density with some random parameter \((\mu,\Sigma)\), and so conditioning on \(g\) is tantamount to conditioning on \((\mu,\Sigma)\).
**Definition 1.1** (High-dimensional efficiency for \(A\)).: _For each dimension \(d\), let \(A\), \(f\), \(g\) and the \(Y_{i}\)'s be as above (with \(g\) potentially random), and let in addition \(\ell=f/g\)._
_As \(d\to\infty\), we say that the sequence of auxiliary distributions \(g\) is efficient in high dimension for \(A\) if, for any sequence \(n_{p}\to\infty\), the two following conditions
_hold:_
\[\mathbb{E}\left(\frac{\max_{i=1,\ldots,n_{p}}\ell(Y_{i})}{\sum_{i=1,\ldots,n_{p}} \ell(Y_{i})}\mid g\right)\Rightarrow 0 \tag{2}\]
_and_
\[\mathbb{E}\left(\left|\frac{1}{p_{f}(A)n_{p}}\sum_{i=1}^{n_{p}}\ell(Y_{i}) \xi_{A}(Y_{i})-1\right|\mid g\right)\Rightarrow 0. \tag{3}\]
What is important in this statement is that the sampling size \(n_{p}\) does not need to grow at some prescribed rate with the dimension: thus, this avoids the curse-of-dimensionality in a strong sense. Chatterjee and Diaconis [17] proved that the minimal sampling size for an IS scheme is of the order of \(e^{D(f||g)}\) or \(e^{D(f|_{A}||g)}\) with \(f|_{A}\) the distribution \(f\) conditioned on \(A\) and \(D(h||g)\) the Kullback-Leibler divergence between two densities \(h\) and \(g\): that the sampling size may grow at any speed actually hinges upon the fact that \(D(f||g)\) and \(D(f|_{A}||g)\) remain bounded, which is the kind of result that we will prove in this paper.
Note moreover that only (3) depends on \(A\), but the idea is that \(g\) will be chosen as a function of \(A\), which makes (2) implicitly depend on \(A\) as well. As will be seen below, the price to pay will be in the sampling size in the adaptation step where the auxiliary density is learned (in particular, \(g\) will be taken as an estimator \(\hat{g}_{\text{tar}}\) of some target density \(g_{\text{tar}}\)), but in this step, the sampling size will only need to grow polynomially in the dimension, and not exponentially as when the curse-of-dimensionality occurs.
Finally, an important feature of (3) is that we consider convergence in the \(L_{1}\) norm. This approach is discussed extensively in Sections 1 and 2 of [17] to which the reader is referred for more details.
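To make Definition 1.1 concrete, the following minimal sketch (in Python with NumPy; the function, its arguments and the set \(A\) of the usage example are ours and purely illustrative) computes, for a Gaussian auxiliary density \(g=N(\mu,\Sigma)\) and the standard Gaussian \(f\), the empirical weight-degeneracy ratio appearing in (2) and the relative error appearing in (3).

```
import numpy as np

def is_diagnostics(mu, Sigma, indicator_A, p_true, n_p, seed=0):
    """Draw n_p points from g = N(mu, Sigma) and return (i) the max-weight ratio of (2)
    and (ii) the relative error |p_hat / p_true - 1| of (3)."""
    rng = np.random.default_rng(seed)
    d = mu.shape[0]
    Y = rng.multivariate_normal(mu, Sigma, size=n_p)          # Y_i i.i.d. ~ g
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, (Y - mu).T).T                      # whitened points under g
    log_g = -0.5 * np.sum(z ** 2, axis=1) - np.log(np.diag(L)).sum() - 0.5 * d * np.log(2 * np.pi)
    log_f = -0.5 * np.sum(Y ** 2, axis=1) - 0.5 * d * np.log(2 * np.pi)
    log_w = log_f - log_g                                     # log-likelihood ratios log(f/g)(Y_i)
    w = np.exp(log_w - log_w.max())                           # rescaling leaves the ratio in (2) unchanged
    degeneracy = w.max() / w.sum()                            # quantity inside the expectation in (2)
    p_hat = np.mean(np.exp(log_w) * indicator_A(Y))           # IS estimate of p_f(A)
    return degeneracy, abs(p_hat / p_true - 1.0)

# Usage example: A = {x : x_1 >= 0}, so p_f(A) = 1/2, with the naive choice g = f.
d, n_p = 50, 10_000
deg, err = is_diagnostics(np.zeros(d), np.eye(d),
                          lambda Y: (Y[:, 0] >= 0).astype(float), 0.5, n_p)
```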
### 1.2 Main results
As mentioned earlier, our results are concerned with auxiliary densities \(\hat{g}_{\text{tar}}\) which are estimations of deterministic target densities \(g_{\text{tar}}\). Our first result (Theorem 1.2) concerns the efficiency in high dimension of these target densities. However, these target densities cannot be used in practice, and so we turn in Theorem 1.5 to the efficiency in high dimension of estimations of these target densities, which can be used in practice.
#### 1.2.1 Minimal notation
We introduce here the minimal set of notation necessary in order to state our main results, Theorems 1.2 and 1.5 below. Further notation will be introduced in Section 2.1.
Let in the sequel \(\mathcal{S}_{d}\) denote the space of \(d\times d\) symmetric, positive semi-definite matrices. For \(\mu\in\mathbb{R}^{d}\) and \(\Sigma\in\mathcal{S}_{d}\), we denote by \(N(\mu,\Sigma)\) the \(d\)-dimensional Gaussian distribution with mean \(\mu\) and covariance matrix \(\Sigma\). In the rest of the paper, we consider the case where the initial distribution \(f\) is the density of a standard Gaussian vector in dimension \(d\), i.e.,
\[f(x)=(2\pi)^{-d/2}e^{-\|x\|^{2}/2},\ x\in\mathbb{R}^{d},\]
where here and in the sequel, \(\|x\|\) denotes the \(L_{2}\)-norm of some vector \(x\in\mathbb{R}^{d}\) (note also that here and elsewhere, we identify a distribution with its density).
For any density \(g\) on \(\mathbb{R}^{d}\) and any measurable set \(B\subset\mathbb{R}^{d}\), we denote by \(p_{g}(B)=\int\xi_{B}g\) the measure of the set \(B\) under \(g\), and \(g|_{B}=g\xi_{B}/p_{g}(B)\) the distribution \(g\) conditioned on \(B\). Concerning random variables, we will adopt the following convention:
* \(X\) will refer to a generic random variable, and its distribution will be indicated by a subscript in the probability or in the expectation: for instance, \(\mathbb{E}_{f}(X)\) is the mean of \(X\) under \(\mathbb{P}_{f}\), i.e., when \(X\)'s distribution is \(f\);
* we will use \(Y\) to refer to random variables drawn according to a given distribution: in this case, their mean will be denoted by the generic \(\mathbb{E}\).
For instance, when the \(Y_{i}\)'s are i.i.d. drawn according to \(g\), then we will write
\[\mathbb{E}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{f(Y_{i})}{g(Y_{i})}\phi(Y_{i}) \right)=\mathbb{E}\left(\frac{f(Y_{i})}{g(Y_{i})}\phi(Y_{i})\right)=\mathbb{E} _{g}\left(\frac{f(X)}{g(X)}\phi(X)\right)=\mathbb{E}_{f}(\phi(X)).\]
Another example is the probability \(p_{g}(B)\) which can equivalently be written as \(p_{g}(B)=\mathbb{P}_{g}(X\in B)=\mathbb{P}(Y\in B)\) with \(Y\sim g\).
For \(x\in\mathbb{R}_{+}\), we denote by \([x]=\max\{n\in\mathbb{N}:n\leq x\}\) its integer part.
Finally, for a non-decreasing cadlag function \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\), we consider its left-continuous inverse
\[f^{-1}(t)=\inf\{s\geq 0:f(s)\geq t\},\ t\geq 0.\]
Note that for \(x,t\geq 0\) we have \(f(x)\geq t\Leftrightarrow x\geq f^{-1}(t)\) and that if \(f\) is continuous, then \(f(f^{-1}(t))=t\).
#### 1.2.2 High-dimensional efficiency of target densities
It is well-known that the optimal IS auxiliary density for estimating the probability \(p_{f}(A)\) is \(f|_{A}=f\xi_{A}/p_{f}(A)\), i.e., the distribution \(f\) conditioned on \(A\). Indeed, for this choice of auxiliary density, we have \(\hat{p}=p_{f}(A)\) (with \(\hat{p}\) defined in (1)), i.e., \(p_{f}(A)\) is perfectly estimated. Of course, \(f|_{A}\) is intractable as it involves the unknown quantity \(p_{f}(A)\). Among Gaussian auxiliary densities, the one that minimizes the Kullback-Leibler divergence with \(f|_{A}\) is \(g_{A}=N(\mu_{A},\Sigma_{A})\) with \(\mu_{A}\) and \(\Sigma_{A}\) the mean and variance of \(f|_{A}\):
\[\mu_{A}=\mathbb{E}_{f|_{A}}(X)\ \ \text{and}\ \ \Sigma_{A}=\mathbb{V}\text{ar}_{f|_{A }}(X)=\mathbb{E}_{f|_{A}}(XX^{\top})-\mu_{A}\mu_{A}^{\top}.\]
This makes \(g_{A}\) a natural candidate for a good auxiliary density, and it will be our first target density (note the difference between the notation \(g|_{A}\) and \(g_{A}\), the former referring to a conditioned version of \(g\), and the latter to a Gaussian density with some prescribed parameter).
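For later use, recall also the standard closed-form expression of the Kullback-Leibler divergence between two Gaussian densities:
\[D\big(N(\mu_{1},\Sigma_{1})||N(\mu_{2},\Sigma_{2})\big)=\frac{1}{2}\left(\operatorname{tr}\big(\Sigma_{2}^{-1}\Sigma_{1}\big)-d+(\mu_{2}-\mu_{1})^{\top}\Sigma_{2}^{-1}(\mu_{2}-\mu_{1})+\log\frac{\det\Sigma_{2}}{\det\Sigma_{1}}\right);\]
in particular, for \(f=N(0,I)\) we have \(D(f||N(\mu,\Sigma))=\frac{1}{2}\left(\operatorname{tr}(\Sigma^{-1})-d+\mu^{\top}\Sigma^{-1}\mu+\log\det\Sigma\right)\), a quantity that appears below through \(D(f||g_{A})\) and its estimated counterparts.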
The second target density that we will study is \(g_{\text{proj}}\), obtained by projecting \(\Sigma_{A}\) onto a low-dimensional subspace. Various subspaces on which to project were proposed recently [29, 28, 62], and they all lead to considering a Gaussian auxiliary density with mean \(\mu_{A}\) and variance \(\Sigma_{\text{proj}}\) defined as
\[\Sigma_{\text{proj}}=\sum_{k=1}^{r}(v_{k}-1)d_{k}d_{k}^{\top}+I\ \ \text{with}\ \ v_{k}=d_{k}^{\top}\Sigma_{A}d_{k} \tag{4}\]
where the \(d_{k}\)'s form an orthonormal family, and \(r\) is the dimension of the small subspace on which to project. In practice, we have \(r\leq 3\) most of the time, but our results will apply to any \(r\leq d\). They apply in particular for \(r=d\), in which case we have \(g_{\mathrm{proj}}=g_{A}\), and so \(g_{A}\) can be seen as a special case of \(g_{\mathrm{proj}}\). Several choices are considered in [29, 28, 62]:
* in [62], a smooth approximation \(\tilde{\xi}_{A}\approx\xi_{A}\) of the characteristic function is considered. The \(d_{k}\)'s are the eigenvectors of the matrix \(H:=\mathbb{E}_{f|_{A}}((\nabla\log\tilde{\xi}_{A}(X))(\nabla\log\tilde{\xi}_{ A}(X))^{\top})\) and they are ranked in decreasing order of the corresponding eigenvalues, i.e., \(d_{1}\) corresponds to the largest eigenvalue of \(H\), \(d_{2}\) to the second largest eigenvalue, etc;
* in [28], only one direction is considered (\(r=1\)) and \(d_{1}=\mu_{A}/\|\mu_{A}\|\);
* in [29], the \(d_{k}\)'s are the eigenvectors of \(\Sigma_{A}\), and they are ranked in decreasing order according to the image by the function \(h(x)=x-1-\log x\) of the eigenvalues, i.e., \(d_{1}\) is associated to the eigenvalue maximizing \(h\), etc.
These different choices were analyzed in [28, 29, 62] and were found to perform very well numerically. However, a complete analytic explanation for this success is still lacking, and the present work makes another step in this direction.
Finally, our third target auxiliary density is that obtained by the cross-entropy method (CE) with adaptive levels. The cross-entropy method works for a set \(A\) of the form \(A=\{x\in\mathbb{R}^{d}:\varphi(x)\geq q\}\) for some measurable function \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\) and some threshold \(q\in\mathbb{R}\). In Algorithm 2 below we will introduce the usual version of CE, but for now we present its deterministic counterpart, which is the version of cross-entropy one would implement if all quantities were known and need not be estimated. The deterministic version of the cross-entropy method has a single parameter \(\rho\in(0,1)\) as input, and is detailed in Algorithm 1.
```
Input: \(\rho\in(0,1)\)
1. define \(\mu_{0}=0\) and \(\Sigma_{0}=I\) and start with \(g_{0}=f=N(\mu_{0},\Sigma_{0})\);
2. Iterate the following steps:
   (a) given \(g_{t}\), consider \(q_{t}=F^{-1}(1-\rho)\), where \(F\) is the cumulative distribution function of \(\varphi(X)\) under \(\mathbb{P}_{g_{t}}\);
   (b) define \(A_{t}=\{x:\varphi(x)>q_{t}\}\) and
       \[\mu_{t+1}=\mu_{A_{t}}=\mathbb{E}_{f|_{A_{t}}}(X)\ \ \text{and}\ \ \Sigma_{t+1}=\Sigma_{A_{t}}=\mathbb{V}\mathrm{ar}_{f|_{A_{t}}}(X);\]
   (c) define \(g_{t+1}=N(\mu_{t+1},\Sigma_{t+1})\), let \(t=t+1\) and go to Step (a).
```
**Algorithm 1** Deterministic version of CE
In CE, there would be a stopping criterion in step 2: typically, one would stop when \(q_{t}\geq q\), i.e., \(A_{t}\subset A\), and then use \(g_{t}\) as auxiliary IS distribution. Here we do not consider the stopping criterion, and we rather prove that every \(g_{t}\) is efficient in high dimension for \(A\).
We can now state our main results concerning the deterministic target auxiliary densities \(g_{A}\), \(g_{\mathrm{proj}}\) and \(g_{t}\). The result is proved under the crucial assumption that \(\inf_{d}p_{f}(A)>0\), which is discussed in Section 1.3. Moreover, we say that a function \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\) has no atom if for every \(x\in\mathbb{R}\), the set \(\varphi^{-1}(\{x\})\) has zero Lebesgue measure.
**Theorem 1.2**.: _If \(\inf_{d}p_{f}(A)>0\), then:_
* \(g_{A}\) _is efficient in high dimension for_ \(A\)_;_
* \(g_{\mathrm{proj}}\) _is efficient in high dimension for_ \(A\) _for any_ \(r\) _and any orthonormal family_ \((d_{1},\ldots,d_{r})\)_;_
* _if for every_ \(d\)_,_ \(\varphi\) _has no atom and if_ \(\inf_{d}\rho>0\)_, then for any_ \(t\geq 0\)__\(g_{t}\) _is efficient in high dimension for_ \(A\)_._
#### 1.2.3 High-dimensional efficiency of estimations of target densities
Theorem 1.2 indicates that the three target densities considered are suitable candidates as auxiliary densities in high-dimension. However, in practice they are intractable and so they need to be estimated: for \(g_{A}\) and \(g_{\mathrm{proj}}\), this is because they involve the unknown parameters \(\mu_{A}\) and \(\Sigma_{A}\), and for \(g_{t}\), it is because it relies on the computation of the quantiles \(q_{t}\) and on the conditional mean and variance under \(f|_{A_{t}}\), which are also unknown.
Although \(f|_{A}\) is analytically intractable because of the normalizing constant \(p_{f}(A)\), various simulation schemes (typically, MCMC) which make it possible to sample from a distribution only known up to its normalization constant can be used to sample from it. We therefore assume that we are given a sample \((Y_{A,i})_{i}\) of i.i.d. random variables drawn according to \(f|_{A}\), and this sample is used to estimate \(g_{A}\) and \(g_{\mathrm{proj}}\) as follows: \(\hat{g}_{A}=N(\hat{\mu}_{A},\hat{\Sigma}_{A})\) with
\[\hat{\mu}_{A}=\frac{1}{n_{g}}\sum_{k=1}^{n_{g}}Y_{A,k}\ \ \text{and}\ \ \hat{\Sigma}_{A}=\frac{1}{n_{g}}\sum_{k=1}^{n_{g}}Y_{A,k}Y_{A,k}^{\top}-\hat{ \mu}_{A}\hat{\mu}_{A}^{\top} \tag{5}\]
and \(\hat{g}_{\mathrm{proj}}=N(\hat{\mu}_{A},\hat{\Sigma}_{\mathrm{proj}})\) with
\[\hat{\Sigma}_{\mathrm{proj}}=\sum_{k=1}^{r}(\hat{v}_{k}-1)\hat{d}_{k}\hat{d}_{k}^{\top}+I\ \ \text{with}\ \ \hat{v}_{k}=\hat{d}_{k}^{\top}\hat{\Sigma}_{A}\hat{d}_{k} \tag{6}\]
where the \(\hat{d}_{k}\)'s form an orthonormal family, and are thought to be estimators of some deterministic \(d_{k}\)'s, the target directions on which one would project if one could. The \(\hat{d}_{k}\)'s are allowed to be random, but need to be independent from the \(Y_{A,i}\)'s, see Remark 1.3 below.
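For illustration, the estimators (5) and (6) can be computed in a few lines of NumPy; in the following sketch (names are ours), the sample \((Y_{A,k})_{k\leq n_{g}}\) is stored as the rows of an array and the directions \(\hat{d}_{1},\ldots,\hat{d}_{r}\) as the rows of another array.

```
import numpy as np

def estimate_auxiliary_params(Y_A, D_hat):
    """Y_A: (n_g, d) array of samples from f|_A; D_hat: (r, d) array whose rows are the
    orthonormal directions. Returns (mu_hat_A, Sigma_hat_A, Sigma_hat_proj) as in (5)-(6)."""
    n_g, d = Y_A.shape
    mu_hat = Y_A.mean(axis=0)                                  # estimator of mu_A in (5)
    Sigma_hat = Y_A.T @ Y_A / n_g - np.outer(mu_hat, mu_hat)   # estimator of Sigma_A in (5)
    Sigma_proj_hat = np.eye(d)
    for d_k in D_hat:                                          # rank-one updates of (6)
        v_k = d_k @ Sigma_hat @ d_k                            # hat v_k = d_k^T Sigma_hat d_k
        Sigma_proj_hat += (v_k - 1.0) * np.outer(d_k, d_k)
    return mu_hat, Sigma_hat, Sigma_proj_hat
```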
In cases where even sampling from \(f|_{A}\) is out of reach, a common strategy is to resort to the cross-entropy method. A deterministic version of the cross-entropy method was outlined above, but this deterministic version involves the quantiles \(q_{t}\) and the parameters \(\mu_{t}\) and \(\Sigma_{t}\) which cannot be computed analytically, and thus need to be estimated in practical schemes. Here we will consider the CE scheme with adaptive levels described in Algorithm 2, which leads to estimations of the deterministic \(g_{t}\)'s from Algorithm 1.
In Algorithm 2, note that we have introduced another sequence \(m\) which is the size of the sample used in the quantile estimation step. As for the other sequences, \(m=m(d)\) is implicitly a sequence depending on \(d\).
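For concreteness, here is a minimal sketch of one iteration of a CE scheme with adaptive levels of this kind (it is not a verbatim transcription of Algorithm 2): it assumes self-normalized importance-weighted moment estimates, a level capped at the target threshold \(q\), and, as in Remark 1.3 below, independent samples of sizes \(m\) (quantile step) and \(n_{g}\) (moment step); all names are ours and \(\varphi\) is assumed to act on the rows of an array.

```
import numpy as np

def ce_iteration(phi, q, rho, mu_t, Sigma_t, m, n_g, seed=0):
    """One adaptive-level CE step: returns (q_hat_t, mu_hat_{t+1}, Sigma_hat_{t+1})."""
    rng = np.random.default_rng(seed)
    # Step (a): empirical (1 - rho)-quantile of phi under g_t = N(mu_t, Sigma_t), capped at q.
    q_hat = min(np.quantile(phi(rng.multivariate_normal(mu_t, Sigma_t, size=m)), 1 - rho), q)
    # Step (b): self-normalized IS estimates of the mean and variance of f conditioned on
    # A_hat_t = {phi > q_hat}, using an independent sample of size n_g from g_t.
    Y = rng.multivariate_normal(mu_t, Sigma_t, size=n_g)
    L = np.linalg.cholesky(Sigma_t)
    z = np.linalg.solve(L, (Y - mu_t).T).T
    log_w = -0.5 * np.sum(Y ** 2, axis=1) + 0.5 * np.sum(z ** 2, axis=1) + np.log(np.diag(L)).sum()
    w = np.exp(log_w - log_w.max()) * (phi(Y) > q_hat)         # weights f/g_t restricted to A_hat_t
    w = w / w.sum()
    mu_next = w @ Y
    Sigma_next = (Y * w[:, None]).T @ Y - np.outer(mu_next, mu_next)
    return q_hat, mu_next, Sigma_next
```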
**Remark 1.3**.: _The algorithms studied here slightly differ from those proposed in the literature because of different dependency structures. More precisely, in [28, 29, 62], the directions \(\hat{d}_{k}\) on which to project are indeed estimations of target directions \(d_{k}\), but they are computed from \(\hat{\Sigma}_{A}\) and so are not independent from the \(Y_{A,i}\)'s, contrary to what we assume. Likewise, in true CE schemes,
the same sample is used in steps 2a and 2b to estimate both the quantile and the mean and variance. This simpler dependency structure (whereby the \(\hat{d}_{k}\)'s are independent from the \(Y_{A,i}\)'s, and \(\hat{q}_{t}\) is independent from \((\hat{p}_{t},\hat{\mu}_{t+1},\hat{\Sigma}_{t+1})\)) makes the theoretical analysis easier. We leave it as open research to study the algorithms with the full dependency structure._
**Remark 1.4**.: _Another popular version of CE is the fixed-level CE, close to the subset simulation algorithm [3, 24, 14], where one replaces the sets \(\hat{A}_{t}\) with deterministic sets fixed in advance. This version is simpler to analyze, and the tools developed herein could be used to study the fixed-level version of CE._
We can now state our main result where we are interested in estimation of the target auxiliary densities of Theorem 1.2. Theorem 1.2 has shown that, under the assumption \(\inf_{d}p_{f}(A)>0\), these densities are efficient in high dimension: since \(n_{g}\) (and \(m\) for the CE) is the sample size involved in the additional estimation step, it is to be expected that if it is sufficiently large, then the estimated densities will be close enough to their deterministic counterparts and will thus remain efficient in high dimension. This is precisely what the following result formalizes through the additional (compared to Theorem 1.2) growth conditions on \(n_{g}\).
**Theorem 1.5**.: _If \(\inf_{d}p_{f}(A)>0\), then:_
* _if_ \(n_{g}\gg d^{2}\)_, then_ \(\hat{g}_{A}\) _is efficient in high dimension for_ \(A\)_;_
* _for any_ \(r\) _and any orthonormal family_ \((\hat{d}_{k},k=1,\ldots,r)\) _independent from the_ \(Y_{A,i}\)_'s, if_ \(n_{g}\gg rd\)_, then_ \(\hat{g}_{\mathrm{proj}}\) _is efficient in high dimension for_ \(A\)_;_
* _if_ \(\inf_{d}\rho>0\) _and for every_ \(d\)_,_ \(\varphi\) _has no atom, then for any_ \(t\geq 0\) _there exists a constant_ \(\kappa_{t}>0\) _such that if_ \(m\to\infty\) _and_ \(n_{g}\gg d^{\kappa_{t}}\)_, then_ \(\hat{g}_{t}\) _is efficient in high dimension for_ \(A\)_._
Before proceeding, let us comment on this result along two directions: a comparison between \(\hat{g}_{A}\) and \(\hat{g}_{\text{proj}}\), and a discussion of the constants \(\kappa_{t}\).
_Discussion on \(\hat{g}_{A}\) vs \(\hat{g}_{\text{proj}}\)._ Let us first discuss some insight into \(\hat{g}_{A}\) and \(\hat{g}_{\text{proj}}\) provided by this result. If one could sample from \(g_{A}\) and \(g_{\text{proj}}\), even though both are efficient in high dimension according to Theorem 1.2, it is clear that \(g_{A}\) would be preferable since it is the optimal Gaussian auxiliary density. However, \(g_{\text{proj}}\) involves fewer parameters and is therefore intuitively easier to estimate. Thus, although \(g_{A}\) is better than \(g_{\text{proj}}\), \(\hat{g}_{A}\) incurs more estimation error, which could make \(\hat{g}_{\text{proj}}\) preferable to \(\hat{g}_{A}\). Theorem 1.5 provides evidence in that direction, in that it shows that \(\hat{g}_{A}\) remains efficient in high dimension provided \(n_{g}\gg d^{2}\), whereas for \(\hat{g}_{\text{proj}}\), one only needs \(n_{g}\gg rd\). As mentioned earlier, in practice we typically have \(r\leq 3\), and so one only needs a linear growth rate for \(\hat{g}_{\text{proj}}\), but a quadratic growth rate for \(\hat{g}_{A}\).
Of course Theorem 1.5 does not claim that these growth rates are sharp, nor that the conditions \(n_{g}\gg d^{2}\) and \(n_{g}\gg rd\) are necessary for \(\hat{g}_{A}\) and \(\hat{g}_{\text{proj}}\) to be efficient in high dimension. Nonetheless, the following result suggests that the \(d^{2}\) threshold is sharp. In the following result, we assume that \(\mu_{A}\) and \(\Sigma_{A}\) are estimated from a sample (\(Y_{A,k}\)) drawn according to \(N(\mu_{A},\Sigma_{A})\) instead of \(f|_{A}\): since by definition, \(N(\mu_{A},\Sigma_{A})\) and \(f|_{A}\) have the same mean and variance, drawing the \(Y_{A,k}\)'s according to \(N(\mu_{A},\Sigma_{A})\) in (5) still gives consistent estimators. Of course this scheme is of no practical interest, as there do not seem to be methods to sample from \(N(\mu_{A},\Sigma_{A})\) without knowing \(\mu_{A}\) and \(\Sigma_{A}\). However, this scheme presents a theoretical interest, in that if the \(Y_{A,k}\)'s are Gaussian, then \(\hat{\mu}_{A}\) is Gaussian and \(\hat{\Sigma}_{A}\) follows a Wishart distribution. In this case, explicit formulas are available which allow us to prove the following result.
**Proposition 1.6**.: _Assume that in (5) the \(Y_{A,k}\)'s are i.i.d. drawn according to \(N(\mu_{A},\Sigma_{A})\) instead of \(f|_{A}\). Assume that \(n_{g}\gg d\): then \(\sup_{d}\mathbb{E}(D(f||\hat{g}_{A}))<\infty\) if \(n_{g}\gg d^{2}\), and \(\mathbb{E}(D(f||\hat{g}_{A}))\to\infty\) if \(n_{g}\ll d^{2}\)._
The proof of this result is given in the appendix. As mentioned previously in the introduction, Chatterjee and Diaconis [17] proved that the sampling size needs to be at least \(e^{D(f||\hat{g}_{A})}\) in order for the IS estimator to be close to its target. Thus, the fact that the expected KL divergence diverges for \(n_{g}\ll d^{2}\) is an indication that the \(d^{2}\) threshold is sharp, in that if \(n_{g}\ll d^{2}\), then there is a minimal growth rate imposed upon \(n_{p}\), namely \(e^{D(f||\hat{g}_{A})}\), and so \(\hat{g}_{A}\) cannot be efficient in high dimension, at least in the way we defined it.
_Discussion on the constants \(\kappa_{t}\)._ Let us now discuss the constant \(\kappa_{t}\). For \(t=0\) we have \(\kappa_{0}=1\), and for \(t\geq 1\), we are only able to prove existence of some \(\kappa_{t}>0\). To give some intuition on this constant, let us introduce the notation \(\lambda_{1}(\Sigma)\) for the smallest eigenvalue of a symmetric, positive definitive matrix \(\Sigma\). Let further \(\hat{\lambda}_{*,t}=\min\{\lambda_{1}(\hat{\Sigma}_{1}),\ldots,\lambda_{1}( \hat{\Sigma}_{t})\}\) and \(\hat{\kappa}_{*,t}=8\max(1,1/\hat{\lambda}_{*,t}-1)\), so that \(\hat{\kappa}_{*,t}=8\) if \(\lambda_{1}(\hat{\Sigma}_{k})\geq 1/2\) for \(k=1,\ldots,t\), and \(\hat{\kappa}_{*,t}=8(1/\hat{\lambda}_{*,t}-1)>8\) otherwise. In Section 5.2.1 below, we explain that if \(n_{g}\gg d^{\kappa}\) for some \(\kappa>\hat{\kappa}_{*,t}\), then we could prove that \(\hat{g}_{t}\) is efficient in high dimension for \(A\). This would give a more explicit expression for the exponent of the required growth rate, but this would not be satisfactory because the growth rate would be random.
As \(\hat{\Sigma}_{t}\) is an estimator of \(\Sigma_{t}\), it is clear that this result suggests that Theorem 1.5 should hold for any \(\kappa_{t}>\kappa_{*,t}\) with \(\kappa_{*,t}=8\max(1,1/\lambda_{*,t}-1)\) with
\(\lambda_{*,t}=\min\{\lambda_{1}(\Sigma_{1}),\ldots,\lambda_{1}(\Sigma_{t})\}\). Because of monotonicity, in order to establish such a result, it would be enough to prove that
\[\forall\varepsilon>0,\ \mathbb{P}(\lambda_{1}(\hat{\Sigma}_{t})\geq(1- \varepsilon)\lambda_{1}(\Sigma_{t}))\to 1. \tag{9}\]
However, it is well-known that controlling the smallest eigenvalue of random matrices is a difficult task, see for instance [6], and we did not manage to find simple arguments to prove (9). Nonetheless, we managed to prove the existence of some \(\underline{\lambda}_{t}>0\) such that \(\mathbb{P}(\lambda_{1}(\hat{\Sigma}_{t})\geq\underline{\lambda}_{t})\to 1\), and then Theorem 1.5 holds with \(\kappa_{t}=8\max(1,1/\underline{\lambda}_{*,t}-1)\) with \(\underline{\lambda}_{*,t}=\min\{\underline{\lambda}_{1},\ldots,\underline{\lambda}_{t}\}\). We believe that, upon additional technical assumptions (e.g., on the growth rate of \(m\) and regularity properties of \(\varphi\)), one could prove something like (9) and therefore relate \(\kappa_{t}\) to the \(\lambda_{1}(\Sigma_{t})\)'s. However, our main objective was to show that polynomial growth rates were enough, and so we content ourselves with the result as stated above, although it could most probably be strengthened along various directions.
Note finally that the reason why smallest eigenvalues play a role in our proofs is that we need finite \(\alpha\)-th moments of the likelihood ratios \(f/\hat{g}_{t}\). More precisely, we need \(\alpha>0\) such that
\[\mathbb{E}_{f}\left[\left(\frac{f(X)}{\hat{g}_{t}(X)}\right)^{\alpha}\mid\hat {g}_{t}\right]<\infty,\ \ \text{almost surely}.\]
But for this to hold, one needs
\[\alpha<\min\left(1,\frac{\lambda_{1}(\hat{\Sigma}_{t})}{1-\lambda_{1}(\hat{ \Sigma}_{t})}\right)\]
which is where the smallest eigenvalues kick in.
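A one-dimensional computation, with \(\lambda\) playing the role of the smallest eigenvalue, illustrates this constraint: for \(f=N(0,1)\) and \(g=N(0,\lambda)\) with \(0<\lambda<1\),
\[\mathbb{E}_{f}\left[\left(\frac{f(X)}{g(X)}\right)^{\alpha}\right]=\lambda^{\alpha/2}\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2}\left(1-\alpha\,\frac{1-\lambda}{\lambda}\right)\right)\mathrm{d}x,\]
which is finite if and only if \(\alpha<\lambda/(1-\lambda)\).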
### 1.3 Discussion of the assumption \(\inf_{d}p_{f}(A)>0\)
Note that a central assumption in all our results is that the probability \(p_{f}(A)\) is bounded away from \(0\), i.e., \(\inf_{d}p_{f}(A)>0\). This assumption is quite typical in previous works that study high-dimensional importance sampling in a reliability context, see for instance [5].
One of the important insights of our work is that if \(\inf_{d}p_{f}(A)>0\), then \(D(f|_{A}||g_{A})\) and \(D(f||g_{A})\) are bounded (see Corollary 3.2 and the proof of Proposition 2.14). As mentioned earlier, this implies by the results of Chatterjee and Diaconis [17] that there is no minimal growth rate for \(n_{p}\). The following result shows that if \(p_{f}(A)\to 0\), then either \(D(f|_{A}||g_{A})\) or \(D(f||g_{A})\) is unbounded, which imposes a minimal growth rate on \(n_{p}\). In the naive Monte-Carlo scheme, the required number of samples grows like \(1/p_{f}(A)\), while in Section 4 we will prove that \(D(f|_{A}||g_{A})\leq-\log p_{f}(A)\), suggesting also (assuming that this bound is sharp) by the Chatterjee-Diaconis result that the required sample size should grow like \(1/p_{f}(A)\). Further investigation on minimal growth rates for \(n_{p}\) when \(p_{f}(A)\to 0\) represents an interesting research question which we leave untouched, and here we content ourselves with the following result.
**Theorem 1.7**.: _Assume that the condition \(p_{f}(A)\to 0\) holds. Then we have either \(\sup_{d}D(f||g_{A})=\infty\) or \(\sup_{d}D(f|_{A}||g_{A})=\infty\)._
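Regarding the bound \(D(f|_{A}||g_{A})\leq-\log p_{f}(A)\) mentioned above, it can be understood as follows: a direct computation gives
\[D(f|_{A}||f)=\int_{A}\frac{f}{p_{f}(A)}\log\frac{1}{p_{f}(A)}=-\log p_{f}(A),\]
and since \(f\) is itself Gaussian while \(g_{A}\) minimizes the Kullback-Leibler divergence from \(f|_{A}\) among Gaussian densities, we get \(D(f|_{A}||g_{A})\leq D(f|_{A}||f)=-\log p_{f}(A)\).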
### 1.4 Literature overview
#### 1.4.1 Importance sampling as a sampling scheme
Importance sampling is a popular numerical method that can be used for sampling from an intricate distribution and for reducing the variance of a Monte-Carlo estimator, see for instance [48, 55] for a general introduction. The literature on the former case of using IS for sampling is very large. Starting from the basic IS schemes, many improved variants have been proposed, using mixtures and control variates [49], resampling schemes [56, 57], use of particular auxiliary densities [32] or local MCMC-like moves [46], to list only a few. Moreover, instead of aiming to sample from a given distribution, one may aim to sample from a sequence of distributions, leading to so-called sequential MC or IS schemes, see for instance [23, 26]. Sequential schemes can also be used in static contexts [21], and this idea led to the fundamental population Monte-Carlo algorithm and its variants [13, 12]. Finally, adaptive IS schemes involve learning steps whereby parameters of the auxiliary distribution are updated against past performance [11, 47].
The theoretical analysis of the basic IS scheme is straightforward: since the estimator is a sum of i.i.d. random variables, its consistency is settled by the law of large numbers and its speed by the central limit theorem. However, in more advanced schemes, resampling and recycling of samples create intricate dependency structures which make the analysis challenging. Theoretical results on the behavior of complex adaptive IS schemes can for instance be found in [1, 24, 25, 42, 53].
Concerning the high-dimensional behavior of IS, "it is well known that importance sampling is usually inefficient in high-dimensional spaces" [26]. One of the main reasons is the weight degeneracy problem, whereby the largest IS weight takes all the mass. This phenomenon is due to the potentially heavy tails of the likelihood ratios, which arise in high dimension as the densities naturally become singular with respect to one another. For this reason, various schemes that transform the weights in order to reduce their variance have been proposed [27, 30, 35, 36, 43, 58, 63].
Although it has been verified empirically, to our knowledge weight degeneracy has only been addressed theoretically in the related context of particle filters [7] (see also [60] for a review), where it is proved that the sample size needs to grow exponentially in the dimension in order to avoid weight degeneracy. In an unpublished report [40], the same authors obtain additional results on IS: for an i.i.d. target and a test function that depends on only one coordinate, they claim that the sample size needs to grow at least like \(\exp(d^{1/3})\), where \(d\to\infty\) denotes the dimension (see in particular [40, Proposition 3.6]). High-dimensional results are also obtained in [8, 9], who consider an i.i.d. target that is approximated by a sequential MC scheme through bridging distributions. Among other results, they prove that provided the number of bridging densities grows linearly in the dimension, the effective sample size remains bounded, therefore suggesting that "AIS schemes may beat the curse of dimensionality in some scenarios if properly designed" [11]. Our main result, Theorem 1.5 above, points toward a similar
conclusion in the context of rare event probability estimation.
#### 1.4.2 Importance sampling in a reliability context
In reliability, the overarching theme is the estimation of the probability \(p:=\mathbb{P}_{f}(X\in A)\) of an important event \(A\) (e.g., the failure of a critical component) which is deemed rare, so that \(p\) is small. The coefficient of variation \(\sqrt{\mathbb{Var}(\hat{I})}/\mathbb{E}(\hat{I})\) of the naive MC estimator \(\hat{I}\) scales like \(1/\sqrt{p}\), which calls for improved techniques, such as the prominent subset simulation method [3, 14, 18, 54].
In the IS realm, various schemes have also been proposed, see for instance [4, 19, 50, 52] and [44, 45, 61] for a review. Recall from its description in Algorithms 1 and 2 above that CE aims at building a sequence of densities \(g_{t}\) getting closer and closer to the optimal density \(g_{A}\): CE can thus be seen as a special case of sequential IS, but because of its importance for our work, we reserve a special discussion for CE below.
In high dimension, auxiliary distributions specific to reliability problems have been proposed to avoid weight degeneracy [20, 51, 65]. From a theoretical perspective, the authors of [5] study the performance of IS for high-dimensional reliability problems. The set-up is quite similar to ours, as the authors assume that the probability \(p\) is bounded away from \(0\), and they also consider Gaussian auxiliary distributions (they also consider mixtures of Gaussian distributions, but do not have theoretical results in this case). Their main result is that in order for IS to be applicable in high dimension in the case where the initial distribution is standard normal, the covariance matrix of the auxiliary density must be a finite-rank perturbation of the identity. This is very close in spirit to what we prove here, as our proofs essentially rely on proving that \(\|\Sigma-I\|\) remains bounded, with \(\Sigma\) the covariance matrix of the auxiliary density considered. Note however that a significant difference between [5] and our results is that the authors in [5] consider the variance as the performance metric, which imposes a restriction on the set of auxiliary distributions that can be considered. More precisely, in order for \(f(X)/g(X)\) to have a finite second moment, with \(X\sim g\), \(f=N(0,I)\) and \(g=N(\mu,\Sigma)\), all eigenvalues of \(\Sigma\) must be larger than \(1/2\). If this condition is violated, then the authors conclude in [5] that IS is not applicable; since we consider the \(L_{1}\) norm, our scheme still works. Note however that, as explained in the discussion following Theorem 1.5, the threshold \(1/2\) has a strong impact on the performance of CE because of its influence on the growth rate \(\kappa_{t}\).
To conclude this literature overview, let us focus more precisely on the CE method [22, 37, 59]. In low dimension, numerous numerical results tend to suggest that CE and its improvements are quite efficient, see for instance [15, 31, 51]. However, even in this case, theoretical results backing up these numerical observations are pretty scarce. We are only aware of [33] (which provides proofs of some results announced earlier in [34]), which gives theoretical guarantees on the convergence of a modified version of CE. In high dimension, CE may suffer from weight degeneracy, similarly to the general IS schemes discussed above. The dimension-reduction strategies discussed above aim at avoiding this problem [28, 29, 62]. Thus, to the best of our knowledge, our results are the first ones to theoretically address the behavior of CE in high dimension.
## 2 Preliminary results
### Further notation
Recall the notation already introduced in Section 1.2.1: here, we complement this notation with further notation needed in the paper. In the sequel, a vector \(x\in\mathbb{R}^{d}\) will be considered as a column vector, and its coordinates will be written \(x(1),\ldots,x(d)\). For vectors, indices will refer to sequences, for instance, \(X_{1},\ldots,X_{n}\) will typically denote an i.i.d. sequence of \(\mathbb{R}^{d}\)-valued random variables drawn according to \(f\), and \(X_{i}(k)\) will denote \(X_{i}\)'s \(k\)-th coordinate. Let in the sequel \(\mathcal{M}_{d}\) denote the space of \(d\times d\) matrices, and recall that \(\mathcal{S}_{d}\) denotes the space of \(d\times d\) symmetric positive semi-definite matrices. For a matrix \(M\in\mathcal{M}_{d}\), we will write its entries either as \(M(i,j)\) or as \(M_{ij}\).
For \(x\in\mathbb{R}^{d}\) and \(M\in\mathcal{M}_{d}\), we denote by \(|x|\) and \(|M|\) the sum of the absolute values of its coordinates or entries:
\[|x|=\sum_{k=1}^{d}\lvert x(k)\rvert\ \ \text{and}\ \ |M|=\sum_{i,j}\lvert M_{ij}\rvert.\]
Note that we omit the dependency on the dimension, in that \(\lvert\cdot\rvert\) denotes the \(L_{1}\) norm in every dimension. This abuse of notation will be enforced throughout the paper since, most of the time, the dependency on \(d\) will be omitted in order to ease the notation. Let further \(\lVert x\rVert^{2}\) and \(\lVert M\rVert^{2}\) denote the sum of the squares of the coordinates or entries:
\[\lVert x\rVert^{2}=\sum_{k=1}^{d}x(k)^{2}\ \ \text{and}\ \ \lVert M\rVert^{2}=\sum_{i,j}M_{ij}^{2}.\]
Note that \(\lVert x\rVert\leq\lvert x\rvert\) and \(\lVert M\rVert\leq\lvert M\rvert\). Further, for \(M\) symmetric, \(\lVert M\rVert\) is its Frobenius norm, and we have \(\lVert M\rVert^{2}=\operatorname{tr}(MM^{\top})\). For \(M\in\mathcal{M}_{d}\) a square matrix, we denote by \(\det(M)\) its determinant, and if \(M\) is symmetric, we denote by \(\lambda_{1}(M)\leq\cdots\leq\lambda_{d}(M)\) its eigenvalues ranked in increasing order. We will use repeatedly and without notice the variational characterization of eigenvalues, which implies in particular that, for \(M\in\mathcal{M}_{d}\) symmetric,
\[\lambda_{1}(M)\lVert x\rVert^{2}\leq x^{\top}Mx\leq\lambda_{d}(M)\lVert x \rVert^{2},\ x\in\mathbb{R}^{d}.\]
Concerning the \(L_{1}\) matrix norm, we will use the following result.
**Lemma 2.1**.: _For \(\Sigma,\Sigma^{\prime}\in\mathcal{M}_{d}\) symmetric, we have \(\lvert\lambda_{1}(\Sigma)-\lambda_{1}(\Sigma^{\prime})\rvert\leq\lVert\Sigma- \Sigma^{\prime}\rVert\)._
Proof.: Let \(v\in\mathbb{R}^{d}\) with \(\lVert v\rVert=1\). Then we have
\[v^{\top}\Sigma v=v^{\top}(\Sigma-\Sigma^{\prime})v+v^{\top}\Sigma^{\prime}v \geq v^{\top}(\Sigma-\Sigma^{\prime})v+\lambda_{1}(\Sigma^{\prime}).\]
Moreover,
\[\left(v^{\top}(\Sigma-\Sigma^{\prime})v\right)^{2}\leq\max\left(\lambda_{1}( \Sigma-\Sigma^{\prime})^{2},\lambda_{d}(\Sigma-\Sigma^{\prime})^{2}\right) \leq\lVert\Sigma-\Sigma^{\prime}\rVert^{2}\]
and so
\[v^{\top}\Sigma v\geq\lambda_{1}(\Sigma^{\prime})-\lVert\Sigma-\Sigma^{\prime }\rVert.\]
Since this holds for any \(v\in\mathbb{R}^{d}\) with unit norm, this entails
\[\lambda_{1}(\Sigma)\geq\lambda_{1}(\Sigma^{\prime})-\lVert\Sigma-\Sigma^{ \prime}\rVert\]
which gives the result by symmetry between \(\Sigma\) and \(\Sigma^{\prime}\).
We define the function \(\Psi:\mathcal{S}_{d}\to\mathbb{R}_{+}\) by
\[\Psi(\Sigma)=\frac{1}{2}(\operatorname{tr}(\Sigma)-\log\det(\Sigma)-d). \tag{10}\]
Note that if \(\psi(x)=x-\log x-1\) for \(x>0\), then we have \(\Psi(\Sigma)=\frac{1}{2}\sum_{i=1}^{d}\psi(\lambda_{i}(\Sigma))\). As \(\psi\geq 0\) and \(\psi(x)=0\Leftrightarrow x=1\), this shows that \(\Psi(\Sigma)\geq 0\) and that \(\Psi(\Sigma)=0\Leftrightarrow\Sigma=I\), with \(I\) the identity matrix.
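For concreteness, the following short Python/NumPy sketch (purely illustrative, and not part of the formal development) evaluates \(\Psi\) both through the trace/log-determinant form (10) and through the eigenvalue form above; the test matrix is arbitrary.

```python
import numpy as np

def Psi(Sigma):
    """Psi(Sigma) = (tr(Sigma) - log det(Sigma) - d) / 2, cf. (10)."""
    d = Sigma.shape[0]
    sign, logdet = np.linalg.slogdet(Sigma)
    assert sign > 0, "Sigma must be positive definite"
    return 0.5 * (np.trace(Sigma) - logdet - d)

def Psi_eig(Sigma):
    """Equivalent form: (1/2) * sum_i psi(lambda_i) with psi(x) = x - log x - 1."""
    lam = np.linalg.eigvalsh(Sigma)
    return 0.5 * np.sum(lam - np.log(lam) - 1.0)

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
Sigma = A @ A.T / d + np.eye(d)      # an arbitrary positive-definite test matrix

print(Psi(Sigma), Psi_eig(Sigma))    # the two values coincide
print(Psi(np.eye(d)))                # Psi(I) = 0
```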
Given two density functions \(g\) and \(g^{\prime}\) on \(\mathbb{R}^{d}\), with \(g\) absolutely continuous with respect to \(g^{\prime}\), we define the Kullback-Leibler (KL) divergence between \(g\) and \(g^{\prime}\) by
\[D(g||g^{\prime})=\int g(x)\log(g(x)/g^{\prime}(x))dx=\mathbb{E}_{g}\left(\log \left(\frac{g(X)}{g^{\prime}(X)}\right)\right).\]
(Recall that \(X\) stands for a generic random variable, whose law/density is then indicated in the subscript of the expectation or measure; when the notation \(Y\) is used for a random variable, its law will always be specified.)
For \(B\subset\mathbb{R}^{d}\) measurable and \(g\) a density on \(\mathbb{R}^{d}\), we denote by \(\mu_{B}^{g}\) and \(\Sigma_{B}^{g}\) the mean and variance of \(g|_{B}\):
\[\mu_{B}^{g}=\mathbb{E}_{g|_{B}}(X)\ \ \text{and}\ \ \Sigma_{B}^{g}=\mathbb{V} \mathrm{ar}_{g|_{B}}(X). \tag{11}\]
When \(g=f\) (the standard Gaussian density), we omit the superscript and simply write
\[\mu_{B}=\mu_{B}^{f}\ \ \text{and}\ \ \Sigma_{B}=\Sigma_{B}^{f}. \tag{12}\]
Finally, we use \(\Rightarrow\) to denote convergence in distribution, and we say that a sequence \(X\) of real-valued random variables, implicitly indexed by the dimension \(d\), is bounded with high probability (whp) if there exists \(K\geq 0\) such that \(\mathbb{P}(|X|\leq K)\to 1\) as \(d\to\infty\). Thus if \(X\) is bounded whp, it is tight.
### Results from Chatterjee and Diaconis [17]
In order to study the high-dimensional efficiency of some sequence of auxiliary distributions, we will crucially rely on the recent results of Chatterjee and Diaconis [17]: these results show that it is enough to focus on the KL divergence and on the tail behavior of the log-likelihood. According to Theorem 1.1 in [17], for any measurable \(\phi:\mathbb{R}^{d}\to\mathbb{R}\) and any \(n\geq e^{D(f||g)}\), we have
\[\mathbb{E}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\ell(Y_{i})\phi( Y_{i})-\mathbb{E}_{f}(\phi(X))\right|\right)\leq\left(\mathbb{E}_{f}\left( \phi(X)^{2}\right)\right)^{1/2}\times\\ \left[\left(\frac{e^{D(f||g)}}{n}\right)^{1/4}+2\left(\mathbb{P}_ {f}\left(L(X)\geq\frac{1}{2}\log n+\frac{1}{2}D(f||g)\right)\right)^{1/2}\right] \tag{13}\]
where \(\ell=f/g\), \(L=\log\ell\) and the \(Y_{i}\)'s are i.i.d. \(\sim g\). When \(\phi\equiv 1\) and \(f\) is replaced by \(f|_{A}\) for some measurable set \(A\subset\mathbb{R}^{d}\), then (13) becomes, for \(n\geq e^{D(f|_{A}||g)}\),
\[\mathbb{E}\left(\left|\frac{1}{p_{f}(A)n}\sum_{i=1}^{n}\ell(Y_{i })\xi_{A}(Y_{i})-1\right|\right)\leq\left(\frac{e^{D(f|_{A}||g)}}{n}\right)^{ 1/4}\\ +2\left(\mathbb{P}_{f|_{A}}\left(L_{A}(X)\geq\frac{1}{2}\log n+ \frac{1}{2}D(f|_{A}||g)\right)\right)^{1/2} \tag{14}\]
with \(L_{A}=\log(f|_{A}/g)\). In the sequel, (13) will be referred to as the CD bound, while (14), which as we have just seen is simply a special case of (13), will be referred to as the conditional CD bound.
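To make the quantity controlled by the conditional CD bound (14) concrete, here is a minimal Python/NumPy sketch of the basic IS estimator of \(p_{f}(A)\), for an illustrative half-space \(A=\{x:x(1)>c\}\) and a mean-shifted Gaussian auxiliary density; the event, the shift and the sample size are arbitrary choices made only for illustration.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
d, n = 50, 20_000
c = 2.0                                    # A = {x in R^d : x(1) > c}
p_exact = 0.5 * erfc(c / sqrt(2.0))        # P_f(X(1) > c) under f = N(0, I)

mu = np.zeros(d); mu[0] = c                # auxiliary density g = N(mu, I), shifted along x(1)
Y = rng.standard_normal((n, d)) + mu       # i.i.d. samples from g

# log-likelihood ratio log(f/g); the normalizing constants cancel since both covariances are I
logl = -0.5 * np.sum(Y**2, axis=1) + 0.5 * np.sum((Y - mu)**2, axis=1)
est = np.mean(np.exp(logl) * (Y[:, 0] > c))     # (1/n) sum_i ell(Y_i) xi_A(Y_i)

print(p_exact, est, abs(est / p_exact - 1.0))   # relative error: the random quantity appearing in (14)
```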
An important insight from the CD bounds (13) and (14) is that in order to show that some auxiliary distribution \(g\) is efficient in high dimension for \(A\), it is sufficient to control its KL divergence with \(f\) and \(f|_{A}\), and also the tails of the log-likelihoods \(\log(f/g)\) and \(\log(f|_{A}/g)\) (under \(f\) and \(f|_{A}\), respectively). Recall in the next statement that \(g\) can be random.
**Lemma 2.2**.: _If \(D(f||g)\) is bounded whp and \(\mathbb{P}_{f}(L(X)-D(f||g)\geq t\mid g)\Rightarrow 0\) as \(d\rightarrow\infty\) for any sequence \(t=t(d)\rightarrow\infty\), then (2) holds._
_If \(D(f|_{A}||g)\) is bounded whp and \(\mathbb{P}_{f|_{A}}(L_{A}(X)-D(f|_{A}||g)\geq t\mid g)\Rightarrow 0\) as \(d\rightarrow\infty\) for any sequence \(t=t(d)\rightarrow\infty\), then (3) holds._
Proof.: We prove the result for a deterministic auxiliary density \(g\): the result for a random distribution follows by conditioning on \(g\). The second part of the lemma follows directly from the conditional CD bound (14) with \(t=(\log n-D(f|_{A}||g))/2\), which diverges to \(\infty\) since \(D(f|_{A}||g)\) is tight (because it is bounded whp). Note that we can invoke the bound (14) since \(n\geq e^{D(f|_{A}||g)}\) holds with high probability, again because \(D(f|_{A}||g)\) is tight. As for the first part of the lemma, the (unconditional) CD bound (13) with \(\phi\equiv 1\) implies by the same arguments that \(\frac{1}{n}\sum_{i=1}^{n}\ell(Y_{i})\to 1\) in \(L_{1}\), which implies (2) by Theorem 2.3 in [17].
**Remark 2.3**.: _A simpler condition for (2), which does not require to go through the CD bounds, is that there exists \(\alpha>1\) such that \(\sup_{d}\mathbb{E}_{g}(\ell(X)^{\alpha})<\infty\): under this condition, it is easy to prove that \(\frac{1}{n}\sum_{i}\ell(Y_{i})\) is tight and that \(\frac{1}{n}\max_{i}\ell(Y_{i})\Rightarrow 0\) (where the \(Y_{i}\)'s are i.i.d. distributed according to \(g\)), which readily implies (2). In Lemma 2.13 below, we will derive a bound on the \(\alpha\)-th moment of the likelihood ratio: as this bound also involves the terms \(D(f||g)\), \(\Sigma\) and \(\mu\), going through Lemma 2.13 rather than the CD bounds does not lead to any significant simplification of the arguments above._
### General formula for the Kullback-Leibler divergence
For the following result, recall that \(g|_{B}=g\xi_{B}/p_{g}(B)\) is the measure \(g\) conditioned on \(B\), and that \(\mu_{B}^{g}\) and \(\Sigma_{B}^{g}\) denote the mean and variance of \(g|_{B}\) (see (11)).
**Lemma 2.4**.: _Let \(g=N(\mu,\Sigma)\) and \(g^{\prime}=N(\mu^{\prime},\Sigma^{\prime})\) be two \(d\)-dimensional Gaussian distributions with \(\mu,\mu^{\prime}\in\mathbb{R}^{d}\) and \(\Sigma,\Sigma^{\prime}\in\mathcal{S}_{d}\), and let \(B\subset\mathbb{R}^{d}\) be any measurable set. Then we have_
\[D(g|_{B}||g^{\prime})=-\log p_{g}(B)-\Psi(\Sigma^{-1}\Sigma_{B}^ {g})-\frac{1}{2}\left(\mu-\mu_{B}^{g}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B}^{g}) \\ +\Psi(\Sigma^{\prime-1}\Sigma_{B}^{g})+\frac{1}{2}\left(\mu^{ \prime}-\mu_{B}^{g}\right)^{\top}\Sigma^{\prime-1}(\mu^{\prime}-\mu_{B}^{g}). \tag{15}\]
Proof.: By definition, we have
\[D(g|_{B}||g^{\prime})=\mathbb{E}_{g|_{B}}\left(\log\left(\frac{g|_{B}(X)}{g^{ \prime}(X)}\right)\right)=\mathbb{E}_{g|_{B}}\left(\log\left(\frac{g(X)}{p_{g} (B)g^{\prime}(X)}\right)\right)\]
using for the second equality that the random variable \(\xi_{B}(X)\) is \(\mathbb{P}_{g|_{B}}\)-almost surely equal to \(1\). Continuing, we get
\[D(g|_{B}||g^{\prime})=-\log p_{g}(B)+\mathbb{E}_{g|_{B}}\left(\log\left(g(X) \right)\right)-\mathbb{E}_{g|_{B}}\left(\log\left(g^{\prime}(X)\right)\right). \tag{16}\]
We have
\[\mathbb{E}_{g|_{B}}\left(\log\left(g^{\prime}(X)\right)\right)=- \frac{d}{2}\log(2\pi)-\frac{1}{2}\log\det(\Sigma^{\prime})\\ -\frac{1}{2}\mathbb{E}_{g|_{B}}\left((X-\mu^{\prime})^{\top} \Sigma^{\prime-1}(X-\mu^{\prime})\right). \tag{17}\]
Using the identity \(\operatorname{tr}(xy^{\top})=x^{\top}y\) and the linearity of the trace and the expectation, which makes them commute, we obtain
\[\mathbb{E}_{g|_{B}}\left((X-\mu^{\prime})^{\top}\Sigma^{\prime-1}(X-\mu^{ \prime})\right)=\operatorname{tr}\left[\Sigma^{\prime-1}\mathbb{E}_{g|_{B}} \left((X-\mu^{\prime})(X-\mu^{\prime})^{\top}\right)\right].\]
Further, since \(\Sigma_{B}^{g}\) is the variance of \(X\) under \(\mathbb{P}_{g|_{B}}\) and \(\mu_{B}^{g}\) its mean, we have
\[\mathbb{E}_{g|_{B}}\left((X-\mu^{\prime})(X-\mu^{\prime})^{\top}\right)= \Sigma_{B}^{g}+\left(\mu_{B}^{g}-\mu^{\prime}\right)\left(\mu_{B}^{g}-\mu^{ \prime}\right)^{\top}\]
and so (using again \(\operatorname{tr}(Vxx^{\top})=x^{\top}Vx\))
\[\mathbb{E}_{g|_{B}}\left((X-\mu^{\prime})^{\top}\Sigma^{\prime-1} (X-\mu^{\prime})\right) =\operatorname{tr}\left[\Sigma^{\prime-1}\left(\Sigma_{B}^{g}+ \left(\mu_{B}^{g}-\mu^{\prime}\right)\left(\mu_{B}^{g}-\mu^{\prime}\right)^{ \top}\right)\right]\] \[=\operatorname{tr}\left(\Sigma^{\prime-1}\Sigma_{B}^{g}\right)+ \left(\mu_{B}^{g}-\mu^{\prime}\right)^{\top}\Sigma^{\prime-1}(\mu_{B}^{g}-\mu^ {\prime}).\]
Plugging in this relation into (17), we obtain
\[\mathbb{E}_{g|_{B}}\left(\log\left(g^{\prime}(X)\right)\right)=-\frac{d}{2}\log(2\pi)-\frac{1}{2}\log\det(\Sigma^{\prime})\\ -\frac{1}{2}\operatorname{tr}\left(\Sigma^{\prime-1}\Sigma_{B}^{g}\right)-\frac{1}{2}(\mu_{B}^{g}-\mu^{\prime})^{\top}\Sigma^{\prime-1}(\mu_{B}^{g}-\mu^{\prime})\]
and going back to the definition (10) of \(\Psi\), this gives
\[\mathbb{E}_{g|_{B}}\left(\log\left(g^{\prime}(X)\right)\right)=-\frac{d}{2}\log(2\pi)-\Psi(\Sigma^{\prime-1}\Sigma_{B}^{g})-\frac{1}{2}\log\det(\Sigma_{B}^{g})-\frac{d}{2}\\ -\frac{1}{2}(\mu_{B}^{g}-\mu^{\prime})^{\top}\Sigma^{\prime-1}(\mu_{B}^{g}-\mu^{\prime}).\]
Since this formula is valid for any \(\mu^{\prime}\) and \(\Sigma^{\prime}\), it is also valid for \(\mu^{\prime}=\mu\) and \(\Sigma^{\prime}=\Sigma\), and for this choice it gives
\[\mathbb{E}_{g|_{B}}\left(\log\left(g(X)\right)\right)=-\frac{d}{2}\log(2\pi)-\Psi(\Sigma^{-1}\Sigma_{B}^{g})-\frac{1}{2}\log\det(\Sigma_{B}^{g})-\frac{d}{2}\\ -\frac{1}{2}(\mu_{B}^{g}-\mu)^{\top}\Sigma^{-1}(\mu_{B}^{g}-\mu).\]
Plugging in the two previous relations into (16) leads to (15) as desired.
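As a numerical sanity check of identity (15) (again purely illustrative), the following Python/NumPy snippet compares a direct Monte-Carlo estimate of \(D(g|_{B}||g^{\prime})\) with the right-hand side of (15), for an arbitrary half-space \(B=\{x:x(1)>0\}\) and arbitrary test parameters; rejection sampling from \(g|_{B}\) is only viable here because \(p_{g}(B)\) is not small.

```python
import numpy as np

def log_mvn(x, mu, Sigma):
    """Log-density of N(mu, Sigma) at the rows of x."""
    d = len(mu)
    _, logdet = np.linalg.slogdet(Sigma)
    diff = x - mu
    quad = np.sum(diff * np.linalg.solve(Sigma, diff.T).T, axis=1)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + quad)

def Psi(S):
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (np.trace(S) - logdet - S.shape[0])

rng = np.random.default_rng(2)
d, n = 3, 200_000
mu  = np.array([0.3, -0.2, 0.1])
mu2 = np.array([0.0,  0.5, -0.4])
A1, A2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
Sigma  = A1 @ A1.T / d + np.eye(d)         # covariance of g
Sigma2 = A2 @ A2.T / d + np.eye(d)         # covariance of g'

X = rng.multivariate_normal(mu, Sigma, size=n)
inB = X[:, 0] > 0.0                        # B = {x : x(1) > 0}
XB = X[inB]                                # rejection sampling from g|_B
p_gB = inB.mean()
mu_B, Sig_B = XB.mean(axis=0), np.cov(XB, rowvar=False, bias=True)

# direct Monte-Carlo estimate of D(g|_B || g')
D_mc = np.mean(log_mvn(XB, mu, Sigma) - np.log(p_gB) - log_mvn(XB, mu2, Sigma2))

# right-hand side of (15), with the Monte-Carlo moments of g|_B plugged in
Si, S2i = np.linalg.inv(Sigma), np.linalg.inv(Sigma2)
rhs = (-np.log(p_gB) - Psi(Si @ Sig_B) - 0.5 * (mu - mu_B) @ Si @ (mu - mu_B)
       + Psi(S2i @ Sig_B) + 0.5 * (mu2 - mu_B) @ S2i @ (mu2 - mu_B))

print(D_mc, rhs)     # the two values agree up to Monte-Carlo error
```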
**Corollary 2.5**.: _Let \(g=N(\mu,\Sigma)\) with \(\mu\in\mathbb{R}^{d}\) and \(\Sigma\in\mathcal{S}_{d}\). Then for any measurable set \(B\in\mathbb{R}^{d}\), we have_
\[p_{f}(B)\geq p_{g}(B)\exp\left(-\Psi(\Sigma_{B}^{g})-\frac{1}{2}\|\mu_{B}^{g} \|^{2}\right).\]
Proof.: We have
\[p_{f}(B) =\mathbb{P}_{f}(X\in B)\] \[=\mathbb{E}_{g}\left(\frac{f(X)}{g(X)}\xi_{B}(X)\right)\] \[=\mathbb{P}_{g}(X\in B)\mathbb{E}_{g}\left(\frac{f(X)}{g(X)}\mid X\in B\right)\] \[=\mathbb{E}_{g|_{B}}\left(\frac{f(X)}{g|_{B}(X)}\right)\] \[=\mathbb{E}_{g|_{B}}\left(\exp\left(\log\left(\frac{f(X)}{g|_{B}(X)}\right)\right)\right).\]
Using Jensen's inequality with the convex function \(\exp\), we obtain
\[p_{f}(B)\geq\exp\left\{\mathbb{E}_{g|_{B}}\left(\log\left(\frac{f(X)}{g|_{B}( X)}\right)\right)\right\}=\exp\left(-D(g|_{B}||f)\right).\]
Applying (15) with \(g^{\prime}=f\), we see that
\[D(g|_{B}||f)=-\log p_{g}(B)-\Psi(\Sigma^{-1}\Sigma_{B}^{g})- \frac{1}{2}\left(\mu-\mu_{B}^{g}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B}^{g})\\ +\Psi(\Sigma_{B}^{g})+\frac{1}{2}\|\mu_{B}^{g}\|^{2}\]
and so
\[D(g|_{B}||f)\leq-\log p_{g}(B)+\Psi(\Sigma_{B}^{g})+\frac{1}{2}\|\mu_{B}^{g}\| ^{2}.\]
Plugging this inequality in the inequality \(p_{f}(B)\geq e^{-D(g|_{B}||f)}\) derived above gives the result.
### Results on the function \(\Psi\)
In this section we gather useful results on \(\Psi(\Sigma)\).
**Lemma 2.6**.: _There exist two families of positive constants \(\{c_{\varepsilon,K}^{\pm}:\varepsilon,K\in(0,\infty)\}\), independent of the dimension \(d\), such that for any \(d\geq 1\) and any \(\Sigma\in\mathcal{S}_{d}\), the following implication holds for any \(\varepsilon,K\in(0,\infty)\):_
\[\varepsilon\leq\lambda_{1}(\Sigma)\leq\lambda_{d}(\Sigma)\leq K \Longrightarrow c_{\varepsilon,K}^{-}\|\Sigma-I\|^{2}\leq\Psi(\Sigma)\leq c_{ \varepsilon,K}^{+}\|\Sigma-I\|^{2}.\]
Proof.: Since \(\psi(x)\sim\frac{1}{2}(1-x)^{2}\) for \(x\to 1\), for each \(\varepsilon,K\in(0,\infty)\), there exist \(c_{\varepsilon,K}^{-}\leq c_{\varepsilon,K}^{+}\) such that \(c_{\varepsilon,K}^{-}(1-x)^{2}\leq\psi(x)\leq c_{\varepsilon,K}^{+}(1-x)^{2}\) for any \(x\in[\varepsilon,K]\). This gives the result since \(\Psi(\Sigma)=\frac{1}{2}\sum_{i}\psi(\lambda_{i}(\Sigma))\) and \(\|\Sigma-I\|^{2}=\sum_{i}(\lambda_{i}(\Sigma)-1)^{2}\).
For the next statement, recall that a sequence of real-valued random variables \(X\) is said to be bounded whp if \(\mathbb{P}(|X|\leq K)\to 1\) for some \(K\geq 0\).
**Lemma 2.7**.: _For each \(d\) consider \(\Sigma\in\mathcal{S}_{d}\) possibly random. Then the following three conditions are equivalent:_
1. \(\Psi(\Sigma)\) _is bounded whp;_
2. \(\Psi(\Sigma^{-1})\) _is bounded whp;_
3. _the three sequences_ \(1/\lambda_{1}(\Sigma)\)_,_ \(\lambda_{d}(\Sigma)\) _and_ \(\|\Sigma-I\|\) _are bounded whp._
Proof.: Let us first prove these equivalences with almost surely bounded instead of bounded whp: at the end of the proof, we will explain how to go from almost surely to whp. Let us first show that \(1\Rightarrow 3\), so assume that \(\Psi(\Sigma)\) is almost surely bounded and let us show that \(1/\lambda_{1}(\Sigma)\), \(\lambda_{d}(\Sigma)\) and \(\|\Sigma-I\|\) are almost surely bounded. We have \(\Psi(\Sigma)\geq\frac{1}{2}\psi(\lambda_{1}(\Sigma))\) and so \(\sup_{d}\psi(\lambda_{1}(\Sigma))<\infty\), and so necessarily \(\inf_{d}\lambda_{1}(\Sigma)>0\) because \(\psi(x)\to\infty\) as \(x\to 0\). The same argument implies \(\sup_{d}\lambda_{d}(\Sigma)<\infty\). And since \(\lambda_{1}(\Sigma)\) and \(\lambda_{d}(\Sigma)\) are bounded away from \(0\) and \(\infty\), the boundedness of \(\|\Sigma-I\|\) comes from Lemma 2.6.
The implication \(3\Rightarrow 1\) is immediate in view of Lemma 2.6.
To conclude, note that
\[\|\Sigma^{-1}-I\|^{2}=\sum_{i}\left(\frac{1}{\lambda_{i}(\Sigma)}-1\right)^{2 }\leq\frac{1}{\lambda_{1}(\Sigma)^{2}}\|\Sigma-I\|^{2}. \tag{18}\]
In particular, \(3\) implies \(\sup_{d}\|\Sigma^{-1}-I\|<\infty\) and so we can invoke the implication \(3\Rightarrow 1\) for \(\Sigma^{-1}\), which shows that \(3\Rightarrow 2\). The implication \(2\Rightarrow 3\) follows for similar reasons.
Let us now explain how to go from almost sure to whp. Let us for instance show that \(1\Rightarrow 2\), the arguments for the other implications are the same. Let \(K\geq 0\) such that \(\mathbb{P}(\Psi(\Sigma)\leq K)\to 1\). Then under \(\mathbb{P}(\cdot\mid\Psi(\Sigma)\leq K)\), \(\Psi(\Sigma)\) is almost surely bounded (by \(K\)) and so we have proved that \(\Psi(\Sigma^{-1})\) is almost surely bounded, i.e., there exists \(K^{\prime}\) such that \(\mathbb{P}(\Psi(\Sigma^{-1})\leq K^{\prime}\mid\Psi(\Sigma)\leq K)=1\). Writing
\[\mathbb{P}(\Psi(\Sigma^{-1})\leq K^{\prime})=\mathbb{P}(\Psi( \Sigma)\leq K)\mathbb{P}(\Psi(\Sigma^{-1}) \leq K^{\prime}\mid\Psi(\Sigma)\leq K)\\ +\mathbb{P}(\Psi(\Sigma^{-1})\leq K^{\prime},\Psi(\Sigma)>K)\]
and noting that \(\mathbb{P}(\Psi(\Sigma)\leq K)\to 1\), \(\mathbb{P}(\Psi(\Sigma^{-1})\leq K^{\prime}\mid\Psi(\Sigma)\leq K)=1\) and \(\mathbb{P}(\Psi(\Sigma^{-1})\leq K^{\prime},\Psi(\Sigma)>K)\to 0\), we obtain that \(\mathbb{P}(\Psi(\Sigma^{-1})\leq K^{\prime})\to 1\) as desired.
**Lemma 2.8**.: _For each \(d\geq 1\), let \(\Sigma,\Sigma^{\prime}\in\mathcal{S}_{d}\) possibly random. If \(\Psi(\Sigma)\) and \(\Psi(\Sigma^{\prime})\) are bounded whp, then \(\Psi(\Sigma\Sigma^{\prime})\) is bounded whp._
Proof.: As in Lemma 2.7, it is enough to prove the results with almost surely bounded instead of bounded whp. For simplicity, the almost surely quantifiers are left out. So assume that \(\Psi(\Sigma)\) and \(\Psi(\Sigma^{\prime})\) are bounded, and let us show that \(\Psi(\Sigma\Sigma^{\prime})\) is bounded. Lemma 2.7 implies that the sequences \(\lambda_{d}(\Sigma)\), \(1/\lambda_{1}(\Sigma)\) and \(\|\Sigma-I\|\) are bounded, and the same holds with \(\Sigma^{\prime}\) instead of \(\Sigma\). Since \(\lambda_{d}(\Sigma)\) is the matrix-norm induced by the \(L_{2}\)-norm on \(\mathbb{R}^{d}\), it is submultiplicative, and so \(\lambda_{d}(\Sigma\Sigma^{\prime})\leq\lambda_{d}(\Sigma)\lambda_{d}(\Sigma^{ \prime})\), which implies \(\lambda_{1}(\Sigma\Sigma^{\prime})\geq\lambda_{1}(\Sigma)\lambda_{1}(\Sigma^{ \prime})\) since \(\lambda_{1}(\Sigma)=1/\lambda_{d}(\Sigma^{-1})\). Therefore, \(\lambda_{d}(\Sigma\Sigma^{\prime})\) and \(1/\lambda_{1}(\Sigma\Sigma^{\prime})\) are bounded. Moreover, since
\[\|\Sigma\Sigma^{\prime}-I\|=\|(\Sigma-I)(\Sigma^{\prime}-I)+\Sigma^{\prime}-I+ \Sigma-I\|,\]
the triangle inequality and the sub-multiplicativity of the Frobenius norm imply that
\[\|\Sigma\Sigma^{\prime}-I\|\leq\|\Sigma-I\|\|\Sigma^{\prime}-I\|+\|\Sigma-I\| +\|\Sigma^{\prime}-I\|\]
and so \(\|\Sigma\Sigma^{\prime}-I\|\) is bounded. Lemma 2.7 implies that \(\Psi(\Sigma\Sigma^{\prime})\) is bounded.
**Corollary 2.9**.: _Let \(g=N(\mu,\Sigma)\) with \(\mu\in\mathbb{R}^{d}\) and \(\Sigma\in\mathcal{S}_{d}\), \(B\subset\mathbb{R}^{d}\) measurable and \(g_{B}=N(\mu_{B}^{g},\Sigma_{B}^{g})\); \(\mu\), \(\Sigma\) and \(B\) may be random. If \(\|\mu\|\), \(\Psi(\Sigma)\) and \(1/p_{g}(B)\) are bounded whp, then \(D(g|_{B}||g_{B})\), \(\Psi(\Sigma_{B}^{g})\) and \(\|\mu_{B}^{g}\|\) are bounded whp._
Proof.: As in the previous two proofs, we prove the results with bounded instead of bounded whp. If we apply (15) with \(\mu^{\prime}=\mu_{B}^{g}\) and \(\Sigma^{\prime}=\Sigma_{B}^{g}\), we obtain
\[D(g|_{B}||g_{B})=-\log p_{g}(B)-\Psi(\Sigma^{-1}\Sigma_{B}^{g})-\frac{1}{2} \left(\mu-\mu_{B}^{g}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B}^{g}).\]
Since \(\Psi\geq 0\), we see that \(D(g|_{B}||g_{B})\leq-\log p_{g}(B)\) which gives the boundedness of \(D(g|_{B}||g_{B})\). Moreover, since \(D(g|_{B}||g_{B})\geq 0\) we obtain
\[\Psi(\Sigma^{-1}\Sigma_{B}^{g})+\frac{1}{2}\left(\mu-\mu_{B}^{g}\right)^{\top }\Sigma^{-1}(\mu-\mu_{B}^{g})\leq-\log p_{g}(B).\]
This shows that \(\Psi(\Sigma^{-1}\Sigma_{B}^{g})\) is bounded. But \(\Psi(\Sigma)\) is assumed to be bounded, and so \(\Psi(\Sigma^{-1})\) is bounded by Lemma 2.7, which implies that \(\Psi(\Sigma_{B}^{g})\) is bounded by Lemma 2.8. Likewise, the boundedness of \(\left(\mu-\mu_{B}^{g}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B}^{g})\) implies that of \(\|\mu_{B}^{g}\|\) because:
\[\left(\mu-\mu_{B}^{g}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B}^{g})\geq\frac{1}{ \lambda_{d}(\Sigma)}\|\mu-\mu_{B}^{g}\|^{2}\geq\frac{1}{\lambda_{d}(\Sigma)} \left(\|\mu_{B}^{g}\|-\|\mu\|\right)^{2}.\]
Since \(\lambda_{d}(\Sigma)\) is bounded (by Lemma 2.7, because \(\Psi(\Sigma)\) is), and \(\|\mu\|\) is bounded by assumption, the boundedness of \(\left(\mu-\mu_{B}^{g}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B}^{g})\) indeed implies that of \(\|\mu_{B}^{g}\|\) by the inequality of the previous display.
### Bound on the tail of the log-likelihoods
In the next statement recall that \(f=N(0,I)\) is the standard Gaussian density in dimension \(d\) and that \(f|_{B}=f\xi_{B}/p_{f}(B)\) is the density \(f\) conditioned on \(B\) with mean \(\mu_{B}\) and variance \(\Sigma_{B}\) (see (12)).
**Lemma 2.10**.: _For \(B\subset\mathbb{R}^{d}\) measurable, \(y\in\mathbb{R}^{d}\) and \(V\in\mathcal{M}_{d}\) symmetric, we have_
\[\mathbb{V}\mathrm{ar}_{f|_{B}}(y^{\top}X)\leq\lambda_{d}(\Sigma_{B})\|y\|^{2} \ \text{ and }\ \mathbb{V}\mathrm{ar}_{f|_{B}}\left(X^{\top}VX\right)\leq\frac{2}{p_{f}(B)}\|V\|^{2}.\]
Proof.: The first inequality follows from the fact that \(\mathbb{V}\mathrm{ar}_{f|_{B}}(y^{\top}X)=y^{\top}\Sigma_{B}y\) and the variational characterization of eigenvalues. Let us prove the second inequality. First, note that for any function \(h:\mathbb{R}^{d}\to\mathbb{R}\), we have \(\mathbb{V}\mathrm{ar}_{f|_{B}}(h(X))\leq\frac{1}{p_{f}(B)}\mathbb{V}\mathrm{ar }_{f}(h(X))\). Indeed, we have
\[\mathbb{V}\mathrm{ar}_{f|_{B}}(h(X))=\mathbb{E}_{f|_{B}}\left[(h(X)-\mathbb{E}_{f|_{B}}(h(X)))^{2}\right]\leq\mathbb{E}_{f|_{B}}\left[(h(X)-\mathbb{E}_{f}(h(X)))^{2}\right]\]
where the last inequality follows by the variational characterization of the mean. By definition of \(f|_{B}\), we have
\[\mathbb{E}_{f|_{B}}\left[(h(X)-\mathbb{E}_{f}(h(X)))^{2}\right]=\mathbb{E}_{f }\left[(h(X)-\mathbb{E}_{f}(h(X)))^{2}\mid X\in B\right]\]
from which the desired inequality \(\mathbb{V}\mathrm{ar}_{f|_{B}}(h(X))\leq\frac{1}{p_{f}(B)}\mathbb{V}\mathrm{ar}_{f }(h(X))\) readily follows. In particular,
\[\mathbb{V}\mathrm{ar}_{f|_{B}}\left(X^{\top}VX\right)\leq\frac{1}{p_{f}(B)} \mathbb{V}\mathrm{ar}_{f}\left(X^{\top}VX\right).\]
Write \(V=U^{\top}\Delta U\) with \(U\) orthonormal and \(\Delta\) the diagonal matrix with diagonal elements the \(\lambda_{i}(V)\)'s, so that \(X^{\top}VX=(UX)^{\top}\Delta(UX)\). Under \(\mathbb{P}_{f}\), \(X\) is standard Gaussian and since \(U\) is orthonormal, \(UX\) is also standard Gaussian, so that
\[\mathbb{V}\mathrm{ar}_{f}(X^{\top}VX)=\mathbb{V}\mathrm{ar}_{f}(X^{\top}\Delta X).\]
Since \(X^{\top}\Delta X=\sum_{i}\lambda_{i}(V)X(i)^{2}\) with, under \(\mathbb{P}_{f}\), the \(X(i)\)'s i.i.d., we obtain
\[\mathbb{V}\mathrm{ar}_{f}(X^{\top}VX)=\mathbb{V}\mathrm{ar}_{f}(X(1)^{2})\sum _{i}\lambda_{i}(V)^{2}=2\|V\|^{2}\]
using for the last equality that \(\sum_{i}\lambda_{i}(V)^{2}=\|V\|^{2}\) and \(\mathbb{V}\mathrm{ar}_{f}(X(1)^{2})=2\). This proves the result.
**Corollary 2.11**.: _Let \(B\subset\mathbb{R}^{d}\) measurable, \(g=N(\mu,\Sigma)\) with \(\mu\in\mathbb{R}^{d}\) and \(\Sigma\in\mathcal{S}_{d}\) and \(L=\log(f|_{B}/g)\). Then for any \(t>0\) we have_
\[\mathbb{P}_{f|_{B}}(L(X)-\mathbb{E}_{f|_{B}}(L(X))\geq t)\leq\frac{4}{t^{2}} \left(\frac{2\|\Sigma^{-1}-I\|^{2}}{p_{f}(B)}+\frac{\lambda_{d}(\Sigma_{B})}{ \lambda_{1}(\Sigma)^{2}}\|\mu\|^{2}\right). \tag{19}\]
Proof.: For \(x\in B\), we have
\[L(x) =\log(f|_{B}(x)/g(x))\] \[=-\log p_{f}(B)-\frac{1}{2}\|x\|^{2}+\frac{1}{2}\log\det(\Sigma)+ \frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\] \[=-\log p_{f}(B)+\frac{1}{2}\mu^{\top}\Sigma^{-1}\mu+\frac{1}{2} \log\det(\Sigma)+\frac{1}{2}x^{\top}(\Sigma^{-1}-I)x-x^{\top}\Sigma^{-1}\mu.\]
Let \(Z_{1}=\frac{1}{2}X^{\top}(\Sigma^{-1}-I)X\) and \(Z_{2}=-X^{\top}\Sigma^{-1}\mu\), and \(\bar{Z}_{i}=Z_{i}-\mathbb{E}_{f|_{B}}(Z_{i})\) for \(i=1,2\) be their centered versions: then \(L(X)-\mathbb{E}_{f|_{B}}(L(X))=\bar{Z}_{1}+\bar{Z}_{2}\) and so
\[\mathbb{P}_{f|_{B}}(L(X)-\mathbb{E}_{f|_{B}}(L(X))\geq t) =\mathbb{P}_{f|_{B}}(\bar{Z}_{1}+\bar{Z}_{2}\geq t)\] \[\leq\frac{4}{t^{2}}\left(\mathbb{V}\mathrm{ar}_{f|_{B}}(Z_{1})+ \mathbb{V}\mathrm{ar}_{f|_{B}}(Z_{2})\right)\]
and so Lemma 2.10 gives
\[\mathbb{P}_{f|_{B}}(L(X)-\mathbb{E}_{f|_{B}}(L(X))\geq t)\leq\frac{4}{t^{2}} \left(\frac{2}{p_{f}(B)}\|\Sigma^{-1}-I\|^{2}+\lambda_{d}(\Sigma_{B})\|\Sigma ^{-1}\mu\|^{2}\right).\]
The result thus follows from the fact that \(\|\Sigma^{-1}\mu\|^{2}=\mu^{\top}\Sigma^{-2}\mu\leq\lambda_{d}(\Sigma^{-2})\| \mu\|^{2}\) and \(\lambda_{d}(\Sigma^{-2})=1/\lambda_{1}(\Sigma)^{2}\).
When \(B=\mathbb{R}^{d}\), we will sometimes need the following strengthening of Corollary 2.11. In the sequel, let \(\alpha_{*}(\Sigma)\) for \(\Sigma\in\mathcal{S}_{d}\) be defined as follows (with the convention that \(\frac{\lambda_{1}(\Sigma)}{1-\lambda_{1}(\Sigma)}=+\infty\) when \(\lambda_{1}(\Sigma)\geq 1\)):
\[\alpha_{*}(\Sigma)=\min\left(1,\frac{\lambda_{1}(\Sigma)}{1-\lambda_{1}(\Sigma )}\right)=\left\{\begin{array}{cl}\frac{\lambda_{1}(\Sigma)}{1-\lambda_{1} (\Sigma)}&\text{ if }\lambda_{1}(\Sigma)<\frac{1}{2},\\ 1&\text{ else}\end{array}\right. \tag{20}\]
**Lemma 2.12**.: _If \(\Sigma\in\mathcal{S}_{d}\) and \(\alpha<\alpha_{*}(\Sigma)\), then \((\alpha+1)I-\alpha\Sigma^{-1}\in\mathcal{S}_{d}\)._
Proof.: Let \(W=(\alpha+1)I-\alpha\Sigma^{-1}\): by definition, it is symmetric and so we only have to show that \(\lambda_{1}(W)>0\). We have
\[\lambda_{1}(W)=\alpha+1+\lambda_{1}(-\alpha\Sigma^{-1})=\alpha+1-\alpha \lambda_{d}(\Sigma^{-1})=\alpha+1-\frac{\alpha}{\lambda_{1}(\Sigma)}\]
and so
\[\lambda_{1}(W)=1+\frac{\lambda_{1}(\Sigma)-1}{\lambda_{1}(\Sigma)}\alpha=\frac {1-\lambda_{1}(\Sigma)}{\lambda_{1}(\Sigma)}\left(\frac{\lambda_{1}(\Sigma)}{1 -\lambda_{1}(\Sigma)}-\alpha\right).\]
The first equality clearly shows that \(\lambda_{1}(W)>0\) if \(\lambda_{1}(\Sigma)\geq 1\). For \(\lambda_{1}(\Sigma)<1/2\), the second equality can be rewritten as \(\lambda_{1}(W)=(\alpha_{*}(\Sigma)-\alpha)/\alpha_{*}(\Sigma)\) which is \(>0\). Finally, for \(\lambda_{1}(\Sigma)\in[1/2,1)\), we have \(\frac{\lambda_{1}(\Sigma)}{1-\lambda_{1}(\Sigma)}\geq 1=\alpha_{*}(\Sigma)\) and so, using that \((1-\lambda_{1}(\Sigma))/\lambda_{1}(\Sigma)>0\), the second equality leads to
\[\lambda_{1}(W)\geq\frac{1-\lambda_{1}(\Sigma)}{\lambda_{1}(\Sigma)}(1-\alpha) =\frac{1-\lambda_{1}(\Sigma)}{\lambda_{1}(\Sigma)}(\alpha_{*}(\Sigma)-\alpha) >0.\]
This proves the result.
**Lemma 2.13**.: _Let \(g=N(\mu,\Sigma)\) with \(\mu\in\mathbb{R}^{d}\) and \(\Sigma\in\mathcal{S}_{d}\) and \(L=\log(f/g)\). Then for every \(\alpha<\alpha^{\prime}<\alpha_{*}(\Sigma)\), we have_
\[\mathbb{E}_{f}\left[\left(\frac{f(X)}{g(X)}\right)^{\alpha}\right] \\ \leq\exp\left(\alpha D(f||g)+\frac{1}{2}q\alpha^{2}\|\Sigma^{-1}\mu\|^{2}+\frac{\alpha}{\alpha^{\prime}}\Psi((\alpha^{\prime}+1)I-\alpha^{\prime}\Sigma^{-1})\right) \tag{21}\]
_where \(q=\alpha^{\prime}/(\alpha^{\prime}-\alpha)\)._
Proof.: Let \(W=(\alpha^{\prime}+1)I-\alpha^{\prime}\Sigma^{-1}\), which belongs to \(\mathcal{S}_{d}\) by Lemma 2.12 (so that \(\Psi(W)\) is well defined). Let \(\bar{Z}_{1}=\frac{1}{2}X^{\top}(\Sigma^{-1}-I)X-\frac{1}{2}\mathrm{tr}(\Sigma^ {-1}-I)\) and \(\bar{Z}_{2}=-X^{\top}\Sigma^{-1}\mu\): proceeding similarly as in the proof of Corollary 2.11, we see that \(L(X)-\mathbb{E}_{f}(L(X))=\bar{Z}_{1}+\bar{Z}_{2}\) and so
\[\mathbb{E}_{f}\left[\left(\frac{f(X)}{g(X)}\right)^{\alpha}\right] =\mathbb{E}_{f}\left[\exp\left(\alpha L(X)\right)\right]\] \[=e^{\alpha D(f||g)}\mathbb{E}_{f}\left[\exp\left(\alpha(L(X)-D( f||g))\right)\right]\] \[=e^{\alpha D(f||g)}\mathbb{E}_{f}\left(e^{\alpha\bar{Z}_{1}}e^{ \alpha\bar{Z}_{2}}\right).\]
Let \(p=\alpha^{\prime}/\alpha\) and \(q=p/(p-1)=\alpha^{\prime}/(\alpha^{\prime}-\alpha)\): then \(1/p+1/q=1\) and so Holder's inequality gives
\[\mathbb{E}_{f}\left[\left(\frac{f(X)}{g(X)}\right)^{\alpha}\right]\leq e^{ \alpha D(f||g)}\left\{\mathbb{E}_{f}\left(e^{p\alpha\bar{Z}_{1}}\right) \right\}^{1/p}\left\{\mathbb{E}_{f}\left(e^{q\alpha\bar{Z}_{2}}\right)\right\} ^{1/q}.\]
Recall that \(\bar{Z}_{2}=-X^{\top}\Sigma^{-1}\mu\): since \(\mathbb{E}_{f}(e^{x^{\top}X})=e^{\frac{1}{2}\|x\|^{2}}\) for any \(x\in\mathbb{R}^{d}\), we obtain
\[\left\{\mathbb{E}_{f}(e^{q\alpha\bar{Z}_{2}})\right\}^{1/q}=\left\{\mathbb{E}_ {f}(e^{-q\alpha\mu^{\top}\Sigma^{-1}X})\right\}^{1/q}=e^{\frac{1}{2q}\|q\alpha \mu^{\top}\Sigma^{-1}\|^{2}}=e^{\frac{1}{2}q\alpha^{2}\mu^{\top}\Sigma^{-2}\mu}.\]
Let us now control the exponential moment of \(\bar{Z}_{1}\). We have
\[\mathbb{E}_{f}(e^{p\alpha\bar{Z}_{1}}) =\mathbb{E}_{f}(e^{\alpha^{\prime}\bar{Z}_{1}})\] \[=e^{-\frac{1}{2}\alpha^{\prime}\text{tr}(\Sigma^{-1}-I)}\mathbb{E}_{f}(e^{\frac{1}{2}\alpha^{\prime}X^{\top}(\Sigma^{-1}-I)X})\] \[=e^{\frac{1}{2}\text{tr}(W-I)}\int\frac{1}{(2\pi)^{d/2}}e^{\frac{1}{2}\alpha^{\prime}x^{\top}(\Sigma^{-1}-I)x-\frac{1}{2}x^{\top}x}\,dx\] \[=e^{\frac{1}{2}\text{tr}(W-I)}\int\frac{1}{(2\pi)^{d/2}}e^{-\frac{1}{2}x^{\top}Wx}\,dx.\]
Since we have seen that \(W\in\mathcal{S}_{d}\), we have
\[\int\frac{\det(W)^{1/2}}{(2\pi)^{d/2}}e^{-\frac{1}{2}x^{\top}Wx}\,dx=1\]
and so
\[\left\{\mathbb{E}(e^{p\alpha\bar{Z}_{1}})\right\}^{1/p}=\exp\left(\frac{1}{2p }\text{tr}(W-I)-\frac{1}{2p}\log\text{det}(W)\right)=e^{\frac{1}{p}\Psi(W)}.\]
Gathering the previous bounds leads to the desired result.
### A sufficient condition for high-dimensional efficiency
The following result identifies conditions under which (2) and (3) hold for a Gaussian density \(g=N(\mu,\Sigma)\). It shows in particular that (3) is slightly more demanding than (2): for (2), it is enough that \(\Psi(\Sigma)\) and \(\|\mu\|\) are bounded whp (note in particular that this condition does not depend on \(A\)), and for (3), one needs in addition that \(1/p_{f}(A)\) is bounded.
An intuitive interpretation of these conditions is as follows. Since
\[D(f||g)=\Psi(\Sigma^{-1})+\frac{1}{2}\mu^{\top}\Sigma^{-1}\mu, \tag{22}\]
the assumption that \(\Psi(\Sigma)\) and \(\|\mu\|\) are bounded means that \(g\) remains close to \(f\). On the other hand, since \(D(f|_{A}||f)=-\log p_{f}(A)\), the assumption \(1/p_{f}(A)\) bounded means that \(f|_{A}\) remains close to \(f\).
**Proposition 2.14**.: _Let \(\mu\in\mathbb{R}^{d}\), \(\Sigma\in\mathcal{S}_{d}\) and \(B\subset\mathbb{R}^{d}\) measurable (\(\mu\), \(\Sigma\) and \(B\) may be random) and \(g=N(\mu,\Sigma)\). Then the following holds:_
* _if_ \(\Psi(\Sigma)\) _and_ \(\|\mu\|\) _are bounded whp, then (_2_) holds;_
* _if_ \(\Psi(\Sigma)\)_,_ \(\|\mu\|\) _and_ \(1/p_{f}(B)\) _are bounded whp, then (_3_) holds._
_In particular, if \(\Psi(\Sigma)\), \(\|\mu\|\) and \(1/p_{f}(B)\) are bounded whp, then \(g=N(\mu,\Sigma)\) is efficient in high dimension for \(B\)._
Proof.: As before, it is enough to prove the result for deterministic \(\mu\), \(\Sigma\) and \(B\), and by replacing bounded whp with bounded. So assume in the rest of the proof that \(\Psi(\Sigma)\) and \(\|\mu\|\) are bounded: we first prove that (2) holds, and then that (3) holds under the additional assumption that \(1/p_{f}(B)\) is bounded. The boundedness of \(\Psi(\Sigma)\) implies by Lemma 2.7 that \(1/\lambda_{1}(\Sigma)\), \(\lambda_{d}(\Sigma)\), \(\Psi(\Sigma^{-1})\) and \(\|\Sigma^{-1}-I\|\) are bounded, which will be used without further notice in the rest
of the proof. Recall that \(L=\log(f/g)\) and that \(L_{B}=\log(f|_{B}/g)\), with respective means \(\mathbb{E}_{f}(L(X))=D(f||g)\) and \(\mathbb{E}_{f|_{B}}(L_{B}(X))=D(f|_{B}||g)\).
Proof of (2).: According to Lemma 2.2 it is enough to prove that \(D(f||g)\) is bounded and that \(\mathbb{P}_{f}(L(X)-\mathbb{E}_{f}(L(X))\geq t)\to 0\) for any sequence \(t\to\infty\). Since \(\mu^{\top}\Sigma^{-1}\mu\leq\|\mu\|^{2}/\lambda_{1}(\Sigma)\), it follows from (22) that \(D(f||g)\) is bounded. Let us now control the tail of \(L\). Using (19) with \(B=\mathbb{R}^{d}\), we get
\[\mathbb{P}_{f}(L(X)-\mathbb{E}_{f}(L(X))\geq t)\leq\frac{4}{t^{2}}\left(2\| \Sigma^{-1}-I\|^{2}+\frac{1}{\lambda_{1}(\Sigma)^{2}}\|\mu\|^{2}\right).\]
The upper bound is thus of the form \(C/t^{2}\) with \(\sup_{d}C<\infty\), which implies as desired that \(\mathbb{P}_{f}(L(X)-\mathbb{E}_{f}(L(X))\geq t)\to 0\) as \(d\to\infty\) for any sequence \(t\to\infty\).
Proof of (3).: According to Lemma 2.2 it is enough to prove that \(D(f|_{B}||g)\) is bounded and that \(\mathbb{P}_{f|_{B}}(L_{B}(X)-\mathbb{E}_{f|_{B}}(L_{B}(X))\geq t)\to 0\) for any sequence \(t\to\infty\). If we apply (15) with \(\mu^{\prime}=\mu_{B}^{g}\) and \(\Sigma^{\prime}=\Sigma_{B}^{g}\), we obtain
\[D(g|_{B}||g_{B})=-\log p_{g}(B)-\Psi(\Sigma^{-1}\Sigma_{B}^{g})-\frac{1}{2} \left(\mu-\mu_{B}^{g}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B}^{g})\]
and so for any \(g^{\prime}=N(\mu^{\prime},\Sigma^{\prime})\), (15) can be rewritten as
\[D(g|_{B}||g^{\prime})=D(g|_{B}||g_{B})+\Psi(\Sigma^{\prime-1}\Sigma_{B}^{g})+ \frac{1}{2}\left(\mu^{\prime}-\mu_{B}^{g}\right)^{\top}\Sigma^{\prime-1}(\mu^ {\prime}-\mu_{B}^{g}). \tag{23}\]
Plugging \(g=f\) and \(g^{\prime}=g\) in this relation, we get
\[D(f|_{B}||g)=D(f|_{B}||f_{B})+\Psi(\Sigma^{-1}\Sigma_{B})+\frac{1}{2}\left( \mu-\mu_{B}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B}).\]
By Corollary 2.9 (with \(g=f\), needing \(\inf_{d}p_{f}(B)>0\)), we see that \(D(f|_{B}||f_{B})\), \(\Psi(\Sigma_{B})\) and \(\|\mu_{B}\|\) are bounded. Combining the results from Lemmas 2.7 and 2.8, this implies the boundedness of \(\Psi(\Sigma^{-1}\Sigma_{B})\) and of \(\left(\mu-\mu_{B}\right)^{\top}\Sigma^{-1}(\mu-\mu_{B})\), which proves that \(D(f|_{B}||g)\) is bounded. Let us now turn to controlling the tail of \(L_{B}\). Using (19), we get
\[\mathbb{P}_{f|_{B}}(L_{B}(X)-\mathbb{E}_{f|_{B}}(L_{B}(X))\geq t)\leq\frac{4}{ t^{2}}\left(\frac{2\|\Sigma^{-1}-I\|^{2}}{p_{f}(B)}+\frac{\lambda_{d}(\Sigma_{B}) }{\lambda_{1}(\Sigma)^{2}}\|\mu\|^{2}\right)\]
which implies as above that \(\mathbb{P}_{f|_{B}}(L_{B}(X)-\mathbb{E}_{f|_{B}}(L_{B}(X))\geq t)\to 0\) as \(d\to\infty\) for any sequence \(t=t(d)\to\infty\). This concludes the proof of the proposition.
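To illustrate formula (22) and the role of these boundedness assumptions, the following Python sketch (with arbitrary values of \(v\), \(a\) and \(\varepsilon\)) computes \(D(f||g)\) in closed form for two families of auxiliary densities: a rank-one perturbation of the identity, for which \(\Psi(\Sigma)\) and \(\|\mu\|\) do not depend on \(d\) and \(D(f||g)\) stays bounded, and an isotropic inflation \(\Sigma=(1+\varepsilon)I\), for which \(D(f||g)\) grows linearly in \(d\).

```python
import numpy as np

def psi(x):                       # psi(x) = x - log(x) - 1
    return x - np.log(x) - 1.0

v, a, eps = 3.0, 2.0, 0.05        # arbitrary illustrative parameters
for d in [10, 100, 1_000, 10_000]:
    # rank-one case: Sigma = I + (v-1) e1 e1^T, mu = a e1;
    # the eigenvalues of Sigma^{-1} are 1/v (once) and 1, and mu^T Sigma^{-1} mu = a^2 / v
    D_rank1 = 0.5 * psi(1.0 / v) + 0.5 * a**2 / v
    # isotropic case: Sigma = (1+eps) I, mu = 0
    D_iso = 0.5 * d * psi(1.0 / (1.0 + eps))
    print(d, D_rank1, D_iso)      # D_rank1 is constant in d, D_iso grows linearly
```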
### Quantiles
Let us finally mention a last result which will be needed to study the CE scheme. Recall the assumption in Theorems 1.2 and 1.5 that \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\) has no atom, i.e., for every \(x\in\mathbb{R}\) the set \(\varphi^{-1}(\{x\})\subset\mathbb{R}^{d}\) has zero Lebesgue measure.
**Lemma 2.15**.: _Let \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\) measurable, \(g\) a \(d\)-dimensional Gaussian distribution and \(F(x)=\mathbb{P}_{g}(\varphi(X)\leq x)\). If \(\varphi\) has no atom, then \(F\) is continuous and \(F(F^{-1}(x))=x\) for every \(x\in(0,1)\)._
Proof.: We have \(F(x)-F(x-)=\mathbb{P}_{g}(\varphi(X)=x)=\mathbb{P}_{g}(X\in\varphi^{-1}(\{x\}))=0\) by assumption on \(\varphi\) (and since \(g\) is absolutely continuous with respect to the Lebesgue measure). The continuity of \(F\) then implies the relation \(F(F^{-1}(x))=x\), see for instance [66, Lemma 13.6.4, Equation (6.6)].
## 3 Proof of Theorem 1.2: study of the deterministic target densities
### High-dimensional efficiency of \(g_{A}\) and \(g_{\mathrm{proj}}\)
In the rest of this section, we fix the notation as in the statement of Theorem 1.2. According to Proposition 2.14, it is enough to prove that \(\|\mu_{A}\|\), \(\Psi(\Sigma_{A})\) and \(\Psi(\Sigma_{\mathrm{proj}})\) are bounded. The following lemma will be needed in order to control \(\Psi(\Sigma_{\mathrm{proj}})\).
**Lemma 3.1**.: _Let \(\Sigma\in\mathcal{S}_{d}\), \(r\leq d\), \((d_{k},k=1,\ldots,r)\) an orthonormal family and \(\Sigma^{\prime}\in\mathcal{S}_{d}\) defined by_
\[\Sigma^{\prime}=\sum_{k=1}^{r}(v_{k}-1)d_{k}d_{k}^{\top}+I\ \ \text{with}\ \ v_{k}=d_{k}^{\top}\Sigma d_{k}.\]
_Then we have_
\[\lambda_{1}(\Sigma^{\prime})\geq\min(1,\lambda_{1}(\Sigma)),\ \lambda_{d}( \Sigma^{\prime})\leq\max(1,\lambda_{d}(\Sigma))\]
_and \(\|\Sigma^{\prime}-I\|\leq\|\Sigma-I\|\)._
Proof.: Complete the \((d_{k},k=1,\ldots,r)\) into an orthonormal basis \((d_{k},k=1,\ldots,d)\). By construction, the eigenvalues of \(\Sigma^{\prime}\) are the \(v_{k}\)'s (associated to the \(d_{k}\) for \(k=1,\ldots,r\)) and \(1\) (associated to the \(d_{k}\) for \(k=r+1,\ldots,d\)). For any \(x\in\mathbb{R}^{d}\) with \(\|x\|=1\), we have
\[\lambda_{1}(\Sigma)\leq x^{\top}\Sigma x\leq\lambda_{d}(\Sigma)\]
and since \(v_{k}=d_{k}^{\top}\Sigma d_{k}\) for \(k=1,\ldots,r\), this gives \(\lambda_{1}(\Sigma)\leq v_{k}\leq\lambda_{d}(\Sigma)\) for \(k=1,\ldots,r\). Let us show the inequality \(\lambda_{1}(\Sigma^{\prime})\geq\min(1,\lambda_{1}(\Sigma))\) by distinguishing two cases:
**Case 1:** if all the \(v_{k}\)'s are \(\geq 1\), then \(\lambda_{1}(\Sigma^{\prime})=1\) and so \(\lambda_{1}(\Sigma^{\prime})=1\geq\min(1,\lambda_{1}(\Sigma))\) as desired;
**Case 2:** otherwise, there is some \(v_{k}<1\), in which case \(\lambda_{1}(\Sigma^{\prime})=v_{i}\) for some \(i\). But since \(v_{i}\geq\lambda_{1}(\Sigma)\), the inequality \(\lambda_{1}(\Sigma^{\prime})\geq\min(1,\lambda_{1}(\Sigma))\) is also satisfied in this case.
A similar discussion shows that \(\lambda_{d}(\Sigma^{\prime})\leq\max(1,\lambda_{d}(\Sigma))\). Let us now show that \(\|\Sigma^{\prime}-I\|\leq\|\Sigma-I\|\). Since the eigenvalues of \(\Sigma^{\prime}\) are the \(v_{k}\)'s and \(1\), we have
\[\|\Sigma^{\prime}-I\|^{2}=\sum_{i}(\lambda_{i}(\Sigma^{\prime})-1)^{2}=\sum_{ k=1}^{r}(v_{k}-1)^{2}.\]
By definition of \(v_{k}\),
\[\sum_{k=1}^{r}(v_{k}-1)^{2} =\sum_{k=1}^{r}(d_{k}^{\top}\Sigma d_{k}-1)^{2}\] \[=\sum_{k=1}^{r}(d_{k}^{\top}(\Sigma-I)d_{k})^{2}\] \[\leq\sum_{k=1}^{d}(d_{k}^{\top}(\Sigma-I)d_{k})^{2}.\]
Let \(U\) orthonormal such that \(\Sigma=U^{\top}\Lambda U\) with \(\Lambda\) the diagonal matrix with diagonal elements the \(\lambda_{i}(\Sigma)\)'s. Then \(d_{k}^{\top}(\Sigma-I)d_{k}=\tilde{d}_{k}^{\top}(\Lambda-I)\tilde{d}_{k}\) with \(\tilde{d}_{k}=Ud_{k}\). We then have
\[\sum_{k=1}^{r}(v_{k}-1)^{2} \leq\sum_{k=1}^{d}(\tilde{d}_{k}^{\top}(\Lambda-I)\tilde{d}_{k})^ {2}\] \[=\sum_{k=1}^{d}\left(\sum_{i=1}^{d}(\tilde{d}_{k}(i))^{2}\lambda_ {i}(\Sigma-I)\right)^{2}\] \[\leq\sum_{k=1}^{d}\left(\sum_{i=1}^{d}(\tilde{d}_{k}(i))^{2} \right)\left(\sum_{i=1}^{d}(\tilde{d}_{k}(i))^{2}\lambda_{i}(\Sigma-I)^{2}\right)\]
using Cauchy-Schwarz for the last inequality (with \(\tilde{d}_{k}(i)\) on the one hand, and \(\tilde{d}_{k}(i)\lambda_{i}(\Sigma-I)\) on the other hand). Since \(U\) is orthonormal and the \(d_{k}\)'s form an orthonormal basis, the \(\tilde{d}_{k}\) also form an orthonormal basis, in particular \(\sum_{i=1}^{d}(\tilde{d}_{k}(i))^{2}=1\) and so continuing the previous derivation leads to
\[\sum_{k=1}^{r}(v_{k}-1)^{2} \leq\sum_{k=1}^{d}\left(\sum_{i=1}^{d}(\tilde{d}_{k}(i))^{2} \lambda_{i}(\Sigma-I)^{2}\right)\] \[=\sum_{i=1}^{d}\lambda_{i}(\Sigma-I)^{2}\sum_{k=1}^{d}(\tilde{d}_ {k}(i))^{2}\] \[=\sum_{i=1}^{d}\lambda_{i}(\Sigma-I)^{2}\]
using \(\sum_{k=1}^{d}(\tilde{d}_{k}(i))^{2}=1\) to derive the last equality, which holds because the \(\tilde{d}_{k}\)'s form an orthonormal basis. Since this last quantity is equal to \(\|\Sigma-I\|^{2}\), this gives the result.
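Before turning to the corollary below, here is a quick numerical illustration (Python/NumPy, arbitrary test data) of the construction of Lemma 3.1: it builds \(\Sigma^{\prime}\) from a test matrix \(\Sigma\) and an orthonormal family \((d_{k})\), and checks the three conclusions of the lemma (with a small numerical tolerance for the comparisons).

```python
import numpy as np

rng = np.random.default_rng(3)
d, r, tol = 8, 3, 1e-12
A = rng.standard_normal((d, d))
Sigma = A @ A.T / d + 0.2 * np.eye(d)                 # arbitrary test matrix in S_d

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))      # random orthogonal matrix
D = Q[:, :r]                                          # orthonormal family (d_1, ..., d_r)

v = np.array([D[:, k] @ Sigma @ D[:, k] for k in range(r)])   # v_k = d_k^T Sigma d_k
Sigma_p = np.eye(d) + (D * (v - 1.0)) @ D.T                   # sum_k (v_k - 1) d_k d_k^T + I

lam, lam_p = np.linalg.eigvalsh(Sigma), np.linalg.eigvalsh(Sigma_p)
print(lam_p[0] >= min(1.0, lam[0]) - tol)             # lambda_1(Sigma') >= min(1, lambda_1(Sigma))
print(lam_p[-1] <= max(1.0, lam[-1]) + tol)           # lambda_d(Sigma') <= max(1, lambda_d(Sigma))
print(np.linalg.norm(Sigma_p - np.eye(d)) <= np.linalg.norm(Sigma - np.eye(d)) + tol)  # Frobenius norms
```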
We get the following corollary, whose first part proves the part of Theorem 1.2 related to \(g_{A}\) and \(g_{\mathrm{proj}}\).
**Corollary 3.2**.: _If \(1/p_{f}(A)\) is bounded, then \(g_{A}\) and \(g_{\mathrm{proj}}\) are efficient in high dimension for \(A\)._
_More precisely, if \(1/p_{f}(A)\) is bounded, then \(\|\mu_{A}\|\), \(\Psi(\Sigma_{A})\), \(\Psi(\Sigma_{\mathrm{proj}})\), \(1/\lambda_{1}(\Sigma_{\mathrm{proj}})\), \(\|\Sigma_{\mathrm{proj}}-I\|\) and \(\lambda_{d}(\Sigma_{\mathrm{proj}})\) are bounded._
Proof.: The boundedness of \(\|\mu_{A}\|\) and \(\Psi(\Sigma_{A})\) is a direct consequence of Corollary 2.9 with \(g=f\) and \(B=A\). Proposition 2.14 then implies that \(g_{A}\) is efficient in high dimension for \(A\). Moreover, this also implies by Lemma 2.7 that \(1/\lambda_{1}(\Sigma_{A})\), \(\lambda_{d}(\Sigma_{A})\) and \(\|\Sigma_{A}-I\|\) are bounded, which implies the boundedness of \(1/\lambda_{1}(\Sigma_{\mathrm{proj}})\), \(\lambda_{d}(\Sigma_{\mathrm{proj}})\) and \(\|\Sigma_{\mathrm{proj}}-I\|\) by Lemma 3.1 (applied with \(\Sigma=\Sigma_{A}\), so that \(\Sigma^{\prime}=\Sigma_{\mathrm{proj}}\)). In turn, this implies the boundedness of \(\Psi(\Sigma_{\mathrm{proj}})\) by Lemma 2.7: thus, \(\|\mu_{A}\|\) and \(\Psi(\Sigma_{\mathrm{proj}})\) are bounded, which implies by the same arguments that \(g_{\mathrm{proj}}\) is efficient in high dimension for \(A\) and that \(1/\lambda_{1}(\Sigma_{\mathrm{proj}})\), \(\|\Sigma_{\mathrm{proj}}-I\|\) and \(\lambda_{d}(\Sigma_{\mathrm{proj}})\) are bounded.
It is clear that the arguments developed above apply when bounded is replaced with bounded whp. For the record, we state the generalization of the previous result that we will need later.
**Corollary 3.3**.: _For each \(d\), let \(B\subset\mathbb{R}^{d}\) be a random measurable set. If \(1/p_{f}(B)\) is bounded whp, then \(\|\mu_{B}\|\) and \(\Psi(\Sigma_{B})\) are bounded whp._
### High-dimensional efficiency of \(g_{t}\)
Let us now turn to the high-dimensional efficiency of \(g_{t}\). We use throughout the notation introduced before Theorem 1.2. We proceed by induction on \(t\geq 0\), working with the following induction hypothesis.
**Deterministic induction hypothesis**.: _For \(t\geq 0\), \(\Psi(\Sigma_{t})\), \(\|\mu_{t}\|\) and \(1/p_{f}(A_{t})\) are bounded._
Note that if \(\Psi(\Sigma_{t})\) and \(\|\mu_{t}\|\) are bounded, then \(g_{t}=N(\mu_{t},\Sigma_{t})\) is efficient in high dimension by Proposition 2.14. The additional requirement that \(1/p_{f}(A_{t})\) is bounded is there to carry the induction through.
**Lemma 3.4**.: _If for every \(d\), \(\varphi\) has no atom and \(\inf_{d}\rho>0\), then the deterministic induction hypothesis holds for \(t=0\)._
Proof.: Let \(F(x)=\mathbb{P}_{f}(\varphi(X)\leq x)\). Since \(g_{0}=f\) we have by definition of \(A_{0}=\{x:\varphi(x)>q_{0}\}\) and \(q_{0}=F^{-1}(1-\rho)\)
\[p_{f}(A_{0})=\mathbb{P}_{g_{0}}(X\in A_{0})=\mathbb{P}_{g_{0}}(\varphi(X)>q_{0 })=1-F(F^{-1}(1-\rho))=\rho\]
using Lemma 2.15 for the last equality. Since \(\Sigma_{0}=I\) and \(\mu_{0}=0\) and we assume \(\inf_{d}\rho>0\), we get that \(\Psi(\Sigma_{0})\), \(\|\mu_{0}\|\) and \(1/p_{f}(A_{0})\) are bounded, i.e., the deterministic induction hypothesis holds for \(t=0\).
We now prove the induction.
**Lemma 3.5**.: _Assume that for every \(d\), \(\varphi\) has no atom and that \(\inf_{d}\rho>0\). If the deterministic induction hypothesis holds for some \(t\geq 0\), then it holds at \(t+1\)._
Proof.: Assume that the deterministic induction hypothesis holds for some \(t\geq 0\), i.e., \(\Psi(\Sigma_{t})\), \(\|\mu_{t}\|\) and \(1/p_{f}(A_{t})\) are bounded, and let us show that this continues to hold for \(t+1\). The boundedness of \(1/p_{f}(A_{t})\) implies by Corollary 2.9 with \(g=f\) and \(B=A_{t}\) that \(\Psi(\Sigma_{A_{t}})\) and \(\|\mu_{A_{t}}\|\) are bounded. Since \(\mu_{t+1}=\mu_{A_{t}}\) and \(\Sigma_{t+1}=\Sigma_{A_{t}}\), it remains to prove that \(1/p_{f}(A_{t+1})\) is bounded. Using Corollary 2.5 with \(B=A_{t+1}\) and \(g=g_{t+1}\), we obtain
\[p_{f}(A_{t+1})\geq p_{g_{t+1}}(A_{t+1})\exp\left(-\Psi(\Sigma_{A_{t+1}}^{g_{t+ 1}})-\frac{1}{2}\|\mu_{A_{t+1}}^{g_{t+1}}\|^{2}\right). \tag{24}\]
Recall that by definition of the CE scheme and Lemma 2.15 (applied with \(g=g_{t+1}\), so that \(F\) now denotes the distribution function of \(\varphi(X)\) under \(g_{t+1}\)), we have
\[p_{g_{t+1}}(A_{t+1})=\mathbb{P}_{g_{t+1}}(\varphi(X)>q_{t+1})=1-F(F^{-1}(1- \rho))=\rho.\]
Since we assume \(\inf_{d}\rho>0\), it remains only in view of (24) to prove that \(\Psi(\Sigma_{A_{t+1}}^{g_{t+1}})\) and \(\|\mu_{A_{t+1}}^{g_{t+1}}\|\) are bounded. But since \(\|\mu_{t+1}\|\), \(\Psi(\Sigma_{t+1})\) and \(1/p_{g_{t+1}}(A_{t+1})\) are bounded, this follows precisely from Corollary 2.9 with \(g=g_{t+1}\) and \(B=A_{t+1}\). Thus, the deterministic induction hypothesis holds at \(t+1\).
We can now prove the part of Theorem 1.2 that relates to \(g_{t}\).
**Proposition 3.6**.: _If \(1/p_{f}(A)\) and \(1/\rho\) are bounded, and if for every \(d\), \(\varphi\) has no atom, then for every \(t\geq 0\), \(g_{t}\) is efficient in high dimension for \(A\)._
Proof.: Combining Lemmas 3.4 and 3.5, we get that \(\|\mu_{t}\|\) and \(\Psi(\Sigma_{t})\) are bounded for every \(t\geq 0\). Combined with the assumption \(\inf_{d}p_{f}(A)>0\), this gives the result in view of Proposition 2.14.
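Although the analysis of this section concerns the deterministic scheme (with exact conditional moments), a sample-based sketch may help fix ideas. The following Python/NumPy code runs a generic CE iteration with Gaussian auxiliary densities and self-normalized importance weights for the toy score function \(\varphi(x)=x(1)\); the target level, \(\rho\), the sample size and the stopping rule are arbitrary illustrative choices, and the exact scheme analyzed in the paper (Algorithms 1 and 2, stated earlier) may differ in its details.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)
d, n, rho, q_target = 20, 5_000, 0.2, 3.0
phi = lambda x: x[:, 0]                              # toy score function: first coordinate
p_exact = 0.5 * erfc(q_target / sqrt(2.0))           # exact P_f(phi(X) > q_target) in this toy case

def log_mvn(x, mu, Sigma):
    """Log-density of N(mu, Sigma) at the rows of x."""
    dd = len(mu)
    _, logdet = np.linalg.slogdet(Sigma)
    diff = x - mu
    quad = np.sum(diff * np.linalg.solve(Sigma, diff.T).T, axis=1)
    return -0.5 * (dd * np.log(2 * np.pi) + logdet + quad)

mu, Sigma = np.zeros(d), np.eye(d)                   # g_0 = f
for t in range(50):
    X = rng.multivariate_normal(mu, Sigma, size=n)
    s = phi(X)
    q_t = min(np.quantile(s, 1.0 - rho), q_target)   # intermediate level, capped at the target
    elite = X[s > q_t]
    # self-normalized weights f/g_t on the elite samples: estimates of the moments of f|_{A_t}
    logw = log_mvn(elite, np.zeros(d), np.eye(d)) - log_mvn(elite, mu, Sigma)
    w = np.exp(logw - logw.max()); w /= w.sum()
    mu = w @ elite
    diff = elite - mu
    Sigma = (diff * w[:, None]).T @ diff + 1e-6 * np.eye(d)   # small jitter for numerical stability
    if q_t >= q_target:
        break

# final importance sampling estimate of p = P_f(phi(X) > q_target) with the last auxiliary density
Y = rng.multivariate_normal(mu, Sigma, size=n)
logw = log_mvn(Y, np.zeros(d), np.eye(d)) - log_mvn(Y, mu, Sigma)
p_hat = np.mean(np.exp(logw) * (phi(Y) > q_target))
print(p_exact, p_hat)
```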
## 4 Proof of Theorem 1.7
From (15), one can derive the following two identities:
\[D(f|_{A}||g_{A})=-\log p_{f}(A)-\Psi(\Sigma_{A})-\frac{1}{2}\|\mu_{A}\|^{2} \tag{25}\]
and
\[D(f||g_{A})=\Psi(\Sigma_{A}^{-1})+\frac{1}{2}\mu_{A}^{\top}\Sigma_{A}^{-1}\mu_ {A}. \tag{26}\]
Assume now that \(p_{f}(A)\to 0\) and that \(\sup_{d}D(f||g_{A})<\infty\): in order to prove Theorem 1.7, it is enough to prove that \(D(f|_{A}||g_{A})\to\infty\). In view of (26), the boundedness of \(D(f||g_{A})\) implies that of \(\Psi(\Sigma_{A}^{-1})\) and of \(\mu_{A}^{\top}\Sigma_{A}^{-1}\mu_{A}\). The boundedness of \(\Psi(\Sigma_{A}^{-1})\) implies by Lemma 2.7 that of \(\Psi(\Sigma_{A})\) and of \(\lambda_{d}(\Sigma_{A})\). Since
\[\mu_{A}^{\top}\Sigma_{A}^{-1}\mu_{A}\geq\frac{\|\mu_{A}\|^{2}}{\lambda_{d}( \Sigma_{A})},\]
this implies the boundedness of \(\|\mu_{A}\|\). Thus, we have proved that the sequences \(\Psi(\Sigma_{A})\) and \(\|\mu_{A}\|\) are bounded: but \(-\log p_{f}(A)\to+\infty\), and so \(D(f|_{A}||g_{A})\to\infty\) in view of (25) which proves the result.
## 5 Proof of Theorem 1.5
### High-dimensional efficiency of \(\hat{g}_{A}\) and \(\hat{g}_{\rm proj}\)
According to Proposition 2.14, and recalling that \(\hat{g}_{A}\) is a special case of \(\hat{g}_{\rm proj}\) with \(r=d\), we have to prove that \(\|\hat{\mu}_{A}\|\) and \(\Psi(\hat{\Sigma}_{\rm proj})\) are bounded whp: we prove this for \(\|\hat{\mu}_{A}\|\) in Section 5.1.1, and for \(\Psi(\hat{\Sigma}_{\rm proj})\) in Section 5.1.2.
#### 5.1.1 High-probability boundedness of \(\|\hat{\mu}_{A}\|\)
**Lemma 5.1**.: _We have_
\[\mathbb{E}\left(\|\hat{\mu}_{A}-\mu_{A}\|^{2}\right)\leq\frac{d}{n_{g}} \lambda_{d}(\Sigma_{A}). \tag{27}\]
_In particular, if \(\inf_{d}p_{f}(A)>0\) and \(n_{g}\gg d\), then \(\|\hat{\mu}_{A}-\mu_{A}\|\Rightarrow 0\) and \(\|\hat{\mu}_{A}\|\) is bounded whp._
Proof.: Let us first prove (27). Recall the definition (5) of \(\hat{\mu}_{A}=\frac{1}{n_{g}}\sum_{i}Y_{A,i}\) with the \(Y_{A,i}\)'s i.i.d. distributed according to \(f|_{A}\), so that
\[\mathbb{E}\left(\|\hat{\mu}_{A}-\mu_{A}\|^{2}\right)=\frac{1}{n_{g}}\mathbb{E }\left(\sum_{i,j}(Y_{A,i}-\mu_{A})^{\top}(Y_{A,j}-\mu_{A})\right).\]
Since the \(Y_{A,i}-\mu_{A}\)'s are i.i.d. and centered, we obtain
\[\mathbb{E}\left(\|\hat{\mu}_{A}-\mu_{A}\|^{2}\right) =\frac{1}{n_{g}}\mathbb{E}\left((Y_{A,1}-\mu_{A})^{\top}(Y_{A,1}- \mu_{A})\right)\] \[=\frac{1}{n_{g}}\mathbb{E}\left(\operatorname{tr}((Y_{A,1}-\mu_{ A})(Y_{A,1}-\mu_{A})^{\top})\right)\]
which gives
\[\mathbb{E}\left(\|\hat{\mu}_{A}-\mu_{A}\|^{2}\right)=\frac{1}{n_{g}} \operatorname{tr}\left(\Sigma_{A}\right)\]
by commuting the trace and expectation operators. Since \(\operatorname{tr}(\Sigma_{A})\leq\lambda_{d}(\Sigma_{A})d\) this gives (27). Let us now assume that \(\inf_{d}p_{f}(A)>0\) and \(n_{g}\gg d\). Then \(\lambda_{d}(\Sigma_{A})\) is bounded by Corollary 3.2, and so we obtain the result.
#### 5.1.2 High-probability boundedness of \(\Psi(\hat{\Sigma}_{\operatorname{proj}})\)
To prove the fact that \(\Psi(\hat{\Sigma}_{\operatorname{proj}})\) is bounded whp, we need to study the spectrum of \(\hat{\Sigma}_{A}\). Let in the sequel \(\tilde{Y}_{A,i}=\Sigma_{A}^{-1/2}(Y_{A,i}-\mu_{A})\) and \(M\) be the \(n_{g}\times d\) matrix with rows the \(\tilde{Y}_{A,i}^{\top}\): then one can check that
\[\hat{\Sigma}_{A}=\Sigma_{A}^{1/2}\hat{S}\Sigma_{A}^{1/2}-(\hat{\mu}_{A}-\mu_{ A})(\hat{\mu}_{A}-\mu_{A})^{\top}\ \ \text{with}\ \ \hat{S}=\frac{1}{n_{g}}M^{\top}M. \tag{28}\]
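The decomposition (28) is a purely algebraic identity in the sample, valid with any reference vector and positive-definite matrix in place of \(\mu_{A}\) and \(\Sigma_{A}\); the following Python/NumPy snippet checks it numerically, assuming that \(\hat{\Sigma}_{A}\) denotes the empirical covariance \(\frac{1}{n_{g}}\sum_{i}(Y_{A,i}-\hat{\mu}_{A})(Y_{A,i}-\hat{\mu}_{A})^{\top}\) of the sample.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n_g = 4, 200
Y = rng.standard_normal((n_g, d)) + 0.5       # any sample cloud, stand-in for the Y_{A,i}

# stand-ins for mu_A and Sigma_A: any vector and any positive-definite matrix work here
mu_A = np.full(d, 0.5)
B = rng.standard_normal((d, d))
Sigma_A = B @ B.T / d + np.eye(d)

lam, U = np.linalg.eigh(Sigma_A)              # symmetric square root of Sigma_A
S_half = U @ np.diag(np.sqrt(lam)) @ U.T
S_half_inv = U @ np.diag(1.0 / np.sqrt(lam)) @ U.T

mu_hat = Y.mean(axis=0)
Sigma_hat = np.cov(Y, rowvar=False, bias=True)          # empirical covariance, 1/n_g normalization

M = (Y - mu_A) @ S_half_inv                             # rows: Sigma_A^{-1/2} (Y_i - mu_A)
S_hat = M.T @ M / n_g
lhs = S_half @ S_hat @ S_half - np.outer(mu_hat - mu_A, mu_hat - mu_A)
print(np.allclose(lhs, Sigma_hat))                      # True: (28) holds exactly
```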
We will use results from [64] in the area of non-asymptotic random matrix theory. The next lemma controls the sub-gaussian norm of the \(\tilde{Y}_{A,i}\)'s. According to Definitions 5.7 and 5.22 in [64], the sub-gaussian norm \(\|Z\|_{\psi_{2}}\) of a \(d\)-dimensional random vector \(Z\) is given by
\[\|Z\|_{\psi_{2}}=\sup_{x:\|x\|=1}\sup_{q\geq 1}q^{-1/2}\left(\mathbb{E}|Z^{ \top}x|^{q}\right)^{1/q}=\sup_{x:\|x\|=1}\|x^{\top}Z\|_{\psi_{2}}.\]
In the sequel, we denote by \(Y_{A}\) and \(\tilde{Y}_{A}\) random variables distributed as \(Y_{A,i}\) and \(\tilde{Y}_{A,i}\), respectively.
**Lemma 5.2**.: _If \(\inf_{d}p_{f}(A)>0\), then \(\sup_{d}\|\tilde{Y}_{A}\|_{\psi_{2}}<\infty\)._
Proof.: Using the triangle inequality and the fact that the subgaussian norm of a constant vector is its norm, we obtain
\[\|\tilde{Y}_{A}\|_{\psi_{2}}=\|\Sigma_{A}^{-1/2}(Y_{A}-\mu_{A})\|_{\psi_{2}} \leq\|\Sigma_{A}^{-1/2}Y_{A}\|_{\psi_{2}}+\|\Sigma_{A}^{-1/2}\mu_{A}\|.\]
Note that \(\|\Sigma_{A}^{-1/2}\mu_{A}\|=(\mu_{A}^{\top}\Sigma_{A}^{-1}\mu_{A})^{1/2}\leq \|\mu_{A}\|/\lambda_{1}(\Sigma_{A})^{1/2}\). Further, let \(x\in\mathbb{R}^{d}\) with \(\|x\|=1\) and \(Y\sim f\): then by definition of \(Y_{A}\), for any \(q\geq 1\) we have
\[\mathbb{E}|x^{\top}\Sigma_{A}^{-1/2}Y_{A}|^{q}=\mathbb{E}\left(|x^{\top} \Sigma_{A}^{-1/2}Y|^{q}\mid Y\in A\right)\leq\frac{1}{p_{f}(A)}\mathbb{E} \left(|x^{\top}\Sigma_{A}^{-1/2}Y|^{q}\right)\]
and so (using \(1/p_{f}(A)^{1/q}\leq 1/p_{f}(A)\) for \(q\geq 1\))
\[\|x^{\top}\Sigma_{A}^{-1/2}Y_{A}\|_{\psi_{2}}\leq\frac{1}{p_{f}(A)}\|x^{\top} \Sigma_{A}^{-1/2}Y\|_{\psi_{2}}.\]
For any centered Gaussian random variable \(Z\), we have \(\|Z\|_{\psi_{2}}\leq C\,\mathbb{V}\mathrm{ar}(Z)^{1/2}\) for some absolute constant \(C\) (see for instance [64, Example 5.8]). Applying this to \(Z=x^{\top}\Sigma_{A}^{-1/2}Y\), we obtain
\[\|x^{\top}\Sigma_{A}^{-1/2}Y\|_{\psi_{2}}\leq C\,\mathbb{V}\mathrm{ar}(x^{\top}\Sigma_{A}^{-1/2}Y)^{1/2}=C\sqrt{x^{\top}\Sigma_{A}^{-1}x}\leq\frac{C}{\lambda_{1}(\Sigma_{A})^{1/2}}.\]
Gathering the previous bounds, we therefore obtain
\[\|\Sigma_{A}^{-1/2}(Y_{A}-\mu_{A})\|_{\psi_{2}}\leq\frac{1}{\lambda_{1}(\Sigma _{A})^{1/2}}\left(\frac{C}{p_{f}(A)}+\|\mu_{A}\|\right).\]
Under the assumption \(\inf_{d}p_{f}(A)>0\), Corollary 3.2 implies that this upper bound is bounded, which proves the result.
**Lemma 5.3**.: _Let \(\delta=\max(|\lambda_{1}(\hat{S})-1|,|\lambda_{d}(\hat{S})-1|)\). If \(\inf_{d}p_{f}(A)>0\) and \(n_{g}\gg d\), then \(\frac{n_{g}}{d}\delta^{2}\) is bounded whp. In particular, \(\delta\Rightarrow 0\)._
Proof.: By definition, the \(\tilde{Y}_{A,i}\)'s are i.i.d. centered random vectors. Moreover, they are isotropic, meaning that their covariance matrix is equal to the identity [64, Definition 5.19], and they are subgaussian since their subgaussian norm is finite by Lemma 5.2. If \(s_{1}\) and \(s_{d}\) are the smallest and largest singular values of \(M\), then Theorem 5.39 in [64] implies that for any \(t\geq 0\),
\[\mathbb{P}\left(\sqrt{n_{g}}-C^{\prime}\sqrt{d}-t\leq s_{1}\leq s_{d}\leq \sqrt{n_{g}}+C^{\prime}\sqrt{d}+t\right)\geq 1-2e^{-ct^{2}}\]
where the constants \(c\) and \(C^{\prime}\) only depend on the sub-gaussian norm of \(\tilde{Y}_{A}\). But since \(\sup_{d}\|\tilde{Y}_{A}\|_{\psi_{2}}<\infty\) by Lemma 5.2, it follows that the constants \(c\) and \(C^{\prime}\) can be chosen independent of \(d\). Moreover, since \(\hat{S}=\frac{1}{n_{g}}M^{\top}M\), we have
\[\lambda_{1}(\hat{S})=\frac{1}{n_{g}}s_{1}^{2}\ \ \text{and}\ \ \lambda_{d}(\hat{S})=\frac{1}{n_{g}}s_{d}^{2},\]
and so for \(t=\sqrt{d}\), we obtain
\[\mathbb{P}\left(\left(1-(C^{\prime}+1)\sqrt{d/n_{g}}\right)^{2} \leq\lambda_{1}(\hat{S})\leq\lambda_{d}(\hat{S})\leq\left(1+(C^{\prime}+1) \sqrt{d/n_{g}}\right)^{2}\right)\\ \geq 1-2e^{-cd}\]
From there, one can easily derive the result through elementary manipulation.
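For illustration, the order of the fluctuation \(\delta\) predicted by Lemma 5.3 can be observed in a short simulation; standard Gaussian rows are used as the simplest sub-gaussian isotropic example, and the dimensions below are arbitrary illustrative choices.

```python
import numpy as np

# Illustration of Lemma 5.3: for isotropic rows, the extreme eigenvalues of
# S_hat = M^T M / n_g deviate from 1 by O(sqrt(d/n_g)).
rng = np.random.default_rng(1)
d, n_g = 50, 5000
M = rng.standard_normal((n_g, d))
eigs = np.linalg.eigvalsh(M.T @ M / n_g)
delta = max(abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
print(f"lambda_1 = {eigs[0]:.3f}, lambda_d = {eigs[-1]:.3f}")
print(f"delta = {delta:.3f}  vs  sqrt(d/n_g) = {np.sqrt(d / n_g):.3f}")
```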
**Corollary 5.4**.: _If \(\inf_{d}p_{f}(A)>0\) and \(n_{g}\gg d\), then the sequences \(1/\lambda_{1}(\hat{\Sigma}_{\mathrm{proj}})\) and \(\lambda_{d}(\hat{\Sigma}_{\mathrm{proj}})\) are bounded whp._
Proof.: Lemma 3.1 with \(\Sigma=\hat{\Sigma}_{A}\) and \(d_{k}=\hat{d}_{k}\) (so that \(\Sigma^{\prime}=\hat{\Sigma}_{\mathrm{proj}}\)) gives
\[\lambda_{1}(\hat{\Sigma}_{\mathrm{proj}})\geq\min(1,\lambda_{1}(\hat{\Sigma} _{A})),\ \lambda_{d}(\hat{\Sigma}_{\mathrm{proj}})\leq\max(1,\lambda_{d}(\hat{\Sigma}_{A }))\]
and \(\|\hat{\Sigma}_{\mathrm{proj}}-I\|\leq\|\hat{\Sigma}_{A}-I\|\). Therefore, it is enough to show that \(1/\lambda_{1}(\hat{\Sigma}_{A})\) and \(\lambda_{d}(\hat{\Sigma}_{A})\) are bounded whp. Since \(\delta\Rightarrow 0\) by Lemma 5.3, we have \(\lambda_{1}(\hat{S})\Rightarrow 1\)
and \(\lambda_{d}(\hat{S})\Rightarrow 1\), and so the sequences \(1/\lambda_{1}(\hat{S})\) and \(\lambda_{d}(\hat{S})\) are bounded whp. Thus, we only have to transfer this result to \(\hat{\Sigma}_{A}\). Let \(x\in\mathbb{R}^{d}\) with \(\|x\|=1\), and \(y=\Sigma_{A}^{1/2}x\): then by definition (see (28)), we have
\[x^{\top}\hat{\Sigma}_{A}x=y^{\top}\hat{S}y-(x^{\top}(\hat{\mu}_{A}-\mu_{A}))^ {2}.\]
In particular,
\[x^{\top}\hat{\Sigma}_{A}x\leq\lambda_{d}(\hat{S})\|y\|^{2}=\lambda_{d}(\hat{S} )x^{\top}\Sigma_{A}x\leq\lambda_{d}(\hat{S})\lambda_{d}(\Sigma_{A})\]
and so
\[\lambda_{d}(\hat{\Sigma}_{A})\leq\lambda_{d}(\hat{S})\lambda_{d}(\Sigma_{A}).\]
Since \(\lambda_{d}(\hat{S})\) is bounded whp and \(\lambda_{d}(\Sigma_{A})\) is bounded, this implies that \(\lambda_{d}(\hat{\Sigma}_{A})\) is bounded whp. We show that \(1/\lambda_{1}(\hat{\Sigma}_{A})\) is bounded whp with similar arguments: we have
\[x^{\top}\hat{\Sigma}_{A}x\geq\lambda_{1}(\hat{S})x^{\top}\Sigma_{A}x-\|\hat{ \mu}_{A}-\mu_{A}\|^{2}\geq\lambda_{1}(\hat{S})\lambda_{1}(\Sigma_{A})-\|\hat{ \mu}_{A}-\mu_{A}\|^{2}\]
and so
\[\lambda_{1}(\hat{\Sigma}_{A})\geq\lambda_{1}(\hat{S})\lambda_{1}(\Sigma_{A})- \|\hat{\mu}_{A}-\mu_{A}\|^{2}.\]
Since \(\|\hat{\mu}_{A}-\mu_{A}\|\Rightarrow 0\) when \(n_{g}\gg d\) by Lemma 5.1, \(1/\lambda_{1}(\hat{S})\) is bounded whp by Lemma 5.3 and \(1/\lambda_{1}(\Sigma_{A})\) is bounded by Corollary 3.2, the previous inequality gives that \(1/\lambda_{1}(\hat{\Sigma}_{A})\) is bounded whp.
**Lemma 5.5**.: _If \(\inf_{d}p_{f}(A)>0\) and \(n_{g}\gg rd\), then \(\Psi(\hat{\Sigma}_{\mathrm{proj}})\) is bounded whp._
Proof.: According to Corollary 5.4, \(1/\lambda_{1}(\hat{\Sigma}_{\mathrm{proj}})\) and \(\lambda_{d}(\hat{\Sigma}_{\mathrm{proj}})\) are bounded whp. Thus, in order to show that \(\Psi(\hat{\Sigma}_{\mathrm{proj}})\) is bounded whp, it remains to show in view of Lemma 2.7 that \(\|\hat{\Sigma}_{\mathrm{proj}}-I\|\) is bounded whp. Define
\[\Sigma^{\prime}_{\mathrm{proj}}=\sum_{k=1}^{r}(v_{k}-1)\hat{d}_{k}\hat{d}_{k}^{\top}+I\ \ \text{with}\ \ v_{k}=\hat{d}_{k}^{\top}\Sigma_{A}\hat{d}_{k}.\]
According to Lemma 3.1, we have that \(\|\Sigma^{\prime}_{\mathrm{proj}}-I\|\leq\|\Sigma_{A}-I\|\). Since \(\|\Sigma_{A}-I\|\) is bounded by Corollary 3.2, we obtain that \(\|\Sigma^{\prime}_{\mathrm{proj}}-I\|\) is bounded. By the triangle inequality, it is therefore enough to prove that \(\|\hat{\Sigma}_{\mathrm{proj}}-\Sigma^{\prime}_{\mathrm{proj}}\|\) is bounded whp. By definition we have
\[\hat{\Sigma}_{\mathrm{proj}}-\Sigma^{\prime}_{\mathrm{proj}}=\sum_{k=1}^{r}(\hat{v}_{k}-v_{k})\hat{d}_{k}\hat{d}_{k}^{\top}.\]
Therefore, the eigenvalues of \(\hat{\Sigma}_{\mathrm{proj}}-\Sigma^{\prime}_{\mathrm{proj}}\) are the \(\hat{v}_{k}-v_{k}\) for \(k=1,\ldots,r\), and \(0\) with multiplicity \(d-r\). Since \(\hat{\Sigma}_{\mathrm{proj}}-\Sigma^{\prime}_{\mathrm{proj}}\) is symmetric, the square of its Frobenius norm is equal to the sum of the square of its eigenvalues, and since at most \(r\) of them are non-zero, we obtain
\[\|\hat{\Sigma}_{\mathrm{proj}}-\Sigma^{\prime}_{\mathrm{proj}}\|^{2}\leq r \varepsilon\ \ \text{with}\ \ \varepsilon=\max\left(\lambda_{1}(\hat{\Sigma}_{A}-\Sigma_{A})^{2},\lambda_{d}( \hat{\Sigma}_{A}-\Sigma_{A})^{2}\right).\]
By (28) we have
\[\hat{\Sigma}_{A}-\Sigma_{A}=\Sigma_{A}^{1/2}(\hat{S}-I)\Sigma_{A}^{1/2}-(\hat {\mu}_{A}-\mu_{A})(\hat{\mu}_{A}-\mu_{A})^{\top}\]
and so if we let \(\delta=\max(|\lambda_{1}(\hat{S})-1|,|\lambda_{d}(\hat{S})-1|)\) as in Lemma 5.3, we obtain for any \(x\in\mathbb{R}^{d}\) with \(\|x\|=1\)
\[\Big{|}x^{\top}(\hat{\Sigma}_{A}-\Sigma_{A})x\Big{|}\leq\lambda_{d}(\Sigma_{A}) \delta+\|\hat{\mu}_{A}-\mu_{A}\|^{2}.\]
By definition of \(\varepsilon\) and the variational characterization of eigenvalues, this implies that \(\varepsilon^{1/2}\leq\lambda_{d}(\Sigma_{A})\delta+\|\hat{\mu}_{A}-\mu_{A}\|^ {2}\) and since \(\|\hat{\Sigma}_{\mathrm{proj}}-\Sigma_{\mathrm{proj}}^{\prime}\|^{2}\leq r\varepsilon\), we finally get
\[\|\hat{\Sigma}_{\mathrm{proj}}-\Sigma_{\mathrm{proj}}^{\prime}\|^{2}\leq r \left(\lambda_{d}(\Sigma_{A})\delta+\|\hat{\mu}_{A}-\mu_{A}\|^{2}\right)^{2}.\]
Given that \(\lambda_{d}(\Sigma_{A})\) is bounded by Corollary 3.2, the proof will be complete if we prove that \(r\delta^{2}\Rightarrow 0\) and \(r\|\hat{\mu}_{A}-\mu_{A}\|^{4}\Rightarrow 0\), which is what we do in the rest of the proof.
The fact that \(r\delta^{2}\Rightarrow 0\) is a direct consequence of Lemma 5.3, which implies that \(\mathbb{P}(r\delta^{2}\leq C(rd/n_{g}))\to 1\) (which gives \(r\delta^{2}\Rightarrow 0\) since \(rd/n_{g}\to 0\)). On the other hand, (27) directly implies that \(r\|\hat{\mu}_{A}-\mu_{A}\|^{2}\Rightarrow 0\) when \(n_{g}\gg rd\), which implies that \(r\|\hat{\mu}_{A}-\mu_{A}\|^{4}\Rightarrow 0\). The proof is therefore complete.
### High-dimensional efficiency of \(\hat{g}_{t}\)
#### 5.2.1 Proof outline
Compared to \(\hat{g}_{\mathrm{proj}}\), analyzing the cross-entropy scheme (i.e., showing that \(\hat{g}_{t}\) is efficient in high-dimension) entails one significant additional difficulty which imposes the implicit growth rate \(n\gg d^{\kappa}\) in Theorem 1.5. In order to illustrate this difficulty, consider
\[\hat{\mu}_{t+1}^{\prime}=\frac{1}{n_{g}p_{f}(\hat{A}_{t})}\sum_{i=1}^{n_{g}} \ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})Y_{i}\ \ \text{with}\ \ \ell=f/\hat{g}_{t}.\]
Compared to \(\hat{\mu}_{t+1}\) in (7), we have just replaced \(\hat{p}_{t}\) by \(p_{f}(\hat{A}_{t})\), but thanks to this mild modification, we can use the CD bound (13), conditional on \(\hat{g}_{t}\) and \(\hat{A}_{t}\), on every coordinate \(k=1,\ldots,d\) with \(\phi(x)=x(k)\xi_{\hat{A}_{t}}(x)/p_{f}(\hat{A}_{t})\), to get a bound on \(|\hat{\mu}_{t+1}^{\prime}-\mu_{\hat{A}_{t}}|\). We will see below that this approach leads to a bound of the form
\[\widehat{\mathbb{E}}\left(|\hat{\mu}_{t+1}^{\prime}-\mu_{\hat{A}_{t}}|\right) \leq\frac{Z^{\prime}}{p_{f}(\hat{A}_{t})}dn^{-\alpha/4}\]
with \(Z^{\prime}\) bounded whp, see Lemma 5.11 and Lemma 5.13 below (\(\widehat{\mathbb{E}}\) will be introduced below also). What is important is that this bound holds for any \(\alpha<\alpha_{*}(\widehat{\Sigma}_{t})\) (recall the definition (20) of \(\alpha_{*}\)).
Thus, if we want to make this bound vanish (which is the first step toward the control of \(\hat{\mu}_{t+1}\)), we need \(dn^{-\alpha/4}\to 0\) for some \(\alpha<\alpha_{*}(\widehat{\Sigma}_{t})\), i.e., \(n\gg d^{\kappa}\) for some \(\kappa>4/\alpha_{*}(\widehat{\Sigma}_{t})\). This approach ultimately gives a control on \(\hat{\mu}_{t+1}\), but at the expense of a _random_ growth rate for \(n\), which is unsatisfactory. As discussed at the end of Section 1.2.3, the intuition \(\hat{\Sigma}_{t}\approx\Sigma_{t}\) suggests trying to show that \(\alpha_{*}(\widehat{\Sigma}_{t})\approx\alpha_{*}(\Sigma_{t})\), which is tantamount to showing that \(\lambda_{1}(\hat{\Sigma}_{t})\approx\lambda_{1}(\Sigma_{t})\). However, controlling smallest eigenvalues of random matrices is a difficult problem, and it seems that justifying the approximation \(\lambda_{1}(\hat{\Sigma}_{t})\approx\lambda_{1}(\Sigma_{t})\) would require additional technical assumptions, e.g., on the growth rate of \(m\) and regularity properties for \(\varphi\). Here we adopt a different approach, and
just prove the existence of \(\underline{\alpha}>0\) such that \(\mathbb{P}(\alpha_{*}(\hat{\Sigma}_{t})\geq\underline{\alpha})\to 1\). The approach outlined above then provides a control of \(\hat{\mu}_{t+1}\) provided \(n_{g}\gg d^{4/\underline{\alpha}}\).
As in the control of \(g_{t}\), the control of \(\hat{g}_{t}\) proceeds by induction. To that purpose we need the following stochastic version of the previous deterministic induction hypothesis.
**Stochastic induction hypothesis**.: _Let \(t\geq 0\). We say that the stochastic induction hypothesis holds at time \(t\) if \(\Psi(\hat{\Sigma}_{t})\), \(\|\hat{\mu}_{t}\|\) and \(1/p_{f}(\hat{A}_{t})\) are bounded whp._
The initialization of the induction will be carried out in Lemma 5.6, while the induction itself is treated in Theorem 5.7.
**Lemma 5.6**.: _Assume that:_
* \(\inf_{d}\rho>0\)_;_
* _for every_ \(d\)_,_ \(\varphi\) _has no atom;_
* \(m\to\infty\)_._
_Then for any \(t\geq 0\), \(1/p_{\hat{g}_{t}}(\hat{A}_{t})\) is bounded whp. In particular, the stochastic induction hypothesis holds at time \(t=0\)._
**Theorem 5.7**.: _Assume that:_
* \(\inf_{d}p_{f}(A)>0\)_;_
* \(\inf_{d}\rho>0\)_;_
* _for every_ \(d\)_,_ \(\varphi\) _has no atom;_
* \(m\to\infty\)_;_
_Under these assumptions, if the stochastic induction hypothesis holds at some time \(t\geq 0\), then there exists a constant \(\kappa>0\) such that if \(n\gg d^{\kappa}\), then the stochastic induction hypothesis holds at time \(t+1\)._
Before proceeding to the proof of this result, let us complete the proof of the part of Theorem 1.5 related to \(\hat{g}_{t}\) based on Lemma 5.6 and Theorem 5.7.
**Proposition 5.8**.: _Assume that:_
* \(\inf_{d}p_{f}(A)>0\)_;_
* \(\inf_{d}\rho>0\)_;_
* _for every_ \(d\)_,_ \(\varphi\) _has no atom;_
* \(m\to\infty\)_;_
_Then for any \(t\geq 0\) there exists a constant \(\kappa_{t}>0\) such that if \(n_{g}\gg d^{\kappa_{t}}\), then \(\hat{g}_{t}\) is efficient in high dimension for \(A\)._
Proof based on Lemma 5.6 and Theorem 5.7.: Lemma 5.6 implies that the stochastic induction hypothesis holds at time \(0\), and Theorem 5.7 then implies that it holds for every \(t\geq 0\). Thus, \(\|\hat{\mu}_{t}\|\) and \(\Psi(\hat{\Sigma}_{t})\) are bounded whp, and \(1/p_{f}(A)\) is bounded: Proposition 2.14 then implies that \(\hat{g}_{t}\) is efficient in high dimension for \(A\).
#### 5.2.2 Control of \(p_{\hat{g}_{t}}(\hat{A}_{t})\) and induction initialization
In this section we prove Lemma 5.6.
**Lemma 5.9**.: _For each \(d\), let:_
* \(U_{1},\ldots,U_{m}\) _be_ \(m=m(d)\) _i.i.d. real-valued random variables with cumulative distribution function_ \(F(u)=\mathbb{P}(U\leq u)\)_, that may depend on_ \(d\)_;_
* _for each_ \(d\)_,_ \(F\) _is continuous;_
* \(\varrho\in(0,1)\) _and_ \(q=F^{-1}(1-\varrho)\)_;_
* \(\hat{q}=U_{([(1-\varrho)m])}\) _the empirical estimation of_ \(q\)_, with_ \(U_{(1)}\leq\cdots\leq U_{(m)}\)_._
_Assume that \(m\to\infty\) and \(\inf_{d}\varrho>0\). Then \((1-F(\hat{q}))/\varrho\Rightarrow 1\), and in particular, \(1/(1-F(\hat{q}))\) is bounded whp._
Proof.: We have
\[\mathbb{P}((1-F(\hat{q}))/\varrho\leq x)=\mathbb{P}(F(\hat{q})\geq 1-\varrho x )=\mathbb{P}(\hat{q}\geq F^{-1}(1-\varrho x))\]
with the second equality coming from the fact that \(F^{-1}\) is the left-continuous inverse, so \(F(x)\geq t\Leftrightarrow x\geq F^{-1}(t)\). Let \(n=[(1-\varrho)m]\): by definition of \(\hat{q}\), we have
\[\mathbb{P}(\hat{q}\geq F^{-1}(1-\varrho x))=\mathbb{P}(U_{(n)}\geq F^{-1}(1- \varrho x)).\]
Since \(U_{(k)}\) is the \(k\)th largest sample among the \(U_{i}\)'s, we have
\[U_{(n)}\geq F^{-1}(1-\varrho x)\Longleftrightarrow\#\{i:U_{i}\geq F^{-1}(1- \varrho x)\}\geq m-n+1.\]
Since the \(U_{i}\)'s are i.i.d., the random variable \(\#\{i:U_{i}\geq F^{-1}(1-\varrho x)\}\) follows a binomial random variable with parameter \(m\) and
\[\mathbb{P}(U_{1}\geq F^{-1}(1-\varrho x))=1-F(F^{-1}(1-\varrho x))=\varrho x,\]
both equalities coming from the fact that \(F\) is continuous. Thus if \(B_{q}\) denotes a binomial random variable with parameter \(m\) and \(q\), we obtain
\[\mathbb{P}(\hat{q}\geq F^{-1}(1-\varrho x))=\mathbb{P}(B_{\varrho x}\geq m-n+ 1).\]
By considering Laplace transforms, one easily sees that \(B_{\varrho x}/(\varrho m)\Rightarrow x\). Since \((m-n+1)/(\varrho m)\to 1\) (using \(\inf_{d}\varrho>0\)), we obtain
\[\mathbb{P}(\hat{q}\geq F^{-1}(1-\varrho x))\to\mathds{1}\left(x\geq 1\right)\]
for \(x\neq 1\), which implies the desired convergence \((1-F(\hat{q}))/\varrho\Rightarrow 1\). As this clearly implies that \(1/(1-F(\hat{q}))\) is bounded whp, this concludes the proof.
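For illustration, the convergence \((1-F(\hat{q}))/\varrho\Rightarrow 1\) can be observed numerically; in the following sketch the \(U_{i}\) are standard Gaussian (an arbitrary continuous distribution chosen for the example).

```python
import numpy as np
from scipy.stats import norm

# Numerical illustration of Lemma 5.9 with U_i ~ N(0, 1) and varrho = 0.1.
rng = np.random.default_rng(2)
varrho, ratios = 0.1, []
for m in (10**2, 10**3, 10**4, 10**5):
    U = np.sort(rng.standard_normal(m))
    q_hat = U[int((1 - varrho) * m) - 1]        # the [(1-varrho)m]-th order statistic
    ratios.append((1 - norm.cdf(q_hat)) / varrho)
print(ratios)                                    # the ratios approach 1 as m grows
```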
Proof of Lemma 5.6.: Let \(F(u)=\mathbb{P}_{\hat{g}_{t}}(\varphi(X)\leq u)\): by definition of \(\hat{A}_{t}\) we have \(p_{\hat{g}_{t}}(\hat{A}_{t})=1-F(\hat{q}_{t})\). Moreover, since \(\varphi\) has no atom, Lemma 2.15 implies that \(F\) is continuous, and so Lemma 5.9 (applied with \(U_{k}=\varphi(Y_{k}^{\prime})\)) implies that \(p_{\hat{g}_{t}}(\hat{A}_{t})/\rho\Rightarrow 1\), where the convergence holds under \(\mathbb{P}_{\hat{g}_{t}}\), so conditionally on \(\hat{g}_{t}\). In particular, for \(x>1/\inf_{d}\rho\) we have
\[\mathbb{P}_{\hat{g}_{t}}(1/p_{\hat{g}_{t}}(\hat{A}_{t})\geq x)=\mathbb{P}_{ \hat{g}_{t}}(p_{\hat{g}_{t}}(\hat{A}_{t})/\rho\leq 1/(\rho x))\Rightarrow 0\]
and so \(\mathbb{P}(1/p_{\hat{g}_{t}}(\hat{A}_{t})\geq x)\to 0\) as well, by the bounded convergence theorem, which proves that \(1/p_{\hat{g}_{t}}(\hat{A}_{t})\) is bounded whp.
Concerning the stochastic induction hypothesis at time \(t=0\), note that for \(t=0\) we have \(\hat{\mu}_{0}=\mu_{0}=0\) and \(\hat{\Sigma}_{0}=I=\Sigma_{0}\), which readily entails that \(\Psi(\hat{\Sigma}_{0})\) and \(\|\hat{\mu}_{0}\|\) are bounded. Further, since \(\hat{g}_{0}=f\) we have \(1/p_{f}(\hat{A}_{0})=1/p_{\hat{g}_{0}}(\hat{A}_{0})\) which was just proved to be bounded whp.
#### 5.2.3 Additional notation and preliminary results
Before proceeding to the proof of Theorem 5.7, let us establish some preliminary results and introduce additional notation.
**Lemma 5.10**.: _Assume that the stochastic induction hypothesis holds at time \(t\). Then \(D(f||\hat{g}_{t})\) and \(1/\alpha_{*}(\hat{\Sigma}_{t})\) are bounded whp. In particular, there exists \(\underline{\alpha}>0\) such that the event \(\mathcal{E}\) defined by_
\[\mathcal{E}=\{D(f||\hat{g}_{t})\leq\log n\}\cap\{\underline{\alpha}<\alpha_{* }(\hat{\Sigma}_{t})\}\]
_holds with high probability, i.e., \(\mathbb{P}(\mathcal{E})\to 1\)._
Proof.: From (22), we obtain
\[D(f||\hat{g}_{t})=\Psi(\hat{\Sigma}_{t}^{-1})+\frac{1}{2}\hat{\mu}_{t}^{\top} \hat{\Sigma}_{t}^{-1}\hat{\mu}_{t}\leq\Psi(\hat{\Sigma}_{t}^{-1})+\frac{1}{2 \lambda_{1}(\hat{\Sigma}_{t})}\|\hat{\mu}_{t}\|^{2}.\]
Since \(\Psi(\hat{\Sigma}_{t})\) and \(\|\hat{\mu}_{t}\|\) are bounded whp by assumption, Lemma 2.7 implies that \(D(f||\hat{g}_{t})\) is bounded whp by the inequality of the previous display. Moreover, since \(\Psi(\hat{\Sigma}_{t})\) is bounded whp by the stochastic induction hypothesis, this implies that \(1/\lambda_{1}(\hat{\Sigma}_{t})\) is bounded whp by Lemma 2.7, which implies that \(1/\alpha_{*}(\hat{\Sigma}_{t})\) is bounded whp by definition of \(\alpha_{*}\) in (20).
In the sequel, we assume that the stochastic induction hypothesis holds at time \(t\). We fix a constant \(\underline{\alpha}\) given by the previous lemma and we consider the event \(\mathcal{E}\) defined there. Let in the sequel
\[\widehat{\mathbb{P}}=\mathbb{P}(\,\cdot\mid\hat{g}_{t},\hat{A}_{t},\mathcal{E})\]
be the random distribution conditional on \(\hat{g}_{t}\), \(\hat{A}_{t}\) and the event \(\mathcal{E}\). The motivation for introducing \(\widehat{\mathbb{P}}\) is that conditioning \(\hat{g}_{t}\), \(\hat{A}_{t}\) and the event \(\mathcal{E}\) will allow us to use the CD bound (13).
We consider an additional constant \(\alpha<\underline{\alpha}\), and we define \(Z=0\) if \(\alpha_{*}(\hat{\Sigma}_{t})\leq\underline{\alpha}\), and
\[Z=\exp\left(\alpha D(f||\hat{g}_{t})+\frac{1}{2}q\alpha^{2}\|\hat{\Sigma}_{t} ^{-1}\hat{\mu}_{t}\|^{2}+\frac{\alpha}{2\underline{\alpha}}\Psi((\underline{ \alpha}+1)I-\underline{\alpha}\hat{\Sigma}_{t}^{-1})\right) \tag{29}\]
if \(\underline{\alpha}<\alpha_{*}(\hat{\Sigma}_{t})\), with \(q=\underline{\alpha}/(\underline{\alpha}-\alpha)\). Note that \(Z\) is the bound (21) of the \(\alpha\)th-moment of the likelihood ratio between \(f\) and \(\hat{g}_{t}\). We also define
\[Z^{\prime}=3e^{\alpha D(f||\hat{g}_{t})}Z^{1/2}.\]
We will use the following result on \(Z\) and \(Z^{\prime}\).
**Lemma 5.11**.: _If the stochastic induction hypothesis holds at time \(t\), then \(Z\) and \(Z^{\prime}\) are bounded whp._
Proof.: Recall that \(Z^{\prime}=3e^{\alpha D(f||\hat{g}_{t})}Z^{1/2}\) with \(Z\) defined in (29): since \(D(f||\hat{g}_{t})\), \(\Psi(\hat{\Sigma}_{t})\) and \(\|\hat{\mu}_{t}\|\) are bounded whp by the stochastic induction hypothesis and Lemma 5.10, it is enough in view of (29) to show that \(\Psi((\alpha+1)I-\alpha\hat{\Sigma}_{t}^{-1})\) is bounded whp. For \(i\in\{1,d\}\), we have
\[\lambda_{i}\left((\alpha+1)I-\alpha\hat{\Sigma}_{t}^{-1}\right) =\alpha+1+\lambda_{i}(-\alpha\hat{\Sigma}_{t}^{-1})\] \[=\alpha+1-\alpha\lambda_{d+1-i}(\hat{\Sigma}_{t}^{-1})\] \[=\alpha+1-\frac{\alpha}{\lambda_{i}(\hat{\Sigma}_{t})}\] \[=1-\frac{1-\lambda_{i}(\hat{\Sigma}_{t})}{\lambda_{i}(\hat{ \Sigma}_{t})}\alpha.\]
Since \(\Psi(\hat{\Sigma}_{t})\) is bounded whp by the stochastic induction hypothesis, \(\lambda_{d}(\hat{\Sigma}_{t})\) and \(1/\lambda_{1}(\hat{\Sigma}_{t})\) are bounded whp by Lemma 2.7, and so the previous display implies that \(1/\lambda_{1}((\alpha+1)I-\alpha\hat{\Sigma}_{t}^{-1})\) and \(\lambda_{d}((\alpha+1)I-\alpha\hat{\Sigma}_{t}^{-1})\) are also bounded whp. Moreover,
\[\|(\alpha+1)I-\alpha\hat{\Sigma}_{t}^{-1}-I\|=\alpha\|\hat{\Sigma}_{t}^{-1}-I\|\]
which is bounded whp, again as a consequence of the assumption that \(\Psi(\hat{\Sigma}_{t})\) is bounded whp and Lemma 2.7. Invoking Lemma 2.7 once more, we obtain that \(\Psi((\alpha+1)I-\alpha\hat{\Sigma}_{t}^{-1})\) is bounded whp, which concludes the proof.
#### 5.2.4 Induction
We now prove Theorem 5.7, i.e., that the induction goes through. So in the rest of this section, we assume that the assumptions of Theorem 5.7 hold: in particular, the stochastic induction hypothesis holds at time \(t\). We identify growth rates for \(n\) that guarantee that \(\Psi(\hat{\Sigma}_{t+1})\), \(\|\hat{\mu}_{t+1}\|\) and \(1/p_{f}(\hat{A}_{t+1})\) are bounded whp. We begin with the following lemma, which follows by combining the CD bound (13) and the bound (21) on the exponential moments of the log-likelihood. In the sequel, define
\[\ell=\frac{f}{\hat{g}_{t}}.\]
**Lemma 5.12**.: _Let \(Y_{i}\) i.i.d. \(\sim\hat{g}_{t}\), \(Y\sim f\), \(d^{\prime}\in\mathbb{N}\setminus\{0\}\), and \(\phi:\mathbb{R}^{d}\to\mathbb{R}^{d^{\prime}}\) measurable written \(\phi(x)=(\phi_{1}(x),\ldots,\phi_{d^{\prime}}(x))\) for \(x\in\mathbb{R}^{d}\), with \(\phi_{k}:\mathbb{R}^{d}\to\mathbb{R}\). Then_
\[\widehat{\mathbb{E}}\left(\left|\frac{1}{n_{g}}\sum_{i=1}^{n_{g}}\ell(Y_{i}) \phi(Y_{i})-\widehat{\mathbb{E}}(\phi(Y))\right|\right)\leq Z^{\prime}\left( \sum_{k=1}^{d^{\prime}}\sqrt{\widehat{\mathbb{E}}\left(\phi_{k}(Y)^{2}\right) }\right)n^{-\alpha/4}. \tag{30}\]
Proof.: Since \(\widehat{\mathbb{P}}(\mathcal{E})=1\), we can use (13) with \(g=\hat{g}_{t}\) and \(\phi=\phi_{k}\) to obtain
\[\widehat{\mathbb{E}}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\ell(Y_{ i})\phi_{k}(Y_{i})-\widehat{\mathbb{E}}(\phi_{k}(Y))\right|\right)\leq\left( \widehat{\mathbb{E}}\left(\phi_{k}(Y)^{2}\right)\right)^{1/2}\times\\ \left[\left(\frac{e^{D(f||\hat{g}_{t})}}{n}\right)^{1/4}+2\left( \widehat{\mathbb{P}}\left(L(Y)\geq\frac{1}{2}\log n+\frac{1}{2}D(f||\hat{g}_{ t})\right)\right)^{1/2}\right]\]
with \(L=\log(f/\hat{g}_{t})\). Concerning the tail of the log-likelihood, we have
\[\widehat{\mathbb{P}}\left(L(Y)\geq\frac{1}{2}\log n+\frac{1}{2}D(f|| \hat{g}_{t})\right) =\widehat{\mathbb{P}}\left(e^{\alpha L(Y)}\geq(ne^{D(f||\hat{g}_{t })})^{\alpha/2}\right)\] \[\leq(ne^{D(f||\hat{g}_{t})})^{-\alpha/2}\widehat{\mathbb{E}} \left(e^{\alpha L(Y)}\right)\] \[\leq(ne^{D(f||\hat{g}_{t})})^{-\alpha/2}Z\]
using Lemma 2.13 for the last inequality (which we can invoke, since by definition of \(\alpha,\underline{\alpha}\) and \(\widehat{\mathbb{P}}\) we have \(\widehat{\mathbb{P}}(\alpha<\underline{\alpha}<\alpha_{*}(\hat{\Sigma}_{t}))=1\)). This leads to
\[\widehat{\mathbb{E}}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\ell(Y_ {i})\phi_{k}(Y_{i})-\widehat{\mathbb{E}}(\phi_{k}(Y))\right|\right)\leq\left( \widehat{\mathbb{E}}\left(\phi_{k}(Y)^{2}\right)\right)^{1/2}\times\\ \left[\left(\frac{e^{D(f||\hat{g}_{t})}}{n}\right)^{1/4}+2\left( \frac{e^{D(f||\hat{g}_{t})}}{n}\right)^{\alpha/4}Z^{1/2}\right].\]
Since \(e^{D(f||\hat{g}_{t})}/n\leq 1\) (since we are in the event \(\{D(f||\hat{g}_{t})\leq\log n\}\)) and \(\alpha<1\), we have
\[\left(\frac{e^{D(f||\hat{g}_{t})}}{n}\right)^{1/4}\leq\left(\frac{e^{D(f||\hat {g}_{t})}}{n}\right)^{\alpha/4}\]
and so we get
\[\widehat{\mathbb{E}}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\ell(Y_ {i})\phi_{k}(Y_{i})-\widehat{\mathbb{E}}(\phi_{k}(Y))\right|\right)\\ \leq\left(\widehat{\mathbb{E}}\left(\phi_{k}(Y)^{2}\right) \right)^{1/2}\left(1+2Z^{1/2}\right)e^{\alpha D(f||\hat{g}_{t})/4}n^{-\alpha/4}.\]
Using \((1+2Z^{1/2})e^{\alpha D(f||\hat{g}_{t})/4}\leq Z^{\prime}\) (since \(Z^{1/2}\geq 1\)) and summing over \(k\) gives the result.
The gist of CE is that \(\hat{\mu}_{t+1}\) and \(\hat{\Sigma}_{t+1}\) are thought of as IS estimators of \(\mu_{t+1}=\mu_{A_{t}}\) and \(\Sigma_{t+1}=\Sigma_{A_{t}}\), which suggests using the bound of the previous display to control them. However, a close inspection of their definitions (7) and (8) shows that \(\hat{\mu}_{t+1}\) and \(\hat{\Sigma}_{t+1}\) are not exactly IS estimators of \(\mu_{A_{t}}\) and \(\Sigma_{A_{t}}\) for two reasons:
1. they are self-normalized through the estimator \(\hat{p}_{t}\);
2. they are IS estimators of \(\mu_{\hat{A}_{t}}\) and \(\Sigma_{\hat{A}_{t}}\), rather than \(\mu_{A_{t}}\) and \(\Sigma_{A_{t}}\).
The first point prevents from directly using the bound of the previous display. For this reason, we start by analyzing the following quantities:
\[w_{t}=\frac{\hat{p}_{t}}{p_{f}(\hat{A}_{t})},\ \hat{\mu}^{\prime}_{t+1}=\frac{1 }{n_{g}p_{f}(\hat{A}_{t})}\sum_{i=1}^{n_{g}}\ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i} )Y_{i}\]
and
\[\hat{\Sigma}^{\prime}_{t+1}=\frac{1}{n_{g}p_{f}(\hat{A}_{t})}\sum_{i=1}^{n_{g} }\ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})(Y_{i}-\mu_{\hat{A}_{t}})(Y_{i}-\mu_{\hat{A }_{t}})^{\top}\]
where here and in the sequel, \(\ell=f/\hat{g}_{t}\) and, under \(\widehat{\mathbb{P}}\), the \(Y_{i}\)'s are i.i.d. drawn according to \(\hat{g}_{t}\). Then \(\hat{\mu}^{\prime}_{t+1}\) and \(\hat{\Sigma}^{\prime}_{t+1}\) are the IS estimators of \(\mu_{\hat{A}_{t}}\) and \(\Sigma_{\hat{A}_{t}}\), respectively, with the IS density \(\hat{g}_{t}\). In particular, we can apply the previous lemma to control them, which leads to the following bounds.
**Lemma 5.13**.: _With the notation introduced above, we have_
\[\widehat{\mathbb{E}}\left(|w_{t}-1|\right)\leq\frac{Z^{\prime}}{(p_{f}(\hat{A }_{t}))^{1/2}}n^{-\alpha/4}, \tag{31}\]
\[\widehat{\mathbb{E}}\left(|\hat{\mu}^{\prime}_{t+1}-\mu_{\hat{A}_{t}}|\right) \leq\frac{Z^{\prime}}{p_{f}(\hat{A}_{t})}dn^{-\alpha/4} \tag{32}\]
_and_
\[\widehat{\mathbb{E}}\left(|\hat{\Sigma}^{\prime}_{t+1}-\Sigma_{\hat{A}_{t}}| \right)\leq\frac{(4+2\|\mu_{\hat{A}_{t}}\|^{2})Z^{\prime}}{p_{f}(\hat{A}_{t} )}d^{2}n^{-\alpha/4}. \tag{33}\]
Proof.: Recall that \(\hat{p}_{t}=\frac{1}{n}\sum_{i}\ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})\): applying (30) with \(\phi=\xi_{\hat{A}_{t}}\), we obtain
\[\widehat{\mathbb{E}}\left(\left|\hat{p}_{t}-p_{f}(\hat{A}_{t})\right|\right) \leq Z^{\prime}p_{f}(\hat{A}_{t})^{1/2}n^{-\alpha/4}\]
which gives (31) by dividing by \(p_{f}(\hat{A}_{t})\). For the second bound (32), we use (30) with \(\phi(x)=\xi_{\hat{A}_{t}}(x)x\), corresponding to \(\phi_{k}(x)=\xi_{\hat{A}_{t}}(x)x(k)\): then
\[\widehat{\mathbb{E}}\left(\left|\frac{1}{n_{g}}\sum_{k=1}^{n_{g}} \ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})Y_{i}-\widehat{\mathbb{E}}(Y\xi_{\hat{A}_{ t}}(Y))\right|\right)\leq Z^{\prime}\\ \times\left(\sum_{k=1}^{d}\sqrt{\widehat{\mathbb{E}}\left(Y(k)^{ 2}\xi_{\hat{A}_{t}}(Y)\right)}\right)n^{-\alpha/4}.\]
Since \(Y\sim f\), we have \(\widehat{\mathbb{E}}\left(Y(k)^{2}\xi_{\hat{A}_{t}}(Y)\right)\leq\widehat{ \mathbb{E}}(Y(k)^{2})=1\) and so using this bound and dividing by \(p_{f}(\hat{A}_{t})\), we obtain
\[\widehat{\mathbb{E}}\left(\left|\frac{1}{n_{g}p_{f}(\hat{A}_{t})} \sum_{i=1}^{n_{g}}\ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})Y_{i}-\widehat{\mathbb{E }}\left(Y\mid Y\in\hat{A}_{t}\right)\right|\right) \leq \frac{Z^{\prime}}{p_{f}(\hat{A}_{t})}dn^{-\alpha/4}.\]
Recalling the definitions
\[\hat{\mu}^{\prime}_{t+1}=\frac{1}{n_{g}p_{f}(\hat{A}_{t})}\sum_{i=1}^{n_{g}} \ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})Y_{i}\ \ \text{and}\ \ \mu_{\hat{A}_{t}}=\widehat{\mathbb{E}}\left(Y\mid Y\in\hat{A}_{t}\right)\]
we see that this is exactly (32). The bound (33) for the variance follows along similar lines by considering \(\phi(x)=(x-\mu_{\hat{A}_{t}})(x-\mu_{\hat{A}_{t}})^{\top}\xi_{\hat{A}_{t}}(x)\). For this choice of \(\phi\), starting from (30) and dividing by \(p_{f}(\hat{A}_{t})\), we obtain similarly as above
\[\widehat{\mathbb{E}}\left(\left|\hat{\Sigma}^{\prime}_{t+1}-\Sigma_{\hat{A}_{t }}\right|\right)\leq Z^{\prime}\left(\sum_{1\leq i,j\leq d}\sqrt{\widehat{ \mathbb{E}}\left(Z_{i}Z_{j}\right)}\right)n^{-\alpha/4} \tag{34}\]
with \(Z_{i}=(Y(i)-\mu_{\hat{A}_{t}}(i))^{2}\). Since \(Z_{i}\) and \(Z_{j}\) are independent under \(\widehat{\mathbb{P}}\) for \(i\neq j\), we have
\[\sum_{i,j}\sqrt{\widehat{\mathbb{E}}(Z_{i}Z_{j})} =\sum_{i=1}^{d}\sqrt{\widehat{\mathbb{E}}(Z_{i}^{2})}+\sum_{i\neq j }\sqrt{\widehat{\mathbb{E}}(Z_{i})\widehat{\mathbb{E}}(Z_{j})}\] \[\leq\sum_{i=1}^{d}\sqrt{\widehat{\mathbb{E}}(Z_{i}^{2})}+\left( \sum_{i=1}^{d}\sqrt{\widehat{\mathbb{E}}(Z_{i})}\right)^{2}.\]
using for the last inequality that the \(Z_{i}\)'s are non-negative. Using that \(\widehat{\mathbb{E}}(Y(i)^{k})=0\) for \(k=1,3\) and \(5\), that \(\widehat{\mathbb{E}}(Y(k)^{2})=1\) and that \(\widehat{\mathbb{E}}(Y(k)^{4})=3\) (because \(Y\sim f\)), we can compute (bounds on) the first and second moments of the \(Z_{i}\)'s. For the first moment, we have
\[\widehat{\mathbb{E}}\left(Z_{i}\right)=\widehat{\mathbb{E}}\left((Y(i)-\mu_ {\hat{A}_{t}}(i))^{2}\right)=1+\mu_{\hat{A}_{t}}(i)^{2}\leq 1+\|\mu_{\hat{A}_{t }}\|^{2}\]
and for the second moment, we have
\[\widehat{\mathbb{E}}\left(Z_{i}^{2}\right)=\widehat{\mathbb{E}}\left((Y(i)- \mu_{\hat{A}_{t}}(i))^{4}\right)=\mu_{\hat{A}_{t}}(i)^{4}+6\mu_{\hat{A}_{t}}( i)^{2}+3\leq(\mu_{\hat{A}_{t}}(i)^{2}+3)^{2}\]
and so \(\widehat{\mathbb{E}}\left(Z_{i}^{2}\right)\leq(\|\mu_{\hat{A}_{t}}\|^{2}+3)^{2}\). This gives
\[\sum_{i,j}\sqrt{\widehat{\mathbb{E}}(Z_{i}Z_{j})}\leq d(\|\mu_{\hat{A}_{t}}\|^ {2}+3)+d^{2}(1+\|\mu_{\hat{A}_{t}}\|^{2})\leq d^{2}(4+2\|\mu_{\hat{A}_{t}}\|^ {2}).\]
Plugging in this inequality into (34) gives the result.
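To make the role of the normalization explicit, the following short numerical sketch performs one such estimation step with Gaussian densities \(f\) and \(\hat{g}_{t}\) and a half-space \(\hat{A}_{t}\) (all of these concrete choices are illustrative assumptions, not part of the setting above); it checks the identity \(\hat{\mu}^{\prime}_{t+1}=w_{t}\hat{\mu}_{t+1}\), which is used in the proof of Corollary 5.14 below.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn, norm

# Sketch of one estimation step: unnormalized IS estimator mu'_{t+1} versus the
# self-normalized estimator mu_{t+1} of (7); all concrete choices are illustrative.
rng = np.random.default_rng(3)
d, n_g, q = 4, 20_000, 1.5
f   = mvn(mean=np.zeros(d), cov=np.eye(d))               # target density f = N(0, I_d)
g_t = mvn(mean=0.5 * np.ones(d), cov=1.5 * np.eye(d))    # current IS density g_t

Y    = g_t.rvs(size=n_g, random_state=rng)
ell  = f.pdf(Y) / g_t.pdf(Y)                             # likelihood ratio f / g_t
xi   = (Y[:, 0] >= q).astype(float)                      # indicator of A_t = {x : x(1) >= q}
p_fA = 1.0 - norm.cdf(q)                                 # exact p_f(A_t) for this half-space

p_hat     = np.mean(ell * xi)                            # estimator of p_f(A_t)
mu_next   = (ell * xi) @ Y / (n_g * p_hat)               # self-normalized estimator
mu_unnorm = (ell * xi) @ Y / (n_g * p_fA)                # unnormalized version mu'_{t+1}
w_t       = p_hat / p_fA
print(np.allclose(mu_unnorm, w_t * mu_next))             # True: mu'_{t+1} = w_t mu_{t+1}
```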
**Corollary 5.14**.: _Assume that:_
* _the stochastic induction hypothesis holds at time_ \(t\)_;_
* \(\inf_{d}\rho>0\)_;_
* _for every_ \(d\)_,_ \(\varphi\) _has no atom;_
* \(m\to\infty\)_;_
* \(n\gg d^{8/\alpha}\)_._
_Then \(\|\hat{\mu}_{t+1}\|\), \(\Psi(\hat{\Sigma}_{t+1})\) and \(1/p_{f}(\hat{A}_{t+1})\) are bounded whp, i.e., the stochastic induction hypothesis holds at time \(t+1\)._
Proof.: Stochastic induction holding at time \(t\) gives \(\mathbb{P}(\mathcal{E})\to 1\) by Lemma 5.10, so we can assume without loss of generality that the event \(\mathcal{E}\) holds almost surely. Let us first prove that \(\|\hat{\mu}_{t+1}\|\) is bounded whp. By the stochastic induction hypothesis, \(1/p_{f}(\hat{A}_{t})\) is bounded whp, so \(\|\mu_{\hat{A}_{t}}\|\) and \(\Psi(\Sigma_{\hat{A}_{t}})\) are bounded whp by Corollary 3.3. Therefore, it is enough to prove that \(\|\hat{\mu}_{t+1}-\mu_{\hat{A}_{t}}\|\Rightarrow 0\). By definition we have
\[\hat{\mu}_{t+1}^{\prime}=\frac{1}{n_{g}p_{f}(\hat{A}_{t})}\sum_{i=1}^{n_{g}} \ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})Y_{i}=w_{t}\hat{\mu}_{t+1}\]
and so
\[\|\hat{\mu}_{t+1}-\mu_{\hat{A}_{t}}\| \leq\|\hat{\mu}_{t+1}-\hat{\mu}^{\prime}_{t+1}\|+\|\hat{\mu}^{ \prime}_{t+1}-\mu_{\hat{A}_{t}}\|\] \[=\frac{|w_{t}-1|}{w_{t}}\|\hat{\mu}^{\prime}_{t+1}\|+\|\hat{\mu}^{ \prime}_{t+1}-\mu_{\hat{A}_{t}}\|\] \[\leq\frac{|w_{t}-1|}{w_{t}}\|\mu_{\hat{A}_{t}}\|+\left(1+\frac{|w _{t}-1|}{w_{t}}\right)\|\hat{\mu}^{\prime}_{t+1}-\mu_{\hat{A}_{t}}\|\] \[\leq\frac{|w_{t}-1|}{w_{t}}\|\mu_{\hat{A}_{t}}\|+\left(1+\frac{|w _{t}-1|}{w_{t}}\right)|\hat{\mu}^{\prime}_{t+1}-\mu_{\hat{A}_{t}}|.\]
By the stochastic induction hypothesis and Lemma 5.11, \(1/p_{f}(\hat{A}_{t})\) and \(Z^{\prime}\) are bounded whp: therefore, we get \(\widehat{\mathbb{E}}(|w_{t}-1|)\Rightarrow 0\) and \(\widehat{\mathbb{E}}(|\hat{\mu}^{\prime}_{t+1}-\mu_{\hat{A}_{t}}|)\Rightarrow 0\) by Lemma 5.13, which implies that \(w_{t}\Rightarrow 1\) and \(|\hat{\mu}^{\prime}_{t+1}-\mu_{\hat{A}_{t}}|\Rightarrow 0\). Since \(\|\mu_{\hat{A}_{t}}\|\) is bounded whp, the last bound of the previous display implies that \(\|\hat{\mu}_{t+1}-\mu_{\hat{A}_{t}}\|\Rightarrow 0\) as desired.
Let us now prove that \(\Psi(\hat{\Sigma}_{t+1})\) is bounded whp. Since \(\Psi(\Sigma_{\hat{A}_{t}})\) is bounded whp, \(\|\Sigma_{\hat{A}_{t}}-I\|\), \(\lambda_{d}(\Sigma_{\hat{A}_{t}})\) and \(1/\lambda_{1}(\Sigma_{\hat{A}_{t}})\) are bounded whp by Lemma 2.7. Moreover,
\[\|\hat{\Sigma}_{t+1}-I\|\leq\|\hat{\Sigma}_{t+1}-\hat{\Sigma}^{\prime}_{t+1} \|+\|\hat{\Sigma}^{\prime}_{t+1}-\Sigma_{\hat{A}_{t}}\|+\|\Sigma_{\hat{A}_{t} }-I\|. \tag{35}\]
We have just seen that the last term \(\|\Sigma_{\hat{A}_{t}}-I\|\) of the right-hand side of the previous inequality is bounded whp; the second term \(\|\hat{\Sigma}^{\prime}_{t+1}-\Sigma_{\hat{A}_{t}}\|\) converges to \(0\) (in distribution) because \(\|\hat{\Sigma}^{\prime}_{t+1}-\Sigma_{\hat{A}_{t}}\|\leq|\hat{\Sigma}^{\prime} _{t+1}-\Sigma_{\hat{A}_{t}}|\), the latter vanishing in view of (33) (again, \(Z^{\prime}\), \(\|\mu_{\hat{A}_{t}}\|\) and \(1/p_{f}(\hat{A}_{t})\) are bounded whp). Finally, the definition (8) of \(\hat{\Sigma}_{t+1}\) can be rewritten as
\[\hat{\Sigma}_{t+1}=\frac{1}{n_{g}\hat{p}_{t}}\sum_{i=1}^{n_{g}}\ell(Y_{i}) \xi_{\hat{A}_{t}}(Y_{i})(Y_{i}-\hat{\mu}_{t+1})(Y_{i}-\hat{\mu}_{t+1})^{\top}.\]
Recalling that \(\hat{p}_{t}=\frac{1}{n}\sum_{i=1}^{n}\ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})\) and that \(\hat{\mu}_{t+1}=\frac{1}{n\hat{p}_{t}}\sum_{i=1}^{n}\ell(Y_{i})\xi_{\hat{A}_{t }}(Y_{i})Y_{i}\), we get
\[\frac{1}{n_{g}\hat{p}_{t}}\sum_{i=1}^{n_{g}}\ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i} )(Y_{i}-\mu_{\hat{A}_{t}})=\hat{\mu}_{t+1}-\mu_{\hat{A}_{t}}.\]
Starting from the previous expression of \(\hat{\Sigma}_{t+1}\), writing \(Y_{i}-\hat{\mu}_{t+1}=a+b\) with \(a=Y_{i}-\mu_{\hat{A}_{t}}\) and \(b=\mu_{\hat{A}_{t}}-\hat{\mu}_{t+1}\) and expanding the product, we get
\[\hat{\Sigma}_{t+1}=\frac{1}{n_{g}\hat{p}_{t}}\sum_{i=1}^{n_{g}} \ell(Y_{i})\xi_{\hat{A}_{t}}(Y_{i})(Y_{i}-\mu_{\hat{A}_{t}})(Y_{i}-\mu_{\hat{A }_{t}})^{\top}\\ -(\mu_{\hat{A}_{t}}-\hat{\mu}_{t+1})(\mu_{\hat{A}_{t}}-\hat{\mu}_ {t+1})^{\top}\]
which finally leads to
\[\hat{\Sigma}_{t+1}=\frac{1}{w_{t}}\hat{\Sigma}^{\prime}_{t+1}-(\mu_{\hat{A}_{t }}-\hat{\mu}_{t+1})(\mu_{\hat{A}_{t}}-\hat{\mu}_{t+1})^{\top}.\]
Since \(\|xx^{\top}\|=\|x\|^{2}\), we get
\[\|\hat{\Sigma}_{t+1}-\hat{\Sigma}^{\prime}_{t+1}\| \leq\frac{|w_{t}-1|}{w_{t}}\|\hat{\Sigma}^{\prime}_{t+1}\|+\|\mu_{ \hat{A}_{t}}-\hat{\mu}_{t+1}\|^{2}\] \[\leq\frac{|w_{t}-1|}{w_{t}}\|\hat{\Sigma}^{\prime}_{t+1}-\Sigma_{ \hat{A}_{t}}\|+\frac{d|w_{t}-1|}{w_{t}}\lambda_{d}(\Sigma_{\hat{A}_{t}})+\|\mu_ {\hat{A}_{t}}-\hat{\mu}_{t+1}\|^{2},\]
using the triangle inequality and \(\|\Sigma_{\hat{A}_{t}}\|\leq d\lambda_{d}(\Sigma_{\hat{A}_{t}})\) for the last inequality. We have argued that \(|w_{t}-1|\), \(\|\hat{\Sigma}^{\prime}_{t+1}-\Sigma_{\hat{A}_{t}}\|\) and \(\|\hat{\mu}_{t+1}-\mu_{\hat{A}_{t}}\|\Rightarrow 0\); moreover, \(\lambda_{d}(\Sigma_{\hat{A}_{t}})\) and \(1/w_{t}\) are bounded whp; finally, the convergence \(|w_{t}-1|\Rightarrow 0\) can actually be strengthened to \(d|w_{t}-1|\Rightarrow 0\) in view of (31), because we have chosen \(\alpha\) such that \(dn^{-\alpha/4}\to 0\). Therefore, all the terms in the upper bound of the previous display vanish, which implies that \(\|\hat{\Sigma}_{t+1}-\hat{\Sigma}^{\prime}_{t+1}\|\Rightarrow 0\). Going back to (35) we see that this implies that \(\|\hat{\Sigma}_{t+1}-I\|\) is bounded whp, which directly implies that \(\lambda_{d}(\hat{\Sigma}_{t+1})\) is also bounded whp since
\[\|\hat{\Sigma}_{t+1}-I\|^{2}=\sum_{i}(\lambda_{i}(\hat{\Sigma}_{t+1})-1)^{2} \geq(\lambda_{d}(\hat{\Sigma}_{t+1})-1)^{2}.\]
Furthermore,
\[\lambda_{1}(\hat{\Sigma}_{t+1})\geq\lambda_{1}(\Sigma_{\hat{A}_{t}})-\|\hat{ \Sigma}_{t+1}-\Sigma_{\hat{A}_{t}}\|\]
by Lemma 2.1. Since \(1/\lambda_{1}(\Sigma_{\hat{A}_{t}})\) is bounded whp and \(\|\hat{\Sigma}_{t+1}-\Sigma_{\hat{A}_{t}}\|\Rightarrow 0\), the inequality of the previous display implies that \(1/\lambda_{1}(\hat{\Sigma}_{t+1})\) is bounded whp. Thus, we have proved that \(\lambda_{d}(\hat{\Sigma}_{t+1})\), \(1/\lambda_{1}(\hat{\Sigma}_{t+1})\) and \(\|\hat{\Sigma}_{t+1}-I\|\) are bounded whp, which implies that \(\Psi(\hat{\Sigma}_{t+1})\) is bounded whp by Lemma 2.7. This completes the proof that \(\Psi(\hat{\Sigma}_{t+1})\) is bounded whp.
In order to conclude the proof, it remains to prove that \(1/p_{f}(\hat{A}_{t+1})\) is bounded whp. Using Corollary 2.5 with \(B=\hat{A}_{t+1}\) and \(g=\hat{g}_{t+1}\), we obtain
\[p_{f}(\hat{A}_{t+1})\geq p_{\hat{g}_{t+1}}(\hat{A}_{t+1})\exp\left(-\Psi( \Sigma_{\hat{A}_{t+1}}^{\hat{g}_{t+1}})-\frac{1}{2}\|\mu_{\hat{A}_{t+1}}^{\hat {g}_{t+1}}\|^{2}\right).\]
By Lemma 5.6, \(1/p_{\hat{g}_{t+1}}(\hat{A}_{t+1})\) is bounded whp, and so we only have to prove that \(\Psi(\Sigma_{\hat{A}_{t+1}}^{\hat{g}_{t+1}})\) and \(\|\mu_{\hat{A}_{t+1}}^{\hat{g}_{t+1}}\|\) are bounded whp. But since \(\|\hat{\mu}_{t+1}\|\), \(\Psi(\hat{\Sigma}_{t+1})\) and \(1/p_{\hat{g}_{t+1}}(\hat{A}_{t+1})\) are bounded whp, this follows precisely from Corollary 2.9 with \(g=\hat{g}_{t+1}\) and \(B=\hat{A}_{t+1}\).
## Proof of Proposition 1.6
We will first prove that
\[\mathbb{E}[D(f||\hat{g}_{A})] =D(f||g_{A})\] \[+\frac{1}{2}\biggl{[}\sum_{i=1}^{d}\left(\psi\biggl{(}\frac{n_{g} -i}{2}\biggr{)}+\log\biggl{(}\frac{2}{n_{g}}\biggr{)}\right)\] \[+\frac{d+2}{n_{g}-d-2}\mathrm{tr}(\Sigma_{A}^{-1})+\frac{d}{n_{g} -d-2}+\frac{d+2}{n_{g}-d-2}\mu_{A}^{\top}\Sigma_{A}^{-1}\mu_{A}\biggr{]}\]
with \(\psi\) the digamma function. Using Lemma 2.4 with \(g|_{A}=f\) and \(g^{\prime}=\hat{g}_{A}\),
\[\mathbb{E}[D(f||\hat{g}_{A})]=\frac{1}{2}\bigg{[}\mathbb{E}(\log|\hat{\Sigma}_{A}|)+\mathbb{E}(\operatorname{tr}(\hat{\Sigma}_{A}^{-1}))+\mathbb{E}(\hat{\mu}_{A}^{\top}\hat{\Sigma}_{A}^{-1}\hat{\mu}_{A})-d\bigg{]}.\]
According to [16, pp. 40 and 108], the law of \(n_{g}\hat{\Sigma}_{A}\) is the Wishart distribution with parameters \(\Sigma_{A}\) and \((n_{g}-1)\): \(W_{d}(\Sigma_{A},n_{g}-1)\). From [10],
\[\mathbb{E}(n_{g}\hat{\Sigma}_{A})=(n_{g}-1)\Sigma_{A}\text{ and}\] \[\mathbb{E}(\log|n_{g}\hat{\Sigma}_{A}|)=\sum_{i=1}^{d}\psi\biggl{(}\frac{n_{g}-1}{2}+\frac{1-i}{2}\biggr{)}+d\log(2)+\log|\Sigma_{A}|.\]
Moreover, the law of \(\frac{1}{n_{g}}\hat{\Sigma}_{A}^{-1}\) is the inverse-Wishart distribution with parameters \(\Sigma_{A}^{-1}\) and \((n_{g}-1)\): \(W_{d}^{-1}(\Sigma_{A}^{-1},n_{g}-1)\)[41]. We have
\[\mathbb{E}\biggl{(}\frac{1}{n_{g}}\hat{\Sigma}_{A}^{-1}\biggr{)}=\frac{1}{(n_ {g}-1)-d-1}\Sigma_{A}^{-1},\]
\[\mathbb{E}(\log|\hat{\Sigma}_{A}|)=\mathbb{E}(\log|n_{g}\hat{\Sigma}_{A}|)-\log(n_{g}^{d})=\sum_{i=1}^{d}\biggl{(}\psi\biggl{(}\frac{n_{g}-i}{2}\biggr{)}+\log\biggl{(}\frac{2}{n_{g}}\biggr{)}\biggr{)}+\log|\Sigma_{A}|\]
and \(\mathbb{E}(\operatorname{tr}(\hat{\Sigma}_{A}^{-1}))=\operatorname{tr}( \mathbb{E}(\hat{\Sigma}_{A}^{-1}))=\frac{n_{g}}{n_{g}-d-2}\operatorname{tr}( \Sigma_{A}^{-1})\).
Since \(\hat{\mu}_{A}\) and \(\hat{\Sigma}_{A}\) are the sample mean and sample covariance matrix of normally distributed samples respectively, they are independent, so
\[\mathbb{E}(\hat{\mu}_{A}^{\top}\hat{\Sigma}_{A}^{-1}\hat{\mu}_{A })=\operatorname{tr}(\mathbb{E}(\hat{\Sigma}_{A}^{-1}\hat{\mu}_{A}\hat{\mu}_{ A}^{\top}))=\operatorname{tr}(\mathbb{E}(\hat{\Sigma}_{A}^{-1})\,\mathbb{E}( \hat{\mu}_{A}\hat{\mu}_{A}^{\top}))\] \[=\operatorname{tr}\left(\frac{n_{g}}{n_{g}-d-2}\Sigma_{A}^{-1} \mathbb{E}(\hat{\mu}_{A}\hat{\mu}_{A}^{\top})\right).\]
From the equality \(\mathbb{E}((\hat{\mu}_{A}-\mu_{A})(\hat{\mu}_{A}-\mu_{A})^{\top})=\mathbb{E} (\hat{\mu}_{A}\hat{\mu}_{A}^{\top})-\mu_{A}\mu_{A}^{\top}\), and by denoting \(\hat{S}=\frac{1}{n_{g}}\sum_{k=1}^{n_{g}}(Y_{A,k}-\mu_{A})(Y_{A,k}-\mu_{A})^{\top}\), it can be shown that
\[\mathbb{E}(\hat{\mu}_{A}\hat{\mu}_{A}^{\top})=\frac{1}{n_{g}}\mathbb{E}(\hat{ S})+\mu_{A}\mu_{A}^{\top}\]
Since \(n_{g}\hat{S}\sim W_{d}(\Sigma_{A},n_{g})\), we have
\[\mathbb{E}(\hat{\mu}_{A}^{\top}\hat{\Sigma}_{A}^{-1}\hat{\mu}_{A})=\frac{n_{g }}{n_{g}-d-2}\frac{d}{n_{g}}+\frac{n_{g}}{n_{g}-d-2}\mu_{A}^{\top}\Sigma_{A}^ {-1}\mu_{A}\]
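As a numerical sanity check of this last identity, one can simulate i.i.d. \(N(\mu_{A},\Sigma_{A})\) samples, consistently with the Wishart setting above, and compare the Monte Carlo average of \(\hat{\mu}_{A}^{\top}\hat{\Sigma}_{A}^{-1}\hat{\mu}_{A}\) with the right-hand side; the parameter values below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of E(mu_hat^T Sigma_hat^{-1} mu_hat)
#   = d/(n_g - d - 2) + n_g/(n_g - d - 2) * mu^T Sigma^{-1} mu,
# with Sigma_hat the 1/n_g-normalized sample covariance as above.
rng = np.random.default_rng(4)
d, n_g, reps = 3, 30, 20_000
mu = np.array([0.5, -0.2, 1.0])
Sigma = np.diag([1.0, 2.0, 0.5])

vals = []
for _ in range(reps):
    Y = rng.multivariate_normal(mu, Sigma, size=n_g)
    m, S = Y.mean(axis=0), np.cov(Y, rowvar=False, bias=True)
    vals.append(m @ np.linalg.solve(S, m))
theory = d / (n_g - d - 2) + n_g / (n_g - d - 2) * (mu @ np.linalg.solve(Sigma, mu))
print(np.mean(vals), theory)      # the two values should be close
```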
Assembling the previous expressions gives the announced expression of \(\mathbb{E}[D(f||\hat{g}_{A})]\). Let us now discuss how each term scales with \(d\). The digamma function \(\psi\) has the following bounds [2]:
\[\forall x>0,\,\log x-\frac{1}{x}\leq\psi(x)\leq\log x-\frac{1}{2x}\]
We have then
\[K+K^{\prime}\leq\sum_{i=1}^{d}\biggl{(}\log\biggl{(}\frac{n_{g}}{2}\biggr{)}- \psi\left(\frac{n_{g}-i}{2}\right)\biggr{)}\leq K+2K^{\prime}\]
with
\[K=\sum_{i=1}^{d}\biggl{(}\log\biggl{(}\frac{n_{g}}{2}\biggr{)}-\log\biggl{(}\frac{n _{g}-i}{2}\biggr{)}\biggr{)}=\sum_{i=1}^{d}\biggl{(}-\log\biggl{(}1-\frac{i}{n_{ g}}\biggr{)}\biggr{)}\]
and
\[K^{\prime}=\sum_{i=1}^{d}\frac{1}{n_{g}-i}\]
In the case \(n_{g}\gg d\), \(K=\frac{d^{2}}{2n_{g}}+o\left(\frac{d^{2}}{n_{g}}\right)\) and \(K^{\prime}=\frac{d}{n_{g}}+o\left(\frac{d}{n_{g}}\right)=o\left(\frac{d^{2}}{n _{g}}\right)\). So,
\[\sum_{i=1}^{d}\left(\psi\biggl{(}\frac{n_{g}-i}{2}\biggr{)}+\log\biggl{(}\frac {2}{n_{g}}\biggr{)}\right)=-\frac{d^{2}}{2n_{g}}+o\left(\frac{d^{2}}{n_{g}} \right).\]
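This leading-order behaviour is easy to check numerically; in the following sketch the pairs \((d,n_{g})\) are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import digamma

# Numerical check that sum_i (psi((n_g-i)/2) + log(2/n_g)) ~ -d^2/(2 n_g)
# when n_g >> d.
for d, n_g in [(10, 10_000), (30, 100_000), (100, 1_000_000)]:
    i = np.arange(1, d + 1)
    s = np.sum(digamma((n_g - i) / 2) + np.log(2 / n_g))
    print(d, n_g, round(s, 6), -d**2 / (2 * n_g))
```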
Moreover, since
\[\frac{d}{\lambda_{d}(\Sigma_{A})}\leq\mathrm{tr}(\Sigma_{A}^{-1})\leq\frac{d} {\lambda_{1}(\Sigma_{A})}\]
and since \(1/\lambda_{1}(\Sigma_{A})\) is bounded by Corollary 3.2, there exists \(C>0\) such that \(\mathrm{tr}(\Sigma_{A}^{-1})=Cd+o(d)\). In addition, \(\mu_{A}^{\top}\Sigma_{A}^{-1}\mu_{A}\leq\|\mu_{A}\|^{2}/\lambda_{1}(\Sigma_{A})\), which is bounded by the same corollary. Therefore,
\[\frac{d+2}{n_{g}-d-2}\mathrm{tr}(\Sigma_{A}^{-1}) =C\frac{d^{2}}{n_{g}}+o\left(\frac{d^{2}}{n_{g}}\right),\] \[\frac{d}{n_{g}-d-2} =\frac{d}{n_{g}}+o\left(\frac{d}{n_{g}}\right)=o\left(\frac{d^{2}}{n_{g}}\right),\] \[\text{and }\frac{d+2}{n_{g}-d-2}\mu_{A}^{\top}\Sigma_{A}^{-1}\mu_{A} =\mu_{A}^{\top}\Sigma_{A}^{-1}\mu_{A}\left(\frac{d}{n_{g}}\right)+o\left(\frac{d}{n_{g}}\right)=o\left(\frac{d^{2}}{n_{g}}\right).\]
Therefore,
\[\mathbb{E}[D(f||\hat{g}_{A})]=D(f||g_{A})+\frac{1}{2}\left(C-\frac{1}{2} \right)\frac{d^{2}}{n_{g}}+o\left(\frac{d^{2}}{n_{g}}\right).\]
So \(\sup_{d}\mathbb{E}(D(f||\hat{g}_{A}))<\infty\) if \(n_{g}\gg d^{2}\), and \(\mathbb{E}(D(f||\hat{g}_{A}))\to\infty\) if \(n_{g}\ll d^{2}\).
**Acknowledgments**. The first author J. Beh is enrolled in a Ph.D. program co-funded by ONERA - The French Aerospace Lab and the University Research School EUR-MINT (State support managed by the National Research Agency for Future Investments program bearing the reference ANR-18-EURE-0023). Their financial support is gratefully acknowledged. The authors would also like to thank Jerome Morio for his valuable support and his feedback on a preliminary version of the paper.
|
2305.19616 | Rodrigues formula and linear independence for values of hypergeometric
functions with parameters vary | In this article, we prove a generalized Rodrigues formula for a wide class of
holonomic Laurent series, which yields a new linear independence criterion
concerning their values at algebraic points. This generalization yields a new
construction of Pad\'e approximations including those for Gauss hypergeometric
functions. In particular, we obtain a linear independence criterion over a
number field concerning values of Gauss hypergeometric functions, allowing the
parameters of Gauss hypergeometric functions to vary. | Makoto Kawashima | 2023-05-31T07:34:16Z | http://arxiv.org/abs/2305.19616v2 | # Rodrigues formula and linear independence for values
###### Abstract
In this article, we prove a generalized Rodrigues formula for a wide class of holonomic Laurent series, to give a new linear independence criterion for their values at algebraic points. This generalization yields a new construction of Pade approximations including those for Gauss hypergeometric functions. In particular, we obtain a linear independence criterion over a number field for values of Gauss hypergeometric functions, allowing _the parameters of Gauss hypergeometric functions to vary_.
_Key words_: Pade approximants, Rodrigues formula, linear independence, Gauss hypergeometric functions.
## 1 Introduction
We give here a linear independence criterion, over number fields, for values of a certain class of holonomic Laurent series with algebraic coefficients, obtained by using Pade approximation.
As a consequence, we show a linear independence criterion over a number field for values of Gauss hypergeometric functions, where we let the parameters vary, which is the novel part.
Pade approximation has appeared as one of the major methods in Diophantine problems since Ch. Hermite and H. Pade. To solve a number-theoretical problem by Pade approximation, we usually need to construct a system of Pade approximants in an explicit form. Pade approximants can be constructed by linear algebra with estimates by Siegel's lemma via Dirichlet's box principle. However, this is not enough to establish arithmetic applications such as the linear independence criterion. Indeed, we are obliged to explicitly construct Pade approximants to provide sufficiently sharp estimates instead. In general, it is known that this step can be performed for specific functions only.
In this article, we succeed in proving a generalized Rodrigues formula, which gives an explicit construction of Pade approximations for a new and wide class of holonomic Laurent series. We introduce a linear map \(\varphi_{f}\) (see Eq. (1)) with respect to a given holonomic Laurent series \(f(z)\), which describes a necessary and sufficient condition to explicitly construct Pade approximants by studying \(\ker\varphi_{f}\). We state necessary properties of \(\ker\varphi_{f}\) by looking at related differential operators.
Construction of Pade approximants for Laurent series dates back to the classical works of A. M. Legendre and O. Rodrigues. In 1782, Legendre discovered a system of orthogonal polynomials, the so-called Legendre polynomials. In 1816, Rodrigues established a simple expression of the Legendre polynomials, called the Rodrigues formula by Hermite. See [5], where R. Askey described a short history of the Rodrigues formula. It is known that Legendre polynomials provide Pade approximants of the logarithmic function. After Legendre and Rodrigues, various kinds of Pade approximants of Laurent series have been developed by R. Rasala [25], A. I. Aptekarev, A. Branquinho and W. Van Assche [4], T. Rivoal [27] and V. N. Sorokin [31, 32, 33]. We note that K. Alladi and M. L. Robinson [1], and also F. Beukers [6, 7, 8]
applied the Legendre polynomials to solve central irrationality questions, and many results were shown in the sequel by G. Rhin - P. Toffin [26], M. Hata [17, 18, 19] and R. Marcovecchio [21]. The author together with S. David and N. Hirata-Kohno [10, 11, 12, 13] also proved linear independence criteria concerning certain specific functions in a different setting.
By trying a new approach, distinct from those in [13], the author shows how to construct new generalized Pade approximants of Laurent series. This approach allows us to show a linear independence criterion for Gauss hypergeometric functions, letting the parameters vary. This case has not been dealt with among known results before, although the Gauss hypergeometric function is a well-known classical function.
The key ingredient relies on the map denoted \(\varphi_{f}\) (see Eq. (1)), used to construct the Pade approximants in _an explicit but formal manner_. This idea has been partly used, but in a different expression, in [10, 11, 12, 13], as well as in [20] by A. Poels and the author.
The main point in this article is that we re-describe the Rodrigues formula itself from a formal point of view, to find out suitable differential operators which enable us to construct Pade approximants themselves, _instead of Pade-type approximants_. This part is done for functions whose Pade approximants have never been explicitly given before.
Consequently, our corollary provides arithmetic applications, _e.g._ the linear independence of the concerned values at different points for a wider class of functions, which was not achieved in [4].
In the first part of this article, we discuss an explicit construction of Pade approximants. Our final aim is to find a general method to explicitly obtain Pade approximants for given Laurent series. Here, we partly succeed in giving a solution to this fundamental question on the Rodrigues formula for specific Laurent series, which can be transformed to polynomials by the differential operator of order \(1\). Precisely speaking, we indeed generalize the Rodrigues formula to a new class of holonomic series (see Theorem 4.2).
In the second part, we apply our explicit Pade approximants of holonomic Laurent series to linear independence problems for their values. As a corollary, we show below a new linear independence criterion for values of the Gauss hypergeometric function, letting the parameters vary. We shall recall the Gauss hypergeometric function. For a rational number \(x\) and a non-negative integer \(k\), we denote by \((x)_{k}\) the \(k\)-th Pochhammer symbol: \((x)_{0}=1\), \((x)_{k}=x(x+1)\cdots(x+k-1)\). For \(a,b,c\in\mathbb{Q}\) which are not negative integers, we define
\[{}_{2}F_{1}(a,b,c\,|z)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}k!}z^{ k}\enspace.\]
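For a quick numerical illustration of this series definition, one may sum the Pochhammer terms directly and compare with a library evaluation; the parameters below have the shape appearing in the theorem stated next and are merely examples.

```python
from mpmath import mp, hyp2f1, rf, factorial

# Evaluating the series above with Pochhammer symbols (mpmath's rf is the rising
# factorial) and comparing with the built-in hyp2f1; (u, l, alpha) are examples.
mp.dps = 30
u, l, alpha = 2, 0, 50
a, b, c, z = mp.mpf(1 + l) / u, mp.mpf(1), mp.mpf(u + l) / u, mp.mpf(1) / alpha**u

series = sum(rf(a, k) * rf(b, k) / (rf(c, k) * factorial(k)) * z**k for k in range(60))
print(series)
print(hyp2f1(a, b, c, z))      # agrees with the truncated series
```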
We can now state :
Theorem 1.1.: _Let \(u,\alpha\) be integers with \(u\geq 2\) and \(|\alpha|\geq 2\). Assume_
\[V(\alpha):=\log|\alpha|-\log 2-\left(2-\frac{1}{u}\right)\left(\log u+\sum_{ \begin{subarray}{c}q:\text{prime}\\ q|u\end{subarray}}\frac{\log q}{q-1}\right)-\frac{u-1}{\varphi(u)}>0\enspace,\]
_where \(\varphi\) is the Euler's totient function. Then the real numbers_ :
\[1,{}_{2}F_{1}\left(\frac{1+l}{u},1,\frac{u+l}{u}\,\bigg{|}\,\frac{1}{\alpha^{ u}}\right)\quad(0\leq l\leq u-2)\]
_are linearly independent over \(\mathbb{Q}\)._
The following table gives suitable data for \(u\) and \(\alpha\) so as to \(V(\alpha)>0\).
| \(u\) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(\alpha\geq\) | \(e^{3.78}\) | \(e^{4.44}\) | \(e^{5.84}\) | \(e^{5.32}\) | \(e^{8.76}\) | \(e^{5.91}\) | \(e^{7.65}\) | \(e^{7.22}\) | \(e^{9.40}\) | \(e^{6.73}\) | \(e^{10.59}\) | \(e^{7.04}\) | \(e^{9.92}\) | \(e^{9.52}\) |
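The thresholds in this table can be reproduced by evaluating \(V(\alpha)\) directly; the following short sketch computes, for each \(u\), the smallest value of \(\log|\alpha|\) making \(V(\alpha)\) positive (the helper name is ours).

```python
from math import log
from sympy import primefactors, totient

# V(alpha) > 0 exactly when log|alpha| exceeds the quantity computed below.
def threshold(u):
    return log(2) + (2 - 1 / u) * (log(u) + sum(log(q) / (q - 1) for q in primefactors(u))) \
           + (u - 1) / int(totient(u))

for u in range(2, 16):
    print(u, round(threshold(u), 2))   # u = 2 gives 3.77, matching alpha >= e^{3.78}
```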
The present article is organized as follows. In Section 2, we collect basic notions and recall the Pade type approximants of Laurent series. To achieve an explicit construction of Pade approximants, which is of particular interest, we shall introduce a morphism \(\varphi_{f}\) associated with a Laurent series \(f(z)\). To analyze the structure of \(\ker\varphi_{f}\) is a crucial point for our program (_see_ Lemma 2.3). Indeed, we provide a proper subspace, in some cases the whole space, of \(\ker\varphi_{f}\) derived from the differential operator which annihilates \(f\) (_see_ Corollary 2.6). This is the key ingredient to generalize the Rodrigues formula.
In Section 3, we shall introduce the weighted Rodrigues operator, which was first defined in [4], as well as basic properties that will be needed in the course of the proof.
In Section 4, we shall give a generalization of the Rodrigues formula to Pade approximants of certain holonomic series, by using the weighted Rodrigues operators (_see_ Theorem 4.2). In Section 5, we introduce the determinants associated with the Pade approximants obtained in Theorem 4.2. Proving the non-vanishing of these determinants is one of the most crucial steps to obtain irrationality as well as linear independence results. We shall present some examples of Theorem 4.2 and Proposition 5.2 in Section 6. Example 6.1 is the particular example concerned in Theorem 1.1. In Section 7, we state a theorem more precise than Theorem 1.1 (see Theorem 7.1). This section is devoted to the proof of Theorem 7.1. Section 8 is an appendix devoted to describing a result due to S. Fischler and Rivoal in [15]. They gave a condition on the differential operator of order \(1\) with polynomial coefficients so as to be a \(G\)-operator. Indeed, this result is crucial to apply Theorem 4.2 to \(G\)-functions. More precisely, whenever the operator is a \(G\)-operator, the Laurent series considered in Theorem 7.1 turn out to be \(G\)-functions.
## 2 Pade type approximants of Laurent series
Throughout this section, we fix a field \(K\) of characteristic \(0\). We define the order function at \(z=\infty\) by
\[\operatorname{ord}_{\infty}:K((1/z))\longrightarrow\mathbb{Z}\cup\{\infty\}; \ \sum_{k}\frac{a_{k}}{z^{k}}\mapsto\min\{k\in\mathbb{Z}\cup\{\infty\}\mid a_{k} \neq 0\}\enspace.\]
We recall without proof the following elementary fact :
Lemma 2.1.: _Let \(m\) be a non-negative integer, \(f_{1}(z),\ldots,f_{m}(z)\in(1/z)\cdot K[[1/z]]\) and \(\boldsymbol{n}=(n_{1},\ldots,n_{m})\in\mathbb{N}^{m}\). Put \(N=\sum_{j=1}^{m}n_{j}\). For a non-negative integer \(M\) with \(M\geq N\), there exist polynomials \((P,Q_{1},\ldots,Q_{m})\in K[z]^{m+1}\setminus\{\boldsymbol{0}\}\) satisfying the following conditions \(:\)_
\[(i)\ \deg P\leq M\enspace,\] \[(ii)\ \operatorname{ord}_{\infty}\left(P(z)f_{j}(z)-Q_{j}(z) \right)\geq n_{j}+1\ \text{ for }\ 1\leq j\leq m\enspace.\]
Definition 2.2.: We refer to a vector of polynomials \((P,Q_{1},\ldots,Q_{m})\in K[z]^{m+1}\) satisfying the properties \((i)\) and \((ii)\) as weight \(\boldsymbol{n}\) and degree \(M\) Pade type approximants of \((f_{1},\ldots,f_{m})\). For such approximants \((P,Q_{1},\ldots,Q_{m})\) of \((f_{1},\ldots,f_{m})\), we refer to the formal Laurent series \((P(z)f_{j}(z)-Q_{j}(z))_{1\leq j\leq m}\), _id est_ the remainders, as weight \(\boldsymbol{n}\) degree \(M\) Pade type approximations of \((f_{1},\ldots,f_{m})\).
Let \(f(z)=\sum_{k=0}^{\infty}f_{k}/z^{k+1}\in(1/z)\cdot K[[1/z]]\). We define a \(K\)-homomorphism \(\varphi_{f}\in\operatorname{Hom}_{K}(K[t],K)\) by
\[\varphi_{f}:K[t]\longrightarrow K;\quad t^{k}\mapsto f_{k}\quad(k\geq 0)\enspace. \tag{1}\]
The above homomorphism extends naturally to a \(K[z]\)-homomorphism \(\varphi_{f}:K[z,t]\to K[z]\), and then to a \(K[z][[1/z]]\)-homomorphism \(\varphi_{f}:K[z,t][[1/z]]\to K[z][[1/z]]\). With this notation, the formal Laurent series \(f(z)\) satisfies the following crucial identities:
\[f(z)=\varphi_{f}\left(\frac{1}{z-t}\right)\enspace,\quad P(z)f(z)-\varphi_{f} \left(\frac{P(z)-P(t)}{z-t}\right)\in(1/z)\cdot K[[1/z]]\enspace\text{for any}\enspace P(z)\in K[z]\enspace.\]
**Lemma 2.3**.: _Let \(m\) be a non-negative integer, \(f_{1}(z),\ldots,f_{m}(z)\in(1/z)\cdot K[[1/z]]\) and \(\boldsymbol{n}=(n_{1},\ldots,n_{m})\in\mathbb{N}^{m}\). Let \(M\) be a positive integer and \(P(z)\in K[z]\) a non-zero polynomial with \(M\geq\sum_{j=1}^{m}n_{j}\) and \(\deg P\leq M\). Put \(Q_{j}(z)=\varphi_{f_{j}}\left(\frac{P(z)-P(t)}{z-t}\right)\in K[z]\) for \(1\leq j\leq m\)._
_Then the followings are equivalent._
\((i)\) _The vector of polynomials \((P,Q_{1},\ldots,Q_{m})\) is a weight \(\boldsymbol{n}\) Pade type approximants of \((f_{1},\ldots,f_{m})\)._
\((ii)\) _We have \(t^{k}P(t)\in\ker\varphi_{f_{j}}\) for \(1\leq j\leq m\), \(0\leq k\leq n_{j}-1\)._
Proof.: By the definition of \(Q_{j}(z)\), we have
\[P(z)f_{j}(z)-Q_{j}(z)=\varphi_{f_{j}}\left(\frac{P(t)}{z-t}\right)\in(1/z) \cdot K[[1/z]]\enspace.\]
The above equality shows that \((P,Q_{1},\ldots,Q_{m})\) being weight \(\boldsymbol{n}\) Pade type approximants of \((f_{1},\ldots,f_{m})\) is equivalent to the order of the Laurent series
\[\varphi_{f_{j}}\left(\frac{P(t)}{z-t}\right)=\sum_{k=0}^{\infty}\frac{ \varphi_{f_{j}}\left(t^{k}P(t)\right)}{z^{k+1}}\]
being greater than or equal to \(n_{j}+1\) for \(1\leq j\leq m\). This shows the equivalence of \((i)\) and \((ii)\).
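As an illustration of Lemma 2.3, take \(f_{k}=1/(k+1)\), so that \(f(z)=\log(z/(z-1))\) and \(\varphi_{f}(Q)=\int_{0}^{1}Q(t)\,dt\); the shifted Legendre polynomial of degree \(n\) then satisfies condition \((ii)\) and yields Pade approximants of the logarithm, in line with the classical construction recalled in the introduction. The following sketch (whose helper names are ours) checks this for \(n=4\).

```python
import sympy as sp

# Illustration of Lemma 2.3 with f_k = 1/(k+1), i.e. f(z) = log(z/(z-1)) and
# phi_f(Q) = int_0^1 Q(t) dt.  The shifted Legendre polynomial P of degree n
# satisfies t^k P(t) in ker(phi_f) for 0 <= k <= n-1, so P together with
# Q(z) = phi_f((P(z)-P(t))/(z-t)) gives weight-n Pade approximants of the logarithm.
z, t = sp.symbols('z t')
n = 4
P = sp.legendre(n, 2 * t - 1)                        # shifted Legendre polynomial on [0, 1]
phi_f = lambda Q: sp.integrate(Q, (t, 0, 1))         # the map phi_f for this choice of f

print([phi_f(t**k * P) for k in range(n)])           # [0, 0, 0, 0]: condition (ii) holds

Q = phi_f(sp.cancel((P.subs(t, z) - P) / (z - t)))   # Q(z) as in the statement
f_trunc = sum(sp.Rational(1, k + 1) * z**(-(k + 1)) for k in range(12))
R = sp.expand(P.subs(t, z) * f_trunc - Q)
print([R.coeff(z, -j) for j in range(n + 1)])        # coefficients of z^0,...,z^{-n} vanish
```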
Lemma 2.3 indicates that it is useful to study \(\ker\varphi_{f}\) for explicit construction of Pade type approximants of Laurent series. We are now going to investigate \(\ker\varphi_{f}\) for a holonomic Laurent series \(f\in(1/z)\cdot K[[1/z]]\). We shall denote the differential operator \(\frac{d}{dz}\) (resp. \(\frac{d}{dt}\)) by \(\partial_{z}\) (resp. \(\partial_{t}\)). We describe the action of a differential operator \(D\) on a function \(f\) by \(D\cdot f\) and denote \(D\cdot f\) by \(f^{\prime}\).
To begin with, let us introduce a map
\[\iota:K(z)[\partial_{z}]\longrightarrow K(t)[\partial_{t}];\ \ \sum_{j}P_{j}(z)\partial_{z}^{j}\mapsto\sum_{j}(-1)^{j}\partial_{t}^{j}P_{j}(t)\enspace. \tag{2}\]
For \(D\in K(z)[\partial_{z}]\), we denote \(\iota(D)\) by \(D^{*}\). Notice that we have \((DE)^{*}=E^{*}D^{*}\) for any \(D,E\in K(z)[\partial_{z}]\).
**Lemma 2.4**.: _For \(D\in K[z,\partial_{z}]\), there exists a polynomial \(P(t,z)\in K[t,z]\) satisfying_
\[D\cdot\frac{1}{z-t}=P(t,z)+D^{*}\cdot\frac{1}{z-t}\enspace.\]
Proof.: Let \(n,m\) be non-negative integers. It suffices to prove the case \(D=z^{m}\partial_{z}^{n}\). Then we have
\[D\cdot\frac{1}{z-t}=\frac{(-1)^{n}n!z^{m}}{(z-t)^{n+1}}=(-1)^{n}\sum_{k=0}^{ \infty}\frac{(n+k)!}{k!}\frac{t^{k}}{z^{k+1+n-m}}\enspace. \tag{3}\]
We define a polynomial \(P(t,z)\) by \(0\) if \(m\leq n\) and
\[P(t,z)=(-1)^{n}\sum_{k=0}^{m-n-1}\frac{(n+k)!}{k!}t^{k}z^{m-n-k-1}\]
for \(m>n\). Eq. (3) implies
\[D\cdot\frac{1}{z-t}-P(t,z) =(-1)^{n}\sum_{k=\max(m-n,0)}^{\infty}\frac{(n+k)!}{k!}\frac{t^{k}} {z^{k+1+n-m}}\] \[=(-1)^{n}\sum_{k=0}^{\infty}(k+1+m-n)\cdots(m+k)\frac{t^{k+m-n}}{z ^{k+1}}\enspace.\]
On the other hand, we have
\[D^{*}\cdot\frac{1}{z-t} =(-1)^{n}\partial_{t}^{n}\cdot\frac{t^{m}}{z-t}=(-1)^{n}\sum_{k=0 }^{\infty}\partial_{t}^{n}\cdot\frac{t^{m+k}}{z^{k+1}}\] \[=(-1)^{n}\sum_{k=0}^{\infty}(k+1+m-n)\cdots(m+k)\frac{t^{k+m-n}}{ z^{k+1}}\enspace.\]
Above equalities yield
\[D\cdot\frac{1}{z-t}-P(t,z)=D^{*}\cdot\frac{1}{z-t}\enspace.\]
This completes the proof of Lemma 2.4.
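For a concrete instance, the following symbolic computation verifies the lemma for \(D=z^{m}\partial_{z}^{n}\) with \((m,n)=(3,1)\), an arbitrary example: the difference \(D\cdot\frac{1}{z-t}-D^{*}\cdot\frac{1}{z-t}\) equals the polynomial \(P(t,z)\) constructed in the proof.

```python
import sympy as sp

# Concrete check of Lemma 2.4 for D = z^m * d^n/dz^n with (m, n) = (3, 1):
# D . 1/(z-t) and D^* . 1/(z-t) = (-1)^n d^n/dt^n (t^m/(z-t)) differ exactly
# by the polynomial P(t, z) written in the proof.
z, t = sp.symbols('z t')
m, n = 3, 1
lhs = z**m * sp.diff(1 / (z - t), z, n)              # D . 1/(z - t)
rhs = (-1)**n * sp.diff(t**m / (z - t), t, n)        # D^* . 1/(z - t)
P = (-1)**n * sum(sp.factorial(n + k) / sp.factorial(k) * t**k * z**(m - n - k - 1)
                  for k in range(m - n))
print(sp.cancel(lhs - rhs))                          # -2*t - z, a polynomial in t and z
print(sp.simplify(lhs - rhs - P))                    # 0, as claimed in the lemma
```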
We introduce the projection morphism \(\pi\) by
\[\pi:K[z][[1/z]]\longrightarrow K[z][[1/z]]/K[z]\cong(1/z)\cdot K[[1/z]];\quad f (z)=P(z)+\tilde{f}(z)\mapsto\tilde{f}(z)\enspace,\]
where \(P(z)\in K[z]\) and \(\tilde{f}(z)\in(1/z)\cdot K[[1/z]]\). Lemma 2.4 leads us to show the following key proposition.
**Proposition 2.5**.: _Let \(D\in K[z,\partial_{z}]\) and \(f(z)\in(1/z)\cdot K[[1/z]]\). We have \(\varphi_{\pi(D\cdot f)}=\varphi_{f}\circ D^{*}\)._
Proof.: Lemma 2.4 implies that there exists a polynomial \(P(z)\) with
\[D\cdot f=P(z)+\varphi_{f}\left(D^{*}\cdot\frac{1}{z-t}\right)=P(z)+\sum_{k=0 }^{\infty}\frac{\varphi_{f}(D^{*}\cdot t^{k})}{z^{k+1}}\enspace.\]
This shows that \(\pi(D\cdot f)=\sum_{k=0}^{\infty}\varphi_{f}(D^{*}\cdot t^{k})/z^{k+1}\) and therefore
\[\varphi_{\pi(D\cdot f)}(t^{k})=\varphi_{f}\circ D^{*}(t^{k})\quad\text{for all}\quad k\geq 0\enspace.\]
This concludes the proof of Proposition 2.5.
As a corollary of Proposition 2.5, the following crucial equivalence relations hold.
**Corollary 2.6**.: _Let \(f(z)\in(1/z)\cdot K[[1/z]]\) and \(D\in K[z,\partial_{z}]\)._
_The following are equivalent._
\((i)\)__\(D\cdot f\in K[z]\)_._
\((ii)\)__\(D^{*}(K[t])\subseteq\ker\varphi_{f}\)_._
Proof.: The conditions \((i)\), \((ii)\) are equivalent to \(\pi(D\cdot f)=0\) and \(\varphi_{f}\circ D^{*}=0\) respectively. Therefore by Proposition 2.5, we obtain the assertion.
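To illustrate Corollary 2.6 (again only as an example), let \(f(z)=\sum_{k=0}^{\infty}1/z^{k+1}=1/(z-1)\) and \(D=(z-1)\partial_{z}+1\in K[z,\partial_{z}]\). Then \(D\cdot f=-\frac{z-1}{(z-1)^{2}}+\frac{1}{z-1}=0\in K[z]\), which is condition \((i)\). On the other hand, \(D^{*}=-\partial_{t}(t-1)+1\) and \(D^{*}\cdot t^{k}=-kt^{k-1}(t-1)\), so that \(\varphi_{f}(D^{*}\cdot t^{k})=-k(1-1)=0\) for all \(k\geq 0\), which is condition \((ii)\).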
**Remark 2.7**.: We can verify the similar statements of Lemma 2.4, Proposition 2.5 and Corollary 2.6 in the following situations, instead of \(\partial_{z}\), \(\partial_{t}\) and the map (2).
\((i)\) Let \(q\in K\setminus\{0\}\). Define the \(K\)-isomorphisms by
\[\sigma_{q}:K[z]\longrightarrow K[z];\;P(z)\mapsto P(qz),\;\;\boldsymbol{ \sigma}_{q}:K[t]\longrightarrow K[t];\;P(t)\mapsto P(qt)\]
and the map
\[\iota:K[z,\sigma_{q}]\longrightarrow K[t,\mathbf{\sigma}_{q^{-1}}];\ \sum_{j}a_{j}(z)\sigma_{q}^{j}\mapsto\sum_{j}q^{-j}\mathbf{\sigma}_{q^{-1 }}^{j}a_{j}(t)\enspace.\]
\((ii)\) Let \(q\in K\setminus\{0\}\). Define the \(K\)-morphisms by
\[\delta_{q}:K[z]\longrightarrow K[z];\ P(z)\mapsto\frac{P(qz)-P(z)}{(q-1)z},\ \ \mathbf{\delta}_{q}:K[t]\longrightarrow K[t];\ P(t)\mapsto\frac{P(qt)-P(t)}{(q- 1)t}\]
and the map
\[\iota:K[z,\delta_{q}]\longrightarrow K[t,\boldsymbol{\delta}_{q^{-1}}];\ \sum_{j}a_{j}(z)\delta_{q}^{j}\mapsto\sum_{j}(-q^{-1})^{j}\boldsymbol{\delta}_{q^{-1}}^{j}a_{j}(t)\enspace.\]
\((iii)\) Let \(\alpha\in K\setminus\{0\}\). Define the \(K\)-isomorphisms by
\[S_{\alpha}:K[z]\longrightarrow K[z];\ P(z)\mapsto P(z+\alpha),\ \ \mathcal{S}_{\alpha}:K[t]\longrightarrow K[t];\ P(t)\mapsto P(t+\alpha)\]
and the map
\[\iota:K[z,S_{\alpha}]\longrightarrow K[t,\mathcal{S}_{-\alpha}];\ \sum_{j}a_{j}(z)S_{\alpha}^{j}\mapsto\sum_{j}a_{j}(t)\mathcal{S}_{-\alpha}^{j}\enspace.\]
The proofs of the analogues of Lemma 2.4, Proposition 2.5 and Corollary 2.6 for \((i)\), \((ii)\), \((iii)\), and their applications to the explicit construction of Pade approximants, will be considered in forthcoming papers.
## 3 Weighted Rodrigues operators
Let \(K\) be a field of characteristic \(0\). Let us introduce the weighted Rodrigues operator, which was first defined by A. I. Aptekarev, A. Branquinho and W. Van Assche in [4].
**Definition 3.1**: (confer [4, (2.5)]) Let \(l\in\mathbb{N}\), \(a_{1}(z),\ldots,a_{l}(z)\in K[z]\setminus\{0\}\), \(b(z)\in K[z]\). Put \(a(z)=a_{1}(z)\cdots a_{l}(z)\), \(D=-a(z)\partial_{z}+b(z)\). For \(n\in\mathbb{N}\) and a weight \(\vec{r}=(r_{1},\ldots,r_{l})\in\mathbb{Z}^{l}\) with \(r_{i}\geq 0\), we define the weighted Rodrigues operator associated with \(D\) by
\[R_{D,n,\vec{r}}=\frac{1}{n!}\left(\partial_{z}+\frac{b(z)}{a(z)}\right)^{n}a( z)^{n}\prod_{v=1}^{l}a_{v}(z)^{-r_{v}}\in K(z)[\partial_{z}]\enspace.\]
In the case of \(\vec{r}=(0,\ldots,0)\), we denote \(R_{D,n,\vec{r}}=R_{D,n}\) and call this operator the \(n^{\rm th}\) Rodrigues operator associated with \(D\).
We denote the weighted Rodrigues operator associated with \(D\) with respect to the parameter \(t\) by
\[{\cal R}_{D,n,\vec{r}}=\frac{1}{n!}\left(\partial_{t}+\frac{b(t)}{a(t)}\right) ^{n}a(t)^{n}\prod_{v=1}^{l}a_{v}(t)^{-r_{v}}\in K(t)[\partial_{t}]\enspace,\]
and \({\cal R}_{D,n,\vec{r}}={\cal R}_{D,n}\) in the case of \(\vec{r}=(0,\ldots,0)\).
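As a familiar special case (recorded here only for orientation), take \(l=1\), \(a_{1}(z)=z^{2}-1\), \(b(z)=0\) and \(\vec{r}=(0)\), so that \(D=-(z^{2}-1)\partial_{z}\) and
\[R_{D,n}=\frac{1}{n!}\partial_{z}^{n}(z^{2}-1)^{n}\enspace.\]
Then \(R_{D,n}\cdot 1=\frac{1}{n!}\frac{d^{n}}{dz^{n}}(z^{2}-1)^{n}\), which is \(2^{n}\) times the \(n^{\rm th}\) Legendre polynomial by the classical Rodrigues formula; this motivates the terminology.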
Let us show some basic properties of the weighted Rodrigues operator in order to obtain a generalization of the Rodrigues formula for Pade approximants of holonomic Laurent series. In the following, for \(a(z)\in K[z]\) (resp. \(a(t)\in K[t]\)), we denote the ideal of \(K[z]\) (resp. \(K[t]\)) generated by \(a(z)\) (resp. \(a(t)\)) by \((a(z))\) (resp. \((a(t))\)).
**Proposition 3.2**.: _Let \(a(t),b(t)\in K[t]\) with \(a(t)\neq 0\). Put \(\mathcal{E}_{a,b}=\partial_{t}+b(t)/a(t)\in K(t)[\partial_{t}]\)._
\((i)\) _Let \(n,k\) be non-negative integers. Then there exist integers \((c_{n,k,l})_{0\leq l\leq\min(n,k)}\) with_
\[c_{n,k,\min(n,k)}=(-1)^{n}k(k-1)\cdots(k-n+1)\enspace,\]
\[t^{k}\mathcal{E}_{a,b}^{n}=\sum_{l=0}^{\min(n,k)}c_{n,k,l}\mathcal{E}_{a,b}^{ n-l}t^{k-l}\in K(t)[\partial_{t}]\enspace. \tag{4}\]
\((ii)\) _Assume there exist polynomials \(a_{1}(t),\ldots,a_{l}(t)\in K[t]\) with \(a(t)=a_{1}(t)\cdots a_{l}(t)\). For an \(l\)-tuple of non-negative integers \(\boldsymbol{s}:=(s_{1},\ldots,s_{l})\), we denote by \(\mathrm{I}(\boldsymbol{s})\) the ideal of \(K[t]\) generated by \(\prod_{v=1}^{l}a_{v}(t)^{s_{v}}\). Then for \(n\geq 1\) and \(F(t)\in\mathrm{I}(\boldsymbol{s})\), we have_
\[\mathcal{E}_{a,b}^{n}a(t)^{n}\cdot F(t)\in\mathrm{I}(\boldsymbol{s})\enspace. \tag{5}\]
Proof.: \((i)\) We prove the assertion by induction on \((n,k)\in\mathbb{Z}^{2}\) with \(n,k\geq 0\). In the case of \((n,k)=(0,0)\), we have \(c_{0,0,0}=1\). Let \(n,k\) be non-negative integers with \(n\geq 1\) or \(k\geq 1\). We assume that the assertion holds for any pairs \((\tilde{n},\tilde{k})\in\{(\tilde{n},\tilde{k})\in\mathbb{Z}^{2}\mid 0\leq \tilde{n},\tilde{k}\ \ \text{and}\ \ \tilde{n}<n\ \text{and}\ \ \tilde{k}\leq k\}\). The equality \(t^{k}\mathcal{E}_{a,b}=\mathcal{E}_{a,b}t^{k}-kt^{k-1}\) in \(K[t,\partial_{t}]\) implies that we have
\[t^{k}\mathcal{E}_{a,b}^{n} =(\mathcal{E}_{a,b}t^{k}-kt^{k-1})\mathcal{E}_{a,b}^{n-1}\] \[=\mathcal{E}_{a,b}\sum_{l=0}^{\min(n-1,k)}c_{n-1,k,l}\mathcal{E} _{a,b}^{n-1-l}t^{k-l}-k\sum_{l=0}^{\min(n-1,k-1)}c_{n,k-1,l}\mathcal{E}_{a,b}^ {n-1-l}t^{k-1-l}\] \[=\sum_{l=0}^{\min(n-1,k)}c_{n-1,k,l}\mathcal{E}_{a,b}^{n-l}t^{k-l }-\sum_{l=0}^{\min(n-1,k-1)}kc_{n-1,k-1,l}\mathcal{E}_{a,b}^{n-1-l}t^{k-1-l}\enspace. \tag{6}\]
Note that we use the induction hypothesis in Eq. (6). This concludes the assertion for \((n,k)\).
\((ii)\) Let us prove the statement by induction on \(n\). In the case of \(n=1\), since we have
\[\mathcal{E}_{a,b}a(t)\cdot F(t)=(\partial_{t}a(t)+b(t))\cdot F(t)=a^{\prime}( t)F(t)+a(t)F^{\prime}(t)+b(t)F(t)\enspace,\]
using the Leibniz formula, we obtain (5). We assume (5) holds for \(n\geq 1\). In the case of \(n+1\), we have
\[\mathcal{E}_{a,b}^{n+1}a(t)^{n+1}\cdot F(t)=\mathcal{E}_{a,b}\mathcal{E}_{a,b }^{n}a(t)^{n}\cdot a(t)F(t)\enspace. \tag{7}\]
Note that we have \(a(t)F(t)\in\mathrm{I}(\boldsymbol{s}+\boldsymbol{1})\) where \(\boldsymbol{s}+\boldsymbol{1}:=(s_{1}+1,\ldots,s_{l}+1)\in\mathbb{N}^{l}\). Relying on the induction hypothesis, we deduce \(\mathcal{E}_{a,b}^{n}a(t)^{n}\cdot a(t)F(t)\in\mathrm{I}(\boldsymbol{s}+\boldsymbol{1})\). Thus there exists a polynomial \(\tilde{F}(t)\in\mathrm{I}(\boldsymbol{s})\) with \(\mathcal{E}_{a,b}^{n}a(t)^{n}\cdot a(t)F(t)=a(t)\tilde{F}(t)\). Substituting this equality into Eq. (7) and arguing as in the case \(n=1\), we conclude \(\mathcal{E}_{a,b}^{n+1}a(t)^{n+1}\cdot F(t)\in\mathrm{I}(\boldsymbol{s})\).
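As a sanity check of \((i)\) in the smallest non-trivial case \(n=k=1\): since \([\mathcal{E}_{a,b},t]=1\) in \(K(t)[\partial_{t}]\), we have \(t\mathcal{E}_{a,b}=\mathcal{E}_{a,b}t-1\), that is, \(c_{1,1,0}=1\) and \(c_{1,1,1}=-1=(-1)^{1}\cdot 1\), in accordance with Eq. (4).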
**Corollary 3.3**.: \((i)\) _Let \(a(z)\in K[z]\setminus\{0\}\) and \(b(z)\in K[z]\). We put \(D=-a(z)\partial_{z}+b(z)\). Let \(f(z)\in(1/z)\cdot K[[1/z]]\setminus\{0\}\) with \(D\cdot f(z)\in K[z]\). Put \(\mathcal{E}_{a,b}=\partial_{t}+b(t)/a(t)\in K(t)[\partial_{t}]\). Then, for \(n,k\in\mathbb{Z}\) with \(0\leq k<n\), we have_
\[t^{k}\mathcal{E}_{a,b}^{n}\cdot(a(t)^{n})\subseteq\ker\varphi_{f}\enspace.\]
\((ii)\) _Let \(d,l\in\mathbb{N}\), \((n_{1},\ldots,n_{d})\in\mathbb{N}^{d}\) and \(a_{1}(t),\ldots,a_{l}(t)\in K[t]\setminus\{0\}\). Put \(a(t)=a_{1}(t)\cdots a_{l}(t)\). For \(b_{1}(t),\ldots,b_{d}(t)\in K[t]\) and \(l\)-tuple of non-negative integers \(\vec{r}_{j}=(r_{j,1},\ldots,r_{j,l})\)\((1\leq j\leq d)\), we put \(D_{j}=-a(z)\partial_{z}+b_{j}(z)\) and_
\[\mathcal{R}_{D_{j},n_{j},\vec{r}_{j}}=\mathcal{R}_{j,n_{j}}=\frac{1}{n_{j}!} \mathcal{E}_{a,b_{j}}^{n_{j}}a(t)^{n_{j}}\prod_{v=1}^{l}a_{v}(t)^{-r_{j,v}}\in K (t)[\partial_{t}]\enspace.\]
_We assume_
\[{\cal R}_{j_{1},n_{j_{1}}}{\cal R}_{j_{2},n_{j_{2}}}={\cal R}_{j_{2},n_{j_{2}}}{ \cal R}_{j_{1},n_{j_{1}}}\ \ \mbox{for}\ \ 1\leq j_{1},j_{2}\leq d\enspace.\]
_Let \(s_{1},\ldots,s_{d}\) be non-negative integers and \(F(t)\in\left(\prod_{v=1}^{l}\!a_{v}(t)^{s_{v}+\sum_{j=1}^{d}r_{j,v}}\right)\). Then we have_
\[\prod_{j=1}^{d}{\cal R}_{j,n_{j}}\cdot F(t)\in\left(\prod_{v=1}^{l}\!a_{v}(t)^ {s_{v}}\right)\enspace.\]
Proof. \((i)\) By the definition of \(D\), we have \(D^{*}={\cal E}_{a,b}a(t)\). Since \(D\cdot f\in K[z]\), Corollary 2.6 yields \({\cal E}_{a,b}\cdot(a(t))=D^{*}(K[t])\subseteq\ker\varphi_{f}\), so it suffices to show \(t^{k}{\cal E}_{a,b}^{n}\cdot(a(t)^{n})\subset{\cal E}_{a,b}\cdot(a(t))\). Relying on Proposition 3.2\((i)\), there are \(\{c_{n,k,l}\}_{0\leq l\leq k}\subset K\) with
\[t^{k}{\cal E}_{a,b}^{n}=\sum_{l=0}^{k}c_{n,k,l}{\cal E}_{a,b}^{n-l}t^{k-l}\enspace. \tag{8}\]
For an integer \(l\) with \(0\leq l\leq k\), we obtain
\[{\cal E}_{a,b}^{n-l}t^{k-l}\cdot(a(t)^{n})\subset{\cal E}_{a,b}{\cal E}_{a,b}^ {n-l-1}\cdot(a(t)^{n})\enspace.\]
The Leibniz formula allows us to get \({\cal E}_{a,b}^{n-l-1}\cdot(a(t)^{n})\subset(a(t))\). Combining Eq. (8) and above relation concludes
\[t^{k}{\cal E}_{a,b}^{n}\cdot(a(t)^{n})\subset{\cal E}_{a,b}\cdot(a(t))\enspace.\]
This completes the proof of \((i)\).
\((ii)\) By the commutativity of \({\cal R}_{j,n_{j}}\), it suffices to prove the assertion in the case of \(d=1\). By the definition of \({\cal R}_{1,n_{1}}\), we have
\[{\cal R}_{1,n_{1}}\cdot F(t)=\frac{1}{n_{1}!}{\cal E}_{a,b_{1}}^{n_{1}}a(t)^{n _{1}}\prod_{v=1}^{d}a_{v}(t)^{-r_{1,v}}\cdot F(t)\in{\cal E}_{a,b_{1}}^{n_{1}} a(t)^{n_{1}}\cdot\left(\prod_{v=1}^{l}a_{v}(t)^{s_{v}}\right)\enspace.\]
Using Proposition 3.2\((ii)\), we conclude that \({\cal E}_{a,b_{1}}^{n_{1}}a(t)^{n_{1}}\cdot\left(\prod_{v=1}^{l}a_{v}(t)^{s_{v} }\right)\subset\left(\prod_{v=1}^{l}a_{v}(t)^{s_{v}}\right)\). This completes the proof of \((ii)\).
## 4 Rodrigues formula of Pade approximants
**Lemma 4.1**.: _Let \(a(z),b(z)\in K[z]\) with \(a(z)\neq 0\), \(\deg a=u\) and \(\deg b=v\). Put_
\[D=-a(z)\partial_{z}+b(z)\in K[z,\partial_{z}],\ \ a(z)=\sum_{i=0}^{u}a_{i}z^{i},\ \ b(z)=\sum_{j=0}^{v}b_{j}z^{j}\enspace,\]
_and \(w=\max(u-2,v-1)\). Assume \(w\geq 0\). Then there exist \(f_{0}(z),\ldots,f_{w}(z)\in(1/z)\cdot K[[1/z]]\) which are linearly independent over \(K\) and satisfy \(D\cdot f_{l}(z)\in K[z]\) for \(0\leq l\leq w\)._
Proof. Let \(f(z)=\sum_{k=0}^{\infty}f_{k}/z^{k+1}\in(1/z)\cdot K[[1/z]]\) be a Laurent series. There exists a polynomial \(A(z)\in K[z]\) which depends on the operator \(D\) and \(f\) with \(\deg A\leq w\) and satisfying
\[D\cdot f(z)=A(z)+\sum_{k=0}^{\infty}\frac{\sum_{i=0}^{u}a_{i}(k+i)f_{k+i-1}+ \sum_{j=0}^{v}b_{j}f_{k+j}}{z^{k+1}}\enspace. \tag{9}\]
Put
\[\sum_{i=0}^{u}a_{i}(k+i)f_{k+i-1}+\sum_{j=0}^{v}b_{j}f_{k+j}=c_{k,0}f_{k-1}+\cdots+c _{k,w}f_{k+w}+c_{k,w+1}f_{k+w+1}\quad\text{for}\quad k\geq 0\enspace,\]
with \(c_{0,0}=0\). Notice that \(c_{k,w+1}\) is \(a_{u}(k+u)\) if \(u-2>v-1\), \(b_{v}\) if \(u-2<v-1\) and \(a_{u}(k+u)+b_{v}\) if \(u-2=v-1\). Put \(M=\min(k\geq 0\mid c_{k^{\prime},w+1}\neq 0\) for all \(k^{\prime}\geq k)\). For \(0\leq l\leq w\), we take a sequence \((f_{l,k})_{k\geq 0}\in K^{\mathbb{N}}\) with
\[f_{l,0}=\cdots=f_{l,M-1}=0,\ \ f_{l,M+k}=\delta_{l,k}\ \ \text{for}\ \ 0\leq k\leq w\enspace,\]
where \(f_{l,0}=\cdots=f_{l,M-1}=0\) is an empty condition if \(M=0\), \(\delta_{l,k}\) is the Kronecker symbol and
\[\sum_{i=0}^{u}a_{i}(k+i)f_{l,k+i-1}+\sum_{j=0}^{v}b_{j}f_{l,k+j}=0\ \ \text{for}\ \ k\geq M\enspace.\]
Put \(f_{l}(z)=\sum_{k=0}^{\infty}f_{l,k}/z^{k+1}\). Then, by the definition of \(f_{l}(z)\), the series \(f_{0}(z),\ldots,f_{w}(z)\) are linearly independent over \(K\), and Eq. (9) implies \(D\cdot f_{l}(z)\in K[z]\). This completes the proof of Lemma 4.1.
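For example (anticipating the case \(u=2\) of Example 6.1 below), take \(D=-(z^{2}-1)\partial_{z}-z\). Here \(\deg a=2\) and \(\deg b=1\), so \(w=0\), and Lemma 4.1 produces a single series; indeed \(f_{0}(z)=(z^{2}-1)^{-1/2}=\frac{1}{z}+\frac{1}{2z^{3}}+\cdots\in(1/z)\cdot K[[1/z]]\) satisfies
\[D\cdot f_{0}(z)=z(z^{2}-1)^{-1/2}-z(z^{2}-1)^{-1/2}=0\in K[z]\enspace.\]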
Let us state a generalization of the Rodrigues formula for the Legendre polynomials to Pade approximants of certain holonomic Laurent series, which generalizes [4, Theorem 1].
**Theorem 4.2**.: _Let \(l,d\in\mathbb{N}\), \((a_{1}(z),\ldots,a_{l}(z))\in(K[z]\setminus\{0\})^{l}\) and \((b_{1}(z),\ldots,b_{d}(z))\in K[z]^{d}\). Put \(a(z)=a_{1}(z)\cdots a_{l}(z)\). Put \(D_{j}=-a(z)\partial_{z}+b_{j}(z)\in K[z,\partial_{z}]\) and \(w_{j}=\max(\deg a-2,\deg b_{j}-1)\). Assume \(w_{j}\geq 0\) for \(1\leq j\leq d\). Let \(f_{j,0}(z),\ldots,f_{j,w_{j}}(z)\in(1/z)\cdot K[[1/z]]\) be formal Laurent series which are linearly independent over \(K\) satisfying_
\[D_{j}\cdot f_{j,u_{j}}(z)\in K[z]\ \ \text{for}\ \ 0\leq u_{j}\leq w_{j}\enspace.\]
_Let \((n_{1},\ldots,n_{d})\in\mathbb{N}^{d}\). For \(l\)-tuple of non-negative integers \(\vec{r}_{j}=(r_{j,1},\ldots,r_{j,l})\)\((1\leq j\leq d)\), we denote by \(R_{j,n_{j}}\) the weighted Rodrigues operator \(R_{D_{j},n_{j},\vec{r}_{j}}\) associated with \(D_{j}\). Assume_
\[R_{j_{1},n_{j_{1}}}R_{j_{2},n_{j_{2}}}=R_{j_{2},n_{j_{2}}}R_{j_{1},n_{j_{1}}} \ \ \text{for}\ \ 1\leq j_{1},j_{2}\leq d\enspace. \tag{10}\]
_Take a non-zero polynomial \(F(z)\) which is contained in the ideal \(\left(\prod_{v=1}^{l}a_{v}(z)^{\sum_{j=1}^{d}r_{j,v}}\right)\) and put_
\[P(z)=\prod_{j=1}^{d}R_{j,n_{j}}\cdot F(z)\enspace,\] \[Q_{j,u_{j}}(z)=\varphi_{f_{j,u_{j}}}\left(\frac{P(z)-P(t)}{z-t} \right)\ \ \text{for}\ \ 1\leq j\leq d,\ 0\leq u_{j}\leq w_{j}\enspace.\]
_Assume \(P(z)\neq 0\)*. Then the vector of polynomials \((P(z),Q_{j,u_{j}}(z))_{\begin{subarray}{c}1\leq j\leq d\\ 0\leq u_{j}\leq w_{j}\end{subarray}}\) is a weight \((\boldsymbol{n}_{1},\ldots,\boldsymbol{n}_{d})\in\mathbb{N}^{\sum_{j=1}^{d}(w_{j}+1)}\) Pade type approximants of \((f_{j,u_{j}}(z))_{\begin{subarray}{c}1\leq j\leq d\\ 0\leq u_{j}\leq w_{j}\end{subarray}}\), where \(\boldsymbol{n}_{j}=(n_{j},\ldots,n_{j})\in\mathbb{N}^{w_{j}+1}\) for \(1\leq j\leq d\)._
Footnote *: We need to assume \(P(z)\neq 0\). For example, in the case of \(d=1\), \(D=-\partial_{z}z^{2}=-z^{2}\partial_{z}-2z\) and \(n=1\), we have \(P(z)=(\partial_{z}-2/z)z^{2}\cdot 1=0\).
Proof.: By Lemma 2.3, it suffices to prove that every triple \((j,u_{j},k)\) with \(1\leq j\leq d\), \(0\leq u_{j}\leq w_{j}\), \(0\leq k\leq n_{j}-1\) satisfies \(t^{k}P(t)\in\ker\varphi_{f_{j,u_{j}}}\). Put \(\mathcal{R}_{j,n_{j}}=\mathcal{R}_{D_{j},n_{j},\vec{r}_{j}}\). Then we have \(P(t)=\prod_{j=1}^{d}\mathcal{R}_{j,n_{j}}\cdot F(t)\) and thus
\[t^{k}P(t)=t^{k}\mathcal{R}_{j,n_{j}}\prod_{j^{\prime}\neq j}\mathcal{R}_{j^{ \prime},n_{j^{\prime}}}\cdot F(t)\enspace. \tag{11}\]
Since \(F(t)\in\left(\prod_{v=1}^{l}a_{v}(z)^{r_{j,v}+\sum_{j^{\prime}\neq j}^{l}r_{j^{ \prime},v}}\right)\), using Corollary 3.3\((ii)\), we obtain
\[\prod_{j^{\prime}\neq j}\mathcal{R}_{j^{\prime},n_{j^{\prime}}}\cdot F(t)\in \left(\prod_{v=1}^{l}a_{v}(t)^{r_{j,v}}\right)\enspace.\]
Combining Eq. (11) and above relation yields
\[t^{k}P(t)\in t^{k}\mathcal{R}_{j,n_{j}}\cdot\left(\prod_{v=1}^{l}a_{v}(t)^{r_{j,v}}\right)\subseteq t^{k}\mathcal{E}_{a,b_{j}}^{n_{j}}\cdot(a(t)^{n_{j}})\subseteq\ker\varphi_{f_{j,u_{j}}}\enspace.\]
Note that the last inclusion is obtained from Corollary 3.3\((i)\) for \(D_{j}\cdot f_{j,u_{j}}(z)\in K[z]\).
### Commutativity of differential operators
In this subsection, we give a sufficient condition under which weighted Rodrigues operators commute. We denote \(\partial_{z}\cdot c(z)\) by \(c^{\prime}(z)\) for any rational function \(c(z)\in K(z)\).
**Lemma 4.3**.: _Let \(a(z),b(z)\in K[z]\) and \(c(z)\in K(z)\) with \(a(z)c(z)\neq 0\). Let \(w(z)\) be a non-zero solution of \(-a(z)\partial_{z}+b(z)\) in some differential extension \(\mathcal{K}\) of \(K(z)\) and \(n\) a non-negative integer. Put_
\[R_{n}=\frac{1}{n!}\left(\partial_{z}+\frac{b(z)}{a(z)}\right)^{n}c(z)^{n}\in K (z)[\partial_{z}]\enspace.\]
_Then, in the ring \(\mathcal{K}[\partial_{z}]\), we have the equality \(:\)_
\[R_{n}=\frac{1}{n!}w(z)^{-1}\partial_{z}^{n}w(z)c(z)^{n}=\frac{1}{n!}R_{1}(R_{ 1}+c^{\prime}(z))\cdots(R_{1}+(n-1)c^{\prime}(z))\enspace.\]
Proof.: The first equality is readily obtained by the identity
\[\partial_{z}w(z)=w(z)\left(\partial_{z}+\frac{b(z)}{a(z)}\right)\enspace.\]
The second equality is proved by the identity
\[\left(\partial_{z}+\frac{b(z)}{a(z)}\right)c^{n}(z) =\left[c(z)^{n-1}\left(\partial_{z}+\frac{b(z)}{a(z)}\right)+(n-1 )c^{\prime}(z)c(z)^{n-2}\right]c(z)\] \[=c(z)^{n-1}(R_{1}+(n-1)c^{\prime}(z))\enspace.\]
This completes the proof of Lemma 4.3.
**Lemma 4.4**.: _Let \(a(z),b_{1}(z),b_{2}(z),c(z)\in K[z]\) with \(a(z)c(z)\neq 0\). For a non-negative integer \(n\) and \(j=1,2\), we put_
\[R_{j,n}=\frac{1}{n!}\left(\partial_{z}+\frac{b_{j}(z)}{a(z)}\right)^{n}c(z)^{n }\enspace.\]
_Assume \(\deg c\leq 1\). Then the following are equivalent._
\((i)\) _For any \(n_{1},n_{2}\in\mathbb{N}\), we have \(R_{1,n_{1}}R_{2,n_{2}}=R_{2,n_{2}}R_{1,n_{1}}\)._
\((ii)\) _We have \(\frac{b_{2}(z)-b_{1}(z)}{a(z)}c(z)\in K\)._
Proof.: Since \(\deg c\leq 1\) and therefore \(c^{\prime}(z)\in K\), using Lemma 4.3, we see that \((i)\) is equivalent to \(R_{1,1}R_{2,1}=R_{2,1}R_{1,1}\). Let us show that the commutativity of \(R_{j,1}\)\((j=1,2)\) is equivalent to \((ii)\). According to the identity,
\[R_{1,1}R_{2,1}=R_{2,1}R_{1,1}+(R_{2,1}-R_{1,1})c^{\prime}(z)+\left(\frac{b_{2 }(z)-b_{1}(z)}{a(z)}\right)^{\prime}c(z)\enspace,\]
the identity \(R_{1,1}R_{2,1}=R_{2,1}R_{1,1}\) is equivalent to
\[(R_{2,1}-R_{1,1})c^{\prime}(z)+\left(\frac{b_{2}(z)-b_{1}(z)}{a(z)} \right)^{\prime}c(z) =\frac{b_{2}(z)-b_{1}(z)}{a(z)}c^{\prime}(z)+\left(\frac{b_{2}(z)- b_{1}(z)}{a(z)}\right)^{\prime}c(z)\] \[=\left(\frac{b_{2}(z)-b_{1}(z)}{a(z)}c(z)\right)^{\prime}=0\enspace,\]
which means \((ii)\) holds. This completes the proof of Lemma 4.4.
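For instance, in the setting of Example 6.2 below one has \(a(z)=z^{2}\), \(b_{j}(z)=\gamma_{j}z-1\) and \(c(z)=z\), so that
\[\frac{b_{2}(z)-b_{1}(z)}{a(z)}c(z)=\frac{(\gamma_{2}-\gamma_{1})z}{z^{2}}\cdot z=\gamma_{2}-\gamma_{1}\in K\enspace,\]
and Lemma 4.4 gives the commutativity of the corresponding operators, as used there.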
## 5 Determinants associated with Pade approximants
Let \(f_{j,u_{j}}(z)\) be the Laurent series in Theorem 4.2. To obtain linear independence results on the values of \(f_{j,u_{j}}(z)\) by the method of Siegel (confer [30]), we need to study the non-vanishing of determinants of certain matrices. In this section, we compute the determinants of specific matrices whose entries are given by the Pade approximants of \(f_{j,u_{j}}(z)\) obtained in Theorem 4.2.
We shall now treat the following case. Let \(d\) be a non-negative integer and \(a_{1}(z),a_{2}(z),b_{1}(z),\ldots,b_{d}(z)\in K[z]\). Put \(a(z)=a_{1}(z)a_{2}(z)\), \(w_{j}=\max\{\deg a-2,\deg b_{j}-1\}\) and \(W=w_{1}+\cdots+w_{d}+d\).
Assume \(w_{j}\geq 0\), \(\deg a_{1}\leq 1\), \(a_{1}\) is a monic polynomial and
\[\gamma_{j_{1},j_{2}}=\frac{b_{j_{1}}(z)-b_{j_{2}}(z)}{a_{2}(z)}\in K\setminus \{0\}\ \ \text{for}\ \ 1\leq j_{1}<j_{2}\leq d\enspace.\]
Denote \(D_{j}=-a(z)\partial_{z}+b_{j}(z)\in K[z,\partial_{z}]\). Let \(f_{j,0}(z),\ldots,f_{j,w_{j}}(z)\in(1/z)\cdot K[[1/z]]\) which are linearly independent over \(K\) and satisfy
\[D_{j}\cdot f_{j,u_{j}}(z)\in K[z]\ \ \text{for}\ \ 1\leq j\leq d,\ \ 0\leq u_{j}\leq w_{j}\enspace.\]
For \(n\in\mathbb{N}\), we denote the weighted Rodrigues operator associated with \(D_{j}\) by
\[R_{j,n}=\frac{1}{n!}\left(\partial_{z}+\frac{b_{j}(z)}{a(z)}\right)^{n}a_{1}(z )^{n}\ \ \text{for}\ \ 1\leq j\leq d\enspace.\]
Applying Lemma 4.4 to the case \(a(z)=a_{1}(z)a_{2}(z)\) and \(c(z)=a_{1}(z)\) yields the commutativity of the differential operators \(R_{j,n}\), namely
\[R_{j_{1},n}R_{j_{2},n}=R_{j_{2},n}R_{j_{1},n}\ \ \text{for}\ \ 1\leq j_{1},j_{2}\leq d\enspace.\]
Put \(\varphi_{j,u_{j}}=\varphi_{f_{j,u_{j}}}\). For \(0\leq h\leq W\), we define
\[P_{n,h}(z)=P_{h}(z)=\prod_{j=1}^{d}R_{j,n}\cdot[z^{h}a_{2}(z)^{dn }]\enspace,\] \[Q_{n,j,u_{j},h}(z)=Q_{j,u_{j},h}(z)=\varphi_{j,u_{j}}\left(\frac {P_{h}(z)-P_{h}(t)}{z-t}\right)\ \ \text{for}\ \ 1\leq j\leq d,\ 0\leq u_{j}\leq w_{j}\enspace,\] \[\mathfrak{R}_{n,j,u_{j},h}(z)=\mathfrak{R}_{j,u_{j},h}(z)=P_{h}(z )f_{j,u_{j}}(z)-Q_{j,u_{j},h}(z)\ \ \text{for}\ \ 1\leq j\leq d,\ 0\leq u_{j}\leq w_{j}\enspace.\]
Assume \(P_{h}(z)\neq 0\). Theorem 4.2 yields that the vector of polynomials \((P_{h},Q_{j,u_{j},h})_{\begin{subarray}{c}1\leq j\leq d\\ 0\leq u_{j}\leq w_{j}\end{subarray}}\) is a weight \((n,\ldots,n)\in\mathbb{N}^{W}\) Pade type approximants of \((f_{j,u_{j}})_{\begin{subarray}{c}1\leq j\leq d\\ 0\leq u_{j}\leq w_{j}\end{subarray}}\).
First we compute the coefficients of \(1/z^{n+1}\) of \(\mathfrak{R}_{j,u_{j},h}(z)\).
**Lemma 5.1**: _Let notations be as above. For \(1\leq j\leq d\), \(0\leq u_{j}\leq w_{j}\) and \(0\leq h\leq W\), we have_
\[\mathfrak{R}_{j,u_{j},h}(z)=\sum_{k=n}^{\infty}\frac{\varphi_{j,u_{j}}(t^{k}P_{h }(t))}{z^{k+1}}\]
_and_
\[\varphi_{j,u_{j}}(t^{n}P_{h}(t))=\frac{(-1)^{n}}{(n!)^{d-1}}\prod_{ \begin{subarray}{c}1\leq j^{\prime}\leq d\\ j^{\prime}\neq j\end{subarray}}\left[\prod_{k=1}^{n}(\gamma_{j^{\prime},j}-k \varepsilon_{a_{1}})\right]\varphi_{j,u_{j}}(t^{h}a_{1}(t)^{n}\cdot a_{2}(t)^{ dn})\enspace,\]
_where \(\varepsilon_{a_{1}}=1\) if \(\deg a_{1}=1\) and \(\varepsilon_{a_{1}}=0\) if \(\deg a_{1}=0\)._
Proof.: Since \((P_{h},Q_{j,u_{j},h})_{j,u_{j}}\) is a weight \((n,\ldots,n)\in\mathbb{N}^{W}\) Pade type approximants of \((f_{j,u_{j}})_{j,u_{j}}\), we have \(\operatorname{ord}_{\infty}\mathfrak{R}_{j,u_{j},h}\geq n+1\), and the first equality is obtained by
\[\mathfrak{R}_{j,u_{j},h}(z)=\varphi_{j,u_{j}}\left(\frac{P_{h}(t)}{z-t}\right) =\sum_{k=n}^{\infty}\frac{\varphi_{j,u_{j}}(t^{k}P_{h}(t))}{z^{k+1}}\enspace.\]
We prove the second equality. Fix \(j\) and put \(\mathcal{E}_{a,b_{j^{\prime}}}=\partial_{t}+b_{j^{\prime}}(t)/a(t)\) for \(1\leq j^{\prime}\leq d\). Then we have
\[\mathcal{E}_{a,b_{j^{\prime}}}=\mathcal{E}_{a,b_{j}}+\frac{\gamma_{j^{\prime}, j}}{a_{1}(t)}\enspace, \tag{12}\]
and \(\mathcal{R}_{j^{\prime},n}=\frac{1}{n!}\mathcal{E}_{a,b_{j^{\prime}}}^{n}a_{1}(t)^{n}\). Using Proposition 3.2\((i)\), there exists a set of integers \(\{c_{j,l}\mid l=0,1,\ldots,n\}\) with \(c_{j,n}=(-1)^{n}n!\) and
\[t^{n}\mathcal{R}_{j,n}=\sum_{l=0}^{n}\frac{c_{j,l}}{n!}\mathcal{E}_{a,b_{j}}^{ n-l}t^{n-l}a_{1}(t)^{n}\enspace.\]
Note, by Leibniz formula, the polynomial \(\prod_{j^{\prime}\neq j}\mathcal{R}_{j^{\prime},n}\cdot[t^{h}a_{2}(t)^{dn}]\) is contained in the ideal \((a_{2}(t)^{n})\). By Corollary 3.3\((i)\), we have
\[\mathcal{E}_{a,b_{j}}^{n-l}a_{1}(t)^{n}\cdot(a_{2}(t)^{n})\subseteq\ker\varphi _{j,u_{j}}\enspace\text{for}\enspace 0\leq l\leq n-1\]
and thus we have
\[t^{n}P_{h}(t) =t^{n}\mathcal{R}_{j,n}\prod_{j^{\prime}\neq j}\mathcal{R}_{j^{ \prime},n}\cdot[t^{h}a_{2}(t)^{dn}]=t^{n}\mathcal{E}_{a,b_{j}}^{n}a_{1}(t)^{n} \prod_{j^{\prime}\neq j}\mathcal{R}_{j^{\prime},n}\cdot[t^{h}a_{2}(t)^{dn}]\] \[=\sum_{l=0}^{n}\frac{c_{j,l}}{n!}\mathcal{E}_{a,b_{j}}^{n-l}t^{n-l }a_{1}(t)^{n}\prod_{j^{\prime}\neq j}\mathcal{R}_{j^{\prime},n}\cdot[t^{h}a_{ 2}(t)^{dn}] \tag{13}\] \[\equiv(-1)^{n}a_{1}(t)^{n}\prod_{j^{\prime}\neq j}\mathcal{R}_{j^ {\prime},n}\cdot[t^{h}a_{2}(t)^{dn}]\enspace\text{mod}\enspace\ker\varphi_{j,u _{j}}\enspace.\]
Eq. (12) yields
\[a_{1}(t)^{n}\mathcal{R}_{j^{\prime},n} =\frac{a_{1}(t)^{n}}{n!}\left(\mathcal{E}_{a,b_{j}}+\frac{\gamma_ {j^{\prime},j}}{a_{1}(t)}\right)^{n}a_{1}(t)^{n}\] \[=\frac{1}{n!}\left(\mathcal{E}_{a,b_{j}}a_{1}(t)^{n}+(\gamma_{j^{ \prime},j}-n\varepsilon_{a_{1}})a_{1}(t)^{n-1}\right)\left(\mathcal{E}_{a,b_{j }}+\frac{\gamma_{j^{\prime},j}}{a_{1}(t)}\right)^{n-1}a_{1}(t)^{n}\] \[\equiv\frac{1}{n!}(\gamma_{j^{\prime},j}-n\varepsilon_{a_{1}})a_{ 1}(t)^{n-1}\left(\mathcal{E}_{a,b_{j}}+\frac{\gamma_{j^{\prime},j}}{a_{1}(t)} \right)^{n-1}a_{1}(t)^{n}\enspace\text{mod}\enspace\mathcal{E}_{a,b_{j}}a_{1}(t )\cdot K[t,\partial_{t}]\] \[\equiv\frac{1}{n!}\prod_{k=1}^{n}(\gamma_{j^{\prime},j}-k \varepsilon_{a_{1}})a_{1}(t)^{n}\enspace\text{mod}\enspace\mathcal{E}_{a,b_{j }}a_{1}(t)\cdot K[t,\partial_{t}]\enspace. \tag{14}\]
Remark that we use the assumption \(\deg a_{1}\leq 1\) in Eq. (14). Combining above equality and Eq. (13) yields
\[\varphi_{j,u_{j}}(t^{n}P_{h}(t))=\frac{(-1)^{n}}{(n!)^{d-1}}\prod_{ \begin{subarray}{c}1\leq j^{\prime}\leq d\\ j^{\prime}\neq j\end{subarray}}\left[\prod_{k=1}^{n}(\gamma_{j^{\prime},j}-k \varepsilon_{a_{1}})\right]\varphi_{j,u_{j}}(t^{h}a_{1}(t)^{n}\cdot a_{2}(t) ^{dn})\enspace.\]
This completes the proof of Lemma 5.1.
For a non-negative integer \(n\), we now consider the determinant of following \((W+1)\times(W+1)\) matrix
\[\Delta_{n}(z)=\Delta(z)=\det\begin{pmatrix}P_{0}(z)&P_{1}(z)&\ldots&P_{W}(z)\\ Q_{1,0,0}(z)&Q_{1,0,1}(z)&\ldots&Q_{1,0,W}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{1,w_{1},0}(z)&Q_{1,w_{1},1}(z)&\ldots&Q_{1,w_{1},W}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{d,0,0}(z)&Q_{d,0,1}(z)&\ldots&Q_{d,0,W}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{d,w_{d},0}(z)&Q_{d,w_{d},1}(z)&\ldots&Q_{d,w_{d},W}(z)\end{pmatrix}\enspace.\]
To compute \(\Delta(z)\), we define the determinant of following \(W\times W\) matrix
\[\Theta_{n}=\Theta=\det\begin{pmatrix}\varphi_{1,0}(a_{1}(t)^{n}a_{2}(t)^{dn}) &\varphi_{1,0}(ta_{1}(t)^{n}a_{2}(t)^{dn})&\ldots&\varphi_{1,0}(t^{W-1}a_{1}(t )^{n}a_{2}(t)^{dn})\\ \vdots&\vdots&\ddots&\vdots\\ \varphi_{1,w_{1}}(a_{1}(t)^{n}a_{2}(t)^{dn})&\varphi_{1,w_{1}}(ta_{1}(t)^{n}a_ {2}(t)^{dn})&\ldots&\varphi_{1,w_{1}}(t^{W-1}a_{1}(t)^{n}a_{2}(t)^{dn})\\ \vdots&\vdots&\ddots&\vdots\\ \varphi_{d,0}(a_{1}(t)^{n}a_{2}(t)^{dn})&\varphi_{d,0}(ta_{1}(t)^{n}a_{2}(t)^{ dn})&\ldots&\varphi_{d,0}(t^{W-1}a_{1}(t)^{n}a_{2}(t)^{dn})\\ \vdots&\vdots&\ddots&\vdots\\ \varphi_{d,w_{d}}(a_{1}(t)^{n}a_{2}(t)^{dn})&\varphi_{d,w_{d}}(ta_{1}(t)^{n}a_ {2}(t)^{dn})&\ldots&\varphi_{d,w_{d}}(t^{W-1}a_{1}(t)^{n}a_{2}(t)^{dn})\end{pmatrix}\enspace.\]
Then we have :
**Proposition 5.2**.: _We have_
\[\Delta(z)=\left(\frac{(-1)^{(n+1)}}{(n!)^{d-1}}\right)^{W}\times\frac{1}{[(n+ 1)W]!}\partial_{z}^{(n+1)W}\cdot P_{W}(z)\times\prod_{j=1}^{d}\left[\prod_{ \begin{subarray}{c}1\leq j^{\prime}\leq d\\ j^{\prime}\neq j\end{subarray}}\prod_{k=1}^{n}(\gamma_{j^{\prime},j}-k \varepsilon_{a_{1}})\right]\times\Theta\in K\enspace,\]
_where \(\varepsilon_{a_{1}}\) is a real number defined in Lemma 5.1._
Proof.: First, by the definition of \(P_{l}(z)\), we have
\[\deg P_{l}\leq nW+l\enspace. \tag{15}\]
For the matrix in the definition of \(\Delta(z)\), adding \(-f_{j,u_{j}}(z)\) times the first row to the \(\left(\sum_{j^{\prime}=1}^{j-1}(w_{j^{\prime}}+1)+u_{j}+2\right)\)-th row for \(1\leq j\leq d,0\leq u_{j}\leq w_{j}\), we obtain
\[\Delta(z)=(-1)^{W}\text{det}\begin{pmatrix}P_{0}(z)&\ldots&P_{W}(z)\\ \mathfrak{R}_{1,0,0}(z)&\ldots&\mathfrak{R}_{1,0,W}(z)\\ \vdots&\ddots&\vdots\\ \mathfrak{R}_{1,w_{1},0}(z)&\ldots&\mathfrak{R}_{1,w_{1},W}(z)\\ \vdots&\ddots&\vdots\\ \mathfrak{R}_{d,0,0}(z)&\ldots&\mathfrak{R}_{d,0,W}(z)\\ \vdots&\ddots&\vdots\\ \mathfrak{R}_{d,w_{d},0}(z)&\ldots&\mathfrak{R}_{d,w_{d},W}(z)\end{pmatrix}\.\]
We denote the \((s,t)\)-th cofactor of the matrix on the right-hand side of the above equality by \(\Delta_{s,t}(z)\). Then, expanding along the first row, we have
\[\Delta(z)=(-1)^{W}\left(\sum_{l=0}^{W}P_{l}(z)\Delta_{1,l+1}(z)\right)\enspace. \tag{16}\]
Since
\[\operatorname{ord}_{\infty}\mathfrak{R}_{j,u_{j},h}(z)\geq n+1\ \ \text{for}\ \ 1\leq j\leq d,\ 0\leq u_{j}\leq w_{j},\ 0\leq h\leq W\enspace,\]
we have
\[\text{ord}_{\infty}\,\Delta_{1,l+1}(z)\geq(n+1)W\ \ \text{for}\ \ 0\leq l\leq W\ . \tag{17}\]
Combining (15) and above inequality yields
\[P_{l}(z)\Delta_{1,l+1}(z)\in(1/z)\cdot K[[1/z]]\ \ \text{for}\ \ 0\leq l\leq W-1\ \,\]
and
\[P_{W}(z)\Delta_{1,W+1}(z)\in K[[1/z]]\enspace.\]
Note that in the above relation, the constant term of \(P_{W}(z)\Delta_{1,W+1}(z)\) is
\[\left(\text{the coefficient of }z^{(n+1)W}\text{ of }P_{W}(z)\right)\cdot\left(\text{the coefficient of }1/z^{(n+1)W}\text{ of }\Delta_{1,W+1}(z)\right)\enspace. \tag{18}\]
Since \(\Delta(z)\) is a polynomial in \(z\), while Eq. (16) and the above relations show that \(\Delta(z)\in K[[1/z]]\), it has to be a constant. At last, by Lemma 5.1, the coefficient of \(1/z^{(n+1)W}\) of \(\Delta_{1,W+1}(z)\) is
\[\det\begin{pmatrix}(-1)^{n}\varphi_{1,0}(t^{n}P_{0}(t))&\ldots&(-1)^{n}\varphi_{1,0}(t^{n}P_{W-1}(t))\\ \vdots&\ddots&\vdots\\ (-1)^{n}\varphi_{1,w_{1}}(t^{n}P_{0}(t))&\ldots&(-1)^{n}\varphi_{1,w_{1}}(t^{n}P_{W-1}(t))\\ \vdots&\ddots&\vdots\\ (-1)^{n}\varphi_{d,0}(t^{n}P_{0}(t))&\ldots&(-1)^{n}\varphi_{d,0}(t^{n}P_{W-1}(t))\\ \vdots&\ddots&\vdots\\ (-1)^{n}\varphi_{d,w_{d}}(t^{n}P_{0}(t))&\ldots&(-1)^{n}\varphi_{d,w_{d}}(t^{n}P_{W-1}(t))\end{pmatrix}=\left(\frac{(-1)^{n}}{(n!)^{d-1}}\right)^{W}\prod_{j=1}^{d}\left[\prod_{\begin{subarray}{c}1\leq j^{\prime}\leq d\\ j^{\prime}\neq j\end{subarray}}\prod_{k=1}^{n}(\gamma_{j^{\prime},j}-k\varepsilon_{a_{1}})\right]\cdot\Theta\enspace.\]
Combining Eqns. (16), (18) and above equality yields the assertion. This completes the proof of Proposition 5.2.
## 6 Examples
In this section, we present some examples of Theorem 4.2 and Proposition 5.2.
**Example 6.1**.: Let us give a generalization of the Chebyshev polynomials (_confer_ [3, 5.1]). Let \(u\geq 2\) be an integer. Put \(D=-(z^{u}-1)\partial_{z}-z^{u-1}\in K[z,\partial_{z}]\). The Laurent series
\[f_{l}(z)=\sum_{k=0}^{\infty}\frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac {u+l}{u}\right)_{k}}\frac{1}{z^{uk+l+1}}=\frac{1}{z^{l+1}}\cdot{}_{2}F_{1} \left(\frac{1+l}{u},1,\,\frac{u+l}{u}\bigg{|}\,\frac{1}{z^{u}}\right)\ \ \mbox{for}\ \ 0\leq l\leq u-2\]
are linearly independent over \(K\) and satisfy \(D\cdot f_{l}(z)\in K[z]\). Note that \(f_{0}(z)=(z^{u}-1)^{-1/u}\). We denote \(\varphi_{f_{l}}=\varphi_{l}\). For \(h,n\in\mathbb{N}\) with \(0\leq h\leq u-1\), we define
\[P_{n,h}(z)=P_{h}(z)=\frac{1}{n!}\left(\partial_{z}-\frac{z^{u-1 }}{z^{u}-1}\right)^{n}(z^{u}-1)^{n}\cdot z^{h}\enspace,\] \[Q_{n,l,h}(z)=Q_{l,h}(z)=\varphi_{l}\left(\frac{P_{h}(z)-P_{h}(t )}{z-t}\right)\ \ \mbox{for}\ \ 0\leq l\leq u-2\enspace.\]
Theorem 4.2 yields that the vector of polynomials \((P_{h},Q_{l,h})_{0\leq l\leq u-2}\) is a weight \((n,\ldots,n)\in\mathbb{N}^{u-1}\) Pade type approximants of \((f_{0},\ldots,f_{u-2})\). Define
\[\Delta_{n}(z)=\det\begin{pmatrix}P_{0}(z)&\cdots&P_{u-1}(z)\\ Q_{0,0}(z)&\cdots&Q_{0,u-1}(z)\\ \vdots&\ddots&\vdots\\ Q_{u-2,0}(z)&\cdots&Q_{u-2,u-1}(z)\end{pmatrix}\enspace.\]
The determinant \(\Delta_{n}(z)\) will be computed in Lemma 7.2.
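For orientation, let us record the smallest case \(u=2\), \(n=1\), \(h=l=0\) explicitly (this is only an illustration). Here \(f_{0}(z)=(z^{2}-1)^{-1/2}\),
\[P_{1,0}(z)=\left(\partial_{z}-\frac{z}{z^{2}-1}\right)(z^{2}-1)=2z-z=z,\qquad Q_{1,0,0}(z)=\varphi_{0}(1)=1\enspace,\]
and
\[P_{1,0}(z)f_{0}(z)-Q_{1,0,0}(z)=z(z^{2}-1)^{-1/2}-1=\frac{1}{2z^{2}}+\frac{3}{8z^{4}}+\cdots\enspace,\]
whose order at infinity is \(2=n+1\), as required. Note that \(P_{1,0}(z)=z\) is the first Chebyshev polynomial, consistently with the name of this example.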
**Example 6.2**.: In this example, we give a generalization of the Bessel polynomials (_confer_ [16]). Let \(d,n\) be non-negative integers and \(\gamma_{1},\ldots,\gamma_{d}\in K\) which are not integers less than \(-1\) with
\[\gamma_{j_{2}}-\gamma_{j_{1}}\notin\mathbb{Z}\ \ \mbox{for}\ \ 1\leq j_{1}<j_{2}\leq d\enspace.\]
Put \(D_{j}=-z^{2}\partial_{z}+\gamma_{j}z-1\),
\[f_{j}(z)=\sum_{k=0}^{\infty}\frac{1}{(2+\gamma_{j})_{k}}\frac{1}{z^{k+1}}\]
and \(\varphi_{f_{j}}=\varphi_{j}\). A straightforward computation yields \(D_{j}\cdot f_{j}(z)\in K\). Put
\[R_{j,n}=\frac{1}{n!}\left(\partial_{z}+\frac{\gamma_{j}z-1}{z^{2}}\right)^{n}z ^{n}\enspace.\]
Lemma 4.4 yields
\[R_{j_{1},n_{1}}R_{j_{2},n_{2}}=R_{j_{2},n_{2}}R_{j_{1},n_{1}}\ \ \mbox{for}\ \ 1\leq j_{1},j_{2}\leq d\ \ \mbox{and}\ \ n_{j_{1}},n_{j_{2}}\in\mathbb{N}\enspace.\]
For \(h\in\mathbb{Z}\) with \(0\leq h\leq d\), we define
\[P_{n,h}(z)=P_{h}(z)=\prod_{j=1}^{d}R_{j,n}\cdot z^{dn+h}\enspace,\] \[Q_{n,j,h}(z)=Q_{j,h}(z)=\varphi_{j}\left(\frac{P_{h}(z)-P_{h}(t)}{z-t}\right)\ \ \text{for}\ \ 1\leq j\leq d\enspace.\]
Then Theorem 4.2 yields that the vector of polynomials \((P_{h},Q_{j,h})_{1\leq j\leq d}\) is a weight \((n,\ldots,n)\in\mathbb{N}^{d}\) Pade type approximants of \((f_{1},\ldots,f_{d})\). By the definition of \(P_{d}(z)\), we have
\[P_{d}(z)=\frac{\prod_{j=1}^{d}\gamma_{j}^{n}}{(n!)^{d}}z^{d(n+1)}+\text{(lower degree terms)}\enspace. \tag{19}\]
Define
\[\Delta_{n}(z)=\det\begin{pmatrix}P_{0}(z)&P_{1}(z)&\ldots&P_{d}(z)\\ Q_{1,0}(z)&Q_{1,1}(z)&\ldots&Q_{1,d}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{d,0}(z)&Q_{d,1}(z)&\ldots&Q_{d,d}(z)\end{pmatrix},\ \ \Theta_{n}=\det \begin{pmatrix}\varphi_{1}(t^{(d+1)n})&\ldots&\varphi_{1}(t^{(d+1)n+d-1})\\ \vdots&\ddots&\vdots\\ \varphi_{d}(t^{(d+1)n})&\ldots&\varphi_{d}(t^{(d+1)n+d-1})\end{pmatrix}\enspace.\]
By the definition of \(\varphi_{j}\), we have
\[\Theta_{n}=\det\begin{pmatrix}\frac{1}{(2+\gamma_{1})_{(d+1)n}}&\ldots&\frac{ 1}{(2+\gamma_{1})_{(d+1)n+d-1}}\\ \vdots&\ddots&\vdots\\ \frac{1}{(2+\gamma_{d})_{(d+1)n}}&\ldots&\frac{1}{(2+\gamma_{d})_{(d+1)n+d-1} }\end{pmatrix}=\prod_{j=1}^{d}\frac{1}{(2+\gamma_{j})_{(d+1)n+d-1}}\cdot(-1)^{d }\prod_{1\leq j_{1}<j_{2}\leq d}(\gamma_{j_{2}}-\gamma_{j_{1}})\enspace.\]
Proposition 5.2 and Eq. (19) conclude that
\[\Delta_{n}(z)=\left(\frac{(-1)^{n}}{(n!)^{d}}\right)^{d}\cdot\prod_{j=1}^{d} \left[\prod_{\begin{subarray}{c}1\leq j^{\prime}\leq d\\ j^{\prime}\neq j\end{subarray}}\prod_{k=1}^{n}(\gamma_{j^{\prime}}-\gamma_{j}- k)\right]\cdot\prod_{j=1}^{d}\frac{\gamma_{j}}{(2+\gamma_{j})_{(d+1)n+d-1}} \cdot\prod_{1\leq j_{1}<j_{2}\leq d}(\gamma_{j_{2}}-\gamma_{j_{1}})\in K \setminus\{0\}\enspace.\]
**Example 6.3**.: In this example, we give a generalization of the Laguerre polynomials (_confer_ [3, 6.2]). Let \(d,n\in\mathbb{N}\), \(\gamma_{1},\ldots,\gamma_{d}\in K\setminus\{0\}\) which are pairwise distinct and \(\delta\in K\) which is not a negative integer. Put \(D_{j}=-z\partial_{z}-\gamma_{j}z+\delta\),
\[f_{j}(z)=\sum_{k=0}^{\infty}(1+\delta)_{k}\left(\frac{1}{\gamma_{j}z}\right)^{ k+1}\]
and \(\varphi_{f_{j}}=\varphi_{j}\). A straightforward computation shows \(D_{j}\cdot f_{j}(z)\in K\). Put
\[R_{j,n}=\frac{1}{n!}\left(\partial_{z}-\frac{\gamma_{j}z-\delta}{z}\right)^{n}\enspace.\]
By Lemma 4.4, we have
\[R_{j_{1},n_{j_{1}}}R_{j_{2},n_{j_{2}}}=R_{j_{2},n_{j_{2}}}R_{j_{1},n_{j_{1}}} \ \ \text{for}\ \ 1\leq j_{1},j_{2}\leq d,\ \ n_{j_{1}},n_{j_{2}}\in\mathbb{N}\enspace.\]
For \(h\in\mathbb{Z}\) with \(0\leq h\leq d\), we define
\[P_{n,h}(z)=P_{h}(z)=\prod_{j=1}^{d}R_{j,n}\cdot z^{dn+h}\enspace,\] \[Q_{n,j,h}(z)=Q_{j,h}(z)=\varphi_{j}\left(\frac{P_{h}(z)-P_{h}(t)}{z-t}\right)\ \ \text{for}\ \ 1\leq j\leq d\enspace.\]
Then Theorem 4.2 yields that the vector of polynomials \((P_{h},Q_{j,h})_{1\leq j\leq d}\) is a weight \((n,\ldots,n)\in\mathbb{N}^{d}\) Pade type approximants of \((f_{j})_{1\leq j\leq d}\). By the definition of \(P_{d}(z)\), we have
\[P_{d}(z)=\frac{\prod_{j=1}^{d}\gamma_{j}^{n}}{(n!)^{d}}z^{d(n+1)}+\text{(lower degree terms)}\enspace. \tag{20}\]
Define
\[\Delta_{n}(z)=\det\begin{pmatrix}P_{0}(z)&P_{1}(z)&\dots&P_{d}(z)\\ Q_{1,0}(z)&Q_{1,1}(z)&\dots&Q_{1,d}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{d,0}(z)&Q_{d,1}(z)&\dots&Q_{d,d}(z)\end{pmatrix},\ \ \Theta_{n}=\det \begin{pmatrix}\varphi_{1}(t^{dn})&\dots&\varphi_{1}(t^{d(n+1)-1})\\ \vdots&\ddots&\vdots\\ \varphi_{d}(t^{dn})&\dots&\varphi_{d}(t^{d(n+1)-1})\end{pmatrix}\enspace.\]
Then by the definition of \(\varphi_{j}\), we have
\[\Theta_{n}=\det\begin{pmatrix}\frac{(1+\delta)_{dn}}{\gamma_{1}^{dn+1}}&\ldots&\frac{(1+\delta)_{d(n+1)-1}}{\gamma_{1}^{d(n+1)}}\\ \vdots&\ddots&\vdots\\ \frac{(1+\delta)_{dn}}{\gamma_{d}^{dn+1}}&\ldots&\frac{(1+\delta)_{d(n+1)-1}}{\gamma_{d}^{d(n+1)}}\end{pmatrix}=\prod_{j=1}^{d}\frac{(1+\delta)_{dn+j-1}}{\gamma_{j}^{d(n+1)}}\cdot\prod_{1\leq j_{1}<j_{2}\leq d}(\gamma_{j_{2}}-\gamma_{j_{1}})\enspace.\]
Proposition 5.2 and Eq. (20) conclude that
\[\Delta_{n}(z)=\left(\frac{(-1)^{n+1}}{(n!)^{d}}\right)^{d}\cdot\prod_{j=1}^{d} \left[\prod_{\begin{subarray}{c}1\leq j^{\prime}\leq d\\ j^{\prime}\neq j\end{subarray}}(\gamma_{j^{\prime}}-\gamma_{j})^{n}\right] \cdot\prod_{j=1}^{d}\frac{(1+\delta)_{dn+j-1}}{\gamma_{j}^{(d-1)n+d}}\times \prod_{1\leq j_{1}<j_{2}\leq d}(\gamma_{j_{2}}-\gamma_{j_{1}})\in K\setminus\{ 0\}\enspace.\]
**Example 6.4**.: Let us give an alternative generalization of the Laguerre polynomials. Let \(d,n\in\mathbb{N}\), \(\gamma\in K\setminus\{0\}\), and \(\delta_{1},\dots,\delta_{d}\in K\) which are not negative integers with
\[\delta_{j_{1}}-\delta_{j_{2}}\notin\mathbb{Z}\ \ \text{for}\ \ 1\leq j_{1}<j_{2}\leq d\enspace.\]
Put \(D_{j}=-z\partial_{z}-\gamma z+\delta_{j}\),
\[f_{j}(z)=\sum_{k=0}^{\infty}(1+\delta_{j})_{k}\left(\frac{1}{\gamma z}\right) ^{k+1}\enspace,\]
and \(\varphi_{f_{j}}=\varphi_{j}\). Then we have \(D_{j}\cdot f_{j}(z)\in K\). Put
\[R_{j,n}=\frac{1}{n!}\left(\partial_{z}-\frac{\gamma z-\delta_{j}}{z}\right)^{ n}z^{n}\enspace.\]
By Lemma 4.4, we have
\[R_{j_{1},n_{j_{1}}}R_{j_{2},n_{j_{2}}}=R_{j_{2},n_{j_{2}}}R_{j_{1},n_{j_{1}}}\ \ \text{for}\ \ 1\leq j_{1},j_{2}\leq d,\ \ n_{j_{1}},n_{j_{2}}\in\mathbb{N}\enspace.\]
For \(h\in\mathbb{Z}\) with \(0\leq h\leq d\), we define
\[P_{n,h}(z)=P_{h}(z)=\prod_{j=1}^{d}R_{j,n}\cdot z^{h}\enspace,\] \[Q_{n,j,h}(z)=Q_{j,h}(z)=\varphi_{j}\left(\frac{P_{h}(z)-P_{h}(t)}{z-t}\right)\ \ \text{for}\ \ 1\leq j\leq d\enspace.\]
Then Theorem 4.2 yields that the vector of polynomials \((P_{h},Q_{j,h})_{1\leq j\leq d}\) is a weight \((n,\dots,n)\in\mathbb{N}^{d}\) Pade type approximants of \((f_{j})_{1\leq j\leq d}\). By the definition of \(P_{d}(z)\), we have
\[P_{d}(z)=\frac{\gamma^{dn}}{(n!)^{d}}z^{d(n+1)}+\text{(lower degree terms)}\enspace. \tag{21}\]
Define
\[\Delta_{n}(z)=\det\begin{pmatrix}P_{0}(z)&P_{1}(z)&\ldots&P_{d}(z)\\ Q_{1,0}(z)&Q_{1,1}(z)&\ldots&Q_{1,d}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{d,0}(z)&Q_{d,1}(z)&\ldots&Q_{d,d}(z)\end{pmatrix},\ \ \Theta_{n}=\det \begin{pmatrix}\varphi_{1}(t^{n})&\ldots&\varphi_{1}(t^{d+n-1})\\ \vdots&\ddots&\vdots\\ \varphi_{d}(t^{n})&\ldots&\varphi_{d}(t^{d+n-1})\end{pmatrix}\enspace.\]
Then by the definition of \(\varphi_{j}\), we have
\[\Theta_{n}=\det\begin{pmatrix}\frac{(1+\delta_{1})_{n}}{\gamma^{n+1}}&\ldots&\frac{(1+\delta_{1})_{d+n-1}}{\gamma^{n+d}}\\ \vdots&\ddots&\vdots\\ \frac{(1+\delta_{d})_{n}}{\gamma^{n+1}}&\ldots&\frac{(1+\delta_{d})_{d+n-1}}{\gamma^{n+d}}\end{pmatrix}=\prod_{j=1}^{d}\frac{(1+\delta_{j})_{n}}{\gamma^{n+j}}\cdot\prod_{1\leq j_{1}<j_{2}\leq d}(\delta_{j_{2}}-\delta_{j_{1}})\enspace.\]
Proposition 5.2 and Eq. (21) conclude that
\[\Delta_{n}(z)=\left(\frac{(-1)^{n+1}}{(n!)^{d}}\right)^{d}\cdot\prod_{j=1}^{d }\left[\prod_{\begin{subarray}{c}1\leq j^{\prime}\leq d\\ j^{\prime}\neq j\end{subarray}}\prod_{k=1}^{n}(\delta_{j^{\prime}}-\delta_{j} -k)\right]\cdot\prod_{j=1}^{d}\frac{(1+\delta_{j})_{n}}{\gamma^{j}}\cdot\prod_ {1\leq j_{1}<j_{2}\leq d}(\delta_{j_{2}}-\delta_{j_{1}})\in K\setminus\left\{0 \right\}\enspace.\]
**Example 6.5**.: In this example, we give a generalization of the Hermite polynomials (_confer_[3, 6.1]). Let \(d,n\in\mathbb{N}\), \(\gamma\in K\setminus\left\{0\right\}\) and \(\delta_{1},\ldots,\delta_{d}\in K\) which are pairwise distinct. Put \(D_{j}=-\partial_{z}+\gamma z+\delta_{j}\),
\[f_{j}(z)=\sum_{k=0}^{\infty}\!\frac{f_{j,k}}{z^{k+1}}\enspace,\]
where \(f_{j,0}=1\), \(f_{j,1}=-\delta_{j}/\gamma\) and
\[f_{j,k+2}=-\frac{\delta_{j}f_{j,k+1}+(k+1)f_{j,k}}{\gamma}\ \ \text{for}\ \ k\geq 0\enspace, \tag{22}\]
and \(\varphi_{f_{j}}=\varphi_{j}\). Then we have \(D_{j}\cdot f_{j}(z)\in K\). Put
\[R_{j,n}=\frac{1}{n!}(\partial_{z}+\gamma z+\delta_{j})^{n}\enspace.\]
By Lemma 4.4, we have
\[R_{j_{1},n_{j_{1}}}R_{j_{2},n_{j_{2}}}=R_{j_{2},n_{j_{2}}}R_{j_{1},n_{j_{1}}} \ \ \text{for}\ \ 1\leq j_{1},j_{2}\leq d,\ \ n_{j_{1}},n_{j_{2}}\in\mathbb{N}\enspace.\]
For \(h\in\mathbb{Z}\) with \(0\leq h\leq d\), we define
\[P_{n,h}(z)=P_{h}(z)=\prod_{j=1}^{d}R_{j,n}\cdot z^{h}\enspace,\] \[Q_{n,j,h}(z)=Q_{j,h}(z)=\varphi_{j}\left(\frac{P_{h}(z)-P_{h}(t) }{z-t}\right)\ \ \text{for}\ \ 1\leq j\leq d\enspace.\]
Then Theorem 4.2 yields that the vector of polynomials \((P_{h},Q_{j,h})_{1\leq j\leq d}\) is a weight \((n,\ldots,n)\in\mathbb{N}^{d}\) Pade type approximants of \((f_{j})_{1\leq j\leq d}\). By the definition of \(P_{d}(z)\), we have
\[P_{d}(z)=\frac{\gamma^{dn}}{(n!)^{d}}z^{d(n+1)}+\text{(lower degree terms)}\enspace. \tag{23}\]
Define
\[\Delta_{n}(z)=\det\begin{pmatrix}P_{0}(z)&P_{1}(z)&\ldots&P_{d}(z)\\ Q_{1,0}(z)&Q_{1,1}(z)&\ldots&Q_{1,d}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{d,0}(z)&Q_{d,1}(z)&\ldots&Q_{d,d}(z)\end{pmatrix},\ \ \Theta_{n}=\det \begin{pmatrix}\varphi_{1}(1)&\ldots&\varphi_{1}(t^{d-1})\\ \vdots&\ddots&\vdots\\ \varphi_{d}(1)&\ldots&\varphi_{d}(t^{d-1})\end{pmatrix}\.\]
Then by the definition of \(\varphi_{j}\) and Eq. (22), we have
\[\Theta_{n}=\det\begin{pmatrix}f_{1,0}&f_{1,1}&\ldots&f_{1,d-1}\\ \vdots&\vdots&\ddots&\vdots\\ f_{d,0}&f_{d,1}&\ldots&f_{d,d-1}\end{pmatrix}=\left(\frac{-1}{\gamma}\right) ^{1+2+\cdots+(d-1)}\cdot\prod_{1\leq j_{1}<j_{2}\leq d}(\delta_{j_{2}}-\delta _{j_{1}})\.\]
Proposition 5.2 and Eq. (23) conclude that
\[\Delta_{n}(z)=\left(\frac{(-1)^{n+1}}{(n!)^{d}}\right)^{d}\cdot\prod_{j=1}^{d} \left[\prod_{\begin{subarray}{c}1\leq j^{\prime}\leq d\\ j^{\prime}\neq j\end{subarray}}(\delta_{j^{\prime}}-\delta_{j})^{n}\right] \cdot(-1)^{\frac{d(d-1)}{2}}\gamma^{dn-\frac{d(d-1)}{2}}\cdot\prod_{1\leq j_{1 }<j_{2}\leq d}(\delta_{j_{2}}-\delta_{j_{1}})\in K\setminus\left\{0\right\}\.\]
**Example 6.6**.: In this example, we consider a generalization of the Legendre polynomials (_confer_ [3, Remark 5.3.1]). Let \(n,d,m\in\mathbb{N}\), \(\alpha_{1},\ldots,\alpha_{m}\in K\setminus\left\{0\right\}\) which are pairwise distinct and \(\gamma_{1},\ldots,\gamma_{d}\in K\) which are not negative integers satisfying \(\gamma_{j_{1}}-\gamma_{j_{2}}\not\in\mathbb{Z}\) for \(1\leq j_{1}<j_{2}\leq d\). Put \(a_{2}(z)=\prod_{i=1}^{m}(z-\alpha_{i})\), \(D_{j}=-za_{2}(z)\partial_{z}+\gamma_{j}a_{2}(z)\),
\[f_{i,j}(z)=\sum_{k=0}^{\infty}\frac{1}{k+1+\gamma_{j}}\left(\frac{\alpha_{i}}{ z}\right)^{k+1}\ \ \text{for}\ \ 1\leq i\leq m,\ \ 1\leq j\leq d\,\]
and \(\varphi_{f_{i,j}}=\varphi_{i,j}\). Then we have \(D_{j}\cdot f_{i,j}(z)\in K[z]\). Put
\[R_{j,n}=\frac{1}{n!}\left(\partial_{z}+\frac{\gamma_{j}}{z}\right)^{n}z^{n}\.\]
By Lemma 4.4, we have
\[R_{j_{1},n_{j_{1}}}R_{j_{2},n_{j_{2}}}=R_{j_{2},n_{j_{2}}}R_{j_{1},n_{j_{1}}} \ \ \text{for}\ \ 1\leq j_{1},j_{2}\leq d,\ \ n_{j_{1}},n_{j_{2}}\in\mathbb{N}\.\]
For \(h\in\mathbb{Z}\) with \(0\leq h\leq dm\), we define
\[P_{n,h}(z)=P_{h}(z)=\prod_{j=1}^{d}R_{j,n}\cdot\left[z^{h}a_{2}(z )^{dn}\right]\,\] \[Q_{n,i,j,h}(z)=Q_{i,j,h}(z)=\varphi_{i,j}\left(\frac{P_{h}(z)-P_{ h}(t)}{z-t}\right)\ \ \text{for}\ \ 1\leq i\leq m,\ \ 1\leq j\leq d\.\]
Then Theorem 4.2 yields that the vector of polynomials \((P_{h},Q_{i,j,h})_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq d\end{subarray}}\) is a weight \((n,\ldots,n)\in\mathbb{N}^{md}\) Pade approximants of \((f_{i,j})_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq d\end{subarray}}\). Define
\[\Delta_{n}(z)=\det\begin{pmatrix}P_{0}(z)&P_{1}(z)&\ldots&P_{dm}(z)\\ Q_{1,1,0}(z)&Q_{1,1,1}(z)&\ldots&Q_{1,1,dm}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{m,1,0}(z)&Q_{m,1,1}(z)&\ldots&Q_{m,1,dm}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{1,d,0}(z)&Q_{1,d,1}(z)&\ldots&Q_{1,d,dm}(z)\\ \vdots&\vdots&\ddots&\vdots\\ Q_{m,d,0}(z)&Q_{m,d,1}(z)&\ldots&Q_{m,d,dm}(z)\end{pmatrix}\.\]
The non-vanishing of \(\Delta_{n}(z)\) has been proven in [12, Proposition 4.1].
**Remark 6.7**.: We mention that Example 6.2, Example 6.3, Example 6.4 and Example 6.6 can be applied to prove the linear independence of the values of the series which are dealt with in each example. However, such results have already been obtained, as follows.
In Example 6.2, for \(\gamma_{1},\ldots,\gamma_{d}\in\mathbb{Q}\), the series \(f_{j}(z)\) become \(E\)-functions in the sense of Siegel (_confer_ [30]). The linear independence result on the values of these \(E\)-functions has been studied by K. Vaananen in [35]. In Example 6.3, for \(\delta\in\mathbb{Q}\) and \(\gamma_{1},\ldots,\gamma_{d}\in K\) for an algebraic number field \(K\), the series \(f_{j}(z)\) are Euler-type series. In the case of \(\delta=0\), the global relations among the values of these Euler-type series have been studied by T. Matala-aho and W. Zudilin for \(d=1\) in [22] and L. Seppala for general \(d\) in [29]. Likewise, Example 6.4, for \(\delta_{1},\ldots,\delta_{d}\in\mathbb{Q}\) and \(\gamma=1\), treats Euler-type series. In [34], Vaananen studied the global relations among the values of these Euler-type series. In Example 6.6, for \(\gamma_{1},\ldots,\gamma_{d}\in\mathbb{Q}\) and \(\alpha_{1},\ldots,\alpha_{m}\in K\) for an algebraic number field \(K\), the series \(f_{i,j}(z)\) become \(G\)-functions in the sense of Siegel (_confer_ [30]) called the first Lerch functions. The linear independence of the values of these functions has been studied by David, Hirata-Kohno and the author in [12, Theorem 2.1].
## 7 Proof of Theorem 1.1
This section is devoted to the proof of Theorem 1.1. We prove a more precise theorem, which we state below. To state the theorem, we prepare some notation.
Let \(K\) be an algebraic number field. We denote the set of places of \(K\) by \(\mathfrak{M}_{K}\). For \(v\in\mathfrak{M}_{K}\), we denote the completion of \(K\) with respect to \(v\) by \(K_{v}\) and define the normalized absolute value \(|\cdot|_{v}\) as follows:
\[|p|_{v}=p^{-\frac{[K_{v}:\mathbb{Q}_{p}]}{[K:\mathbb{Q}]}}\ \text{if}\ \ v\mid p,\qquad|x|_{v}=|\iota_{v}(x)|^{\frac{[K_{v}:\mathbb{R}]}{[K:\mathbb{Q}]}}\ \text{if}\ \ v\mid\infty\enspace,\]
where \(p\) is a prime number and \(\iota_{v}\) the embedding \(K\hookrightarrow\mathbb{C}\) corresponding to \(v\).
For \(\beta\in K\), we define the absolute Weil height of \(\beta\) as
\[\mathrm{H}(\beta)=\prod_{v\in\mathfrak{M}_{K}}\max\{1,|\beta|_{v}\}\enspace.\]
Let \(m\) be a positive integer and \(\boldsymbol{\beta}=(\beta_{0},\ldots,\beta_{m})\in\mathbb{P}_{m}(K)\). We define the absolute Weil height of \(\boldsymbol{\beta}\) by
\[\mathrm{H}(\boldsymbol{\beta})=\prod_{v\in\mathfrak{M}_{K}}\max\{|\beta_{0}| _{v},\ldots,|\beta_{m}|_{v}\}\enspace.\]
and the logarithmic absolute Weil height by \(\mathrm{h}(\boldsymbol{\beta})=\log\mathrm{H}(\boldsymbol{\beta})\). For \(v\in\mathfrak{M}_{K}\), we put \(\mathrm{h}_{v}(\boldsymbol{\beta})=\log\|\boldsymbol{\beta}\|_{v}\), where \(\|\cdot\|_{v}\) is the sup \(v\)-adic norm. Then we have \(\mathrm{h}(\boldsymbol{\beta})=\sum_{v\in\mathfrak{M}_{K}}\mathrm{h}_{v}(\boldsymbol{\beta})\), and for \(\beta\in K\), \(\mathrm{h}(\beta)\) is the height of the point \((1,\beta)\in\mathbb{P}_{1}(K)\).
Let \(u\) be an integer with \(u\geq 2\). We put \(\nu(u)=u\prod_{q:\text{prime},q|u}q^{1/(q-1)}\). Let \(v_{0}\) be a place of \(K\) and \(\alpha\in K\) with \(|\alpha|_{v_{0}}>2\). In the case where \(v_{0}\) is a non-archimedean place, we denote the prime number under \(v_{0}\) by \(p_{v_{0}}\) and put \(\varepsilon_{v_{0}}(u)=1\) if \(u\) is coprime with \(p_{v_{0}}\) and \(\varepsilon_{v_{0}}(u)=0\) if \(u\) is divisible by \(p_{v_{0}}\). We denote Euler's totient function by \(\varphi\).
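For example, \(\nu(2)=2\cdot 2=4\), \(\nu(3)=3\sqrt{3}\) and \(\nu(6)=6\cdot 2\cdot\sqrt{3}=12\sqrt{3}\); and for \(K=\mathbb{Q}\) and \(u=2\), one has \(\varepsilon_{v_{0}}(2)=0\) at the \(2\)-adic place and \(\varepsilon_{v_{0}}(2)=1\) at every other non-archimedean place. These small instances are recorded only to fix the meaning of the notation.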
We define real numbers
\[\mathbb{A}_{v_{0}}(\alpha) =\mathrm{h}_{v_{0}}(\alpha)-\begin{cases}\mathrm{h}_{v_{0}}(2)&\text{ if }v_{0}\mid\infty\\ \frac{\varepsilon_{v_{0}}(u)\log|p_{v_{0}}|_{v_{0}}^{-1}}{p_{v_{0}}-1}&\text{ if }v_{0}\nmid\infty\end{cases}\enspace,\] \[\mathbb{B}_{v_{0}}(\alpha) =(u-1)\mathrm{h}(\alpha)+(u+1)\mathrm{h}(2)+\frac{(2u-1)\log\nu(u)}{u}+\frac{u-1}{\varphi(u)}-(u-1)\mathrm{h}_{v_{0}}(\alpha)-\begin{cases}(u+1)\mathrm{h}_{v_{0}}(2)&\text{ if }v_{0}\mid\infty\\ \log|\nu(u)|_{v_{0}}^{-1}&\text{ if }v_{0}\nmid\infty\end{cases}\enspace,\] \[U_{v_{0}}(\alpha) =(u-1)\mathrm{h}_{v_{0}}(\alpha)+\begin{cases}(u+1)\mathrm{h}_{v_{0}}(2)&\text{ if }v_{0}\mid\infty\\ \log|\nu(u)|_{v_{0}}^{-1}&\text{ if }v_{0}\nmid\infty\end{cases}\enspace,\] \[V_{v_{0}}(\alpha) =\mathbb{A}_{v_{0}}(\alpha)-\mathbb{B}_{v_{0}}(\alpha)\enspace.\]
We can now state :
**Theorem 7.1**.: _Assume \(V_{v_{0}}(\alpha)>0\). Then, for any positive number \(\varepsilon\) with \(\varepsilon<V_{v_{0}}(\alpha)\), there exists an effectively computable positive number \(H_{0}\) depending on \(\varepsilon\) and the given data such that the following property holds. For any \(\boldsymbol{\lambda}=(\lambda,\lambda_{0},\ldots,\lambda_{u-2})\in K^{u}\setminus\{\boldsymbol{0}\}\) satisfying \(H_{0}\leq\mathrm{H}(\boldsymbol{\lambda})\), we have_
\[\left|\lambda+\sum_{l=0}^{u-2}\lambda_{l}\cdot\frac{1}{\alpha^{l+1}}{}_{2}F_{1 }\left(\frac{1+l}{u},1,\,\frac{u+l}{u}\left|\,\frac{1}{\alpha^{u}}\right.\right) \right|_{v_{0}}>C(\alpha,\varepsilon)\mathrm{H}_{v_{0}}(\boldsymbol{\lambda} )\mathrm{H}(\boldsymbol{\lambda})^{-\mu(\alpha,\varepsilon)}\enspace,\]
_where_
\[\mu(\alpha,\varepsilon)=\frac{\mathbb{A}_{v_{0}}(\alpha)+U_{v_{0}}(\alpha)}{V_ {v_{0}}(\alpha)-\varepsilon}\,\text{ and }\,C(\alpha,\varepsilon)=\exp\left(-\left(\frac{\log(2)}{V_{v_{0}}(\alpha)- \varepsilon}+1\right)(\mathbb{A}_{v_{0}}(\alpha)+U_{v_{0}}(\alpha))\right)\enspace.\]
We shall derive Theorem 1.1 from Theorem 7.1.
Proof of Theorem 1.1.: Let us consider the case of \(K=\mathbb{Q}\), \(v_{0}=\infty\) and \(\alpha\in\mathbb{Z}\setminus\{0,\pm 1\}\). Then we see that \(V_{\infty}(\alpha)=V(\alpha)\), where \(V(\alpha)\) is defined in Theorem 1.1. Assume \(V(\alpha)>0\). Suppose that there exists some \(\boldsymbol{\lambda}=(\lambda,\lambda_{0},\ldots,\lambda_{u-2})\in\mathbb{Q}^{u}\setminus\{\boldsymbol{0}\}\) such that
\[\lambda+\sum_{l=0}^{u-2}\lambda_{l}\cdot\frac{1}{\alpha^{l+1}}{}_{2}F_{1}\left(\frac{1+l}{u},1,\,\frac{u+l}{u}\left|\,\frac{1}{\alpha^{u}}\right.\right)=0\enspace.\]
If \(H(\boldsymbol{\lambda})\geq H_{0}\) (where \(H_{0}\) is as in Theorem 7.1), then Theorem 7.1 already gives a contradiction. Otherwise, let \(m>0\) be a rational integer such that \(H(m\boldsymbol{\lambda})\geq H_{0}\). Then Theorem 7.1 ensures that
\[m\left(\lambda+\sum_{l=0}^{u-2}\lambda_{l}\cdot\frac{1}{\alpha^{l+1}}{}_{2}F_{1}\left(\frac{1+l}{u},1,\,\frac{u+l}{u}\left|\,\frac{1}{\alpha^{u}}\right.\right)\right)\neq 0\enspace.\]
This is a contradiction and completes the proof of Theorem 1.1.
Now we start the proof of Theorem 7.1. The proof relies on the Pade approximants obtained in Example 6.1. In the following, we use the same notation as in Example 6.1.
### Computation of determinants
**Lemma 7.2**.: _Let \(n\) be a positive integer. We have \(\Theta_{n}=0\) if \(n\) is not divisible by \(u\). Moreover, if \(n\) is divisible by \(u\) and \(n=uN\) for a positive integer \(N\), we have_
\[\Delta_{uN}(z)=(-1)^{u-1}\frac{\left((uN+1)u-1-uN\right)_{uN}}{(uN)!}\prod_{l=0} ^{u-2}\frac{\left(\frac{u-1}{u}\right)_{uN}}{\left(\frac{u+l}{u}\right)_{uN}} \in K\setminus\{0\}\enspace.\]
Proof.: Put
\[\Theta_{n}=\det\begin{pmatrix}\varphi_{0}((t^{u}-1)^{n})&\dots&\varphi_{0}(t^{u-2} (t^{u}-1)^{n})\\ \vdots&\ddots&\vdots\\ \varphi_{u-2}((t^{u}-1)^{n})&\dots&\varphi_{u-2}(t^{u-2}(t^{u}-1)^{n})\end{pmatrix} \enspace.\]
Proposition 5.2 implies that
\[\Delta_{n}(z)=(-1)^{(n+1)(u-1)}\times\frac{1}{[(n+1)(u-1)]!}\partial_{z}^{(n+ 1)(u-1)}\cdot P_{u-1}(z)\times\Theta_{n}\enspace.\]
According to the definition of \(P_{u-1}(z)\), we get
\[\frac{1}{[(n+1)(u-1)]!}\partial_{z}^{(n+1)(u-1)}\cdot P_{u-1}(z)=\frac{((n+1)u -1-n)_{n}}{n!}\enspace.\]
By the definition of \(f_{l}\), we have
\[\varphi_{l}(t^{k})=\begin{cases}\frac{\left(\frac{1+l}{u}\right)_{N}}{\left( \frac{u+l}{u}\right)_{N}}&\text{if }\ k=uN+l\text{ for some }N\in\mathbb{Z}\enspace,\\ 0&\text{otherwise }\enspace.\end{cases}\]
Above equality shows \(\Theta_{uN+k}=0\) for non-negative integers \(N,k\) with \(1\leq k\leq u-1\) and
\[\Theta_{uN} =\det\begin{pmatrix}\varphi_{0}((t^{u}-1)^{uN})&0&\dots&0\\ 0&\varphi_{1}(t(t^{u}-1)^{uN})&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&\varphi_{u-2}(t^{u-2}(t^{u}-1)^{uN})\end{pmatrix} \tag{24}\] \[=\prod_{l=0}^{u-2}\varphi_{l}(t^{l}(t^{u}-1)^{uN})\enspace.\]
We shall now compute \(\varphi_{l}(t^{l}(t^{u}-1)^{uN})\). Since we have
\[t^{l}(t^{u}-1)^{uN}=\sum_{k=0}^{uN}\binom{uN}{k}(-1)^{uN-k}t^{uk+l}\enspace,\]
we obtain
\[\varphi_{l}(t^{l}(t^{u}-1)^{uN})=\sum_{k=0}^{uN}\binom{uN}{k}(-1)^{uN-k}\frac{ \left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l}{u}\right)_{k}}\enspace.\]
For positive real numbers \(\alpha,\beta\) with \(\alpha<\beta\) and a non-negative integer \(k\), we have
\[\frac{(\alpha)_{k}}{(\beta)_{k}}=\frac{\Gamma(\beta)}{\Gamma(\alpha)\Gamma( \beta-\alpha)}\int_{0}^{1}\xi^{\alpha+k-1}(1-\xi)^{\beta-\alpha-1}d\xi\enspace.\]
Applying above equality for \(\alpha=(1+l)/u,\beta=(u+l)/u\), we obtain
\[\varphi_{l}(t^{l}(t^{u}-1)^{uN}) =\frac{\Gamma(\frac{u+l}{u})}{\Gamma(\frac{1+l}{u})\Gamma(\frac{u-1}{u})}\sum_{v=0}^{uN}\binom{uN}{v}(-1)^{uN-v}\int_{0}^{1}\xi^{\frac{1+l}{u}+v-1}(1-\xi)^{\frac{u-1}{u}-1}d\xi\] \[=\frac{(-1)^{uN}\Gamma(\frac{u+l}{u})}{\Gamma(\frac{1+l}{u})\Gamma(\frac{u-1}{u})}\int_{0}^{1}\xi^{\frac{1+l}{u}-1}(1-\xi)^{uN+\frac{u-1}{u}-1}d\xi\] \[=\frac{(-1)^{uN}\Gamma(\frac{u+l}{u})}{\Gamma(uN+\frac{u+l}{u})}\frac{\Gamma(uN+\frac{u-1}{u})}{\Gamma(\frac{u-1}{u})}\] \[=\frac{(-1)^{uN}\left(\frac{u-1}{u}\right)_{uN}}{\left(\frac{u+l}{u}\right)_{uN}}\enspace. \tag{25}\]
Substituting above equality to Eq. (24), we obtain the assertion.
### Estimates
Unless stated otherwise, the Landau symbols \(o\) and \(O\) refer to the limit as \(N\) tends to infinity.
For a finite set \(S\) of rational numbers and a rational number \(a\), we define
\[\operatorname{den}\left(S\right)=\min(n\in\mathbb{Z}\mid n\geq 1,ns\in\mathbb{Z} \text{ for all }s\in S)\quad\text{and}\quad\mu(a)=\operatorname{den}\left(a\right)\prod_{ \begin{subarray}{c}q:\text{prime}\\ q|\operatorname{den}(a)\end{subarray}}q^{1/(q-1)}\enspace.\]
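For instance, \(\mu(1/2)=2\cdot 2=4\), \(\mu(3/4)=4\cdot 2=8\) and \(\mu(1/6)=6\cdot 2\cdot\sqrt{3}=12\sqrt{3}\); in particular, for an integer \(u\geq 2\) one has \(\operatorname{den}(1/u)=u\) and hence \(\mu(1/u)=\nu(u)\), where \(\nu\) is the quantity introduced before Theorem 7.1. (These computations are included only as examples.)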
We shall now quote an estimate of the denominator of \(((a)_{k}/(b)_{k})_{0\leq k\leq n}\) for \(n\in\mathbb{N}\) and \(a,b\in\mathbb{Q}\) which are not negative integers.
**Lemma 7.3**.: [20, Lemma 5.1] _Let \(n\in\mathbb{N}\) and \(a,b\in\mathbb{Q}\) which are not negative integers. Put_
\[D_{n}=\operatorname{den}\left(\frac{(a)_{0}}{(b)_{0}},\ldots,\frac{(a)_{n}}{( b)_{n}}\right)\enspace.\]
_Then we have_
\[\limsup_{n\to\infty}\frac{1}{n}\log D_{n}\leq\log\,\mu(a)+\frac{\operatorname {den}(b)}{\varphi(\operatorname{den}(b))}\enspace,\]
_where \(\varphi\) is the Euler's totient function._
**Lemma 7.4**.: _Let \(N,l,h\) be non-negative integers with \(0\leq l\leq u-2\) and \(0\leq h\leq u-1\)._
\((i)\) _We have_
\[P_{uN,h}(z)=(-1)^{uN}\sum_{k=0}^{N(u-1)}\left[\sum_{s=0}^{k}\binom{uN-1/u}{s+N }\binom{u(s+N)+h}{uN}\binom{1/u}{k-s}\right](-1)^{k}z^{uk+h}\enspace.\]
\((ii)\) _Put \(\tilde{\varepsilon}_{l,h}=1\) if \(h<l+1\) and \(0\) if \(l+1\leq h\). We have_
\[Q_{uN,l,h}(z)=(-1)^{uN}\sum_{v=\tilde{\varepsilon}_{l,h}}^{N(u-1)}\left(\sum_{k=0}^{(u-1)N-v}(-1)^{k+v}\left[\sum_{s=0}^{k+v}\binom{uN-1/u}{s+N}\binom{u(s+N)+h}{uN}\binom{1/u}{k+v-s}\right]\frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l}{u}\right)_{k}}\right)z^{uv+h-l-1}\enspace.\]
\((iii)\) _Put \(\varepsilon_{l,h}=1\) if \(l<h\) and \(\varepsilon_{l,h}=0\) if \(h\leq l\). We have_
\[\mathfrak{R}_{uN,l,h}(z)=\frac{\left(\frac{u-1}{u}\right)_{uN}}{\left(\frac{u +l}{u}\right)_{uN}z^{uN+l-h+1}}\sum_{k=\varepsilon_{l,h}}^{\infty}\binom{u(N +k)+l-h}{uN}\frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l}{u}+uN \right)_{k}}\frac{1}{z^{uk}}\enspace.\]
Proof.: \((i)\) Put
\[w(z)=(1-z^{u})^{-1/u}=\sum_{k=0}^{\infty}\binom{-1/u}{k}(-z^{u})^{k}\in K[[z]]\enspace.\]
Then \(w(z)\) is a solution of \(-(z^{u}-1)\partial_{z}-z^{u-1}\in K[z,\partial_{z}]\). Lemma 4.3 yields
\[\frac{1}{(uN)!}\left(\partial_{z}-\frac{z^{u-1}}{z^{u}-1}\right)^{uN}(z^{u}-1 )^{uN}=\frac{1}{(uN)!}w(z)^{-1}\partial_{z}^{uN}w(z)(z^{u}-1)^{uN}\enspace,\]
and therefore
\[P_{uN,h}(z) =\frac{1}{(uN)!}w(z)^{-1}\partial_{z}^{uN}w(z)(z^{u}-1)^{uN}\cdot z ^{h}\] \[=\frac{(-1)^{uN}}{(uN)!}w(z)^{-1}\partial_{z}^{uN}\cdot\sum_{k=0} ^{\infty}\binom{uN-1/u}{k}(-1)^{k}z^{uk+h}\] \[=(-1)^{uN}\sum_{k=0}^{\infty}\binom{1/u}{k}(-1)^{k}z^{uk}\cdot \sum_{k=0}^{\infty}\binom{uN-1/u}{k+N}\binom{u(k+N)+h}{uN}(-1)^{k}z^{uk+h}\] \[=(-1)^{uN}\sum_{k=0}^{\infty}\left[\sum_{s=0}^{k}\binom{uN-1/u}{s +N}\binom{u(s+N)+h}{uN}\binom{1/u}{k-s}\right](-1)^{k}z^{uk+h}\enspace.\]
Since \(\deg P_{uN,h}=u(u-1)N+h\), using above equality, we obtain the assertion.
\((ii)\) Put \(P_{uN,h}(z)=\sum_{k=0}^{u(u-1)N+h}p_{k}z^{k}\). Notice that, by \((i)\), we have
\[p_{k}=\begin{cases}(-1)^{uN+k^{\prime}}\sum_{s=0}^{k^{\prime}}\binom{uN-1/u}{s+N }\binom{u(s+N)+h}{uN}\binom{1/u}{k^{\prime}-s}&\text{ if there exists }k^{\prime}\geq 0\text{ such that }k=uk^{\prime}+h\enspace,\\ 0&\text{ otherwise }.\end{cases}\]
Then we have
\[\frac{P_{uN,h}(z)-P_{uN,h}(t)}{z-t} =\sum_{k^{\prime}=1}^{u(u-1)N+h}p_{k^{\prime}}\sum_{v^{\prime}=0} ^{k^{\prime}-1}z^{v^{\prime}}t^{k^{\prime}-v^{\prime}-1}=\sum_{k^{\prime}=0}^ {u(u-1)N+h-1}p_{k^{\prime}+1}\sum_{v^{\prime}=0}^{k^{\prime}}z^{v^{\prime}}t^{ k^{\prime}-v^{\prime}}\] \[=\sum_{v^{\prime}=0}^{u(u-1)N+h-1}\left[\sum_{k^{\prime}=v^{ \prime}}^{u(u-1)N+h-1}p_{k^{\prime}+1}t^{k^{\prime}-v^{\prime}}\right]z^{v^{ \prime}}\] \[=\sum_{v^{\prime}=0}^{u(u-1)N+h-1}\left[\sum_{k^{\prime}=0}^{u(u- 1)N+h-v^{\prime}-1}p_{k^{\prime}+v^{\prime}+1}t^{k^{\prime}}\right]z^{v^{ \prime}}\enspace.\]
Since \(\varphi_{l}(t^{k^{\prime}})=0\) if \(k^{\prime}\not\equiv l\) mod \(u\), putting \(k^{\prime}=uk+l\), we obtain
\[Q_{uN,l,h}(z) =\varphi_{l}\left(\frac{P_{uN,h}(z)-P_{uN,h}(t)}{z-t}\right)\] \[=\sum_{v^{\prime}=0}^{u(u-1)N+h-1}\left[\sum_{k^{\prime}=0}^{u(u- 1)N+h-v^{\prime}-1}p_{k^{\prime}+v^{\prime}+1}\varphi_{l}(t^{k^{\prime}}) \right]z^{v^{\prime}}\] \[=\sum_{v^{\prime}=0}^{u(u-1)N+h-1}\left[\sum_{k=0}^{(u-1)N+[(h-v ^{\prime}-l-1)/u]}p_{uk+l+v^{\prime}+1}\frac{\left(\frac{1+l}{u}\right)_{k}}{ \left(\frac{u+l}{u}\right)_{k}}\right]z^{v^{\prime}}\enspace.\]
Since we have \(p_{uk+l+v^{\prime}+1}=0\) for \(0\leq v^{\prime}\leq u(u-1)N+h-1\) with \(v^{\prime}\notin u\mathbb{Z}+h-l-1\), putting \(v^{\prime}=uv+h-l-1\), we conclude
\[Q_{uN,l,h}(z)=\sum_{v=\tilde{\varepsilon}_{l,h}}^{(u-1)N}\left[\sum_{k=0}^{(u-1)N-v}p_{u(k+v)+h}\frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l}{u}\right)_{k}}\right]z^{uv+h-l-1}\enspace.\]
This completes the proof of \((ii)\).
\((iii)\) Lemma 5.1 yields
\[\mathfrak{R}_{uN,l,h}(z)=\sum_{k=uN}^{\infty}\frac{\varphi_{l}(t^{k}P_{uN,h}(t ))}{z^{k+1}}\enspace. \tag{26}\]
We shall now compute \(\varphi_{l}(t^{k}P_{uN,h}(t))\) for \(k\geq uN\). Put \(\mathcal{E}=\partial_{t}-t^{u-1}/(t^{u}-1)\). Using Proposition 3.2\((i)\) for \(k\geq uN\), there exists a set of integers \(\{c_{uN,k,v}\mid v=0,1,\ldots,uN\}\) with
\[c_{uN,k,uN}=(-1)^{uN}k(k-1)\cdots(k-uN+1)\enspace\text{and}\] \[t^{k}\mathcal{E}^{uN}(t^{u}-1)^{uN}=\sum_{v=0}^{uN}c_{uN,k,v} \mathcal{E}^{uN-v}t^{k-v}(t^{u}-1)^{uN}\enspace\text{in}\enspace\mathbb{Q}(t)[ \partial_{t}]\enspace.\]
Since \(\mathcal{E}(t^{u}-1)\subseteq\ker\varphi_{l}\), using the above relation, we have
\[\varphi_{l}(t^{k}P_{uN,h}(t)) =\varphi_{l}\left(\frac{t^{k}}{(uN)!}\mathcal{E}^{uN}(t^{u}-1)^{uN} \cdot t^{h}\right)=\varphi_{l}\left(\sum_{v=0}^{uN}\frac{c_{uN,k,v}}{(uN)!} \mathcal{E}^{uN-v}t^{k-v}(t^{u}-1)^{uN}\cdot t^{h}\right)\] \[=\varphi_{l}\left(\frac{c_{uN,k,uN}}{(uN)!}t^{k-uN}(t^{u}-1)^{uN} \cdot t^{h}\right)=(-1)^{uN}\binom{k}{uN}\varphi_{l}(t^{k-uN+h}(t^{u}-1)^{uN} )\enspace. \tag{27}\]
Note that we have \(\varphi_{l}(t^{k-uN+h}(t^{u}-1)^{uN})=0\) if \(k-uN+h\not\equiv l\bmod u\). Put \(k=u(\tilde{k}+N+\varepsilon_{l,h})+l-h\) with \(\tilde{k}\geq 0\). A computation similar to that in Eq. (25) implies
\[\varphi_{l}(t^{k-uN+h}(t^{u}-1)^{uN})=\varphi_{l}(t^{u(\tilde{k}+\varepsilon_{l,h})+l}(t^{u}-1)^{uN})=\frac{(-1)^{uN}\left(\frac{u-1}{u}\right)_{uN}\left(\frac{1+l}{u}\right)_{\tilde{k}+\varepsilon_{l,h}}}{\left(\frac{u+l}{u}\right)_{uN+\tilde{k}+\varepsilon_{l,h}}}\enspace.\]
Substituting the above equality into Eqs. (27) and (26), we obtain the desired equality.
In the following, for a rational number \(a\) and a non-negative integer \(n\), we put
\[\mu_{n}(a)=\mathrm{den}(a)^{n}\prod_{\begin{subarray}{c}q:\mathrm{prime}\\ q|\mathrm{den}(a)\end{subarray}}q^{\left\lfloor\frac{n}{q-1}\right\rfloor}\enspace.\]
Notice that \(\mu_{n}(a)=\mu_{n}(a+k)\) for \(k\in\mathbb{Z}\) and
\[\mu_{n_{2}}(a)\enspace\text{is divisible by}\enspace\mu_{n_{1}}(a)\quad\text{and} \quad\mu_{n_{1}+n_{2}}(a)\enspace\text{is divisible by}\enspace\mu_{n_{1}}(a)\mu_{n_{2}}(a) \tag{28}\]
for \(n,n_{1},n_{2}\in\mathbb{N}\) with \(n_{1}\leq n_{2}\).
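For concreteness, the following short Python sketch (an illustration added here, not taken from the original text) evaluates \(\operatorname{den}(S)\), \(\mu(a)\) and \(\mu_{n}(a)\) for rational inputs and, as a small example, the denominator of the Pochhammer ratios appearing in Lemma 7.5 \((ii)\); all function names are ours.

```python
import math
from fractions import Fraction

def den(values):
    """den(S): least n >= 1 with n*s integral for every s in S,
    i.e. the lcm of the reduced denominators."""
    return math.lcm(*(Fraction(v).denominator for v in values))

def prime_divisors(n):
    """Distinct prime divisors of a positive integer n."""
    p, out = 2, []
    while p * p <= n:
        if n % p == 0:
            out.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

def mu(a):
    """mu(a) = den(a) * prod_{q | den(a)} q^(1/(q-1))  (a real number)."""
    d = den([a])
    return d * math.prod(q ** (1.0 / (q - 1)) for q in prime_divisors(d))

def mu_n(a, n):
    """mu_n(a) = den(a)^n * prod_{q | den(a)} q^floor(n/(q-1))  (an integer)."""
    d = den([a])
    return d ** n * math.prod(q ** (n // (q - 1)) for q in prime_divisors(d))

def pochhammer(a, k):
    """(a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1, over exact rationals."""
    out = Fraction(1)
    for i in range(k):
        out *= a + i
    return out

# Small numerical illustration of the denominator D_N from Lemma 7.5 (ii), u = 3, N = 2.
u, N = 3, 2
ratios = [pochhammer(Fraction(1 + l, u), k) / pochhammer(Fraction(u + l, u), k)
          for l in range(u - 1) for k in range((u - 1) * N + 1)]
print(den(ratios), mu(Fraction(1, u)), mu_n(Fraction(1, u), u * N))
```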
**Lemma 7.5**.: _Let \(K\) be an algebraic number field, \(v\) a place of \(K\) and \(\alpha\in K\setminus\{0\}\)._
\((i)\) _We have_
\[\max_{0\leq h\leq u-1}\log\,|P_{uN,h}(\alpha)|_{v}\leq o(N)+u(u-1)\mathrm{h}_{v}(\alpha)N+\begin{cases}u(u+1)\mathrm{h}_{v}(2)N&\text{if }v\mid\infty\\ \log\,|\mu_{uN}(1/u)|_{v}^{-1}&\text{if }v\nmid\infty\enspace.\end{cases}\]
\((ii)\) _For \(0\leq l\leq u-2\), put_
\[D_{N}=\mathrm{den}\left(\frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l }{u}\right)_{k}}\right)_{\begin{subarray}{c}0\leq l\leq u-2\\ 0\leq k\leq(u-1)N\end{subarray}}\enspace.\]
_Then we have_
\[\max_{\begin{subarray}{c}0\leq l\leq u-2\\ 0\leq h\leq u-1\end{subarray}}\log\,|Q_{uN,l,h}(\alpha)|_{v}\leq o(N)+u(u-1) \mathrm{h}_{v}(\alpha)N+\begin{cases}u(u+1)\mathrm{h}_{v}(2)N&\text{if }v \mid\infty\\ \log\,|\mu_{uN}(1/u)|_{v}^{-1}+\log\,|D_{N}|_{v}^{-1}&\text{if }v\nmid\infty\enspace.\end{cases}\]
Proof.: \((i)\) Let \(v\) be an archimedean place. Since we have
\[\binom{uN-1/u}{s+N}\leq 2^{uN},\quad\binom{u(s+N)+h}{uN}\leq 2^{u(s+N)+h} \enspace\text{and}\quad\left|\binom{1/u}{k-s}\right|\leq 1\enspace,\]
for \(0\leq k\leq N(u-1)\) and \(0\leq s\leq k\), we obtain
\[\left|\sum_{s=0}^{k}\binom{uN-1/u}{s+N}\binom{u(s+N)+h}{uN}\binom{1/u}{k-s} \right|\leq 2^{2uN+h}\sum_{s=0}^{k}2^{us}\leq 2^{2uN+h+u(k+1)}\enspace. \tag{29}\]
Thus, by Lemma 7.4\((i)\), we get
\[|P_{uN,h}(\alpha)|_{v}\leq|2^{2uN+h}|_{v}\cdot\left|\sum_{k=0}^{N(u-1)}2^{u(k+1)} \alpha^{uk+h}\right|_{v}\leq e^{o(N)}|2|_{v}^{u(u+1)N}\max(1,|\alpha|_{v})^{u(u -1)N}\enspace.\]
This completes the proof of the archimedean case.
Next, we consider the case where \(v\) is a non-archimedean place. Note that we have
\[\binom{uN-1/u}{s+N}=\frac{(-1)^{s+N}(1/u-uN)_{s+N}}{(s+N)!}\quad\text{and} \quad\binom{1/u}{k-s}=\frac{(-1)^{k-s}(-1/u)_{k-s}}{(k-s)!}\]
for \(0\leq k\leq N(u-1),\ 0\leq s\leq k\). Combining
\[\left|\frac{(a)_{k}}{k!}\right|_{v}\leq|\mu_{n}(a)|_{v}^{-1}\ \ \text{for}\ \ a\in\mathbb{Q}\ \ \text{and}\ \ k,n\in\mathbb{N}\ \ \text{with}\ \ k\leq n\enspace,\]
(_confer_[9, Lemma 2.2]) and (28) yields
\[\left|\binom{uN-1/u}{s+N}\binom{u(s+N)+h}{uN}\binom{1/u}{k-s}\right|_{v}\leq| \mu_{k+N}(1/u)|_{v}^{-1}\quad\text{for}\quad 0\leq k\leq(u-1)N\enspace.\]
Therefore the strong triangle inequality yields
\[\max_{0\leq k\leq N(u-1)}\left|\sum_{s=0}^{k}\binom{uN-1/u}{s+N}\binom{u(s+N)+ h}{uN}\binom{1/u}{k-s}\right|_{v}\leq|\mu_{uN}(1/u)|_{v}^{-1}\enspace. \tag{30}\]
Using Lemma 7.4\((i)\) again, we conclude the desired inequality.
\((ii)\) Let \(v\) be an archimedean place. We use the same notations as in the proof of Lemma 7.4\((ii)\). Using Eq. (29) again, we obtain
\[\left|\sum_{k=0}^{(u-1)N-v}p_{u(k+v)+h}\frac{\left(\frac{1+l}{u} \right)_{k}}{\left(\frac{u+l}{u}\right)_{k}}\right|_{v} \leq|2|_{v}^{2uN+h+u(v+1)}\sum_{k=0}^{N(u-1)-v}|2|_{v}^{uk}\] \[\leq|2|_{v}^{2uN+u(u-1)N+u+h}\enspace.\]
Lemma 7.4\((ii)\) implies that
\[|Q_{uN,l,h}(\alpha)|_{v} \leq\sum_{v=0}^{(u-1)N}\left|\left[\sum_{k=0}^{(u-1)N-v}p_{u(k+v )+h}\frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l}{u}\right)_{k}} \right]\right|_{v}|\alpha|_{v}^{uv+h-l-1}\] \[\leq e^{o(N)}|2|_{v}^{u(u+1)N}\max(1,|\alpha|_{v})^{u(u-1)N}\enspace.\]
Let \(v\) be a non-archimedean place. Then by the definition of \(D_{N}\), we have
\[\max_{\begin{subarray}{c}0\leq l\leq u-2\\ 0\leq k\leq(u-1)N\end{subarray}}\left(\left|\frac{\left(\frac{1+l}{u}\right)_ {k}}{\left(\frac{u+l}{u}\right)_{k}}\right|_{v}\right)\leq|D_{N}|_{v}^{-1}\enspace,\]
for all \(N\in\mathbb{N}\). Using the above inequality and (30) in Lemma 7.4\((ii)\), we obtain the desired inequality by the strong triangle inequality. This completes the proof of Lemma 7.5.
**Lemma 7.6**.: _Let \(K\) be an algebraic number field, \(v_{0}\) a place of \(K\), \(\alpha\in K\). Let \(N,l,h\) be non-negative integers with \(0\leq l\leq u-2\) and \(0\leq h\leq u-1\)._
\((i)\) _Assume \(v_{0}\) is an archimedean place and \(|\alpha|_{v_{0}}>2\). We have_
\[\max_{\begin{subarray}{c}0\leq l\leq u-2\\ 0\leq h\leq u-1\end{subarray}}\log\,|\mathfrak{R}_{uN,l,h}(\alpha)|_{v_{0}} \leq-u(\mathrm{h}_{v_{0}}(\alpha)-\mathrm{h}_{v_{0}}(2))N+o(N)\enspace.\]
\((ii)\) _Assume \(v_{0}\) is a non-archimedean place and \(|\alpha|_{v_{0}}>1\). Let \(p_{v_{0}}\) be the rational prime under \(v_{0}\). Put \(\varepsilon_{v_{0}}(u)=1\) if \(u\) is coprime with \(p_{v_{0}}\) and \(\varepsilon_{v_{0}}(u)=0\) if \(u\) is divisible by \(p_{v_{0}}\). We have_
\[\max_{\begin{subarray}{c}0\leq l\leq u-2\\ 0\leq h\leq u-1\end{subarray}}\log\,|\mathfrak{R}_{uN,l,h}(\alpha)|_{v_{0}} \leq-u\left(\mathrm{h}_{v_{0}}(\alpha)-\frac{\varepsilon_{v_{0}}(u)\log\,|p_ {v_{0}}|_{v_{0}}}{p_{v_{0}}-1}\right)N+o(N)\enspace.\]
Proof.: \((i)\) For a non-negative integer \(k\), we have \(\binom{u(N+k)+l-h}{uN}\leq 2^{u(N+k)+l-h}\). Thus we get
\[\left|\sum_{k=\varepsilon_{l,h}}^{\infty}\binom{u(N+k)+l-h}{uN} \frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l}{u}+uN\right)_{k}}\frac{ 1}{\alpha^{uk}}\right|_{v_{0}} \leq|2^{uN+l-h}|_{v_{0}}\sum_{k=\varepsilon_{l,h}}^{\infty}\left| \frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l}{u}+uN\right)_{k}}\right| _{v_{0}}\left|\frac{2}{\alpha}\right|_{v_{0}}^{uk}\] \[\leq|2^{uN+l-h}|_{v_{0}}\sum_{k=0}^{\infty}\left|\frac{2}{\alpha }\right|_{v_{0}}^{uk}=|2^{uN}|_{v_{0}}e^{o(N)}\enspace.\]
Using the above inequality in Lemma 7.4\((iii)\), we obtain the assertion.
\((ii)\) By [14, Proposition 4, Lemma 4] (loc. cit. (6.1), (6.2)), we have
\[\max_{0\leq l\leq u-2}\left(\left|\frac{\left(\frac{u-1}{u}\right)_{uN}}{\left(\frac{u+l}{u}\right)_{uN}}\right|_{v_{0}}\right)\leq|p_{v_{0}}|_{v_{0}}^{\varepsilon_{v_{0}}(u)v_{v_{0}}((uN)!)+o(N)}\enspace,\] \[\left|\sum_{k=\varepsilon_{l,h}}^{\infty}\binom{u(N+k)+l-h}{uN}\frac{\left(\frac{1+l}{u}\right)_{k}}{\left(\frac{u+l}{u}+uN\right)_{k}}\frac{1}{\alpha^{uk}}\right|_{v_{0}}=e^{o(1)}\enspace.\]
Combining \(v_{p}((uN)!)=uN/(p-1)+o(N)\) with the above inequality and Lemma 7.4\((iii)\), we obtain the assertion. This completes the proof of Lemma 7.6.
### Proof of Theorem 7.1
Proof.: We use the same notations as in Theorem 7.1. Let \(\alpha\in K\) with \(|\alpha|_{v_{0}}>1\). For a non-negative integer \(N\), we define a matrix
\[\mathrm{M}_{N}=\begin{pmatrix}P_{uN,0}(\alpha)&\cdots&P_{uN,u-1}(\alpha)\\ Q_{uN,0,0}(\alpha)&\cdots&Q_{uN,0,u-1}(\alpha)\\ \vdots&\ddots&\vdots\\ Q_{uN,u-2,0}(\alpha)&\cdots&Q_{uN,u-2,u-1}(\alpha)\end{pmatrix}\in\mathrm{M}_{ u}(K)\enspace.\]
By Lemma 7.2, the matrices \(\mathrm{M}_{N}\) are invertible for every \(N\). We define functions
\[F_{v}:\mathbb{N}\longrightarrow\mathbb{R}_{\geq 0};\ N\mapsto u(u-1)\mathrm{h}_{v}( \alpha)N+o(N)+\begin{cases}u(u+1)\mathrm{h}_{v}(2)N&\text{ if }v\mid\infty\\ \log\,|\mu_{uN}(1/u)|_{v}^{-1}+\log\,|D_{N}|_{v}^{-1}&\text{ if }v\nmid\infty\enspace,\end{cases}\]
for \(v\in\mathfrak{M}_{K}\). By Lemma 7.3, we have
\[\lim_{N\to\infty}\frac{1}{N}\log\,D_{N}\leq(u-1)\left(\log\,\nu(u)+\frac{u}{ \varphi(u)}\right)\enspace,\]
where \(D_{N}\) is the integer defined in Lemma 7.5. Consequently, we get
\[\lim_{N\to\infty}\frac{1}{N}\left(\sum_{v\neq v_{0}}F_{v}(N)\right)\leq u\mathbb{ B}_{v_{0}}(\alpha)\enspace,\]
and, by Lemma 7.5,
\[\max_{0\leq h\leq u-1}\log\,\max\{|P_{uN,h}(\alpha)|_{v_{0}}\}\leq uU _{v_{0}}(\alpha)N+o(N)\enspace,\] \[\max_{\begin{subarray}{c}0\leq l\leq u-2\\ 0\leq h\leq u-1\end{subarray}}\log\,\max\{|P_{uN,h}(\alpha)|_{v},|Q_{uN,l,h}( \alpha)|_{v}\}\leq F_{v}(N)\enspace\text{for}\enspace v\in\mathfrak{M}_{K}\enspace.\]
By Lemma 7.6, we have
\[\max_{\begin{subarray}{c}0\leq l\leq u-2\\ 0\leq h\leq u-1\end{subarray}}\log\,|\mathfrak{R}_{uN,l,h}(\alpha)|_{v_{0}} \leq-u\mathbb{A}_{v_{0}}(\alpha)N+o(N)\enspace.\]
Using a linear independence criterion in [11, Proposition 5.6] for
\[\theta_{l}=\frac{1}{\alpha^{l+1}}\,{}_{2}F_{1}\left(\frac{1+l}{u},1,\,\frac{u+l}{u}\,\middle|\,\frac{1}{\alpha^{u}}\right)\enspace\text{for}\enspace 0\leq l\leq u-2\enspace,\]
and the invertible matrices \((\mathrm{M}_{N})_{N}\), together with the above estimates, we obtain Theorem 7.1.
## 8 Appendix
Denote the algebraic closure of \(\mathbb{Q}\) by \(\overline{\mathbb{Q}}\). Let \(a(z),b(z)\in\overline{\mathbb{Q}}[z]\) with \(w:=\max(\deg a-2,\deg b-1)\geq 0\) and \(a(z)\neq 0\). Put \(D=-a(z)\partial_{z}+b(z)\). The Laurent series \(f_{0}(z),\ldots,f_{w}(z)\) obtained in Lemma 4.1 for \(D\) become \(G\)-functions in the sense of Siegel when \(D\) is a \(G\)-operator (_confer_[2, IV]). Here we recall below a result due to S. Fischler and T. Rivoal [15], in which they gave a condition under which \(D\) becomes a \(G\)-operator.
**Lemma 8.1**.: (cf. [15, Proposition 3 (ii)]) Let \(m\geq 2\) be a positive integer, \(\alpha_{1},\ldots,\alpha_{m},\beta_{1},\ldots,\beta_{m-1},\gamma\in\overline{ \mathbb{Q}}\) with \(\alpha_{1},\ldots,\alpha_{m}\) being pairwise distinct. In the case of \(0\in\{\alpha_{1},\ldots,\alpha_{m}\}\), we put \(\alpha_{m}=0\). Define \(a(z)=\prod_{i=1}^{m}(z-\alpha_{i}),b(z)=\gamma\prod_{j=1}^{m-1}(z-\beta_{j})\) and \(D=-a(z)\partial_{z}+b(z)\in\overline{\mathbb{Q}}[z,\partial_{z}]\). Then the following are equivalent.
\((i)\)\(D\) is a \(G\)-operator.
\((ii)\) We have
\[\gamma\frac{\prod_{j=1}^{m-1}(\alpha_{i}-\beta_{j})}{\prod_{i^{ \prime}\neq i}(\alpha_{i}-\alpha_{i^{\prime}})}\in\mathbb{Q}\quad\text{for all}\quad 1\leq i\leq m\quad\text{if} \enspace 0\notin\{\alpha_{1},\ldots,\alpha_{m}\}\enspace,\] \[\gamma\frac{\prod_{j=1}^{m-1}(\alpha_{i}-\beta_{j})}{\prod_{i^{ \prime}\neq i}(\alpha_{i}-\alpha_{i^{\prime}})}\in\mathbb{Q}\quad\text{for all}\quad 1\leq i\leq m\enspace\text{and}\enspace \gamma\prod_{j=1}^{m-1}\frac{\beta_{j}}{\alpha_{j}}\in\mathbb{Q}\quad\text{ otherwise}\enspace.\]
**Acknowledgements.**
The author is grateful to Professors Daniel Bertrand and Sinnou David for their helpful suggestions. The author deeply thanks Professor Noriko Hirata-Kohno for her enlightening comments on a preliminary version. This work is partly supported by the Research Institute for Mathematical Sciences, an international joint usage and research center located in Kyoto University. |
2302.14395 | Item Cold Start Recommendation via Adversarial Variational Auto-encoder
Warm-up | The gap between the randomly initialized item ID embedding and the
well-trained warm item ID embedding makes the cold items hard to suit the
recommendation system, which is trained on the data of historical warm items.
To alleviate the performance decline of new items recommendation, the
distribution of the new item ID embedding should be close to that of the
historical warm items. To achieve this goal, we propose an Adversarial
Variational Auto-encoder Warm-up model (AVAEW) to generate warm-up item ID
embedding for cold items. Specifically, we develop a conditional variational
auto-encoder model to leverage the side information of items for generating the
warm-up item ID embedding. Particularly, we introduce an adversarial module to
enforce the alignment between warm-up item ID embedding distribution and
historical item ID embedding distribution. We demonstrate the effectiveness and
compatibility of the proposed method by extensive offline experiments on public
datasets and online A/B tests on a real-world large-scale news recommendation
platform. | Shenzheng Zhang, Qi Tan, Xinzhi Zheng, Yi Ren, Xu Zhao | 2023-02-28T08:23:15Z | http://arxiv.org/abs/2302.14395v1 | # Item Cold Start Recommendation via Adversarial Variational Autoencoder Warm-up
###### Abstract
With numerous pieces of information emerging daily and greatly influencing people's lives, large-scale recommendation systems are necessary for timely bridging the users with their desired information. However, the existing widely used embedding-based recommendation systems have a shortcoming in recommending new items because little interaction data is available for training new item ID embeddings, which is recognized as the item cold start problem. The gap between the randomly initialized item ID embedding and the well-trained warm item ID embedding makes it hard for cold items to fit the recommendation system, which is trained on the data of historical warm items. To alleviate the performance decline in new item recommendation, the distribution of the new item ID embedding should be close to that of the historical warm items. To achieve this goal, we propose an Adversarial Variational Autoencoder Warm-up model (AVAEW) to generate warm-up item ID embeddings for cold items. Specifically, we develop a conditional variational autoencoder model to leverage the side information of items for generating the warm-up item ID embedding. In particular, we introduce an adversarial module to enforce the alignment between the warm-up item ID embedding distribution and the historical item ID embedding distribution. We demonstrate the effectiveness and compatibility of the proposed method by extensive offline experiments on public datasets and online A/B tests on a real-world large-scale news recommendation platform.
Keywords:item cold start, generative adversarial network, conditional autoencoder, large-scale recommendation system
## 1 Introduction
Recommendation systems have become increasingly important in the era of information, where a large amount of content emerges each day and plays an important role in people's lives [27, 7, 42, 52]. In order to better capture the characteristics of the recommended content, recommendation systems usually learn an item ID embedding from the user-item interaction data [14, 6, 44]. However, such systems suffer from the item cold start problem: the item ID embedding of a new item is not trained sufficiently because little interaction information has been gained [37, 39, 47, 25]. Without sufficient training data for cold items, the distribution of the cold item ID embeddings differs significantly from that of the warm item ID embeddings. Since the warm items contribute most of the samples in training the recommendation system, such an embedding distribution gap makes the cold items poorly suited to the recommendation system.
The existing methods proposed to solve the item cold start problem mainly fall into three categories: 1) promoting the robustness of the recommendation model in the absence of the item ID embedding, such as using dropout or masking on the item ID embedding during model training [39; 53; 32]; 2) improving the learning efficiency with a limited amount of interaction data, such as using meta-learning approaches to quickly adapt the recommendation system to new items [37; 26; 35; 11]; and 3) leveraging the side information of items to facilitate the initialization of the item ID embedding [47; 55; 57; 4].
However, the methods in the first category cannot fully train the item ID embedding and utilize this collaborative information, and thus adapt poorly to the limited number of user-item interactions available in the warm-up phases. The methods in the second category alleviate the inefficient training problem but start from randomly initialized item embeddings, which may be quite different from the well-trained hot item embeddings. Such a distribution gap slows down the conversion of cold items to warm-up items. The methods in the third category provide a better initialization for the conversion, but none of the existing methods rigorously considers the embedding distribution gap problem, and they offer no strict guarantee of alleviating the distribution gap in the warm-up process. Recently, some approaches have attempted to solve the problem of the distribution gap; e.g., GAR [4] designs an adversarial training strategy: it trains an item generator by maximizing the ranking scores of cold items and adversarially trains the recommendation model to decrease the corresponding ranking scores. These approaches are closely related to our work. However, they do not directly consider the consistency of the distributions, nor do they strictly guarantee a reduction of the distribution gap during the warm-up process, which limits their recommendation performance.
To address the problem of the item ID embedding gap in item cold start, our core idea in this paper is to generate a better warm-up item ID embedding whose distribution is close to that of the warm item ID embeddings, by introducing an adversarial model to decrease the embedding distribution gap. In particular, we propose a method named Adversarial Variational AutoEncoder Warm-up (AVAEW) to address the item cold start problem. AVAEW leverages the side information of items to generate warm-up item ID embeddings that are similar in distribution to the historical warm item embeddings. We further introduce an adversarial module into the encoder-decoder generative model to enforce the alignment between the warm-up item ID embedding distribution and the historical warm item ID embedding distribution. Since our method only acts on the item ID embedding, it is compatible with different kinds of recommendation backbone models. Moreover, as the proposed method only utilizes the side information of items, which is available in most real-world scenarios, it can be easily applied in large-scale industrial systems without further development of data or training pipelines, such as the item-item relationships required by graph embedding learning methods [19, 25] or the separation of global and local updates in meta-learning methods [35]. We evaluate the proposed method by extensive offline experiments on public datasets and online A/B tests on a large-scale real-world recommendation platform.
The contributions in this paper can be summarized as follow:
1. We directly and rigorously consider the item ID embedding gap problem in item cold start recommendation for the first time, and propose AVAEW to alleviate the gap in an adversarial manner. AVAEW is able to ensure that the distribution of the generated embeddings is consistent with that of the warm items.
2. AVAEW has no extra data requirements, which makes it easy to deploy in online scenarios and compatible with different embedding-based recommendation models.
3. We conduct extensive offline and online experiments to demonstrate the effectiveness of AVAEW compared with other state-of-the-art warm-up methods and its compatibility with different recommendation models.
## 2 Proposed Method
### Problem Formulation
#### 2.1.1 Click-Through Rate (CTR)
In this work, we focus on the Click-Through Rate (CTR) prediction task, one of the most common tasks in recommendation systems [54, 14]. For a user-item pair, the CTR model aims to generate a prediction \(y\) with inputs of item-related features \(\mathcal{V}\) and other features \(\mathcal{U}\) (such as user profile data and context data). The model is trained by minimizing the discriminative loss between the model prediction and corresponding interaction observation \(o\) (i.e., click or unclick). The Binary Cross Entropy loss is commonly adopted as the discriminative loss:
\[\mathcal{L}_{M}=\mathbb{E}[-o\log(y)-(1-o)\log(1-y)]. \tag{1}\]
**Item Cold Start Problem** Following the embedding learning framework [38, 12, 43], recommendation models learn embeddings for the discrete input data, among which the item ID embedding is one of the most critical representations of the items. We partition the item side features into two parts: the item ID embedding \(\mathbf{v}_{\mathcal{I}}\) and the side information \(\mathcal{V}_{\mathcal{S}}\). Then the recommendation model can be formulated as:
\[y=f(\mathbf{v}_{\mathcal{I}},\mathcal{V}_{\mathcal{S}},\mathcal{U};\theta) \tag{2}\]
where \(\theta\) is the set of parameters of the recommendation model. However, the item ID embeddings of new items cannot be learned sufficiently due to the lack of enough training interaction data, causing a distribution gap with respect to the warm items and thus a decline in recommendation performance. Therefore, we need to use the limited amount of interaction data of new items effectively to quickly adapt their item ID embeddings, which is regarded as the item cold start problem, and the new items are called cold items in the recommendation
system. In this work, we aim to alleviate this problem by generating better item ID embeddings for these new cold items such that their distribution is close to the item ID embedding distribution of the warm items.
### Adversarial Variational AutoEncoder Warm-up
We introduce the proposed Adversarial Variational AutoEncoder Warm-up method in this part. Figure 1 shows the framework of our proposed model. We develop a VAE-based latent space model to generate a warm-up item ID embedding for the cold items. In order to close the ID embedding distribution gap between the cold and warm items, we use an adversarial module to align the warm-up item ID embedding distribution with the historical warm item ID embedding distribution.
Figure 1: Framework of the proposed Adversarial Variational Auto-Encoder Warm-up model, which mainly consists of four components in addition to the backbone model (shown in the orange box): 1) the ID embedding encoder \(E_{I}\) generates the latent space representation of the item ID embedding; 2) the prior encoder \(E_{S}\) utilizes the side information of items to generate the customized prior distribution for the warm-up item ID embedding; 3) the decoder \(D\) generates the reconstructed/warm-up item ID embedding from the latent representation; 4) the adversarial module discriminates the warm-up embedding and the real embedding to align their distributions. In the inference phase, we 1) generate the warm-up item embedding for the cold items; 2) use the warm-up item embedding as the item embedding, and 3) input all user, context, and item features into the recommendation model for prediction.

**Item ID Embedding Encoder & Decoder** We develop the item ID embedding encoder & decoder following the VAE framework [40, 33]. The item ID encoder module takes the item ID embedding as input and generates the corresponding latent distribution. The decoder samples a latent variable from the generated latent distribution and reconstructs the item ID embedding based on the sampled variable. The encoder-decoder module can be formulated as:
\[\begin{split}\mu,\sigma=g_{I}(\mathbf{v}_{\mathcal{I}};\theta_{I})\\ \mathbf{z}\sim\mathbb{N}(\mu,&\Sigma),\mathrm{diag}( \Sigma)=\sigma\\ \hat{\mathbf{v}}_{\mathcal{I}}=g_{D}(\mathbf{z};\theta_{D})\end{split} \tag{3}\]
where \(\mu\) and \(\sigma\) are k-dimension vectors, \(\theta_{I}\) and \(\theta_{D}\) are the set of parameters of item ID encoder \(E_{I}\) and encoder \(D\), respectively. We train the encoder-decoder module via the reconstruction loss:
\[\mathcal{L}_{R}=||\mathbf{v}_{\mathcal{I}}-\hat{\mathbf{v}}_{\mathcal{I}}||_{2 }^{2} \tag{4}\]
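As an illustration only, a minimal PyTorch sketch of the encoder-decoder of Eqs. (3)-(4) is given below; the single linear layers, the 16-dimensional sizes and all names are our assumptions, since the paper does not prescribe the network architecture at this level of detail.

```python
import torch
import torch.nn as nn

class IDEncoder(nn.Module):
    """E_I: maps an item ID embedding to a diagonal Gaussian (mu, sigma)."""
    def __init__(self, emb_dim=16, latent_dim=16):
        super().__init__()
        self.mu_head = nn.Linear(emb_dim, latent_dim)
        self.log_sigma_head = nn.Linear(emb_dim, latent_dim)

    def forward(self, v_id):
        mu = self.mu_head(v_id)
        sigma = torch.exp(self.log_sigma_head(v_id))  # positive std. deviation
        return mu, sigma

class IDDecoder(nn.Module):
    """D: reconstructs an item ID embedding from a latent sample z."""
    def __init__(self, latent_dim=16, emb_dim=16):
        super().__init__()
        self.net = nn.Linear(latent_dim, emb_dim)

    def forward(self, z):
        return self.net(z)

def reconstruction_loss(encoder, decoder, v_id):
    """Sample z ~ N(mu, diag(sigma^2)) via the reparameterisation trick
    and evaluate the squared reconstruction error of Eq. (4)."""
    mu, sigma = encoder(v_id)
    z = mu + sigma * torch.randn_like(sigma)
    v_hat = decoder(z)
    return ((v_id - v_hat) ** 2).sum(dim=-1).mean()
```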
**Prior Encoder** Due to the deficiency of training data, the item ID embeddings of cold items are nearly randomly initialized. To provide a more informative item ID embedding, the prior encoder \(E_{S}\) takes the side information of the items as input and generates the warm-up item ID embedding from the prior distribution:
\[\begin{split}\mu_{p},\sigma_{p}&=g_{S}(\mathcal{V} _{\mathcal{S}};\theta_{S}).\\ \mathbf{z}_{p}&\sim\mathbb{N}(\mu_{p},\Sigma_{p}), \mathrm{diag}(\Sigma_{p})=\sigma_{p}\\ \hat{\mathbf{v}}_{\mathcal{I}}&=g_{D}(\mathbf{z}_{p };\theta_{D})\end{split} \tag{5}\]
We minimize the Wasserstein Distance [31] between the encoded distribution and prior distribution to train the prior encoder:
\[\mathcal{L}_{WD}=WD(\mathbb{N}(\mu,\Sigma),\mathbb{N}(\mu_{p},\Sigma_{p})). \tag{6}\]
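For diagonal Gaussians this distance has a simple closed form; the sketch below assumes \(\sigma\) and \(\sigma_{p}\) denote per-dimension standard deviations.

```python
import torch

def gaussian_w2_sq(mu_q, sigma_q, mu_p, sigma_p):
    """Squared 2-Wasserstein distance between N(mu_q, diag(sigma_q^2)) and
    N(mu_p, diag(sigma_p^2)): ||mu_q - mu_p||^2 + ||sigma_q - sigma_p||^2."""
    return ((mu_q - mu_p) ** 2).sum(dim=-1) + ((sigma_q - sigma_p) ** 2).sum(dim=-1)
```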
In addition, we replace the item ID embedding in Equation 2 with the warm-up item embedding to generate the output and calculate the recommendation CTR loss:
\[\begin{split}\bar{y}&=f(\bar{\mathbf{v}}_{ \mathcal{I}},\mathcal{V}_{\mathcal{S}},\mathcal{U};\theta)\\ \mathcal{L}_{CTR}&=\mathbb{E}[-o\log(\bar{y})-(1-o) \log(1-\bar{y})].\end{split} \tag{7}\]
**Adversarial Alignment** In the inference stage, \(\bar{\mathbf{v}}_{\mathcal{I}}\) is used as the item ID embedding for the cold item. Although we use the reconstruction loss between \(\mathbf{v}_{\mathcal{I}}\) and \(\hat{\mathbf{v}}_{\mathcal{I}}\) and the Wasserstein Distance between the encoded distribution and the prior distribution when training the different modules, there is no direct regularization to alleviate the gap between the warm-up item ID embedding \(\bar{\mathbf{v}}_{\mathcal{I}}\) and the original item ID embedding \(\mathbf{v}_{\mathcal{I}}\), which impairs the performance of the recommendation model.
To directly address this problem, we propose to introduce an extra regularization using an adversarial network to enforce the closeness of the warm-up item ID embedding and the original item ID embedding. A classifier \(G\) is trained to discriminate between the original item ID embedding and the warm-up ID embedding by minimizing the loss:
\[\mathcal{L}_{D}=-\mathbb{E}_{\mathbf{v}_{\mathcal{I}}\sim P_{\mathbf{v}_{ \mathcal{I}}}}[\log G(\mathbf{v}_{\mathcal{I}})]-\mathbb{E}_{\mathbf{z}_{p} \sim\mathbb{N}(\mu_{p},\sigma_{p})}[\log(1-G(g_{D}(\mathbf{z}_{p})))]. \tag{8}\]
To train the encoders, we use group mean feature matching [1], which reduces the distance between the empirical feature means of the real and warm-up embeddings:
\[\mathcal{L}_{GD}=||\mathbb{E}_{\mathbf{v}_{\mathcal{I}}\sim P_{\mathbf{v}_{ \mathcal{I}}}}[f_{G}(\mathbf{v}_{\mathcal{I}})]-\mathbb{E}_{\mathbf{z}_{p} \sim\mathbb{N}(\mu_{p},\sigma_{p})}[f_{G}(g_{D}(\mathbf{z}_{p})]||_{2}^{2}, \tag{9}\]
and pairwise feature matching, which reduces the pairwise sample distance:
\[\mathcal{L}_{GD}^{{}^{\prime}}=\mathbb{E}_{\mathbf{v}_{\mathcal{I}}\sim P_{ \mathbf{v}_{\mathcal{I}}},\mathbf{z}_{p}\sim\mathbb{N}(\mu_{p},\sigma_{p})}||f _{G}(\mathbf{v}_{\mathcal{I}})-f_{G}(g_{D}(\mathbf{z}_{p}))||_{2}^{2}, \tag{10}\]
where \(f_{G}(x)\) is the features on an intermediate layer of the classifier. We choose the last layer of the classifier as the intermediate layer.
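A sketch of the classifier and of the three adversarial losses of Eqs. (8)-(10) is given below; the two-layer architecture and the interface that returns both the logit and the last-layer features \(f_{G}\) are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    """G: discriminates real vs. warm-up ID embeddings; the last hidden layer
    plays the role of the feature map f_G.  The architecture is an assumption."""
    def __init__(self, emb_dim=16, hidden=16):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)

    def forward(self, v):
        feat = self.feature(v)
        return self.head(feat).squeeze(-1), feat

def adversarial_losses(G, v_real, v_warm):
    logit_real, feat_real = G(v_real)
    logit_warm, feat_warm = G(v_warm)

    # Discriminator loss, Eq. (8)
    loss_D = F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real)) \
           + F.binary_cross_entropy_with_logits(logit_warm, torch.zeros_like(logit_warm))

    # Group-mean feature matching, Eq. (9)
    loss_GD = ((feat_real.mean(dim=0) - feat_warm.mean(dim=0)) ** 2).sum()

    # Pairwise feature matching, Eq. (10)
    loss_GD_pair = ((feat_real - feat_warm) ** 2).sum(dim=-1).mean()

    return loss_D, loss_GD, loss_GD_pair
```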
**Model Training** We train the backbone model and the AVAEW warm-up components consecutively. We first train the recommendation model by minimizing \(\mathcal{L}_{M}\). Then we alternately train (1) the warm-up item ID generative module (\(E_{I}\), \(E_{S}\), \(D\)) and (2) the classifier \(G\). The warm-up item ID embedding generative module is trained via minimizing the total loss:
\[\mathcal{L}_{w}=\underbrace{\mathcal{L}_{CTR}+\alpha\mathcal{L}_{R}+\beta \mathcal{L}_{WD}}_{\text{variational}}+\underbrace{\xi\mathcal{L}_{GD}+\xi^{ \prime}\mathcal{L}_{GD}^{{}^{\prime}}}_{\text{adversarial}}, \tag{11}\]
where \(\alpha,\beta,\xi,\xi^{\prime}\) are the hyper-parameters. The classifier \(G\) is trained via minimizing \(\mathcal{L}_{D}\).
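The alternating optimisation can be sketched as follows, reusing the helpers sketched above (`gaussian_w2_sq`, `adversarial_losses`); the batch layout, the placeholder hyper-parameter values and the optimizer setup are our assumptions, and the backbone is treated as fixed in this step, as in the experimental setup described later.

```python
import torch
import torch.nn.functional as F

def warmup_training_step(batch, backbone, E_I, E_S, D, G, opt_gen, opt_disc,
                         alpha=1.0, beta=1.0, xi=1.0, xi_prime=1.0):
    """One alternating update; opt_gen holds the parameters of E_I, E_S, D
    and opt_disc holds the parameters of G (both assumptions of this sketch)."""
    v_id, side_info, user_ctx, label = batch

    # --- (1) update the generative modules E_I, E_S and D (backbone frozen) ---
    mu, sigma = E_I(v_id)
    mu_p, sigma_p = E_S(side_info)
    z = mu + sigma * torch.randn_like(sigma)
    z_p = mu_p + sigma_p * torch.randn_like(sigma_p)
    v_rec = D(z)        # reconstruction of the original ID embedding
    v_warm = D(z_p)     # warm-up ID embedding used at inference time

    y_bar = backbone(v_warm, side_info, user_ctx)               # Eq. (7)
    loss_ctr = F.binary_cross_entropy(y_bar, label)
    loss_rec = ((v_id - v_rec) ** 2).sum(dim=-1).mean()         # Eq. (4)
    loss_wd = gaussian_w2_sq(mu, sigma, mu_p, sigma_p).mean()   # Eq. (6)
    _, loss_gd, loss_gd_pair = adversarial_losses(G, v_id, v_warm)

    loss_gen = loss_ctr + alpha * loss_rec + beta * loss_wd \
               + xi * loss_gd + xi_prime * loss_gd_pair         # Eq. (11)
    opt_gen.zero_grad()
    loss_gen.backward()
    opt_gen.step()

    # --- (2) update the classifier G on detached embeddings ---
    loss_D, _, _ = adversarial_losses(G, v_id.detach(), v_warm.detach())
    opt_disc.zero_grad()
    loss_D.backward()
    opt_disc.step()
    return loss_gen.item(), loss_D.item()
```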
## 3 Validation
### Offline Validation
We first validate the effectiveness of the proposed AVAEW offline on the representative public datasets.
**Experiment Setting** We conduct experiments on the following three public datasets: MovieLens-1M 1, MovieLens-25M 2, and Taobao Display Ad Click 3. A detailed description of these datasets can be found in the Supplementary Material.
Footnote 1: [https://grouplens.org/datasets/movielens/1m/](https://grouplens.org/datasets/movielens/1m/)
Footnote 2: [https://grouplens.org/datasets/movielens/25m/](https://grouplens.org/datasets/movielens/25m/)
Footnote 3: [https://tianchi.aliyun.com/dataset/dataDetail?dataId=56](https://tianchi.aliyun.com/dataset/dataDetail?dataId=56)
To validate the recommendation performance of the proposed method in different phases, we preprocess the datasets following [51, 26, 55]. Specifically, we label items as old or new based on their frequency, where items with more than \(N\) labeled instances are old and the others are new. We set \(N\) to 200, 2700, and 2000 for MovieLens-1M, MovieLens-25M, and Taobao-Ad data, respectively. Then the new item instances are divided into four sets (denoted as warm-a, -b, -c, and test) according to the order of their release timestamps. Specifically, for each new item, the first \(K\) ratings are assigned to warm-a, the second \(K\) ratings to warm-b, the third \(K\) ratings to warm-c, and the later ratings to the test set. New items with fewer than \(3K\) ratings are discarded. We set \(K\) to 20, 40, and 500, respectively. By setting \(N\) and \(K\) to these values, the ratio between the number of new items and old items is around 8:2, which approximates the distribution of long-tail items. The ratio between the sizes of warm-a, -b, -c, and test sets is approximately 1:1:1:3.
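A minimal pandas sketch of this preprocessing is shown below; the column names and the function name are assumptions, and the thresholds shown correspond to MovieLens-1M.

```python
import pandas as pd

def split_new_item_sets(df, N=200, K=20):
    """Split interactions into old-item data and warm-a/b/c/test sets for new items."""
    counts = df.groupby("item_id").size()
    old_ids = counts[counts > N].index

    old_df = df[df["item_id"].isin(old_ids)]
    new_df = df[~df["item_id"].isin(old_ids)].sort_values("timestamp").copy()

    # discard new items with fewer than 3K interactions
    sizes = new_df.groupby("item_id").size()
    new_df = new_df[new_df["item_id"].isin(sizes[sizes >= 3 * K].index)]

    # rank each item's interactions in time order
    new_df["rank"] = new_df.groupby("item_id").cumcount()
    warm_a = new_df[new_df["rank"] < K]
    warm_b = new_df[(new_df["rank"] >= K) & (new_df["rank"] < 2 * K)]
    warm_c = new_df[(new_df["rank"] >= 2 * K) & (new_df["rank"] < 3 * K)]
    test = new_df[new_df["rank"] >= 3 * K]
    return old_df, warm_a, warm_b, warm_c, test
```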
#### 4.2.2 Backbones and Baselines
As summarized in Section 1, there are three general categories for solving the item cold start problem. We choose the State-Of-The-Art (SOTA) methods in each category for comparison:
* DropoutNet [39] improves the robustness by applying dropout technology to mitigate the model's dependency on item ID.
* Meta embedding (Meta-E) [26] learns the ID embedding for new items fast adaption with meta-learning technology.
* MWUF [55] and CVAR [51] use side information of items to generate warm-up item ID embedding by meta Scaling/Shifting networks and conditional variational auto-encoder, respectively. GAR [4] attempts to increase the ranking score of cold items by training a warm-up item embedding generator in an adversarial manner.
Because AVAEW only acts on the item ID embedding, it can be equipped with different kinds of embedding-based backbone recommendation models. To demonstrate this compatibility, we conduct experiments on the following representative backbones that are widely used in practical recommendation systems: Factorization Machine (FM) [34], DeepFM [14], Wide&Deep [6], Deep & Cross Network (DCN) [44], IPNN [29] and OPNN [29].
#### 4.2.3 Implementation Details
For the backbone models, the MLPs use the same structure with two dense layers (16 hidden units). The embedding size of each feature is fixed to 16. For a fair comparison, we use the same setting for all warm-up methods. We use Adam as the optimizer with the learning rate set to 0.001 and the mini-batch size set to 2048. We use old item instances to first pre-train the backbone model, and then train the AVAEW model with the backbone model fixed. Then we progressively feed warm-a, -b, -c data to train the backbone and AVAEW and evaluate the models on the test set. We take the AUC score as the evaluation metric.
#### 4.2.4 Validation Result
First, we demonstrate the superiority of AVAEW compared with other SOTA warm-up methods. Table 1 shows the AUC of AVAEW and other SOTA warm-up methods with two backbones (DeepFM and IPNN) on three public datasets. As expected, since DropoutNet ignores the reliance on the item ID embedding of the cold items to some extent, it cannot quickly fit the cold items with limited samples. DropoutNet performs similarly to or even worse than the backbone in the warm-c phase. Meta-E is designed for fast adaptation but ignores the quality of the initial embedding. Thus, when the warm-up data is very limited, for example after the warm-a phase of MovieLens-1M, Meta-E cannot perform well. In the cases where the warm-up data is sufficient to overcome this shortcoming of randomly initialized item embeddings, for example in the warm-a phase of MovieLens-25M and Taobao-AD, Meta-E performs relatively well. Compared with MWUF and CVAR, the biggest difference of AVAEW is that it adopts an adversarial module to merge the distributions of cold and hot item embeddings. The results show that AVAEW performs best in the warm phases, which reflects the benefit of quickly adapting the cold item ID embedding distribution to match the warm item ID embedding distribution. Due to the adoption of a variational autoencoder, AVAEW alleviates the mode collapse problem compared with a pure GAN model [1]. Therefore, from the results, we can also see that AVAEW achieves better results than GAR.
Figure 2 illustrates the cold item embeddings generated by different methods as well as the embeddings of hot items in the pre-trained base model. We use DeepFM as the backbone model and PCA for dimension reduction. The cold item embeddings of the base model, which are only updated with limited interactions, are not well trained and thus remain near their initialization. In contrast, the distribution of the cold item embeddings generated by CVAR is quite different from that of the warm item embeddings. With the alignment provided by the adversarial module, our method AVAEW can properly warm up the item embeddings of the cold items and increase their recommendation performance.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{DeepFM - backbone} & \multicolumn{3}{c}{IPNN - backbone} \\ \cline{3-8} & & warm-a & warm-b & warm-c & warm-a & warm-b & warm-c \\ \hline \multirow{8}{*}{MovieLens-1M} & base & 0.7452 & 0.7570 & 0.7667 & 0.7489 & 0.7609 & 0.7715 \\ & DropoutNet & 0.7502 & 0.7590 & 0.7667 & 0.7404 & 0.7472 & 0.7537 \\ & Meta-E & 0.7439 & 0.7574 & 0.7690 & 0.7135 & 0.7309 & 0.7460 \\ & MWUF & 0.7451 & 0.7575 & 0.7682 & 0.7480 & 0.7614 & 0.7719 \\ & CVAR & 0.7848 & 0.7981 & 0.8057 & 0.7825 & 0.7970 & 0.8022 \\ & GAR & 0.7671 & 0.7776 & 0.7962 & 0.7837 & 0.7896 & 0.7991 \\ & AVAEW & **0.7909** & **0.8025** & **0.8064** & **0.7859** & **0.7982** & **0.8027** \\ \hline \multirow{8}{*}{MovieLens-25M} & base & 0.7947 & 0.8040 & 0.8098 & 0.8065 & 0.8113 & 0.8151 \\ & DropoutNet & 0.7936 & 0.8000 & 0.8046 & 0.8028 & 0.8053 & 0.8075 \\ & Meta-E & 0.8010 & 0.8072 & 0.8110 & 0.8094 & 0.8140 & 0.8175 \\ & MWUF & 0.7950 & 0.8046 & 0.8106 & 0.8058 & 0.8109 & 0.8151 \\ & CVAR & 0.8074 & **0.8110** & 0.8118 & 0.8087 & 0.8189 & 0.8217 \\ & GAR & 0.7790 & 0.7841 & 0.7861 & 0.7974 & 0.7992 & 0.8005 \\ & AVAEW & **0.8079** & 0.8107 & **0.8131** & **0.8184** & **0.8226** & **0.8249** \\ \hline \multirow{8}{*}{Taobao-AD} & base & 0.6127 & 0.6222 & 0.6307 & 0.6163 & 0.6230 & 0.6291 \\ & DropoutNet & 0.6138 & 0.6227 & 0.6308 & 0.6162 & 0.6233 & 0.6295 \\ \cline{1-1} & Meta-E & 0.6035 & 0.6148 & 0.6243 & 0.6216 & 0.6287 & 0.6350 \\ \cline{1-1} & MWUF & 0.6079 & 0.6167 & 0.6254 & 0.6197 & 0.6260 & 0.6319 \\ \cline{1-1} & CVAR & 0.6140 & 0.6251 & 0.6360 & 0.6272 & 0.6347 & 0.6413 \\ \cline{1-1} & GAR & 0.6063 & 0.6112 & 0.6175 & 0.6149 & 0.6237 & 0.6314 \\ \cline{1-1} & AVAEW & **0.6186** & **0.6306** & **0.6384** & **0.6287** & **0.6359** & **0.6419** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison result with other SOTA item warm-up methods. AUC metric of recommendation results are reported. The best performances are highlighted in bold.
\begin{table}
\begin{tabular}{c c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{3}{*}{Methods} & \multicolumn{3}{c|}{MovieLens-1M} & \multicolumn{3}{c|}{MovieLens-25M} & \multicolumn{3}{c}{Taobao-AD} \\ \cline{3-13} & & warm-a & warm-b & warm-c & warm-a & warm-b & warm-c & warm-a & warm-b & warm-c \\ \hline \multirow{3}{*}{ImageNet} & Base & 0.7452 & 0.7570 & 0.7667 & 0.7947 & 0.8040 & 0.8098 & 0.6127 & 0.6222 & 0.6307 \\ & AVAEW & 0.7909 & 0.8025 & 0.8064 & 0.8077 & 0.8122 & 0.8143 & 0.6186 & 0.6306 & 0.6384 \\ & improve & 6.12\% & 6.01\% & 5.18\% & 1.64\% & 1.02\% & 0.56\% & 0.96\% & 1.35\% & 1.23\% \\ \hline \multirow{3}{*}{ImageNet} & Base & 0.7352 & 0.7492 & 0.7608 & 0.7860 & 0.7926 & 0.7969 & 0.5932 & 0.6045 & 0.6134 \\ & AVAEW & 0.7764 & 0.7946 & 0.7956 & 0.8001 & 0.8047 & 0.8059 & 0.6126 & 0.6212 & 0.6317 \\ & improve & 5.60\% & 6.06\% & 4.57\% & 1.79\% & 1.53\% & 1.13\% & 3.27\% & 2.77\% & 2.99\% \\ \hline \multirow{3}{*}{ImageNet} & Base & 0.7411 & 0.7513 & 0.7603 & 0.7781 & 0.7915 & 0.7980 & 0.5909 & 0.6135 & 0.6278 \\ & AVAEW & 0.7853 & 0.7973 & 0.8031 & 0.7936 & 0.7973 & 0.8008 & 0.6114 & 0.6220 & 0.6269 \\ & improve & 5.96\% & 6.12\% & 5.62\% & 1.98\% & 0.73\% & 0.35\% & 3.47\% & 1.38\% & -0.13\% \\ \hline \multirow{3}{*}{ImageNet} & Base & 0.7251 & 0.7400 & 0.7528 & 0.7953 & 0.8015 & 0.8058 & 0.6210 & 0.6304 & 0.6377 \\ & AVAEW & 0.7839 & 0.8023 & 0.8077 & 0.8064 & 0.8104 & 0.8123 & 0.6309 & 0.6375 & 0.6431 \\ & improve & 8.11\% & 8.43\% & 7.29\% & 1.40\% & 1.11\% & 0.80\% & 1.59\% & 1.12\% & 0.85\% \\ \hline \multirow{3}{*}{ImageNet} & Base & 0.7432 & 0.7579 & 0.7698 & 0.8012 & 0.8075 & 0.8120 & 0.6257 & 0.6323 & 0.6384 \\ & AVAEW & 0.7783 & 0.7938 & 0.7997 & 0.8136 & 0.8168 & 0.8168 & 0.6276 & 0.6319 & 0.6399 \\ & improve & 4.73\% & 4.74\% & 3.88\% & 1.55\% & 1.15\% & 0.60\% & 0.30\% & -0.07\% & 0.23\% \\ \hline \multirow{3}{*}{ImageNet} & Base & 0.7489 & 0.7609 & 0.7715 & 0.8065 & 0.8113 & 0.8151 & 0.6239 & 0.6311 & 0.6377 \\ & AVAEW & 0.7859 & 0.7982 & 0.8027 & 0.8121 & 0.8184 & 0.8202 & 0.6287 & 0.6359 & 0.6419 \\ \cline{1-1} & improve & 4.95\% & 4.90\% & 4.05\% & 0.69\% & 0.87\% & 0.62\% & 0.77\% & 0.76\% & 0.67\% \\ \hline \end{tabular}
\end{table}
Table 2: AUC of the base model and AVAEW on three datasets with different backbones.
Figure 2: Distribution of the cold items’ ID embedding generated by different methods as well as the item ID embedding of the hot items.
Then we further demonstrate the compatibility of AVAEW with different backbones. Table 2 shows the AUC performance of AVAEW with different types of backbone recommendation models on MovieLens-1M, MovieLens-25M, and Taobao-Ad, respectively. The results show that AVAEW can boost all kinds of tested backbones in cold item recommendation. In the two large datasets (MovieLens-25M and Taobao-AD), AVAEW can increase the AUC by over 0.1 in cold item recommendation, which is a huge improvement for a practical large-scale recommender system.
### Online Validation
We further validate the proposed AVAEW on a large-scale real-world news recommendation system, which serves over 100 million users and receives over one hundred thousand new pieces of content each day. There are usually four stages in a large-scale recommender system [46], namely matching, pre-ranking, ranking, and re-ranking. We applied AVAEW in the pre-ranking stage, which is designed to predict the CTR of items and filter out the many items with the lowest CTR scores. We chose to apply AVAEW in the pre-ranking stage because we found that most of the new items are ranked relatively low and filtered out. The backbone model of the pre-ranking stage is DSSM [16]. There are 74 features for the backbone model, which include 49 user features and 25 item features, among which we use 11 important item side-information features as the inputs of AVAEW.
In the online A/B test, we warm up the cold video-type items and focus on the following metrics: exposure rate and video view. The exposure rate is the number of unique recommended items. A higher exposure rate indicates that a cold item has more chances to be accessed by users and that the diversity of the recommended content is increased. The video view is the number of items viewed by users, reflecting the satisfaction of users with the content.
We ran the online A/B test for 8 days, involving over 80 thousand users for each of the base-model and AVAEW-model experiments. Compared with the base model, AVAEW increases the exposure rate by +1.6019% and the video views by +1.1396%. In order to measure the satisfaction of users with the recommended content more accurately, we further measure deep video views, defined as views where the video play time is \(>\) 30 s or the finish rate is \(>\) 0.8, where the finish rate is the ratio of the video play time to the video length. By doing so, we exclude views where users may merely be attracted by the title but not the content. For the deep video view, AVAEW achieves a +0.3352% improvement on long videos (video length \(>=\) 30 s) and a +0.7268% improvement on short videos (video length \(<\) 30 s).
## 4 Conclusion and Discussion
In this paper, we proposed a novel method, AVAEW, for alleviating the item ID embedding gap between cold items and other items. By using an adversarial module to address the item ID embedding gap problem, AVAEW generates warm-up item ID embeddings that are better suited to the recommender system. Extensive experiments on public datasets validated the effectiveness of the proposed method by comparing AVAEW with the SOTA methods from the three main categories of item cold start recommendation methods. Moreover, we demonstrated the compatibility of AVAEW with different kinds of backbone models.

We also showed that AVAEW can be easily applied in a real-world large-scale recommendation system and improve item cold start performance without extra development of data construction or training pipelines. The model is capable of continuously adapting to new items. Our method also has the potential to be used for user cold start because, like the item side information, the user profile, such as age and occupation, can partially reflect the user's interests and thus facilitate user embedding learning. However, user profile data is privacy sensitive, and the item cold start problem is more serious on our online platform. Thus, we focus on item cold start in this study.
|
2309.12496 | Optical Photon Simulation with Mitsuba3 | Optical photon propagation is an embarrassingly parallel operation, well
suited to acceleration on GPU devices. Rendering of images employs similar
techniques -- for this reason, a pipeline to offload optical photon propagation
from Geant4 to the industry-standard open-source renderer Mitsuba3 has been
devised. With the creation of a dedicated plugin for single point multi-source
emission, we find a photon propagation rate of $2\times10^{5}$ photons per
second per CPU thread using LLVM and $1.2\times10^{6}$ photons per second per
GPU using CUDA. This represents a speed-up of 70 on CPU and 400 on GPU over
Geant4 and is competitive with other similar applications. The potential for
further applications is discussed. | Adam C. S. Davis, Sacha Barré, Yangyang Cui, Keith L Evans, Marco Gersabeck, Antonin Rat, Zahra Montazeri | 2023-09-21T21:38:29Z | http://arxiv.org/abs/2309.12496v1 | # Optical Photon Simulation with Mitsuba3
###### Abstract
Optical photon propagation is an embarrassingly parallel operation, well suited to acceleration on GPU devices. Rendering of images employs similar techniques--for this reason, a pipeline to offload optical photon propagation from Geant4 to the industry-standard open-source renderer Mitsuba3 has been devised. With the creation of a dedicated plugin for single point multi-source emission, we find a photon propagation rate of \(2\times 10^{5}\) photons per second per CPU thread using LLVM and \(1.2\times 10^{6}\) photons per second per GPU using CUDA. This represents a speed-up of 70 on CPU and 400 on GPU over Geant4 and is competitive with other similar applications. The potential for further applications is discussed.
Keywords:Particle Physics simulation, Ray Tracing, Cherenkov Radiation, Optical Photon, GPU computing, Multi-architecture, Rendering
## 1 Introduction
Propagation of photons produced in High Energy Physics (HEP) simulations is computationally expensive. The propagation of photons is embarrassingly parallel, as each photon can be propagated independently from all others. For this reason, the process is well suited to technologies such as Graphical Processing Units (GPUs), High Performance Computers (HPCs), Cloud deployment or any other emerging technology well suited to embarrassingly parallel tasks.
To this end, we focus on the exploration of rendering technologies often used in animated movies. The processes employed rely on the same underlying principle--propagation of photons through a scene, which encompasses objects of interest and any other objects within the scene to be rendered, then rendering of optical photons using a camera or detection plane. This paper explores the use of one such renderer, Mitsuba3[1], in the context of particle physics optical photon simulation. We present a prototype workflow for the incorporation of Mitsuba3 within the Geant4[2; 3] framework, the standard for the interaction of particles with matter in HEP. The workflow is demonstrated in the context of the simulation of a Ring Imaging Cherenkov (RICH) Detector, specifically modelled after those used by the LHCb experiment[4]. These detectors are tasked with the generation and detection of Cherenkov photons, which are emitted by charged particles that traverse a medium at a speed greater than the phase velocity of light in the medium; the detected photons are used for particle identification, making such detectors key ingredients of many experimental apparatuses.
The paper is organised as follows: Section 2 presents a summary of the features of Mitsuba3, including the key features necessary to exploit multiple architectures. Section 3 discusses the key concepts behind the propagation of Cherenkov radiation within particle physics detectors, including the generation of photons in Geant4 and the modelling of quantum efficiencies of the detectors. Section 4 presents the prototype workflow on which this work is based, with Section 4.1 dedicated to the translation of geometries from Geant4 to the Mitsuba3 renderer, Section 4.2 defining the implementation of a custom photon emitter in Mitsuba3. Section 5 shows the physics validation of Mitsuba3 based photon propagation with that of Geant4, including a comparison of timing. Section 6 presents an in-depth discussion of the results, and finally conclusions and future work are presented in Section 7.
## 2 Summary of Key Features of Mitsuba3
### Mitsuba3 Variants
Mitsuba3[1] is a physics-based renderer, relying on just-in-time (JIT) compilation through the use of the DrJit[5]
engine to generate optimised kernels. Mitsuba3 natively supports rendering on multiple platforms through the use of so-called variants, which configure the platform that the JIT compilation of Mitsuba3 targets. Specifically, Mitsuba3 supports the use of NVIDIA's Compute Unified Device Architecture (CUDA)[6] and NVIDIA's OptiX [7] engine for processing on NVIDIA GPUs. Parallel processed ray-tracing on Central Processing Units (CPUs) is supported using LLVM [8] and Intel Embree[9]; finally, serial operation is enabled via the scalar variant, which requires no JIT compilation. As the code supports many different architectures, the design of custom functions following the Mitsuba3 conventions enables simultaneous development for multiple platforms.
In addition to the computational platform, variants are used to select the method by which differing wavelengths are treated during rendering. Mitsuba3 can represent wavelengths in several modes; those of interest here are RGB and mono. In ray-based rendering tasks, it is often infeasible to represent the visible spectrum of light as a continuum of separately coloured rays. Typically, coloured images are represented by three colour channels, Red, Green and Blue (RGB). In this representation, each channel's value corresponds to the intensity of its respective colour component, e.g. how much red, green or blue is present in the pixel, allowing control over the colour composition of each pixel. Therefore, by binning the visible spectrum into low, mid and high frequency ranges, coloured images can be generated using three rays per photon. Mitsuba3 facilitates this wavelength treatment through RGB variants such as llvm_rgb and cuda_rgb. The mono variants are a simplification of the RGB treatment, otherwise known as the intensity treatment, where the wavelength of photons is ignored and the pixel values are represented by a single channel, creating a grey-scale image. These are selected using the mono variants such as llvm_mono and cuda_mono.
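In the Python interface, the variant is selected before loading or rendering a scene; a minimal sketch is shown below. The set of available variants depends on how Mitsuba3 was built, so the mono variants are assumed here to be enabled in the local build.

```python
import mitsuba as mi

# Variant availability depends on the build configuration (mitsuba.conf).
print(mi.variants())            # e.g. ['scalar_rgb', 'llvm_mono', 'cuda_mono', ...]
mi.set_variant("llvm_mono")     # vectorised CPU back-end, single intensity channel
# mi.set_variant("cuda_mono")   # same pipeline on an NVIDIA GPU via OptiX
```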
## 3 Simulation of Cherenkov Photons--propagation and detection
It is well known that when a charged particle passes through a medium at a speed faster than the phase velocity of light within the medium, Cherenkov radiation is produced. The radiation is produced at an angle \(\theta_{c}\), given by \(\cos\theta_{c}=1/(n\beta)\), relative to the charged particle's path, where \(\beta=v/c\) is the velocity of the particle relative to the speed of light in vacuum and \(n\) is the refractive index of the medium. The emission is independent of the azimuthal angle around the charged particle's velocity. When combined with momentum measurements, the reconstruction of the angle of Cherenkov radiation originating from a charged track allows for particle identification. The Geant4 toolkit is able to produce Cherenkov radiation; for this reason, we do not explore the generation of Cherenkov radiation by Mitsuba3 itself, but rather treat the emitted Cherenkov photons as input to the ray tracing provided by Mitsuba3.
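As a numerical illustration of the relation \(\cos\theta_{c}=1/(n\beta)\), the following sketch uses values chosen purely for illustration (not taken from this work):

```python
import math

# Illustrative numbers: a charged pion of momentum 10 GeV/c in a gas radiator
# with refractive index n = 1.0014.
m_pi, p, n = 0.13957, 10.0, 1.0014   # GeV/c^2, GeV/c, dimensionless

beta = p / math.sqrt(p * p + m_pi * m_pi)
cos_theta_c = 1.0 / (n * beta)
if cos_theta_c <= 1.0:
    theta_c = math.acos(cos_theta_c)
    print(f"Cherenkov angle: {1e3 * theta_c:.1f} mrad")
else:
    print("Below Cherenkov threshold: no photons are emitted")
```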
Detection of Cherenkov photons is normally performed by photomultipliers (PMTs) or similar technologies, and may involve the reflection of the photons from a set of mirrors or surfaces. As these technologies have a response that is dependent on the photon wavelength, modelling the wavelength of the photons is extremely important.
## 4 Prototype workflow
A prototype workflow was designed to aid in the transition to a fully-fledged Geant4 workflow and to enable simultaneous developments at all levels. This is illustrated in Figure 1. Each part of the workflow was designed with a memory-resident intermediate implementation in mind, allowing for intermediate file writing to be removed when each step is finished. We enumerate the individual elements of the workflow in the subsequent sections.
### Translation of Geometry between Geant4 and Mitsuba3
It is necessary to transform the geometry of a RICH detector from the Geant4 GDML format into a format suitable for rendering in Mitsuba3. First, the geometry is visualised within FreeCAD [10]. The geometry was simplified by removing the detector casing and related components, including windows; the remaining mirrors and detector plane were grouped into OBJ files and incorporated into an XML format file which is readable by Mitsuba3. Each stage of the process introduces its own set of considerations, and the techniques and strategies adopted to address these are discussed in the following.
FreeCAD is an open-source 3D computer-aided design (CAD) platform and is frequently used for creating, manipulating, and analysing 3D models. However, FreeCAD does not natively support the GDML format, so the open-source FreeCAD plugin CAD_GDML[11] was used. This plugin serves as an effective intermediary, facilitating the interaction between the GDML input file and FreeCAD's visualisation interface.
With this, a simplified model of the RICH geometry was created, as shown in Figure 2.
Figure 1: Prototype workflow of the incorporation of Mitsuba3 within the Geant4 framework. Each step was designed to be replaceable by a memory-resident implementation.

The models corresponding to the aforementioned components are exported as a pair of distinct OBJ files: the first incorporates both mirrors as a unified entity, while the second represents the detector. This strategy is adopted for two primary reasons: first, treating the mirrors as a unified entity negates the requirement for subsequent angular modifications; furthermore, the mirrors are fabricated from identical material, facilitating the usage of a shared Bidirectional Scattering Distribution Function (BSDF). The BSDF, one of the Mitsuba3 plugins, characterises the surface materials of the objects and simulates the interaction of light with different surfaces. Conversely, the detector elements, composed of a different material, are best handled separately from the mirrors.
For the transformation of objects from FreeCAD to OBJ files, the embedded mesh generation method Mefisto is used to give a high-resolution model without compromising on approximations. The operation of this method is governed by a single parameter, the maximum edge length. This parameter limits the edge lengths of individual triangles or polygons generated during the meshing phase. In the present instance, a maximum edge length of 5 mm is selected.
Upon integrating the OBJ files into the XML file for rendering in Mitsuba3, it is important that the coordinate systems, interrelated positions and scale of the models remain comparable to those in the GDML file. Consequently, additional model transformations within the XML file are not necessary.
We note that other options for simplification of the geometry are possible--for instance, exporting only a single sub-detector to a Geant4 parallel world, followed by export of this world to an individual GDML file, removes many steps. Finally, the use of other tools, such as pyg4ometry[12], allows the conversion to OBJ formats in a single script. Implementation of these options is beyond the scope of this paper.
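For orientation, a minimal sketch of how such a scene could be assembled from the exported OBJ files using Mitsuba3's Python dictionary interface is shown below; the file names, material models and camera placement are illustrative assumptions rather than the exact configuration used in this work, and the custom photon_emitter described in the next subsection would be added as a further entry of the dictionary.

```python
import mitsuba as mi
mi.set_variant("llvm_mono")  # assumed to be available in the local build

# Sketch of the simplified RICH scene; file names and materials are illustrative.
scene = mi.load_dict({
    "type": "scene",
    "integrator": {"type": "path"},
    "mirrors": {
        "type": "obj",
        "filename": "rich_mirrors.obj",
        "bsdf": {"type": "conductor"},   # shared material for both mirrors
    },
    "detector": {
        "type": "obj",
        "filename": "rich_detector.obj",
        "bsdf": {"type": "diffuse"},     # detector plane material
    },
    "sensor": {
        "type": "perspective",
        "fov": 116.38,                   # field of view quoted in the paper
        "to_world": mi.ScalarTransform4f.look_at(
            origin=[0, 0, 478.69], target=[0, 0, 0], up=[0, 1, 0]),
        "film": {"type": "hdrfilm", "width": 620, "height": 1320},
    },
})
image = mi.render(scene)
```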
### Photon Emitter
In Mitsuba3, the emitter plugins are used to initiate light rays. The spot light emitter (spot) is the closest available emitter that represents the behaviour of a photon. This plugin produces a conical light with linear falloff, governed by the cutoff_angle parameter that restricts the light emission within a specified angular range. Ideally, for photon propagation, it should be possible to represent a single photon by sufficiently minimising the cutoff_angle of a solitary (spot) emitter.
However, as the spot light emitter emits a cone-shaped beam, the size of the spot increases and its brightness decreases as the light path length increases, regardless of the value of the parameter cutoff_angle, which is insufficient for the representation of single photons. Furthermore, as each spot emitter can only represent a single photon, many are needed to represent the emission of Cherenkov radiation over a charged particle's trajectory within the radiator. The DrJit compiler creates a separate C++ object for each emitter. As such, large simulations containing millions of photons can easily surpass a GPU's global memory capacity, and this approach can result in slow compilation speeds and rendering failures. This representation of photons and the significant memory usage associated with it meant that using a single (spot) emitter to represent one photon is not a viable option.
To rectify these issues, the photon_emitter has been developed as a custom emitter by modifying the spot emitter. This new emitter initiates multiple light rays by taking a vector input of initial positions and momenta read from a binary format file or Numpy array. Furthermore, we set the local_dir parameter in the sample_ray function to a constant, thus fixing the emitter's orientation. The falloff_curve parameter in the photon_emitter is set to unity to ensure constant intensity. Given that only one light ray is required per emission, the cutoff_angle parameter becomes irrelevant and has been removed. As the photon_emitter can initiate every photon in a simulation with a single instantiation, the memory footprint of the kernel generated by DrJit is reduced substantially, from gigabytes to kilobytes. Moreover, for large simulations containing millions of photons, the compilation time was reduced from hours to seconds.
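As a minimal sketch of the expected input, the photon sample produced by Geant4 might be packed into a Numpy array or binary file as follows; the origin-plus-direction record layout and the file name are assumptions for illustration, not the exact format of the plugin.

```python
import numpy as np

# Hypothetical packing of the Geant4 Cherenkov photon sample for the
# photon_emitter; layout and file name are illustrative assumptions.
origins = np.array([[0.0, 0.0, 100.0],
                    [0.0, 0.0, 150.0]])            # production points (mm)
directions = np.array([[ 0.1, 0.0, 0.995],
                       [-0.1, 0.0, 0.995]])         # momentum directions
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

photons = np.hstack([origins, directions]).astype(np.float32)

# Keep the array in memory, or write it to a binary file that the custom
# emitter reads when the scene is loaded.
photons.tofile("cherenkov_photons.bin")
```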
### Modelling of Quantum Efficiency
The efficiency of detection of Cherenkov photons relies on the intrinsic efficiency of the photon detectors in question and must be accounted for. In the case of the simplified RICH detector, one must consider the efficiency of the Multi-Anode Photo Multiplier Tube (MaPMT) detectors [13] and the reflectivity of the mirrors, both of which are wavelength dependent. Furthermore, as the computational expense depends on the number of photons, it is advantageous to apply any quantum efficiency and reflectivity effects prior to propagation and discard the rejected photons. This is justified, as any discarded photons would not be detected. Figure 3(a) shows the quantum efficiency of the detector as enumerated in Ref. [14], which peaks at 35% around 450 nm due to the Borosilicate window's refractive index variations in the 200 to 1000 nm region. Implementation of the efficiency was performed by transforming the measured efficiencies into histograms of 33 nm
Figure 2: Simplified RICH geometry visualised in FreeCAD. Detector components are visible, including a spherical mirror positioned at the bottom right, a flat mirror on the left, and a detector at the top right corner in the scene.
bin width. The reflectivity of the mirrors, given in Figure 3(b), varies between 94% and 89% between 200 and 600 nm. Subsequent estimation of the reflectivity was performed in a similar fashion to the efficiency, utilising bins of 25 nm. For each photon, with its specific wavelength, an accept-reject algorithm was used to model the three efficiencies before propagation. As described in section 2, this approach is only valid when using the mono variants, which are inherently faster than the variants that render multiple wavelengths.
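A minimal Numpy sketch of such a wavelength-binned accept-reject filter is shown below; the bin edges and efficiency values are purely illustrative stand-ins for the measured curves of Figure 3.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def accept_reject(wavelengths_nm, bin_edges, efficiency):
    """Keep each photon with a probability given by a binned efficiency curve.

    wavelengths_nm : photon wavelengths in nm
    bin_edges      : histogram bin edges (e.g. 33 nm wide bins for the MaPMT QE)
    efficiency     : one efficiency value per bin, in [0, 1]
    """
    idx = np.clip(np.digitize(wavelengths_nm, bin_edges) - 1, 0, len(efficiency) - 1)
    return rng.random(len(wavelengths_nm)) < efficiency[idx]

# Purely illustrative efficiency histogram peaking near 450 nm (not the measured curve).
qe_edges = np.arange(200.0, 1000.0 + 33.0, 33.0)
centres = 0.5 * (qe_edges[:-1] + qe_edges[1:])
qe_values = 0.35 * np.exp(-((centres - 450.0) / 150.0) ** 2)

wavelengths = rng.uniform(200.0, 1000.0, size=100000)
keep = accept_reject(wavelengths, qe_edges, qe_values)
# The mirror reflectivities (25 nm bins) are applied in the same way;
# only photons surviving all filters are passed on for propagation.
```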
### Translation to global coordinate systems
In Mitsuba3, detection is achieved using the camera and film plugins to sample the propagated rays. The detector plane in the simplified RICH geometry measured \(620\times 1320\) mm, which defined the size of the film. The camera was placed at a distance of 478.69 mm, facing the centre of the detector. The field of view was set to \(116.38^{\circ}\) to match the size of the detector; these values were determined empirically by matching the radii of the Cherenkov rings. The bitmap output was translated into a Numpy detection matrix. The mono variants created a \(620\times 1320\) two-dimensional array of intensities at each pixel, ranging between 0 and 255. A temporary method was used to set an empirical intensity threshold of 45 in local units to keep the same number of hits as rays propagated, only accepting values above this threshold. This solution is not viable in the long term, but it enabled checking the implementation of the geometry. The position of each cell passing this threshold was recorded as a photon hit; these were 2D coordinates relative to the bottom left corner of the detector plane.
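A sketch of this temporary thresholding step, assuming the rendered film is available as a Numpy intensity array and ignoring the renderer's row/column origin convention, might read:

```python
import numpy as np

def bitmap_to_hits(bitmap, threshold=45, pixel_size_mm=1.0):
    """Convert a mono-variant film bitmap into 2D hit coordinates.

    bitmap is a 2D array of pixel intensities in [0, 255]; the returned
    positions are in mm relative to one corner of the detector plane.
    NOTE: the row direction / origin convention of the renderer is an
    assumption here and may need flipping in practice.
    """
    rows, cols = np.nonzero(bitmap > threshold)
    return np.column_stack([cols, rows]).astype(float) * pixel_size_mm

# Dummy example with the detector-plane dimensions used above (620 x 1320 pixels).
bitmap = np.zeros((620, 1320), dtype=np.uint8)
bitmap[310, 660] = 200            # one bright pixel
hits_2d = bitmap_to_hits(bitmap)  # -> [[660., 310.]]
```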
Detected photons in Geant4 are referred to as hits and are represented as \(x,y,z\) coordinates relative to an origin defined in the geometry, in this case the centre of the RICH detector. In Figure 4 we see the hits as recorded in Geant4 relative to the detector volume. There is an offset due to the simplification of the geometry: in Geant4, detection occurs at the back side of the MaPMTs, whereas for the simplified geometry used in Mitsuba3, detection occurs at the front side of the detector volume, shown in red in Figure 4. The hits in Geant4 were projected onto the surface of the detector volume, creating two-dimensional coordinates and allowing for direct comparison with Mitsuba3, as illustrated in Figure 4. This issue is easily remedied by taking the final positions of the rays as global coordinates, but this requires a change to the underlying Mitsuba3 output.
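A small Numpy sketch of this projection, assuming point A and two orthonormal unit vectors spanning the detector surface are known from the geometry (the numerical values below are placeholders), is:

```python
import numpy as np

def project_hits(hits_xyz, corner_a, u_hat, v_hat):
    """Project 3D Geant4 hits onto the detector plane.

    corner_a      : point A, the bottom left corner of the detector plane
    u_hat, v_hat  : orthonormal unit vectors spanning the plane (cf. Figure 4)
    Returns 2D coordinates comparable to the Mitsuba3 detector hits.
    """
    rel = np.asarray(hits_xyz, dtype=float) - np.asarray(corner_a, dtype=float)
    return np.column_stack([rel @ np.asarray(u_hat), rel @ np.asarray(v_hat)])

# Placeholder geometry values; the real corner and axes come from the GDML geometry.
corner_a = np.array([0.0, 0.0, 0.0])
u_hat = np.array([1.0, 0.0, 0.0])
v_hat = np.array([0.0, 1.0, 0.0])
hits_2d = project_hits([[100.0, 250.0, 5.0]], corner_a, u_hat, v_hat)  # -> [[100., 250.]]
```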
## 5 Results
### Comparison of Cherenkov Rings
Simulations of Cherenkov photons using the simplified RICH geometry were first performed in Geant4 using a 100 GeV \(\mu^{+}\) particle. The origins and momenta of the emitted Cherenkov photons were output in binary format and used as input to Mitsuba3, following the described pipeline.
Figure 5(a) shows the path of the \(\mu^{+}\) and of the emitted Cherenkov photons in Geant4. At the bottom are the origins of the photons. At the bottom right, a smaller green cone can be seen emerging from the particle's path behind the spherical mirror, due to emission past the first mirror. Figure 5(b) shows the corresponding scene in Mitsuba3. For the sake of visualisation, spot emitters were used to show the end points of the trajectories of the hits. At the top, a ring of light can be seen. A green wall was added on the right of the scene to show a second ring of light in the bottom right corner, behind the spherical mirror, due to photons emitted by the \(\mu^{+}\) after passing through it.
Both scenes demonstrate identical light behaviours. The ring of light forming behind the spherical mirror supported the correct calibration of the objects in Mitsuba3.
Hits were simulated in both software packages, including the quantum efficiency and reflectivity effects, and are shown in Figure 6. The Cherenkov rings returned by Geant4 and
Figure 4: Plots of the output from Geant4 simulation of the simplified RICH geometry. In red is the detector volume. On the left, a slice in the y-z plane shows the hits inside of the volume. On the right, a 3D view showing the unit vectors defined on the surface of the detector and point A, the bottom left corner.
Figure 3: Efficiencies of the detector and mirrors against the wavelength of incident light, from Ref. [14]. (a) The quantum efficiency of Hybrid Photon Detectors (in black), of MaPMT sensors (in red) and the interpolation created for the simulation in Mitsuba3. (b) The reflectivity of the four mirror segments composing the spherical mirror against wavelength. The red and green bins show the interpolation of mirrors 1 & 2 implemented in the simulation.
Mitsuba3 are comparable, exhibiting similar radii; however, the absolute positions differ. These results are discussed further in section 6.
### Timing
Both the wavelength treatment and the parallelisation implementation are selected at run-time using variants. This study tested and compared four variants, llvm_mono, llvm_rgb, cuda_mono, and cuda_rgb, which are described in Section 2.1. These variants were chosen to test the relative speed-up against Geant4 for both CPU and GPU propagation, as well as the wavelength treatment, which will become important in the future.
The CPU used in testing was an Intel(R) Xeon(R) Silver 4210R 2.40GHz with 20 cores; each timing simulation was executed with 1, 2, 4, 8, 16, and 20 threads and repeated three times to eliminate outliers. Testing on GPU was performed on an NVIDIA Tesla T4 with 2560 threads, a base clock of 585 MHz (1590 MHz boost clock) and 40 Ray Tracing (RT) cores.
Mitsuba3 has built-in timing reports, which are generated at run-time for a variety of rendering stages. Timings were confirmed using the NVIDIA NSight tools [15]. Through investigation of the timing reports, it was determined that there were three areas of computational expense: parsing of the XML file, initialisation of the kernel (which includes the DrJit compilation), and rendering of the scene.
## 6 Discussion
In Figure 6, the overlap of the rings showed that the centres of both fell in the same position. This confirms the comparable behaviour of Geant4 and Mitsuba3 and the accurate translation of the geometry. The initial coordinates of the photons were directly exported from Geant4 to Mitsuba3; therefore, the similarity of the ring sizes verifies that the coordinate system in Mitsuba3 corresponds to that of Geant4. Finally, again corroborating the correct geometry, the comparable radii are the key result of the simulation. The Cherenkov angle is calculated from the radius of the ring, and thus the speed of the charged particle can be identified. Both rings yield the same charged-particle speed, demonstrating Mitsuba3's potential for future implementation.
Two issues arose from Figure 6. The exact positions of the hits did not correspond, and the radius of the ring in Mitsuba3 seemed slightly larger than that of Geant4. The clash in the position of the hits came from the random filtering of the photons described in section 4.3 and the method used to identify hits on the detector described in section 4.1. The threshold was set to 45 to identify as many rays as emitted after filtering as possible. Some pixels in the detector were brighter than the implemented threshold due to signal leakage to neighbouring pixels. Some other photons fell between pixels and gave a maximum pixel intensity value of less than the threshold. The radii mismatch is due to the discussed scale mismatch. Photons in Geant4 and Mitsuba3 did not effectively hit the same surface (see Figure 4), and the projection of the hits onto the surface introduced errors. The recording of final positions in Mitsuba3 was made through a camera with a field of view determined with approximations; a variation of a tenth of a degree in the field of view could offset the position of the hits. Finally, the pixel hits were detected, and not their exact positions, providing an accuracy of 1 mm at best, the size of a pixel on the film. These issues can be
Figure 5: (a) The scene in Geant4 showing the path of the \(\mu^{+}\) (in blue) and photons (in green), and their interactions with the subdetector elements. Photons can be spotted behind the spherical mirror due to emission after passing the mirror. (b) The corresponding scene in Mitsuba3 shows the trajectory of the charged particle (in blue). A green wall was added on the left to let photons emitted after the spherical mirror diffuse. Rings of light can be observed on the detector and on the wall.
Figure 6: Position in pixels of detector hits in Geant4 (red) and Mitsuba3 (blue).
mitigated by the output of individual global hit positions as opposed to the direct detector response.
The Geant4 simulation used for comparison with Mitsuba3 was run on a single thread on the CPU and hence was not parallelised. The time taken to propagate photons was measured for simulations that produced 600 to \(6\times 10^{5}\) photons, in steps of a factor of 10. The scaling profiles of Mitsuba3 on different architectures and of Geant4 are presented in Figure 7. The first observation is that, contrary to the Geant4 simulation, the Mitsuba3 implementation does not scale with the number of photons. This indicates that there is a step in the rendering process that is the same across all of the different variants and which dominates the total time. This is known to be the production of the images, but it can be avoided by writing out the global coordinates of the photons directly, bypassing this step. Furthermore, this behaviour is independent of the architecture, since both the CPU and the GPU exhibit it.
The second observation is that Mitsuba3 outperforms Geant4 when simulating \(\geq~{}2000\) photons on the GPU or on the CPU at 20 cores, and when propagating \(\geq~{}10^{4}\) photons on the CPU with a single thread. Beyond these thresholds, Mitsuba3 becomes increasingly faster than Geant4. For example, when considering the maximum number of photons simulated by Geant4, i.e. \(6\times 10^{5}\), Mitsuba3 is approximately 70 times faster on the CPU at a single thread and 400 times faster on both the GPU and the CPU at 20 threads. Another way of quantifying the performance of the two pipelines is by calculating the photon propagation rate, i.e. the number of photons propagated per second. The Mitsuba3 framework is capable of rendering about \(1.85\times 10^{8}\) photons per second on both the GPU and the CPU at 20 threads, and \(3.3\times 10^{7}\) photons per second on the single-threaded CPU. By contrast, the Geant4 simulation is capable of propagating up to 3200 photons per second. Therefore, for a workload greater than the threshold values, offloading the propagation of Cherenkov photons to the Mitsuba3 pipeline will be significantly more efficient than simulating them using Geant4.
Finally, the third notable feature of this plot is the similar performance between the GPU and the CPU at 20 threads. This is odd behaviour at first sight, given that the NVIDIA GPU has a greater number of cores than the Intel CPU and that, in general, GPUs are better suited for ray tracing. Breaking down the simulation time into the code generation time and rendering time separately, as shown in Figure 8, reveals that when running Mitsuba3 on the GPU the code generation time significantly outweighs the rendering time. On the other hand, when running Mitsuba3 on the CPU at 20 cores, both times are comparable. Isolating the rendering time, the GPU is 22 times faster than the CPU. Therefore, optimising the code generation when using the GPU can significantly improve Mitsuba3's performance, thereby further reducing the threshold number of photons required to outperform Geant4 on that architecture.
## 7 Conclusion and future work
A successful prototype workflow offloading Cherenkov photon propagation from Geant4 to Mitsuba3 has been implemented. A light source analogous to a photon, consisting of a single ray with infinitesimal width, in a single direction, and without decaying intensity, was successfully created in Mitsuba3. The intrinsic efficiencies of each object contained in the simplified LHCb RICH detector were reproduced, and the detection of photons propagated with Mitsuba3 was successful. The comparison of the outputs confirmed the correct implementation of a simplified RICH geometry and the accurate propagation of the photons within that geometry. The timing study showed that, up to \(10^{8}\) photons, the simulation time in Mitsuba3 does not scale with the number of photons. Above \(10^{4}\)
Figure 8: Breakdown of the photon propagation time into the code generation and rendering time as a function of the number of photons for both the Intel Xeon 4210R CPU at 20 threads and the NVIDIA Tesla T4.
Figure 7: Scaling profile of the Geant4 simulation compared to that of Mitsuba3, which was executed on three different computational resources: an Intel Xeon 4210R CPU with 1 and 20 threads, and an NVIDIA Tesla T4 GPU. In contrast, Geant4 was run on a single thread on the Intel Xeon 4210R CPU.
photons, Mitsuba3 is significantly faster than Geant4. The simulation in Mitsuba3 is not yet ready to be integrated into Geant4 but, with further development, this is expected to be feasible and to yield significant performance gains.
Mitsuba3 proved its potential for physics experiments involving light propagation. However, the graphical results are not yet accurate enough to substitute for Geant4. The transition to memory-resident objects is essential; such a change will allow the output method to return the global coordinates of the hits. In a similar fashion to what was done for the photon_emitter in the source code, a new BSDF class should be constructed for the detector. BSDFs are called every time a ray intersects a new interface. The BSDF should include a function that writes the position of the ray to a list when it interacts with the detector. This would avoid deformation of the output, either due to the leakage issue of hits being created in multiple pixels or due to inaccuracies in the camera's field of view. The scaling mismatch described in Section 5.1 would be solved, hits from Geant4 and Mitsuba3 could be directly compared, and statistical analysis could be conducted to better judge the potential of Mitsuba3 for the simplified RICH detector. Later, hits from Mitsuba3 could be directly fed back into the global Geant4 simulation, allowing further processing.
The photon filtering for quantum efficiency and reflectivity should be improved in one of two ways. The first method is to implement the filtering within the overarching C++ simulation to automate its application to each new set of photons produced by Geant4. Another way is to design a wavelength-dependent BSDF with a function to kill or propagate rays as they are traced and interact with the different objects in Mitsuba3. This could involve using colour-dependent variants or attaching a value for the wavelength to each traced ray, possibly in place of the intensity. Both methods could be studied by isolating the effect of each filtering step in Geant4 and Mitsuba3 and comparing the fraction of photons detected.
Finally, the removal of all intermediate steps would yield a complete C++ pipeline: the passing of photons from Geant4 to Mitsuba3, their filtering, their loading into the photon_emitter, their propagation, their detection, and the loading of the hits back into Geant4.
In addition to the simulation of Cherenkov photons in gaseous detectors, which was the focus of the work presented here, the use of Mitsuba3 is also expected to yield similar improvements in other simulations involving optical photons. This includes the simulation of Cherenkov photons in liquids, such as in water Cherenkov detectors, and of other light sources in liquids, including scintillation light in liquid noble gas detectors.
## Acknowledgments
The authors would like to thank S. Easo and Y. Li for discussions related to the simplified RICH Geometry. We thank Nicolas Roussel from Mitsuba3 for helpful discussions. ACSD and MG acknowledge funding from UKRI-STFC under grant reference ST/W000601/1. KE acknowledges funding from UKRI-STFC under grant reference ST/V002546/1 and from the Royal Society (United Kingdom) under grant agreements DH160214 and RGF/EA/201014.
|
2310.20235 | Powers of generalized binomial edge ideals of path graphs | In this article, we study the powers of the generalized binomial edge ideal
$\mathcal{J}_{K_m,P_n}$ of a path graph $P_n$. We explicitly compute their
regularities and determine the limit of their depths. We also show that these
ordinary powers coincide with their symbolic powers. Additionally, we study the
Rees algebra and the special fiber ring of $\mathcal{J}_{K_m,P_n}$ via Sagbi
basis theory. In particular, we obtain exact formulas for the regularity of
these blowup algebras. | Yi-Huang Shen, Guangjun Zhu | 2023-10-31T07:36:31Z | http://arxiv.org/abs/2310.20235v1 | # Powers of generalized binomial edge ideals of path graphs
###### Abstract.
In this article, we study the powers of the generalized binomial edge ideal \(\mathcal{J}_{K_{m},P_{n}}\) of a path graph \(P_{n}\). We explicitly compute their regularities and determine the limit of their depths. We also show that these ordinary powers coincide with their symbolic powers. Additionally, we study the Rees algebra and the special fiber ring of \(\mathcal{J}_{K_{m},P_{n}}\) via Sagbi basis theory. In particular, we obtain exact formulas for the regularity of these blowup algebras.
2020 _Mathematics Subject Classification_. Primary 13C15, 13P10; Secondary 05E40, 13F20. Keywords: Regularity, depth, generalized binomial edge ideal, path graph, Rees algebra, special fiber ring.
In this article, we are interested in the Castelnuovo-Mumford regularity (regularity for short) and the depth of powers of generalized binomial edge ideals. It is well-known that if \(I\) is a homogeneous ideal of a polynomial ring \(R\), then \(\operatorname{reg}(R/I^{t})\) is asymptotically linear in \(t\). At the same time, \(\operatorname{depth}(R/I^{t})\) is constant for sufficiently large \(t\) (cf. [2, 9, 27]). It is usually difficult to determine when these phenomena begin. For this problem, the simplest case is when \(I\) is a quadratic squarefree monomial ideal, i.e., when \(I\) can be recognized as the edge ideal of a suitable graph. In this case, many results have been achieved for simple classes such as forest graphs, cycle graphs, bipartite graphs, and so on. In addition, a few results are known for the binomial edge ideal of a graph. For example, in [23], Jayanthan et al. gave an upper bound on the regularity of powers of almost complete intersection binomial edge ideals using the quadratic sequence approach. Meanwhile, they gave the exact formulas for the regularity of powers of binomial edge ideals of several simple graphs such as cycle graphs, star graphs, and balloon graphs. Recently, in [34], we gave explicit formulas for the regularity of powers of binomial edge ideals which are almost complete intersections. At the same time, in [36], Wang and Tang studied the depth of powers of binomial edge ideals of complete bipartite graphs. Ene et al. in [14] studied the regularity and the depth of powers of the binomial edge ideals of connected closed graphs.
However, nearly nothing is known about the algebraic properties of powers of generalized binomial edge ideals. In this paper we will start such a study by considering the generalized binomial edge ideal \(\mathcal{J}_{K_{m},P_{n}}\) of the path graph \(P_{n}\).
The article is organized as follows. In Section 2, we briefly review essential definitions and terminology that we will need later. In Section 3, we show that taking-initial-ideal commutes with taking-powers when it comes to \(\mathcal{J}_{K_{m},P_{n}}\). Using this fact, we study the regularity and the depth of the powers of \(\mathcal{J}_{K_{m},P_{n}}\) via the Sagbi basis theory. We also show that the symbolic powers and ordinary powers of \(\mathcal{J}_{K_{m},P_{n}}\) coincide. In Section 4, we study the regularities of the Rees algebra and the special fiber ring of \(\mathcal{J}_{K_{m},P_{n}}\), by considering the corresponding problems of their initial algebras. The final computation builds on combinatorial optimization of different flavors. In the last section, we give applications of the previous results. Since the philosophy of combinatorial pure subrings applies here, we can consider the binomial edge ideal of a pair of graphs. In particular, we will give natural bounds on the regularities of the powers of \(\mathcal{J}_{K_{m},G}\) as well as the two blowup algebras of this generalized binomial edge ideal, if \(G\) contains an induced path \(P_{n}\).
## 2. Preliminaries
In this brief section, we provide a concise overview of some combinatorial notions that will be employed throughout this paper. For a more comprehensive treatment from an algebraic perspective, we refer the readers to [18, 20, 35].
Let \(G\) be a simple graph with the vertex set \(V(G)\) and the edge set \(E(G)\). For a vertex \(v\) of \(G\), the set of all neighbors of \(v\) is denoted by \(N_{G}(v)=\{u\in V(G):\{u,v\}\in E(G)\}\). A vertex \(v\) is called a _leaf_ of \(G\) if \(N_{G}(v)\) has cardinality one, and \(v\) is _isolated_ if \(N_{G}(v)=\emptyset\).
For any subset \(A\) of \(V(G)\), let \(G[A]\) denote the _induced subgraph_ of \(G\) on the set \(A\), i.e., for \(u,v\in A\), \(\{u,v\}\in E(G[A])\) if and only if \(\{u,v\}\in E(G)\). At the same time, we denote the induced subgraph of \(G\) on the set \(V(G)\setminus A\) by \(G\setminus A\).
A subset \(M\subset E(G)\) is a _matching_ of \(G\) if \(e\cap e^{\prime}=\emptyset\) for all distinct edges \(e\) and \(e^{\prime}\) in \(M\). The _matching number_ of \(G\), denoted by \(\operatorname{match}(G)\), is the maximum size of a matching in \(G\). If \(G\) is a bipartite graph having vertex partitions \(V_{1}\) and \(V_{2}\), a _complete matching from \(V_{2}\) to \(V_{1}\)_ is a matching in which there is one edge incident with every vertex in \(V_{2}\). In other words, every vertex in \(V_{2}\) is matched against some vertex in \(V_{1}\). Whence, \(\operatorname{match}(G)=|V_{2}|\).
A _walk_\(W\) of length \(n\) in a graph \(G\) is a sequence of vertices \((w_{1},\ldots,w_{n},w_{n+1})\), such that \(\{w_{i},w_{i+1}\}\in E(G)\) for \(1\leq i\leq n\). The walk \(W\) is _closed_ if \(w_{1}=w_{n+1}\). Furthermore, the walk \(W\) is called a _cycle_ if it is closed and the points \(w_{1},\ldots,w_{n}\) are distinct. At the same time, a
_path_ is a walk where all points are distinct. For simplicity, a path of length \(n-1\) is denoted by \(P_{n}\), and a cycle of length \(n\) is denoted by \(C_{n}\).
## 3. Powers of generalized binomial edge ideals of paths
In this section, we will study the regularity and the depth of the powers of the generalized binomial edge ideal \(\mathcal{J}_{K_{m},P_{n}}\), where \(P_{n}\) is a path graph with \(n\) vertices. Using Sagbi basis theory, we can turn to the corresponding study of their initial ideals. However, to make this approach work, we first need to show in Theorem 3.5 that taking-initial-ideal commutes with taking-powers when it comes to \(\mathcal{J}_{K_{m},P_{n}}\). Since the proof of this is rather technical, we need to make some preparations. Throughout this paper, we will work with the following setting:
_Setting 3.1_.: Let \(m,n\geq 2\) be two integers. The polynomial ring \(S\coloneqq\mathbb{K}[x_{i,j}:i\in[m],j\in[n]]\) over a field \(\mathbb{K}\) is endowed with the term order \(\tau\), which is the lexicographic order on \(S\) induced by the natural order
\[x_{1,1}>x_{1,2}>\cdots>x_{1,n}>x_{2,1}>x_{2,2}>\cdots>x_{2,n}>\cdots>x_{m,1}>x _{m,2}>\cdots>x_{m,n}.\]
Let \(P_{n}\) be the path on the set \([n]\) whose edge set is \(E(P_{n})=\{\,\{\,i,i+1\}:i\in[n-1]\,\}\). Furthermore, let \(H\) be the graph on the set \(\{\,x_{i,j}:i\in[m],j\in[n]\,\}\) with
\[E(H)=\big{\{}\,\{x_{i,j},x_{i^{\prime},j+1}\}:1\leq i<i^{\prime}\leq m,1\leq j \leq n-1\,\big{\}}\,.\]
For \(k\in[n-1]\), let \(H_{k}\) be the induced subgraph of \(H\) on the set \(\{\,x_{i,j}:i\in[m],j\in\{k,k+1\}\,\}\).
Regarding the graph \(H\) in Setting 3.1, we have the following two observations:
**Remark 3.2**.:
1. If \(G\) is a simple graph on the set \([n]\), the Gröbner basis of the generalized binomial edge ideal \(\mathcal{J}_{K_{m},G}\) with respect to the term order \(\tau\) was computed in [32, Theorem 2]. In particular, the initial ideal \(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})\) equals the ideal \[(x_{i,j}x_{i^{\prime},j+1}:1\leq i<i^{\prime}\leq m,1\leq j\leq n-1)\] in \(S\). It is clear that this is the edge ideal \(I(H)\) of the graph \(H\).
2. The graph \(H\) is bipartite with respect to the bipartition \(V_{1}\sqcup V_{2}\), where \[V_{1}=\{\,x_{i,j}\in\boldsymbol{X}:i\in[m],j\text{ is odd}\,\}\qquad\text{ and}\qquad V_{2}=\{\,x_{i,j}\in\boldsymbol{X}:i\in[m],j\text{ is even}\,\}\,.\] When \(m\geq 3\), \(H\) has \(3\) connected components, where \(x_{m,1}\) and \(x_{1,n}\) are isolated vertices. If instead \(m=2\), then \(H\) has \(2+(n-1)=n+1\) connected components, where \(x_{m,1}\) and \(x_{1,n}\) are still isolated vertices; see Figure 1.
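As a small sanity check of these two observations (a toy example of ours, not taken from [32]), let \(m=2\) and \(n=3\). Then
\[\mathcal{J}_{K_{2},P_{3}}=\big(x_{1,1}x_{2,2}-x_{1,2}x_{2,1},\;x_{1,2}x_{2,3}-x_{1,3}x_{2,2}\big),\qquad\operatorname{in}_{\tau}(\mathcal{J}_{K_{2},P_{3}})=\big(x_{1,1}x_{2,2},\;x_{1,2}x_{2,3}\big)=I(H),\]
where \(H\) consists of the two edges \(\{x_{1,1},x_{2,2}\}\) and \(\{x_{1,2},x_{2,3}\}\) together with the isolated vertices \(x_{2,1}\) and \(x_{1,3}\); in particular, \(H\) has \(n+1=4\) connected components, as stated in item 2.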
The proof of Theorem 3.5 depends on the presentation ideal of the Rees algebra \(\mathcal{R}(I(H))\) of the edge ideal \(I(H)\) and the "lifts" of the Sagbi basis. Suppose that \(\mathcal{G}(I(H))=\{f_{1},\ldots,f_{q}\}\) is the minimal monomial generating set of \(I(H)\). Then, there exists a canonical homomorphism from the polynomial ring \(B\coloneqq S[T_{1},\ldots,T_{q}]\) to the Rees algebra \(\mathcal{R}(I(H))\coloneqq S[f_{1}T,\ldots,f_{q}T]\subset S[T]\), induced by \(T_{i}\mapsto f_{i}T\). Let \(\deg(T_{1})=\cdots=\deg(T_{q})=\deg(T)=1\), and \(\deg(x_{i,j})=0\) for every \(x_{i,j}\in\boldsymbol{X}\). Then, this map is a graded homomorphism of \(S\)-algebras. Its kernel \(J\) will be called the _presentation ideal_ of \(\mathcal{R}(I(H))\) with respect to \(\mathcal{G}(I(H))\). This is a graded ideal and \(\mathcal{R}(I(H))\) has the _presentation_ \(\mathcal{R}(I(H))=B/J\).
Figure 1. Graph \(H\)
Similarly, there is a canonical homomorphism from the polynomial ring \(B^{\prime}\coloneqq\mathbb{K}[T_{1},\dots,T_{q}]\) to the special fiber ring \(\mathcal{F}(I(H))\cong\mathbb{K}[H]\), induced by \(T_{i}\mapsto f_{i}\). The kernel ideal \(J^{\prime}\) of this map is called the _presentation ideal_ of \(\mathcal{F}(I(H))\). It leads to the _presentation_ \(\mathcal{F}(I(H))=B^{\prime}/J^{\prime}\).
In addition, let \(W=(w_{1},w_{2},\dots,w_{2s+1}=w_{1})\) be an even closed walk in \(H\) and suppose that \(e_{j}=\{w_{j},w_{j+1}\}\) for each \(j\). We will write \(T_{W^{+}}-T_{W^{-}}\) for the binomial \(T_{e_{1}}T_{e_{3}}\cdots T_{e_{2s-1}}-T_{e_{2}}T_{e_{4}}\cdots T_{e_{2s}}\) in \(J\). We are mostly interested in the case where \(W\) is a primitive cycle. Recall that a _cycle_\(C\) is a closed walk with distinct vertices. A _chord_ of a cycle \(C\) in a graph \(G\) is an edge of \(G\) that connects two non-adjacent vertices of \(C\). A cycle without chords is called _primitive_. Binomials from primitive cycles are important for describing the presentation ideals.
**Lemma 3.3** ([35, Proposition 10.1.14, Theorem 10.1.15]).: _Let \(I\) be the edge ideal of a bipartite graph \(H\)._
1. _Suppose that_ \(\mathcal{R}(I)=B/J\) _is the presentation of the Rees algebra_ \(\mathcal{R}(I)\)_. Then_ \(J=BJ_{1}+BP\)_, where_ \(J_{1}\) _is the degree_ \(1\) _part of the graded ideal_ \(J\)_, and_ \[P\coloneqq\{\,T_{w^{+}}-T_{w^{-}}:w\text{ is a primitive cycle in }H\,\}\,.\]
2. _Suppose that_ \(\mathcal{F}(I)=B^{\prime}/J^{\prime}\) _is the presentation of the special fiber ring_ \(\mathcal{F}(I)\)_. Then,_ \(J^{\prime}\) _is minimally generated by the set_ \(P\)_._
Recall that a bipartite graph is _chordal bipartite_ if every cycle of length at least six has a chord in it. In other words, the length of every primitive cycle of this bipartite graph is \(4\).
**Lemma 3.4**.: _Let \(H,H_{1},\dots,H_{n-1}\) be as in Setting 3.1. Then, \(H\) is a chordal bipartite graph._
Proof.: Let \(C=(c_{1},c_{2},\dots,c_{2s+1}=c_{1})\) be a primitive cycle in \(H\) with \(s\geq 2\). By abuse of notation, if \(k>2s\), we will identify \(c_{k}\) with \(c_{k-2s}\), and if \(k<1\), we will identify \(c_{k}\) with \(c_{k+2s}\). We have the following two cases. Figure 2 is helpful in understanding the arguments.
1. Suppose that \(C\) is a cycle in \(H_{k}\) for some \(k\). Without loss of generality, we can assume that \(k=1\) and \(c_{p}=x_{i_{p},j_{p}}\) for each \(p\). Suppose also that \(j_{p}=1\) if \(p\) is odd, and \(j_{p}=2\) if \(p\) is even. We can also assume that \(i_{1}=\min\{i_{1},i_{3},\dots,i_{2s-1}\}\). Then, \(i_{1}<i_{2}\) and \(i_{1}<i_{2s}\). By symmetry, we assume that \(i_{2s}<i_{2}\). Note that \(i_{2s-1}<i_{2s}\) at this time. Therefore, if \(s>2\), then \(C\) has a chord \(\{x_{i_{2s-1},1},x_{i_{2},2}\}\), which violates the primitivity of \(C\).
2. Suppose there is no \(k\) such that \(C\) is a cycle in \(H_{k}\). Then, without loss of generality, we can assume that \(c_{1}=x_{i_{1},1}\). For each \(k\) with \(c_{k}=x_{i_{k},1}\), we say that \(k\) is _marginal_. Note that in this case, we have \(c_{k\pm 1}=x_{i_{k\pm 1},2}\) with \(i_{k-1}\neq i_{k+1}\). If \(i_{k-1}<i_{k+1}\), we say that \(k\) is _extendable_ if \(c_{k+2}=x_{i_{k+2},3}\). If instead \(i_{k-1}>i_{k+1}\), we say that \(k\) is _extendable_ if \(c_{k-2}=x_{i_{k-2},3}\). 1. Suppose there is a marginal \(k\) that is extendable. Without loss of generality, we assume that \(i_{k-1}<i_{k+1}\). Note that in this case \(i_{k}<i_{k-1}<i_{k+1}<i_{k+2}\). If \(s>2\), then \(C\) has a chord \(\{x_{i_{k-1},2},x_{i_{k+2},3}\}\), which violates the primitivity of \(C\). 2. Suppose that there is no marginal \(k\) that is extendable. Then \(C\) is a cycle in \(H_{1}\), a contradiction.
In short, \(s=2\) and \(C\) is a cycle of length \(4\).
The following result is indispensable for establishing the regularity result in Theorem 3.11. It also provides a class of nice ideals sought in [7, Section 1].
**Theorem 3.5**.: _Under the assumptions in Setting 3.1, we have \((\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))^{t}=\operatorname{in}_{ \tau}(\mathcal{J}_{K_{m},P_{n}}^{t})\) for all \(t\geq 1\)._
Proof.: We define a term order \(\tau^{\prime}\) on \(S[T]\) as follows: for any monomials \(u\) and \(v\) in \(S\), and non-negative integers \(i\) and \(j\), we set
\[uT^{i}<_{\tau^{\prime}}vT^{j}\qquad\Leftrightarrow\qquad i<j\quad\text{or} \quad i=j\quad\text{and}\quad u<_{\tau}v.\]
It follows from [8, Theorem 2.7] that \((\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))^{t}=\mathrm{in}_{\tau}(\mathcal{J} _{K_{m},P_{n}}^{t})\) for all \(t\geq 1\), if and only if \(\mathcal{R}(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))=\mathrm{in}_{\tau^{ \prime}}(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}}))\). In other words, the set
\[\mathcal{G}\coloneqq\boldsymbol{X}\cup\{[i,j\,|\,k,k+1]T:1\leq i<j\leq m\text{ and }k\in[n-1]\}\]
forms a _Sagbi basis_ of \(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}})\) with respect to \(\tau^{\prime}\). For brevity, write \(\mathcal{G}=\{g_{1},\ldots,g_{p}\}\), and let \(F_{1},\ldots,F_{q}\) be a system of binomial relations of the affine semigroup ring \(\mathbb{K}[\mathrm{in}_{\tau^{\prime}}(g):g\in\mathcal{G}]\). It follows from [8, Proposition 1.1] that we have to find \(\lambda_{\alpha}^{(j)}\in\mathbb{K}\) such that
\[F_{j}(g_{1},\ldots,g_{p})=\sum_{\boldsymbol{\alpha}}\lambda_{\alpha}^{(j)}g^ {\boldsymbol{\alpha}}\qquad\text{with }\mathrm{in}_{\tau^{\prime}}(g^{ \boldsymbol{\alpha}})\leq\mathrm{in}_{\tau^{\prime}}(F_{j}(g_{1},\ldots,g_{p}))\]
for all \(\lambda_{\boldsymbol{\alpha}}^{(j)}\neq 0\), where, as usual, \(g^{\boldsymbol{\alpha}}\coloneqq g_{1}^{\alpha_{1}}\cdots g_{p}^{\alpha_{p}}\) for a multi-index \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{p})\). Note that to verify this condition, it suffices to show that all the \(\mathrm{in}_{\tau^{\prime}}(g^{\boldsymbol{\alpha}})\) are distinct. We can check with ease that this is satisfied in the following eqs. (1) to (5).
Note that \(\mathbb{K}[\mathrm{in}_{\tau^{\prime}}(g):g\in\mathcal{G}]\cong\mathcal{R}(I(H))\) for the graph \(H\) introduced in Setting 3.1. It follows from Lemma 3.3 that we have the following two cases:
1. First, suppose that \(F=F_{j}\) is a binomial generator in \(J_{1}\). Since the edge ideal \(I(H)\) is quadratic, we have two subcases. 1. Suppose \(F=uvT_{e_{2}}-u^{\prime}v^{\prime}T_{e_{1}}\) such that \(e_{1}\coloneqq\{u,v\}\), \(e_{2}\coloneqq\{u^{\prime},v^{\prime}\}\) and \(e_{1}\cap e_{2}=\emptyset\). We can assume that \(u=x_{i,k}\), \(v=x_{j,k+1}\), \(u^{\prime}=x_{i^{\prime},k^{\prime}}\) and \(v^{\prime}=x_{j^{\prime},k^{\prime}+1}\) with \(1\leq i<j\leq m\), \(1\leq i^{\prime}<j^{\prime}\leq m\), and \(k,k^{\prime}\in[n-1]\). In this subcase, we have the simple equality: \[x_{i,k}x_{j,k+1}[i^{\prime},j^{\prime}\,|\,k^{\prime},k^{ \prime}+1]-x_{i^{\prime},k^{\prime}}x_{j^{\prime},k^{\prime}+1}[i,j\,|\,k,k+1]\] \[= x_{i,k+1}x_{j,k}[i^{\prime},j^{\prime}\,|\,k^{\prime},k^{\prime}+1 ]-x_{i^{\prime},k^{\prime}+1}x_{j^{\prime},k^{\prime}}[i,j\,|\,k,k+1].\] 2. Suppose \(uT_{e_{2}}-vT_{e_{1}}\) such that \(e_{1}\coloneqq\{u,r\}\) and \(e_{2}\coloneqq\{v,r\}\). We have three additional subcases. 1. Suppose \(r=x_{i_{1},k}\), \(u=x_{i_{2},k+1}\), and \(v=x_{i_{3},k+1}\), with \(1\leq i_{1}<i_{2}<i_{3}\leq m\) and \(k\in[n-1]\). In this subcase, we have the simple equality: \[x_{i_{2},k}[i_{1},i_{3}\,|\,k,k+1]-x_{i_{1},k}[i_{2},i_{3}\,|\,k,k+1]=x_{i_{3}, k}[i_{1},i_{2}\,|\,k,k+1].\] 2. Suppose \(r=x_{i_{1},k+1}\), \(u=x_{i_{2},k}\), and \(v=x_{i_{3},k}\), where \(1\leq i_{1}<i_{2}<i_{3}\leq m\) and \(k\in[n-1]\). This subcase is similar to the previous one. 3. Suppose \(u=x_{i_{1},k}\), \(r=x_{i_{2},k+1}\), and \(v=x_{i_{3},k+2}\), with \(1\leq i_{1}<i_{2}<i_{3}\leq m\) and \(k\in[n-2]\). In this subcase, we have the simple equality: \[x_{i_{3},k+2}[i_{1},i_{2}\,|\,k,k+1]-x_{i_{1},k}[i_{2},i_{3}\,|\,k+1,k+2]\] \[= x_{i_{3},k}[i_{1},i_{2}\,|\,k+1,k+2]+x_{i_{2},k+2}[i_{1},i_{3}\,| \,k,k+1]\] \[-x_{i_{2},k}[i_{1},i_{3}\,|\,k+1,k+2]-x_{i_{1},k+2}[i_{2},i_{3}\,| \,k,k+1].\] 3.
Figure 2. Basic patterns
2. Second, we assume that \(F=T_{C^{+}}-T_{C^{-}}\) for some primitive cycle \(C\). It follows from Lemma 3.4 that we can assume that \(C=(c_{1},c_{2},c_{3},c_{4},c_{5}=c_{1})\) in \(H\). Without loss of generality, we may assume that \(c_{1}=x_{i_{1},1}\). Then, we have two subcases. 1. Suppose that \(C\) is not a cycle in \(H_{1}\). In this subcase, we can assume that \(c_{2}=x_{i_{2},2}\), \(c_{3}=x_{i_{3},3}\), and \(c_{4}=x_{i_{4},2}\) such that \(1\leq i_{1}<i_{2}<i_{4}<i_{3}\leq m\). Now, we have the simple equality: \[[i_{1},i_{2}\,|\,1,2][i_{4},i_{3}\,|\,2,3]-[i_{1},i_{4}\,|\,1,2][i_ {2},i_{3}\,|\,2,3]\] \[= -[i_{1},i_{2}\,|\,2,3][i_{4},i_{3}\,|\,1,2]+[i_{1},i_{4}\,|\,2,3] [i_{2},i_{3}\,|\,1,2]\] \[-[i_{2},i_{4}\,|\,1,2][i_{1},i_{3}\,|\,2,3]-[i_{2},i_{4}\,|\,2,3] [i_{1},i_{3}\,|\,1,2].\] 2. Suppose that \(C\) is a cycle in \(H_{1}\). In this subcase, we can assume that \(c_{2}=x_{i_{2},2}\), \(c_{3}=x_{i_{3},1}\), and \(c_{4}=x_{i_{4},2}\) such that \(1\leq i_{1}<i_{3}<i_{2}<i_{4}\leq m\). Now, we have the well-known Plucker relation: \[[i_{1},i_{2}\,|\,1,2][i_{3},i_{4}\,|\,1,2]-[i_{1},i_{4}\,|\,1,2][i_{3},i_{2}\,| \,1,2]=[i_{1},i_{3}\,|\,1,2][i_{2},i_{4}\,|\,1,2].\] (5) This completes the proof.
In the previous proof, we introduced a term order \(\tau^{\prime}\) on \(S[T]\), which is induced from the term order \(\tau\) on \(S\). We will adopt it from now on. In particular, we can talk about the initial algebra \(\mathrm{in}_{\tau^{\prime}}(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}}))\).
**Corollary 3.6**.:
1. _The Rees algebras_ \(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}})\) _and_ \(\mathcal{R}(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) _are normal Cohen-Macaulay domains. In particular, if_ \(\mathrm{char}(\mathbb{K})=0\)_, then_ \(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}})\) _has rational singularities. If instead_ \(\mathrm{char}(\mathbb{K})>0\)_, then_ \(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}})\) _is F-rational._
2. _Let_ \(\mathcal{G}=\{g_{1},\ldots,g_{q}\}\) _be the natural generating set of_ \(\mathcal{J}_{K_{m},P_{n}}\)_. Then_ \(\mathcal{G}\) _is a Sagbi basis of the_ \(\mathbb{K}\)_-subalgebra_ \(\mathbb{K}[g_{1},\ldots,g_{q}]\) _of_ \(S\) _with respect to the lexicographic order_ \(\tau\) _on_ \(S\)_, i.e.,_ \[\mathrm{in}_{\tau}(\mathbb{K}[g_{1},\ldots,g_{q}])=\mathbb{K}[\mathrm{in}_{ \tau}(g_{1}),\ldots,\mathrm{in}_{\tau}(g_{q})].\]
3. _The special fiber rings_ \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) _and_ \(\mathcal{F}(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) _are normal Cohen-Macaulay domains. In particular,_ \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) _has rational singularities if_ \(\mathbb{K}\) _is of characteristic_ \(0\)_, and_ \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) _is F-rational if_ \(\mathbb{K}\) _is of positive characteristic._
4. _The analytic spreads of_ \(\mathcal{J}_{K_{m},P_{n}}\) _and_ \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})\) _coincide. They are given by_ \[\begin{cases}n-1,&\text{if $m=2$,}\\ mn-3,&\text{if $m\geq 3$.}\end{cases}\]
5. _The associated graded rings_ \(\mathrm{gr}_{\mathcal{J}_{K_{m},P_{n}}}(S)\) _and_ \(\mathrm{gr}_{\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})}(S)\) _are Cohen-Macaulay._
Proof.:
1. Since \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})=I(H)\) is the edge ideal of the bipartite graph \(H\) constructed in Setting 3.1, the Rees algebra \(\mathcal{R}(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) is normal by [35, Proposition 10.5.8]. It is then Cohen-Macaulay by [21]. Since \(\mathrm{in}_{\tau^{\prime}}(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}}))= \mathcal{R}(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) by Theorem 3.5 and [8, Theorem 2.7], it follows from [8, Corollary 2.3] that \(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}})\) has rational singularities if \(\mathrm{char}(\mathbb{K})=0\), and \(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}})\) is F-rational if \(\mathrm{char}(\mathbb{K})>0\). Furthermore, the normality of \(\mathrm{in}_{\tau}(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}}))\) implies that \(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}})\) is a normal Cohen-Macaulay domain again by [8, Corollary 2.3].
2. This part follows from Lemma 3.3 and the part 2 of the proof of Theorem 3.5.
3. It follows from the previous part 2 that the special fiber ring \(\mathcal{F}(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))=\mathrm{in}_{\tau}( \mathcal{F}(\mathcal{J}_{K_{m},P_{n}}))\) is isomorphic to the edge subring \(\mathbb{K}[H]\) of the graph \(H\) constructed in Setting 3.1. Since \(H\) is bipartite, \(\mathbb{K}[H]\) is Koszul, normal and Cohen-Macaulay by [30, Theorem 1] and [35, Proposition 10.3.1]. The remaining parts follow again from [8, Corollary 2.3].
4. Recall that the _analytic spread_ of a graded ideal \(I\) is the (Krull) dimension of the special fiber ring \(\mathcal{F}(I)\). By [8, Proposition 2.4], the Hilbert functions of \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) and \(\mathrm{in}_{\tau}(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}}))\) coincide. Therefore, they have the same dimension. Meanwhile, \(\mathrm{in}_{\tau}(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}}))=\mathcal{F}( \mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) by 2. Thus, the analytic spreads of \(\mathcal{J}_{K_{m},P_{n}}\) and
\(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})\) coincide. Note that \(\mathcal{F}(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))=\mathbb{K}[H]\) and \(H\) is a bipartite graph. Therefore, it remains to apply Remark 3.2 and [35, Corollary 10.1.21].
* Since the Rees algebras \(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}})\) and \(\mathcal{R}(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) are Cohen-Macaulay, this part follows from [22, Proposition 1.1].
Let \(I\) be an ideal in \(S\). Recall that \(I\) satisfies the _persistence property_ if and only if the sets of associated primes satisfy \(\mathrm{Ass}(I^{t})\subseteq\mathrm{Ass}(I^{t+1})\) for all \(t\geq 1\). In addition, the ideal \(I\) has the _strong persistence property_ if and only if \(I^{t+1}:I=I^{t}\) for all \(t\geq 1\). It is not difficult to see that the strong persistence property implies the persistence property. On the other hand, the converse is also known to be false.
**Corollary 3.7**.: _Under the assumptions in Setting 3.1, the quadratic ideals \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})\) and \(\mathcal{J}_{K_{m},P_{n}}\) satisfy the strong persistence property. In particular, both \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})\) and \(\mathcal{J}_{K_{m},P_{n}}\) satisfy the persistence property._
Proof.: Since \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})=I(H)\) is the edge ideal of the bipartite graph \(H\) constructed in Setting 3.1, \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})\) satisfies the strong persistence property by [35, Lemma 7.7.11]. Furthermore, by Theorem 3.5, we have \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})=(\mathrm{in}_{\tau}( \mathcal{J}_{K_{m},P_{n}}))^{t}\) for all \(t\geq 1\). Thus, \(\mathcal{J}_{K_{m},P_{n}}\) also satisfies the strong persistence property by [14, Proposition 3.13].
**Remark 3.8**.: The Sagbi-basis property mentioned in Theorem 3.5 is not a common characteristic for generalized binomial edge ideals. Even for the complete graph \(K_{3}\), one has \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{3},K_{3}}^{2})\neq(\mathrm{in}_{\tau}( \mathcal{J}_{K_{3},K_{3}}))^{2}\). In other words, the natural generators of the generalized binomial edge ideal \(\mathcal{J}_{K_{3},K_{3}}\) do not form a Sagbi basis under the given term order.
As further applications of Theorem 3.5, we study the regularity and the depth of powers of \(\mathcal{J}_{K_{m},P_{n}}\) in Theorem 3.11 and Theorem 3.12 respectively. In particular, we give a closed formula of the regularities. To achieve it, we apply the following lemma:
**Lemma 3.9** ([25, Theorem 4.4(2)]).: _Let \(G\) be a graph. Then, for all \(t\geq 1\) we have_
\[\mathrm{reg}(I(G)^{t})\leq 2t+\mathrm{cochord}(G)-1.\]
In particular, we need to compute the co-chordal cover number of the bipartite graph \(H\) from Setting 3.1. For this purpose, recall that a graph \(G\) is _chordal_ if every induced cycle in \(G\) has length \(3\). Relatedly, a _perfect elimination order_ of a graph \(G\) is an order \(v_{1},\ldots,v_{n}\) of its vertices such that for all \(i\in[n]\), the neighborhood \(N_{G_{i}}(v_{i})\) of \(v_{i}\) in the induced subgraph \(G_{i}\) of \(G\) on the set \(\{v_{i},\ldots,v_{n}\}\) induces a complete subgraph in \(G\). It is well-known that a graph is chordal if and only if it admits a perfect elimination order.
On the other hand, a graph \(G\) is _co-chordal_ if its complement graph \(G^{\complement}\) is chordal. The _co-chordal cover number_ of a graph \(G\), denoted \(\mathrm{cochord}(G)\), is the minimum number \(k\) such that there exist co-chordal subgraphs \(G_{1},\ldots,G_{k}\) of \(G\) with \(E(G)=\bigcup_{i=1}^{k}E(G_{i})\).
In the following, we compute the co-chordal cover number of the bipartite graph \(H\). An upper bound on this number is sufficient for our application.
**Lemma 3.10**.: _For the graph \(H\) introduced in Setting 3.1, one has \(\mathrm{cochord}(H)\leq n-1\)._
Proof.: It is clear that the induced subgraphs \(H_{1},H_{2},\ldots,H_{n-1}\) are pairwise isomorphic. Furthermore, \(E(H)=\bigcup_{k=1}^{n-1}E(H_{k})\). It remains to show that \(H_{1}\) is co-chordal. Notice that
\[E(H_{1}^{\complement})=\{ \{x_{i,1},x_{j,1}\}:1\leq i<j\leq m\,\}\cup\{\,\{x_{i,2},x_{j,2} \}:1\leq i<j\leq m\,\}\] \[\cup\{ \,\{x_{i,1},x_{j,2}\}:1\leq j\leq i\leq m\,\}\,.\]
It can be directly verified that \(x_{1,1},x_{2,1},\ldots,x_{m,1},x_{1,2},x_{2,2},\ldots,x_{m,2}\) form a perfect elimination order. Therefore, \(H_{1}^{\complement}\) is a chordal graph.
The proof of the following Theorem 3.11 will also use the comparison in Theorem 5.2. The argument for the latter is logically irrelevant to what is presented here. Therefore, we decide to postpone the discussion of Theorem 5.2 to a later section of our paper, in order to maintain the consistency and coherence of our presentation.
**Theorem 3.11**.: _Under the assumptions in Setting 3.1, we have_
\[\operatorname{reg}\left(\frac{S}{\mathcal{J}_{K_{m},P_{n}}^{t}}\right)= \operatorname{reg}\left(\frac{S}{\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{ n}}^{t})}\right)=2(t-1)+(n-1)\]
_for all \(t\geq 1\)._
Proof.: It follows from [14, Theorem 3.1], [18, Theorem 3.3.4], and Theorem 5.2 that
\[\operatorname{reg}\left(\frac{S}{\operatorname{in}_{\tau}(\mathcal{J}_{K_{m}, P_{n}}^{t})}\right)\geq\operatorname{reg}\left(\frac{S}{\mathcal{J}_{K_{m},P_{ n}}^{t}}\right)\geq\operatorname{reg}\left(\frac{S}{\mathcal{J}_{K_{2},P_{ n}}^{t}}\right)=2(t-1)+(n-1).\]
Thus, it suffices to prove that \(\operatorname{reg}(S/\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t}) )\leq 2(t-1)+(n-1)\). To achieve this goal, we note that \(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})=(\operatorname{in}_ {\tau}(\mathcal{J}_{K_{m},P_{n}}))^{t}=I(H)^{t}\) by Theorem 3.5. Furthermore, it follows from Lemmas 3.9 and 3.10 that \(\operatorname{reg}(S/I(H)^{t})\leq 2(t-1)+(n-1)\). So we are done.
Next, we determine the limit of the depth of powers of \(\mathcal{J}_{K_{m},P_{n}}\).
**Theorem 3.12**.: _The following results hold under the assumptions in Setting 3.1:_
1. \[\operatorname{depth}\left(\frac{S}{\mathcal{J}_{K_{m},P_{n}}}\right)= \operatorname{depth}\left(\frac{S}{\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})}\right)=n+(m-1);\]
2. _for each_ \(t\geq 1\)_, we have_ \[\operatorname{depth}\left(\frac{S}{\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})}\right)\geq\operatorname{depth}\left(\frac{S}{\operatorname{in}_ {\tau}(\mathcal{J}_{K_{m},P_{n}}^{t+1})}\right);\]
3. \[\lim_{t\to\infty}\operatorname{depth}\left(\frac{S}{\mathcal{J}_{K_{m},P_{n}} ^{t}}\right)=\lim_{t\to\infty}\operatorname{depth}\left(\frac{S}{\operatorname {in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})}\right)=\begin{cases}3,&\text{if $m\geq 3$,}\\ n+1,&\text{if $m=2$.}\end{cases}\]
Proof.:
1. Recall that a chordal graph is said to be a _generalized block graph_ if it satisfies: for any three maximal cliques \(F_{i}\), \(F_{j}\), and \(F_{k}\), if \(F_{i}\cap F_{j}\cap F_{k}\neq\emptyset\), then \(F_{i}\cap F_{j}=F_{i}\cap F_{k}=F_{j}\cap F_{k}\). Since the path graph \(P_{n}\) is clearly a generalized block graph whose clique number is \(2\), this part follows from [5, Theorem 3.3].
2. By Theorem 3.5, we get that \(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})=(\operatorname{in}_{ \tau}(\mathcal{J}_{K_{m},P_{n}}))^{t}=I(H)^{t}\) for all \(t\geq 1\), where \(H\) is the bipartite graph constructed in Setting 3.1. Therefore, it suffices to prove that \(\operatorname{depth}\left(S/I(H)^{t}\right)\geq\operatorname{depth}\left(S/I (H)^{t+1}\right)\) for all \(t\geq 1\). Since \(H\) is a bipartite graph, we have \(I(H)^{t}=I(H)^{(t)}\) for any \(t\geq 1\) by [35, Theorem 14.3.6 and Corollary 14.3.15], where \(I(H)^{(t)}\) is the \(t\)-th symbolic power of \(I(H)\). In addition, note that \(x_{m-1,1}\) and \(x_{2,n}\) are two leaves of \(H\), we obtain that \(\operatorname{pd}(S/I(H)^{(t+1)})\geq\operatorname{pd}(S/I(H)^{(t)})\) by [26, Theorem 5.2]. The desired statement then follows from the Auslander-Buchsbaum formula; cf, for example, [18, Corollary A.4.3].
3. For all \(t\geq 1\), by [18, Theorem 3.3.4], we have \[\operatorname{depth}\left(\frac{S}{\mathcal{J}_{K_{m},P_{n}}^{t}}\right)\geq \operatorname{depth}\left(\frac{S}{\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})}\right).\] It follows from the previous part 2, [11, Proposition 3.3], Theorem 3.5, and Corollary 3.6(e) that \[\lim_{t\to\infty}\operatorname{depth}\left(\frac{S}{\mathcal{J}_{K_{m},P_{n}}^{t}}\right) \geq\lim_{t\to\infty}\operatorname{depth}\left(\frac{S}{\operatorname{ in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})}\right)=\inf_{t\geq 1} \operatorname{depth}\left(\frac{S}{\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})}\right)\] \[=\inf_{t\geq 1}\operatorname{depth}\left(\frac{S}{(\operatorname{in}_ {\tau}(\mathcal{J}_{K_{m},P_{n}}))^{t}}\right)=\dim(S)-\ell(\operatorname{in} _{\tau}(\mathcal{J}_{K_{m},P_{n}}))\] (6) where \(\ell(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) is the analytic spread of \(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})\).
At the same time, it follows from that [17, Theorem 1.2] that \(\lim_{t\to\infty}\operatorname{depth}\left(S/\mathcal{J}_{K_{m},P_{n}}^{t}\right)\) exists and satisfies
\[\lim_{t\to\infty}\operatorname{depth}\left(\frac{S}{\mathcal{J}_{K_{m},P_{n}}^{ t}}\right)\leq\dim(S)-\ell(\mathcal{J}_{K_{m},P_{n}}). \tag{7}\]
If we combine Equations (6) and (7) together, it remains to apply Corollary 3.6(d).
We end this section with the study of the symbolic powers of \(\mathcal{J}_{K_{m},P_{n}}\). Let \(I\) be an ideal of \(S\), and suppose that \(\operatorname{Ass}(I)\) is the set of associated prime ideals of \(I\). For any integer \(t\geq 1\), the _\(t\)-th symbolic power_ of \(I\) is defined by
\[I^{(t)}\coloneqq\bigcap_{\mathfrak{p}\in\operatorname{Ass}(I)}(I^{t}S_{ \mathfrak{p}}\cap S).\]
In most cases, symbolic powers are not identical to the ordinary powers. However, Ene and Herzog proved in [12] that if \(G\) is a closed graph and \(J_{G}\) is its binomial edge ideal, then \(J_{G}^{(t)}=J_{G}^{t}\) for all \(t\geq 1\). Recall that \(G\) is said to be _closed_ if for all edges \(\{i,j\}\) and \(\{k,l\}\) of \(G\) with \(i<j\) and \(k<l\), one has \(\{j,l\}\in E(G)\) if \(i=k\), and \(\{i,k\}\in E(G)\) if \(j=l\). Path graphs are the simplest closed graphs. In the remainder of this section, we show that the symbolic powers of the _generalized_ binomial edge ideal of a path graph coincide with the ordinary powers.
**Theorem 3.13**.: _Let \(t\) be a positive integer and \(\mathcal{J}_{K_{m},P_{n}}^{(t)}\) be the \(t\)-th symbolic power of \(\mathcal{J}_{K_{m},P_{n}}\). Then, we have \(\mathcal{J}_{K_{m},P_{n}}^{(t)}=\mathcal{J}_{K_{m},P_{n}}^{t}\)._
The proof of this result is involved. We need some preparations.
**Lemma 3.14**.: _Let \(I=I_{2}(\boldsymbol{X})\) be the ideal in \(S=\mathbb{K}[\boldsymbol{X}]\), which is generated by all \(2\)-minors of the \(m\times n\) matrix \(\boldsymbol{X}\). Then, one has \(\operatorname{in}_{\tau}(I^{(t)})=(\operatorname{in}_{\tau}(I))^{(t)}\) for every integer \(t\geq 1\)._
Proof.: Consider the graph \(\widetilde{G}\) with the edge set
\[E(\widetilde{G})=\left\{\,\{x_{i,j},x_{i^{\prime},j^{\prime}}\}:1\leq i<i^{ \prime}\leq m,1\leq j<j^{\prime}\leq n\,\right\}.\]
It is clear that \(\operatorname{in}_{\tau}(I)\) is the edge ideal of \(\widetilde{G}\) in \(S\). To confirm the expected equality, we first show that \(\widetilde{G}\) is _perfect_ in the sense that the _chromatic number_\(\chi(G_{V^{\prime}})\) equals the _clique number_\(\omega(G_{V^{\prime}})\) for every subset \(V^{\prime}\) of \(V(\widetilde{G})\). Recall that a graph is called a _comparability graph_ if there exists a partial ordering of its vertex set such that two vertices are adjacent if and only if they are comparable. It is well-known that every comparability graph is perfect, see [35, Corollary 14.5.6]. At the same time, the graph \(\widetilde{G}\) here is a comparability graph, where the poset structure can be taken with respect to the subscripts of the vertices of \(\widetilde{G}\): \(x_{i,j}\prec x_{i^{\prime},j^{\prime}}\) if and only if both \(i<i^{\prime}\) and \(j<j^{\prime}\). Thus, \(\widetilde{G}\) is a perfect graph.
By looking at the graded components, it follows from [35, Corollary 13.7.2] that \((\operatorname{in}_{\tau}(I))^{(t)}\) is generated by monomials of the form \(u_{1}u_{2}\ldots u_{s}\) such that
\[u_{k}=x_{i_{k,1},j_{k,1}}x_{i_{k,2},j_{k,2}}\cdots x_{i_{k,r_{k}}j_{k,r_{k}}}\]
with \(1\leq i_{k,1}<i_{k,2}<\cdots<i_{k,r_{k}}\leq m\), \(1\leq j_{k,1}<j_{k,2}<\cdots<j_{k,r_{k}}\leq n\), and \(\sum_{k=1}^{s}(r_{k}-1)=t\).
At the same time, it follows from [4, Theorem 4.3.6] that \(\operatorname{in}_{\tau}(I^{(t)})\) is generated by the monomials \(\operatorname{in}_{\tau}(\Delta)\), where \(\Delta\) is a product of minors with \(\gamma_{2}(\Delta)=t\) and no factor of size \(<2\). These monomials are precisely those described in the previous paragraph. Thus, the proof is completed.
Let \(G\) be a simple graph and let \(c(G)\) denote the number of connected components of \(G\). A vertex \(v\) is called a _cut vertex_ of \(G\) if \(c(G)<c(G\setminus v)\). Let \(A\) be a subset of \(V(G)\). By abuse of notation, we also let \(c(A)\) denote the number of connected components of \(G\setminus A\). If \(v\) is a cut vertex of the induced subgraph \(G\setminus(A\setminus\{v\})\) for any \(v\in A\), then we say that \(A\) has the _cut point property_. Set \(\mathcal{C}(G)\coloneqq\{\emptyset\}\cup\{\,A:A\text{ has the cut point property }\}\).
Now, let \(G\) be a simple graph on the vertex set \([n]\). For each subset \(A\) of \([n]\), we introduce the ideal
\[P_{A}(K_{m},G)\coloneqq(x_{ij}:(i,j)\in[m]\times A)+\mathcal{J}_{K_{m}, \widetilde{G_{1}}}+\cdots+\mathcal{J}_{K_{m},\widetilde{G_{c(A)}}}\]
in \(S\), where \(G_{1},\ldots,G_{c(A)}\) are the connected components of \(G\setminus A\). It is well-known that
\[\mathcal{J}_{K_{m},G}=\bigcap_{A\in\mathcal{C}(G)}P_{A}(K_{m},G)\]
is the minimal primary decomposition of the radical ideal \(\mathcal{J}_{K_{m},G}\); see [32, Theorem 7].
**Lemma 3.15**.: _Let \(A\subset[n]\). Then \(\mathrm{in}_{\tau}(P_{A}(K_{m},G)^{(t)})=(\mathrm{in}_{\tau}(P_{A}(K_{m},G)) )^{(t)}\) for any integer \(t\geq 1\)._
Proof.: We follow the strategy of [12, Lemma 3.2]. The only essential change is that we consider instead the symbolic Rees algebra \(\mathcal{R}_{s}(I)\coloneqq\oplus_{k\geq 0}I^{(k)}T^{k}\) of an ideal \(I\) in \(S\), which is a graded subalgebra of \(S[T]\).
For simplicity, we write \(P\) instead of \(P_{A}(K_{m},G)\), \(c\) instead of \(c(A)\), and \(J_{k}\) instead of \(\mathcal{J}_{K_{m},\widetilde{G_{k}}}\) for \(1\leq k\leq c\). Since the sets of variables \(\{x_{i,j}:i\in[m],j\in V(\widetilde{G_{k}})\}\) as well as the set \(\{\,x_{i,j}:i\in[m],j\in A\,\}\) are pairwise disjoint, we have
\[\mathrm{in}_{\tau}(P)=(x_{i,j}:i\in[m],j\in A)+\mathrm{in}_{\tau}(J_{1})+ \cdots+\mathrm{in}_{\tau}(J_{c}). \tag{8}\]
It follows from the same pairwise disjointness and [16, Theorem 3.4] that
\[\mathcal{R}_{s}(P) =\mathcal{R}_{s}((x_{i,j}:i\in[m],j\in A))\otimes_{\mathbb{K}}( \otimes_{k=1}^{c}\mathcal{R}_{s}(J_{k})), \tag{9}\]
and
\[\mathcal{R}_{s}(\mathrm{in}_{\tau}(P)) =\mathcal{R}_{s}((x_{i,j}:i\in[m],j\in A))\otimes_{\mathbb{K}}( \otimes_{k=1}^{c}\mathcal{R}_{s}(\mathrm{in}_{\tau}(J_{k})))\] \[=\mathcal{R}_{s}((x_{i,j}:i\in[m],j\in A))\otimes_{\mathbb{K}}( \otimes_{k=1}^{c}\mathrm{in}_{\tau^{\prime}}(\mathcal{R}_{s}(J_{k}))), \tag{10}\]
where the last equality is essentially due to Lemma 3.14. Since the right-hand side of (10) is clearly contained in \(\mathrm{in}_{\tau^{\prime}}\) applied to the right-hand side of (9), we have
\[\mathcal{R}_{s}(\mathrm{in}_{\tau}(P))\subseteq\mathrm{in}_{\tau^{\prime}}( \mathcal{R}_{s}(P)). \tag{11}\]
On the other hand, it is well-known that \(\mathcal{R}_{s}(P)\) and \(\mathrm{in}_{\tau^{\prime}}(\mathcal{R}_{s}(P))\) have the same Hilbert functions. Since \(\mathcal{R}_{s}(J_{k})\) and \(\mathrm{in}_{\tau^{\prime}}(\mathcal{R}_{s}(J_{k}))\) have the same Hilbert functions for every \(k\), it follows from eqs. (9) and (10) that \(\mathcal{R}_{s}(\mathrm{in}_{\tau}(P))\) and \(\mathrm{in}_{\tau^{\prime}}(\mathcal{R}_{s}(P))\) have the same Hilbert functions. Therefore, we have \(\mathcal{R}_{s}(\mathrm{in}_{\tau}(P))=\mathrm{in}_{\tau^{\prime}}(\mathcal{R} _{s}(P))\) from eq. (11). At the level of graded components, we obtain that \(\mathrm{in}_{\tau}(P^{(t)})=(\mathrm{in}_{\tau}(P))^{(t)}\) for every \(t\).
**Lemma 3.16**.: _Let \(G\) be a generalized block graph. Then,_
\[\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},G})=\bigcap_{\mathfrak{p}\in\mathrm{Ass} (\mathcal{J}_{K_{m},G})}\mathrm{in}_{\tau}(\mathfrak{p}).\]
Proof.: Without loss of generality, we assume that \(G\) is connected. Let \(r\) be the number of maximal cliques of \(G\). When \(r=1\), \(G\) is a complete graph. Since \(\mathcal{J}_{K_{m},G}\) is a prime ideal in this case, the expected result is trivial. When \(r>1\), we apply the following observation from the proof of [5, Theorem 3.3]: There exists a leaf order, say, \(F_{1},\ldots,F_{r}\), on the clique complex \(\Delta(G)\) of \(G\). Let \(F_{t_{1}},\ldots,F_{t_{q}}\) be the branches of \(F_{r}\). Then, the intersection of any pair of facets from \(F_{t_{1}},\ldots,F_{t_{q}},F_{r}\) is the same set of vertices, say, \(A\). Now, \(\mathcal{J}_{K_{m},G}=J_{1}\cap J_{2}\), where \(J_{1}\coloneqq\bigcap_{B\in\mathcal{C}(G),A\cap B=\emptyset}P_{B}(G)\) and \(J_{2}\coloneqq\bigcap_{B\in\mathcal{C}(G),A\subseteq B}P_{B}(G)\). Note that \(J_{1}=\mathcal{J}_{K_{m},G^{\prime}}\) where \(G^{\prime}\) is obtained from \(G\) by replacing the cliques \(F_{t_{1}},\ldots,F_{t_{q}},F_{r}\) by the clique on the vertex set \(F_{r}\cup(\cup_{k=1}^{q}F_{t_{k}})\). At the same time, \(J_{2}=(x_{i,j}:(i,j)\in[m]\times A)+\mathcal{J}_{K_{m},G^{\prime\prime}}\), where \(G^{\prime\prime}\) is the restriction of \(G\) to the vertex set \(V(G)\setminus A\). Obviously, \(G^{\prime}\) and \(G^{\prime\prime}\) are generalized block graphs with fewer maximal cliques. Since we have \(\mathrm{in}_{\tau}(\mathcal{J}_{K_{m},G})=\mathrm{in}_{\tau}(J_{1})\cap \mathrm{in}_{\tau}(J_{2})\) from the proof of [5, Theorem 3.3(c)] in the \(r>1\) case, we are done by induction on \(r\)
**Lemma 3.17**.: _Let \(t\) be a positive integer. Suppose that \(I\) is an ideal in \(S\) and \(\sigma\) is a term order such that \(\operatorname{in}_{\sigma}(I)\) is a squarefree monomial ideal. It follows that \(I\) is a radical ideal and \(I=\bigcap_{\mathfrak{p}\in\operatorname{Min}(I)}\mathfrak{p}\). Suppose that the following conditions are satisfied:_
1. \(\operatorname{in}_{\sigma}(\mathfrak{p})\) _is a squarefree monomial ideal for each_ \(\mathfrak{p}\in\operatorname{Min}(I)\)_,_
2. \((\operatorname{in}_{\sigma}(I))^{(t)}=(\operatorname{in}_{\sigma}(I))^{t}\)_,_
3. \(\operatorname{in}_{\sigma}(\mathfrak{p}^{(t)})=(\operatorname{in}_{\sigma}( \mathfrak{p}))^{(t)}\) _for each_ \(\mathfrak{p}\in\operatorname{Min}(I)\)_,_
4. \(\operatorname{in}_{\sigma}(I)=\bigcap_{\mathfrak{p}\in\operatorname{Min}(I)} \operatorname{in}_{\sigma}(\mathfrak{p})\)_._
_Then \(I^{(t)}=I^{t}\)._
Proof.: Since \(\operatorname{in}_{\sigma}(I)\) is a squarefree monomial ideal, we have
\[(\operatorname{in}_{\sigma}(I))^{(t)}=\bigcap_{\mathfrak{P}\in\operatorname{Min}(\operatorname{in}_{\sigma}(I))}\mathfrak{P}^{t}.\]
Likewise, we have
\[(\operatorname{in}_{\sigma}(\mathfrak{p}))^{(t)}=\bigcap_{\mathfrak{P}\in\operatorname{Min}(\operatorname{in}_{\sigma}(\mathfrak{p}))}\mathfrak{P}^{t}\]
for each \(\mathfrak{p}\in\operatorname{Min}(I)\). Therefore, we have \(\left(\bigcap_{\mathfrak{p}\in\operatorname{Min}(I)}\operatorname{in}_{ \sigma}(\mathfrak{p})\right)^{(t)}=\bigcap_{\mathfrak{p}\in\operatorname{ Min}(I)}(\operatorname{in}_{\sigma}(\mathfrak{p}))^{(t)}\). It follows from the remaining assumptions that
\[\operatorname{in}_{\sigma}(I^{t}) \supseteq(\operatorname{in}_{\sigma}(I))^{t}=(\operatorname{in}_ {\sigma}(I))^{(t)}=\left(\bigcap_{\mathfrak{p}\in\operatorname{Min}(I)} \operatorname{in}_{\sigma}(\mathfrak{p})\right)^{(t)}=\bigcap_{\mathfrak{p} \in\operatorname{Min}(I)}(\operatorname{in}_{\sigma}(\mathfrak{p}))^{(t)}\] \[=\bigcap_{\mathfrak{p}\in\operatorname{Min}(I)}\operatorname{in} _{\sigma}(\mathfrak{p}^{(t)})\supseteq\operatorname{in}_{\sigma}\left(\bigcap_ {\mathfrak{p}\in\operatorname{Min}(I)}\mathfrak{p}^{(t)}\right)=\operatorname {in}_{\sigma}(I^{(t)})\supseteq\operatorname{in}_{\sigma}(I^{t}).\]
Therefore, we obtain that \(\operatorname{in}_{\sigma}(I^{(t)})=\operatorname{in}_{\sigma}(I^{t})\). Since \(I^{t}\subseteq I^{(t)}\), we get \(I^{t}=I^{(t)}\).
Proof of Theorem 3.13.: For each \(\mathfrak{p}\in\operatorname{Min}(\mathcal{J}_{K_{m},P_{n}})\), we know \(\operatorname{in}_{\tau}(\mathfrak{p})\) is squarefree from eq. (8). Furthermore, notice that \(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})=I(H)\) where \(H\) is a bipartite graph. It follows from [35, Corollary 13.3.6] that \((\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))^{t}=(\operatorname{in} _{\tau}(\mathcal{J}_{K_{m},P_{n}}))^{(t)}\). It remains to apply Lemmas 3.15 to 3.17.
**Remark 3.18**.: Lemma 3.17 is a modification of [12, Lemma 3.1]. Note that we cannot use [12, Lemma 3.1] directly to prove Theorem 3.13, since the condition (ii)(a) of [12, Lemma 3.1] is not satisfied by \(\mathcal{J}_{K_{m},P_{n}}\) when \(m=n=3\). In fact, in this case, there is a prime ideal \(\mathfrak{p}\) associated with \(\mathcal{J}_{K_{3},P_{3}}\) such that \(\mathfrak{p}^{2}\neq\mathfrak{p}^{(2)}\). This prime ideal is \(P_{\emptyset}(K_{3},P_{3})=I_{2}(\boldsymbol{X})\), where \(\boldsymbol{X}\) is the \(3\times 3\) generic matrix. It follows from [4, Theorem 4.3.6] that \(I_{2}^{(2)}(\boldsymbol{X})=I_{2}^{2}(\boldsymbol{X})+I_{3}(\boldsymbol{X})\).
## 4. Blowup algebras
In this section, we will use algebraic properties of the initial algebras and the Sagbi basis theory to study the regularities of the blowup algebras of the ideal \(\mathcal{J}_{K_{m},P_{n}}\). Our approach will involve a combination of combinatorial optimization techniques to analyze the related algebraic invariants.
**Lemma 4.1**.: _For the bipartite graph \(H\) introduced in Setting 3.1, we have_
\[\operatorname{match}(H)=(m-1)\left\lfloor\frac{n}{2}\right\rfloor+\left\lfloor \frac{n-1}{2}\right\rfloor,\]
_where \(\lfloor\frac{n}{2}\rfloor\) is the largest integer \(\leq\frac{n}{2}\)._
Proof.: We have mentioned in Remark 3.2 that \(x_{m,1}\) and \(x_{1,n}\) are two isolated vertices of \(H\). Let \(H^{\prime}\) be the induced subgraph \(H\setminus\{x_{m,1},x_{1,n}\}\). This is a bipartite graph with a bipartition \(V_{1}^{\prime}\sqcup V_{2}^{\prime}\), where \(V_{i}^{\prime}=V_{i}\cap V(H^{\prime})\) for \(i=1,2\). Notice that we have a complete matching from \(V_{2}^{\prime}\) to \(V_{1}^{\prime}\), given by
\[\{\{x_{i,j},x_{i+1,j+1}\}:i\in[m-1],j\text{ is odd}\}\cup\{\{x_{1,j},x_{m,j+1} \}:j\text{ is even}\}; \tag{12}\]
see also Figure 3. Therefore, we have the desired matching number, by counting the number of edges in eq. (12).
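As an optional numerical sanity check of Lemma 4.1 (not part of the original argument), the matching number can be computed directly for small \(m\) and \(n\). Since Setting 3.1 is not reproduced in this excerpt, the sketch below assumes that the edges of \(H\) are exactly the pairs \(\{x_{i,j},x_{i^{\prime},j+1}\}\) with \(i<i^{\prime}\); this edge rule is our assumption, chosen to be consistent with the matching (12) and with the isolated vertices \(x_{m,1}\) and \(x_{1,n}\) noted above.

```python
# Illustration only: check the matching number of Lemma 4.1 for small m and n,
# under the assumed edge rule for H described in the paragraph above.
import networkx as nx

def matching_number_H(m, n):
    H = nx.Graph()
    H.add_nodes_from((i, j) for i in range(1, m + 1) for j in range(1, n + 1))
    H.add_edges_from(((i, j), (ip, j + 1))
                     for j in range(1, n)
                     for i in range(1, m + 1)
                     for ip in range(i + 1, m + 1))
    # maximum-cardinality matching (all edge weights implicitly equal to one)
    return len(nx.max_weight_matching(H, maxcardinality=True))

for m in range(2, 6):
    for n in range(2, 7):
        assert matching_number_H(m, n) == (m - 1) * (n // 2) + (n - 1) // 2
print("Lemma 4.1 confirmed for 2 <= m <= 5 and 2 <= n <= 6")
```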
**Theorem 4.2**.: _Under the assumptions in Setting 3.1, we have_
\[\operatorname{reg}(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}}))=\operatorname{reg }(\mathcal{R}(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})))=(m-1) \left\lfloor\frac{n}{2}\right\rfloor+\left\lfloor\frac{n-1}{2}\right\rfloor.\]
Proof.: Note that the ideal \(\mathcal{J}_{K_{m},P_{n}}\) satisfies the following conditions:
1. the natural generators of \(\mathcal{J}_{K_{m},P_{n}}\) form a Grobner basis with respect to the term order \(\tau\) by Remark 3.2;
2. for each \(t\geq 1\), \(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}^{t})=(\operatorname{in}_{ \tau}(\mathcal{J}_{K_{m},P_{n}}))^{t}\) by Theorem 3.5;
3. the Rees algebra \(\mathcal{R}(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) is Cohen-Macaulay by Corollary 3.6.
It follows from [28, Theorem 3.2] that \(\operatorname{reg}(\mathcal{R}(\mathcal{J}_{K_{m},P_{n}}))=\operatorname{reg }(\mathcal{R}(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})))\).
We have seen that \(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})=I(H)\) is the edge ideal of the bipartite graph \(H\) constructed in Setting 3.1. It follows from [6, Theorem 4.2], or implicitly from [35, Theorems 7.1.8 and 14.3.55], that \(\operatorname{reg}(\mathcal{R}(I(H)))=\operatorname{match}(H)\). The only remaining step is to apply Lemma 4.1.
In the following, we will use the \(a\)-invariant of the Cohen-Macaulay special fiber ring \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) to compute its regularity; see [35, Theorem 6.4.1]. Recall that if \(A\) is a standard graded \(\mathbb{K}\)-algebra, the \(a\)_-invariant_ of \(A\), denoted by \(a(A)\), is the degree, as a rational function, of the Hilbert series of \(A\).
**Remark 4.3**.: Notice that \(\operatorname{in}_{\tau}(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}}))=\mathbb{K}[H]\) for the graph \(H\) introduced in Setting 3.1. When \(m=2\) or \(3\), the graph \(H\) is acyclic; see also Remark 3.2. It follows from Lemma 3.3 that the presentation ideal of \(\mathbb{K}[H]\) is the zero ideal. In other words, \(\mathcal{F}(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) is a polynomial ring in these cases. In particular, its regularity is zero. Since \(\operatorname{in}_{\tau}\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) and \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) are Cohen-Macaulay and have the same Hilbert series, the regularity of the special fiber ring \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) is also zero.
By the previous observation, we will focus on the case when \(m\geq 4\). The technical computation of the \(a\)-invariant of \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) is given in Proposition 4.7. Before starting the proof, let us briefly introduce its strategy, which consists of two different combinatorial approaches.
The first approach deals with directed graphs, as in Lemma 4.5.
**Definition 4.4**.: Let \(D\) be a directed graph with the vertex set \(V(D)\) and the edge set \(E(D)\). For every subset \(A\subseteq V(D)\), define \(\delta^{+}(A)\coloneqq\{\,(z,w)\in E(D):z\in A,w\notin A\,\}\) to be the set of edges leaving the vertex set \(A\) and \(\delta^{-}(A)\) to be the set of edges entering the vertex set \(A\). The edge set \(\delta^{+}(A)\) is a _directed cut_ of \(D\) if \(\emptyset\neq A\subsetneq V(D)\) and \(\delta^{-}(A)=\emptyset\).
Figure 3. A complete matching of \(H\)
**Lemma 4.5** ([35, Theorem 11.3.2]).: _Let \(G\) be a connected bipartite graph with bipartition \(V_{1}\sqcup V_{2}\). If \(G\) is regarded as a directed graph with all its arrows leaving the vertex set \(V_{1}\), then the following two numbers are equal:_
1. \(-a(\mathbb{K}[G])\)_, minus the_ \(a\)_-invariant of the edge subring_ \(\mathbb{K}[G]\)_;_
2. _the maximum number of edge disjoint directed cuts of_ \(G\)_._
The above lemma invites us to find edge disjoint directed cuts of \(H\). A natural choice of such directed cuts gives the \(a\)-invariant that we need. However, proving that this is indeed the value we are looking for requires a separate argument.
In the second approach, we start with the vector space \(V\coloneqq\operatorname{Mat}_{m\times n}(\mathbb{Q})\) over \(\mathbb{Q}\). It has a canonical basis \(\{\boldsymbol{e}_{i,j}:i\in[m],j\in[n]\}\), where \(\boldsymbol{e}_{i,j}=\begin{pmatrix}e_{k,\ell}^{i,j}\end{pmatrix}_{\begin{subarray} {c}1\leq k\leq m\\ 1\leq\ell\leq n\end{subarray}}\in V\) with
\[e_{k,\ell}^{i,j}=\begin{cases}1,&\text{if }(k,\ell)=(i,j),\\ 0,&\text{otherwise}.\end{cases}\]
Let \(H^{\prime}\) be the graph obtained from \(H\) by removing the two isolated vertices \(x_{m,1}\) and \(x_{1,n}\). From this graph, we introduce a set of vectors
\[\mathcal{A}\coloneqq\big{\{}\,\boldsymbol{e}_{i,j}+\boldsymbol{e}_{i^{\prime },j^{\prime}}:\{x_{i,j},x_{i^{\prime},j^{\prime}}\}\in E(H^{\prime})\,\big{\}} \subset V.\]
The set \(\mathbb{R}_{+}\mathcal{A}\) is called the _edge cone_ of \(H^{\prime}\). At the same time, the _shift polyhedron_ of the edge cone of \(H^{\prime}\) is
\[\mathcal{Q}\coloneqq\operatorname{conv}(\operatorname{Mat}_{m\times n}( \mathbb{Z})\cap\operatorname{ri}(\mathbb{R}_{+}\mathcal{A})),\]
where \(\operatorname{ri}(\mathbb{R}_{+}\mathcal{A})\) is the interior of \(\mathbb{R}_{+}\mathcal{A}\) relative to the affine hull of \(\mathbb{R}_{+}\mathcal{A}\); see also [35, Corollary 11.2.4]. We will use the fact that
\[-a(\mathbb{K}[H^{\prime}])=\min\{\,|\boldsymbol{v}|/2:\boldsymbol{v}\in \mathcal{Q}\,\} \tag{13}\]
by [35, Theorem 11.3.1], where \(|\boldsymbol{v}|\coloneqq\sum_{i=1}^{m}\sum_{j=1}^{n}v_{i,j}\) for \(\boldsymbol{v}=(v_{i,j})\in V\).
To investigate the shift polyhedron in detail, let \(V^{*}\) be the space of linear functions on \(V\). Each \(\boldsymbol{F}\in V^{*}\) defines a hyperplane \(H_{\boldsymbol{F}}\coloneqq\{\boldsymbol{v}\in V:\boldsymbol{F}(\boldsymbol{v})=0\}\) and a half-space \(H_{\boldsymbol{F}}^{+}\coloneqq\{\boldsymbol{v}\in V:\boldsymbol{F}(\boldsymbol{v})\geq 0\}\). Let \(\{\,\boldsymbol{E}_{i,j}:i\in[m],j\in[n]\,\}\subset V^{*}\) be the dual basis with respect to \(\{\boldsymbol{e}_{i,j}:i\in[m],j\in[n]\}\). For simplicity, we will represent the elements in \(V^{*}\) as \(m\times n\) matrices like \(\boldsymbol{F}=(f_{i,j})\in V^{*}\). For every such function \(\boldsymbol{F}\in V^{*}\) and every vector \(\boldsymbol{v}=(v_{i,j})\in V\), we have \(\boldsymbol{F}(\boldsymbol{v})=\sum_{i,j}f_{i,j}v_{i,j}\in\mathbb{Q}\).
We have a quick remark.
**Remark 4.6**.: Fix a matrix \(\boldsymbol{A}=(a_{i,j})\) in \(V^{*}\), where
\[a_{i,j}=\begin{cases}0,&\text{if }(i,j)=(1,n)\text{ or }(m,1),\\ 1,&\text{if }j\text{ is otherwise odd},\\ -1,&\text{if }j\text{ is otherwise even},\end{cases}\]
see Figure 4. It is easy to see that \(\mathcal{A}\subseteq H_{\boldsymbol{A}}\cap H_{\boldsymbol{E}_{1,n}}\cap H_{ \boldsymbol{E}_{m,1}}\). Since \(H^{\prime}\) is a connected bipartite graph, we get \(\dim(\mathbb{K}[H^{\prime}])=mn-3\) by [35, Corollary 10.1.21]. At the same time, the
Figure 4. The matrix \(\boldsymbol{A}\) in \(V^{*}\)
integral points of \(\mathcal{Q}\) define the canonical module of \(\mathbb{K}[H^{\prime}]\) by [35, Proposition 11.2.1]. Therefore, \(H_{\mathbf{A}}\cap H_{\mathbf{E}_{1,n}}\cap H_{\mathbf{E}_{m,1}}\) is the minimal linear space that contains \(\mathcal{Q}\).
The two approaches described above will give us a lower bound and an upper bound of \(-a(\mathbb{K}[H^{\prime}])\) respectively. Since they coincide, we obtain the exact value. Now, we carry out this strategy and start the real computation.
**Proposition 4.7**.: _Suppose that \(m\geq 4\) is an integer and \(H\) is the bipartite graph introduced in Setting 3.1. Then we have_
\[-a(\mathbb{K}[H])=\begin{cases}m\cdot\frac{n}{2},&\text{if $n$ is even},\\ m\cdot\frac{n+1}{2}-2,&\text{if $n$ is odd}.\end{cases}\]
Proof.: For the induced subgraph \(H^{\prime}\), we have \(\mathbb{K}[H]=\mathbb{K}[H^{\prime}]\). Thus, we will prove
\[-a(\mathbb{K}[H^{\prime}])=\begin{cases}m\cdot\frac{n}{2},&\text{if $n$ is even},\\ m\cdot\frac{n+1}{2}-2,&\text{if $n$ is odd}\end{cases} \tag{14}\]
by considering the following two steps.
1. First, we show that \(\text{LHS}\geq\text{RHS}\) in eq. (14). Let \(V^{\prime}=V(H^{\prime})\), \(V^{\prime}_{1}=\{x_{i,j}\in V^{\prime}:i\in[m],j\text{ is odd}\}\) and \(V^{\prime}_{2}=V^{\prime}\setminus V^{\prime}_{1}\). Then \(H^{\prime}\) is a connected bipartite graph with bipartition \(V^{\prime}_{1}\sqcup V^{\prime}_{2}\). By Lemma 4.5, we regard \(H^{\prime}\) as a directed graph with all its arrows leaving the vertex set \(V^{\prime}_{1}\). For each \(u\in V^{\prime}_{1}\), the directed cut \(\delta^{+}(\{u\})\) is the set of edges leaving the vertex \(u\). Since \[E(H^{\prime})=\bigsqcup_{u\in V^{\prime}_{1}}\delta^{+}(\{u\})\] is a disjoint union, we immediately have \[-a(\mathbb{K}[H^{\prime}])\geq|V^{\prime}_{1}|=\begin{cases}m\cdot\frac{n}{2}-1,&\text{if $n$ is even},\\ m\cdot\frac{n+1}{2}-2,&\text{if $n$ is odd}\end{cases}\] from Lemma 4.5. At the same time, when \(n\) is even, we consider the two special vertices \(u_{1}=x_{1,n-1}\in V^{\prime}_{1}\) and \(u_{2}=x_{2,n}\in V^{\prime}_{2}\). Notice that \[\delta^{+}(\{u_{1},u_{2}\})=\{\,(u_{1},x_{i,n}):3\leq i\leq m\,\}\] and \[\delta^{+}(V^{\prime}\setminus\{u_{2}\})=\{\,(u_{1},u_{2})\,\}\] are two directed cuts. Thus, \[\delta^{+}(\{u_{1}\})=\delta^{+}(\{u_{1},u_{2}\})\sqcup\delta^{+}(V^{\prime}\setminus\{u_{2}\}).\] As a result, when \(n\) is even, we have additionally \[-a(\mathbb{K}[H^{\prime}])\geq|V^{\prime}_{1}|+1=m\cdot\frac{n}{2},\] from Lemma 4.5. In short, we have LHS \(\geq\) RHS in eq. (14).
2. Second, we prove that LHS \(\leq\) RHS in eq. (14). By the formula in eq. (13), it suffices to find a suitable \(\hat{\mathbf{u}}\in\mathcal{Q}\) such that \(|\hat{\mathbf{u}}|\) is twice the integer in the RHS of eq. (14). The candidate vector \(\hat{\mathbf{u}}=(u_{i,j})\) in \(V\) is given by \[u_{i,j}=\begin{cases}0,&\text{if $(i,j)=(1,n)$ or $(m,1)$},\\ 2,&\text{if $(i,j)=(1,n-1)$},\\ 2,&\text{if $(i,j)=(m,2)$ and $n$ is even},\\ m-2,&\text{if $(i,j)=(m,2)$ and $n$ is odd},\\ 1,&\text{otherwise};\end{cases}\] see Figure 5. It is easy to verify that \(|\hat{\mathbf{u}}|\) satisfies the requirement. Therefore, it remains to show that \(\hat{\mathbf{u}}\) belongs to \(\mathcal{Q}\).
First, we consider the case where \(n\) is odd. We do this in two sub-steps.
1. In the first sub-step, we show that \(\hat{\boldsymbol{u}}\) belongs to the polyhedron \(\mathbb{R}_{+}\mathcal{A}\). Let \(\widetilde{H}\) be a subgraph of \(H^{\prime}\). By abuse of notation, the _degree matrix_ of \(\widetilde{H}\) is the \(m\times n\) matrix \(\boldsymbol{D}=(d_{i,j})\), where \[d_{i,j}=\begin{cases}0,&\text{if $(i,j)=(1,n)$ or $(m,1)$},\\ \text{the degree of the vertex $x_{i,j}$ in $\widetilde{H}$},&\text{otherwise}.\end{cases}\] We will construct subgraphs \(\widetilde{H}\) of \(H^{\prime}\) such that the degree matrix of \(\widetilde{H}\) is given by the \(\hat{\boldsymbol{u}}\) above; we will call such subgraphs of \(\hat{\boldsymbol{u}}\)_-type_. The first instance \(\widetilde{H}_{1}\) of \(\hat{\boldsymbol{u}}\)-type can be constructed as follows. For simplicity, we will say that edges of the form \(\{x_{i,j},x_{i^{\prime},j+1}\}\) belong to the zone \(\mathcal{Z}_{j}\) for each \(j\in[n-1]\). For \(\widetilde{H}_{1}\), the edges in zone \(\mathcal{Z}_{1}\) are \[\{\,\{x_{1,1},x_{m-1,2}\},\{x_{2,1},x_{m,2}\},\{x_{3,1},x_{m,2}\},\ldots,\{x_ {m-1,1},x_{m,2}\}\,\}\,.\] Note that it contains two long parallel edges. For \(2\leq j<n\), where \(j\) is odd, there are only two long parallel edges in the zone \(\mathcal{Z}_{j}\): \(\{x_{1,j},x_{m-1,j+1}\}\) and \(\{x_{2,j},x_{m,j+1}\}\). For \(2\leq j<n\) with \(j\) being even, there are \(m-2\) parallel slightly shorter edges in the zone \(\mathcal{Z}_{j}\): \(\{x_{i,j},x_{i+2,j+1}\}\) with \(1\leq i\leq m-2\). Finally, we supplement the last zone \(\mathcal{Z}_{n-1}\) with the extra edge \(\{x_{1,n-1},x_{2,n}\}\). Then, we get all the edges for the graph \(\widetilde{H}_{1}\). At this point, the reader is invited to look at the first graph of Figure 6. Notice that \(\hat{\boldsymbol{u}}=\sum_{\{x_{i,j},x_{i^{\prime},j+1}\}\in E(\widetilde{H} _{1})}(\boldsymbol{e}_{i,j}+\boldsymbol{e}_{i^{\prime},j+1})\). Consequently, \(\hat{\boldsymbol{u}}\in\mathbb{R}_{+}\mathcal{A}\).
2. Next, we show that \(\hat{\boldsymbol{u}}\in\operatorname{ri}(\mathbb{R}_{+}\mathcal{A})\). For this purpose, we show that for any \(\boldsymbol{F}\in V^{*}\) such that \(H_{\boldsymbol{F}}\) is a supporting hyperplane of \(\mathbb{R}_{+}\mathcal{A}\) and \(\hat{\boldsymbol{u}}\in H_{\boldsymbol{F}}\), we have \(\mathcal{Q}\subseteq H_{\boldsymbol{F}}\) (and equivalently, \(\mathbb{R}_{+}\mathcal{A}\subseteq H_{\boldsymbol{F}}\)). Without loss of generality, we can assume that \(\boldsymbol{F}\) is represented by the matrix \((f_{i,j})\) with \(f_{1,n}=f_{m,1}=0\). Whence, it remains to prove that \(\boldsymbol{F}\) is a multiple of the matrix \(\boldsymbol{A}\), which was defined earlier in Remark 4.6.
To prove this, we still use the \(\hat{\boldsymbol{u}}\)-type subgraphs. Let \(\widetilde{H}\) be such a subgraph. Since \(\hat{\boldsymbol{u}}=\sum_{\{x_{i,j},x_{i^{\prime},j+1}\}\in E(\widetilde{H} )}(\boldsymbol{e}_{i,j}+\boldsymbol{e}_{i^{\prime},j+1})\in H_{\boldsymbol{F}}\), we have \(\sum_{\{x_{i,j},x_{i^{\prime},j+1}\}\in E(\widetilde{H})}(f_{i,j}+f_{i^{ \prime},j+1})=0\). On the other hand, if \(\{x_{i,j},x_{i^{\prime},j+1}\}\in E(\widetilde{H})\), then \(\boldsymbol{e}_{i,j}+\boldsymbol{e}_{i^{\prime},j+1}\in\mathcal{A}\).
Figure 5. The extremal vector \(\hat{\boldsymbol{u}}\) in \(V\)
Figure 6. Three subgraphs of \(\hat{\boldsymbol{u}}\)-type in the case \((m,n)=(6,7)\)
Since \(H_{\boldsymbol{F}}\) is a supporting hyperplane, we have \(f_{i,j}+f_{i^{\prime},j+1}\geq 0\). Therefore, we have indeed \(f_{i,j}=-f_{i^{\prime},j+1}\).
In addition to the subgraph \(\widetilde{H}_{1}\) from the previous part, we will construct subgraphs \(\widetilde{H}_{2}\) and \(\widetilde{H}_{3}\) of \(\boldsymbol{\hat{u}}\)-type such that the subgraph \(\widehat{H}\) of \(H^{\prime}\) with edge set \(E(\widetilde{H}_{1})\cup E(\widetilde{H}_{2})\cup E(\widetilde{H}_{3})\) is connected. Now, \(f_{i,j}=-f_{i^{\prime},j+1}\in\mathbb{Z}\) whenever \(\{x_{i,j},x_{i^{\prime},j+1}\}\in E(\widehat{H})\). Since \(\widehat{H}\) is connected, this implies that \(\boldsymbol{F}\) is a multiple of \(\boldsymbol{A}\).
The graph \(\widetilde{H}_{2}\) is constructed from \(\widetilde{H}_{1}\) as follows. When \(j\) is odd, the zone \(\mathcal{Z}_{j}\) contains two long parallel edges in \(\widetilde{H}_{1}\); we cross them for \(\widetilde{H}_{2}\). When \(j\) is even, the zone \(\mathcal{Z}_{j}\) contains \(m-2\) parallel edges of slope \(-2\) in \(\widetilde{H}_{1}\); we replace \(m-3\) of them by parallel edges of slope \(-1\), and the remaining one by \(\{x_{1,j},x_{m,j+1}\}\). At this point, the reader is invited to look at the second graph of Figure 6. Meanwhile, note that for each \(j\), one has
\[\left\{\,x_{i,j},x_{i^{\prime},j+1}:\{x_{i,j},x_{i^{\prime},j+1}\}\in E( \widetilde{H}_{1})\,\right\}=\left\{\,x_{i,j},x_{i^{\prime},j+1}:\{x_{i,j},x_{ i^{\prime},j+1}\}\in E(\widetilde{H}_{2})\,\right\}.\]
For later reference, we denote this vertex set as \(\widetilde{V}_{j}\). It is crucial to observe that the edges \((E(\widetilde{H}_{1})\cup E(\widetilde{H}_{2}))\cap\mathcal{Z}_{j}\) define a connected subgraph over \(\widetilde{V}_{j}\).
The graph \(\widetilde{H}_{3}\) is constructed from \(\widetilde{H}_{1}\) as follows. For \(1\leq j\leq n-2\) with \(j\) being odd, we change the edges \(\{x_{1,j},x_{m-1,j+1}\}\) and \(\{x_{m-2,j+1},x_{m,j+2}\}\) in \(\widetilde{H}_{1}\) to the edges \(\{x_{1,j},x_{m-2,j+1}\}\) and \(\{x_{m-1,j+1},x_{m,j+2}\}\) in \(\widetilde{H}_{3}\). For \(1\leq j\leq n-2\) with \(j\) being even, we change the edges \(\{x_{1,j},x_{3,j+1}\}\) and \(\{x_{2,j+1},x_{m,j+2}\}\) in \(\widetilde{H}_{1}\) to the edges \(\{x_{1,j},x_{2,j+1}\}\) and \(\{x_{3,j+1},x_{m,j+2}\}\) in \(\widetilde{H}_{3}\). At this point, the reader is invited to look at the third graph of Figure 6. Note that for each \(j\), the vertex set
\[\left\{\,x_{i,j},x_{i^{\prime},j+1}:\{x_{i,j},x_{i^{\prime},j+1}\}\in E( \widetilde{H}_{3})\,\right\}\]
intersects both \(\widetilde{V}_{j}\) and \(\widetilde{V}_{j+1}\). This fact makes the combined subgraph \(\widehat{H}\) connected.
In summary, we have completed the proof for the case when \(n\) is odd. The case when \(n\) is even is analogous, so the details will be omitted. We only give the construction of the corresponding graphs \(\widetilde{H}_{1}\), \(\widetilde{H}_{2}\), and \(\widetilde{H}_{3}\) for the case \((m,n)=(6,8)\), see Figure 7.
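As an optional numerical check of step (2) in the preceding proof (purely illustrative and not part of the argument), one can verify that the coordinate sum of the candidate vector \(\hat{\boldsymbol{u}}\) is twice the value claimed in Proposition 4.7.

```python
# Illustration only: |u_hat| equals twice the claimed value of -a(K[H]).
import numpy as np

def u_hat(m, n):
    u = np.ones((m, n), dtype=int)   # 1-based indices of the text map to u[i-1, j-1]
    u[0, n - 1] = 0                  # entry (1, n)
    u[m - 1, 0] = 0                  # entry (m, 1)
    u[0, n - 2] = 2                  # entry (1, n-1)
    u[m - 1, 1] = 2 if n % 2 == 0 else m - 2   # entry (m, 2)
    return u

for m in range(4, 9):
    for n in range(4, 9):
        claimed = m * n // 2 if n % 2 == 0 else m * (n + 1) // 2 - 2
        assert int(u_hat(m, n).sum()) == 2 * claimed
print("|u_hat| equals twice the value claimed in Proposition 4.7 for 4 <= m, n <= 8")
```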
**Theorem 4.8**.: _Under the assumptions in Setting 3.1, we suppose additionally that \(m\geq 4\). Then, we have_
\[\operatorname{reg}(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})) =\operatorname{reg}(\mathcal{F}(\operatorname{in}_{\tau}(\mathcal{ J}_{K_{m},P_{n}})))\] \[=\begin{cases}(mn-3)-(m\cdot\frac{n}{2}),&\text{if $n$ is even},\\ (mn-3)-(m\cdot\frac{n+1}{2}-2),&\text{if $n$ is odd}.\end{cases}\]
Proof.: Since \(\operatorname{in}_{\tau}(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}}))=\mathcal{F}( \operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\), the two special fiber rings \(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}})\) and \(\mathcal{F}(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) have the same Hilbert series by [8, Proposition 2.4]. At the same time, these
Figure 7. Three subgraphs of \(\boldsymbol{\hat{u}}\)-type in the case \((m,n)=(6,8)\)
two special fiber rings are Cohen-Macaulay domains by Corollary 3.6. Thus, \(\operatorname{reg}(\mathcal{F}(\mathcal{J}_{K_{m},P_{n}}))=\operatorname{reg}( \mathcal{F}(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}})))\) by [20, Corollary 2.18].
Note that the algebras \(\mathcal{F}(\operatorname{in}_{\tau}(\mathcal{J}_{K_{m},P_{n}}))\) and \(\mathbb{K}[H]\) are isomorphic. Since \(m\geq 4\), we have \(\dim(\mathbb{K}[H])=mn-3\) by [35, Corollary 10.1.21]. The desired result then follows from Proposition 4.7 and the equality
\[\operatorname{reg}(\mathbb{K}[H])=\dim(\mathbb{K}[H])+a(\mathbb{K}[H])\]
in the Cohen-Macaulay case, see [35, Theorem 6.4.1].
## 5. Applications
In this section, we will explore the generalized binomial edge ideal of a simple graph and its blowup algebras. We will compare them with the corresponding parts of an induced subgraph. Our analysis will be based on the regularity results from previous sections, which serve as natural lower bounds for the comparison.
We will start with the comparison result in a more general setting. Recall that if \(G^{\prime}\) is an induced subgraph of a graph \(G\), then for the graded Betti numbers of the powers of the binomial edge ideals, one has \(\beta_{ij}(S/J^{s}_{G^{\prime}})\leq\beta_{ij}(S/J^{s}_{G})\) for all \(i,j\geq 0\) and \(s\geq 1\); see [23, Proposition 3.3]. This result for the classical binomial edge ideal can be easily generalized to the binomial edge ideal of a pair, as follows.
_Setting 5.1_.: For \(i=1,2\), let \(G_{i}\) be a simple graph on the vertex set \([n_{i}]\), and let \(H_{i}\) be an induced subgraph of \(G_{i}\). Correspondingly, we have \(\mathcal{J}_{G_{1},G_{2}}\), the binomial edge ideal of the pair \((G_{1},G_{2})\) in the polynomial ring \(\mathbb{K}[\mathbf{X}]\), as well as \(\mathcal{J}_{H_{1},H_{2}}\), the binomial edge ideal of the pair \((H_{1},H_{2})\) in the polynomial ring \(\mathbb{K}[\mathbf{Y}]\). If we consider \(\mathbf{X}\) to be an \(n_{1}\times n_{2}\) matrix of variables, then we can naturally regard \(\mathbf{Y}\) as a \(|V(H_{1})|\times|V(H_{2})|\) submatrix.
**Theorem 5.2**.: _Under the assumptions in Setting 5.1, we have_
\[\beta_{ij}(\mathbb{K}[\mathbf{Y}]/\mathcal{J}^{s}_{H_{1},H_{2}})\leq\beta_{ij}( \mathbb{K}[\mathbf{X}]/\mathcal{J}^{s}_{G_{1},G_{2}})\]
_for all \(i,j\) and \(s\geq 1\). In particular, we have_
\[\operatorname{reg}(\mathcal{J}^{s}_{H_{1},H_{2}})\leq\operatorname{reg}( \mathcal{J}^{s}_{G_{1},G_{2}})\qquad\text{and}\qquad\operatorname{pd}( \mathcal{J}^{s}_{H_{1},H_{2}})\leq\operatorname{pd}(\mathcal{J}^{s}_{G_{1},G_ {2}})\]
_for all \(s\geq 1\)._
Proof.: First, we show that \(\mathcal{J}^{s}_{H_{1},H_{2}}=\mathcal{J}^{s}_{G_{1},G_{2}}\cap\mathbb{K}[\mathbf{ Y}]\) for all \(s\geq 1\). Since the natural generators of \(\mathcal{J}^{s}_{H_{1},H_{2}}\) are automatically contained in \(\mathcal{J}^{s}_{G_{1},G_{2}}\), one has \(\mathcal{J}^{s}_{H_{1},H_{2}}\subseteq\mathcal{J}^{s}_{G_{1},G_{2}}\cap \mathbb{K}[\mathbf{Y}]\). For the converse inclusion, let \(g\in\mathcal{J}^{s}_{G_{1},G_{2}}\cap\mathbb{K}[\mathbf{Y}]\). We can write \(g\) as a finite sum
\[g=\sum_{\begin{subarray}{c}(e_{i},f_{i})\in E(G_{1})\times E(G_{2}),\\ 1\leq i\leq s\end{subarray}}h_{(e_{1},f_{1}),\ldots,(e_{s},f_{s})}p_{(e_{1},f_ {1})}\cdots p_{(e_{s},f_{s})},\]
where \(h_{(e_{1},f_{1}),\ldots,(e_{s},f_{s})}\in\mathbb{K}[\mathbf{X}]\). Now, consider the \(\mathbb{K}\)-algebra homomorphism \(\pi:\mathbb{K}[\mathbf{X}]\to\mathbb{K}[\mathbf{Y}]\) by setting
\[\pi(x_{i,j})=\begin{cases}x_{i,j},&\text{if $x_{i,j}$ is a variable in $\mathbf{Y}$},\\ 0,&\text{otherwise}.\end{cases}\]
Thus,
\[\pi(p_{(e,f)})=\begin{cases}p_{(e,f)},&\text{if $(e,f)\in E(H_{1})\times E(H_{2})$},\\ 0,&\text{otherwise}.\end{cases}\]
Since \(g\in\mathbb{K}[\mathbf{Y}]\), we have \(\pi(g)=g\). Therefore, we get
\[g =\sum_{\begin{subarray}{c}(e_{i},f_{i})\in E(G_{1})\times E(G_{2}),\\ 1\leq i\leq s\end{subarray}}\pi(h_{(e_{1},f_{1}),\ldots,(e_{s},f_{s})})\pi(p_{(e_{ 1},f_{1})})\cdots\pi(p_{(e_{s},f_{s})})\] \[=\sum_{\begin{subarray}{c}(e_{i},f_{i})\in E(H_{1})\times E(H_{2} ),\\ 1\leq i\leq s\end{subarray}}\pi(h_{(e_{1},f_{1}),\ldots,(e_{s},f_{s})})p_{(e_{1},f _{1})}\cdots p_{(e_{s},f_{s})}.\]
Thus, \(g\in\mathcal{J}^{s}_{H_{1},H_{2}}\). This completes our proof for \(\mathcal{J}^{s}_{H_{1},H_{2}}=\mathcal{J}^{s}_{G_{1},G_{2}}\cap\mathbb{K}[\mathbf{Y}]\).
Consequently, \(\mathbb{K}[\mathbf{Y}]/\mathcal{J}^{s}_{H_{1},H_{2}}\) is a \(\mathbb{K}\)-subalgebra of \(\mathbb{K}[\mathbf{X}]/\mathcal{J}^{s}_{G_{1},G_{2}}\). Let \(\overline{\pi}:\mathbb{K}[\mathbf{X}]/\mathcal{J}^{s}_{G_{1},G_{2}}\to\mathbb{K}[ \mathbf{Y}]/\mathcal{J}^{s}_{H_{1},H_{2}}\) be the homomorphism induced by \(\pi\). Since \(\pi(\mathcal{J}^{s}_{G_{1},G_{2}})\subseteq\mathcal{J}^{s}_{H_{1},H_{2}}\), the map \(\overline{\pi}\) is well-defined. Notice that the restriction of \(\overline{\pi}\) to \(\mathbb{K}[\mathbf{Y}]/\mathcal{J}^{s}_{H_{1},H_{2}}\) is the identity map. Thus, \(\overline{\pi}\) is surjective, and \(\mathbb{K}[\mathbf{Y}]/\mathcal{J}^{s}_{H_{1},H_{2}}\) is an algebra retract of \(\mathbb{K}[\mathbf{X}]/\mathcal{J}^{s}_{G_{1},G_{2}}\). Now, the expected inequalities follow from [29, Corollary 2.5].
**Corollary 5.3**.: _Let \(G\) be a simple graph and \(G^{\prime}\) be its induced subgraph. Then we have \(\operatorname{reg}(\mathcal{J}^{s}_{K_{m},G^{\prime}})\leq\operatorname{reg} (\mathcal{J}^{s}_{K_{m},G})\) for all \(s\geq 1\)._
Kumar proved in [28, Theorems 3.5 and 4.6] that if \(G^{\prime}\) is an induced subgraph of a graph \(G\), then \(\operatorname{reg}(\mathcal{R}(J_{G^{\prime}}))\leq\operatorname{reg}( \mathcal{R}(J_{G}))\) and \(\operatorname{reg}(\mathcal{F}(J_{G^{\prime}}))\leq\operatorname{reg}( \mathcal{F}(J_{G}))\) for the regularities of the blowup algebras of classical binomial edge ideals. We can generalize this to the binomial edge ideals of pairs.
**Theorem 5.4**.: _Under the assumptions in Setting 5.1, we have_
\[\operatorname{reg}(\mathcal{R}(\mathcal{J}_{H_{1},H_{2}}))\leq\operatorname{ reg}(\mathcal{R}(\mathcal{J}_{G_{1},G_{2}}))\]
_and_
\[\operatorname{reg}(\mathcal{F}(\mathcal{J}_{H_{1},H_{2}}))\leq\operatorname{ reg}(\mathcal{F}(\mathcal{J}_{G_{1},G_{2}})).\]
Proof.: Let \(\pi:\mathbb{K}[\mathbf{X}]\to\mathbb{K}[\mathbf{Y}]\) be the map defined in the proof of Theorem 5.2. We have \(\pi(\mathcal{J}^{s}_{G_{1},G_{2}})=\mathcal{J}^{s}_{H_{1},H_{2}}=\mathcal{J}^ {s}_{G_{1},G_{2}}\cap\mathbb{K}[\mathbf{Y}]\) for all \(s\geq 0\). This fact induces the graded embedding map
\[\iota:\mathcal{R}(\mathcal{J}_{H_{1},H_{2}}){\hookrightarrow}\mathcal{R}( \mathcal{J}_{G_{1},G_{2}})\]
as well as a graded epimorphism
\[\pi^{*}:\mathcal{R}(\mathcal{J}_{G_{1},G_{2}})\twoheadrightarrow\mathcal{R}( \mathcal{J}_{H_{1},H_{2}}).\]
Notice that \(\pi^{*}\circ\iota\) is the identity map on \(\mathcal{R}(\mathcal{J}_{H_{1},H_{2}})\). It follows that \(\mathcal{R}(\mathcal{J}_{H_{1},H_{2}})\) is an algebra retract of \(\mathcal{R}(\mathcal{J}_{G_{1},G_{2}})\).
Meanwhile, notice that
\[\mathcal{F}(\mathcal{J}_{G_{1},G_{2}})\cong\mathcal{R}(\mathcal{J}_{G_{1},G_{ 2}})\otimes_{\mathbb{K}[\mathbf{X}]}\mathbb{K}\cong\mathbb{K}[p_{(e,f)}:e\in E(G_{1 }),f\in E(G_{2})]\subseteq\mathbb{K}[\mathbf{X}],\]
and
\[\mathcal{F}(\mathcal{J}_{H_{1},H_{2}})\cong\mathcal{R}(\mathcal{J }_{H_{1},H_{2}})\otimes_{\mathbb{K}[\mathbf{X}]}\mathbb{K}\cong\mathcal{R}( \mathcal{J}_{H_{1},H_{2}})\otimes_{\mathbb{K}[\mathbf{Y}]}\mathbb{K}\] \[\cong\mathbb{K}[p_{(e,f)}:e\in E(H_{1}),f\in E(H_{2})]\subseteq \mathbb{K}[\mathbf{Y}]\subseteq\mathbb{K}[\mathbf{X}].\]
Therefore, we have a graded embedding map \(\iota^{\triangle}:\mathcal{F}(\mathcal{J}_{H_{1},H_{2}})\hookrightarrow \mathcal{F}(\mathcal{J}_{G_{1},G_{2}})\). On the other hand, by tensoring \(\pi^{*}\) with \(\mathbb{K}\), we have an induced graded epimorphism \(\pi^{\triangle}:\mathcal{F}(\mathcal{J}_{G_{1},G_{2}})\twoheadrightarrow \mathcal{F}(\mathcal{J}_{H_{1},H_{2}})\). Notice that \(\pi^{\triangle}\circ\iota^{\triangle}\) is the identity map on \(\mathcal{F}(\mathcal{J}_{H_{1},H_{2}})\). It follows that \(\mathcal{F}(\mathcal{J}_{H_{1},H_{2}})\) is an algebra retract of \(\mathcal{F}(\mathcal{J}_{G_{1},G_{2}})\).
To complete the proof, it remains to apply [29, Corollary 2.5].
**Corollary 5.5**.: _Let \(G\) be a graph and \(G^{\prime}\) be its induced subgraph. Then,_
\[\operatorname{reg}(\mathcal{R}(\mathcal{J}_{K_{m},G^{\prime}}))\leq \operatorname{reg}(\mathcal{R}(\mathcal{J}_{K_{m},G}))\]
_and_
\[\operatorname{reg}(\mathcal{F}(\mathcal{J}_{K_{m},G^{\prime}}))\leq \operatorname{reg}(\mathcal{F}(\mathcal{J}_{K_{m},G})).\]
**Corollary 5.6**.: _Let \(G\) be a graph which contains an induced path with \(n\) vertices. Then, we have_
\[\operatorname{reg}\left(\frac{S}{\mathcal{J}^{t}_{K_{m},G}}\right)\geq 2(t-1)+(n-1)\]
_for each \(t\geq 1\). Furthermore,_
\[\operatorname{reg}(\mathcal{R}(\mathcal{J}_{K_{m},G}))\geq(m-1)\left\lfloor\frac {n}{2}\right\rfloor+\left\lfloor\frac{n-1}{2}\right\rfloor,\]
_and_
\[\operatorname{reg}(\mathcal{F}(\mathcal{J}_{K_{m},G}))\geq\begin{cases}(mn-3)-(m\cdot\frac{n}{2}),&\text{if $n$ is even},\\ (mn-3)-(m\cdot\frac{n+1}{2}-2),&\text{if $n$ is odd}.\end{cases}\]
Proof.: These results follow from Theorems 3.11, 4.2, 4.8, 5.2 and 5.4.
_Acknowledgment_.: The authors are grateful to the software systems Macaulay2 [15] and Normaliz [3] for serving as excellent sources of inspiration. This work is supported by the Natural Science Foundation of Jiangsu Province (No. BK20221353). In addition, the first author is partially supported by the Anhui Initiative in Quantum Information Technologies (No. AHY150200) and the "Innovation Program for Quantum Science and Technology" (2021ZD0302902). The second author is supported by the Foundation of the Priority Academic Program Development of Jiangsu Higher Education Institutions.
|
2306.00165 | Identifying invariant solutions of wall-bounded three-dimensional shear
flows using robust adjoint-based variational techniques | Invariant solutions of the Navier-Stokes equations play an important role in
the spatiotemporally chaotic dynamics of turbulent shear flows. Despite the
significance of these solutions, their identification remains a computational
challenge, rendering many solutions inaccessible and thus hindering progress
towards a dynamical description of turbulence in terms of invariant solutions.
We compute equilibria of three-dimensional wall-bounded shear flows using an
adjoint-based matrix-free variational approach. To address the challenge of
computing pressure in the presence of solid walls, we develop a formulation
that circumvents the explicit construction of pressure and instead employs the
influence matrix method. Together with a data-driven convergence acceleration
technique based on dynamic mode decomposition, this yields a practically
feasible alternative to state-of-the-art Newton methods for converging
equilibrium solutions. We compute multiple equilibria of plane Couette flow
starting from inaccurate guesses extracted from a turbulent time series. The
variational method outperforms Newton(-hookstep) iterations in successfully
converging from poor initial guesses, suggesting a larger convergence radius. | Omid Ashtari, Tobias M. Schneider | 2023-05-31T20:20:14Z | http://arxiv.org/abs/2306.00165v2 | Identifying invariant solutions of wall-bounded three-dimensional shear flows using robust adjoint-based variational techniques
###### Abstract
Invariant solutions of the Navier-Stokes equations play an important role in the spatiotemporally chaotic dynamics of turbulent shear flows. Despite the significance of these solutions, their identification remains a computational challenge, rendering many solutions inaccessible and thus hindering progress towards a dynamical description of turbulence in terms of invariant solutions. We compute equilibria of three-dimensional wall-bounded shear flows using an adjoint-based matrix-free variational approach. To address the challenge of computing pressure in the presence of solid walls, we develop a formulation that circumvents the explicit construction of pressure and instead employs the influence matrix method. Together with a data-driven convergence acceleration technique based on dynamic mode decomposition, this yields a practically feasible alternative to state-of-the-art Newton methods for converging equilibrium solutions. We successfully converge multiple equilibria of plane Couette flow starting from inaccurate guesses extracted from a turbulent time series. The variational method significantly outperforms the standard Newton-hookstep method, demonstrating its superior robustness and suggesting a considerably larger convergence radius.
dynamical systems approach to turbulence, wall-bounded shear flows, invariant solutions, matrix-free numerical methods, adjoint methods, variational methods
## 1 Introduction
Viewing fluid turbulence as a deterministic chaotic dynamical system has revealed new insights beyond what can be achieved through a purely statistical approach (see reviews by Kawahara _et al._ (2012) and Graham & Floryan (2021)). The idea for a dynamical description by envisioning turbulence as a chaotic trajectory in the infinite-dimensional state space of the Navier-Stokes equations dates back to the seminal work of Hopf (1948). A remarkable step in bridging the gap between ideas from dynamical systems theory and the practical study of turbulence in this framework has been the numerical computation of _invariant solutions_ - an advance that did not happen until the 1990s. Invariant solutions are non-chaotic solutions to the governing equations with simple dependence on time. These include equilibria (Nagata (1990)), travelling waves (Faisst & Eckhardt (2003); Wedin & Kerswell (2004)), periodic and relative periodic orbits (Kawahara & Kida (2001); Chandler & Kerswell (2013); Budanur _et al._ (2017)) and invariant tori (Parker & Schneider (2022); Parker _et al._ (2023)). In the dynamical description, the chaotic trajectory of the turbulent dynamics transiently, yet recurringly, visits the neighbourhood of the unstable invariant
solutions embedded in the state space of the evolution equations. In this picture, therefore, unstable invariant solutions serve as the building blocks supporting the turbulent dynamics, and extracting them is the key for studying turbulence in the dynamical systems framework.
Equilibria of plane Couette flow (PCF) numerically computed by Nagata (1990) were the first nontrivial invariant solutions discovered in a wall-bounded three-dimensional (3D) fluid flow. Despite their lack of temporal variation, equilibrium solutions play an important role in characterising the dynamics of chaotic flows. In PCF for instance, Gibson _et al._ (2008, 2009) demonstrate how the chaotic dynamics is organised by coexisting equilibrium solutions together with their stable and unstable manifolds; Schneider _et al._ (2010) and Gibson & Brand (2014) compute equilibria that capture localisation in the spanwise direction; Brand & Gibson (2014) compute equilibria that capture localisation in both streamwise and spanwise directions; and Reetz _et al._ (2019) identify an equilibrium solution underlying self-organised oblique turbulent-laminar stripes. Despite the successes in relating flow properties to unstable equilibria, only a relatively small number of isolated equilibrium solutions have been identified. This highlights the challenges inherent in the computational identification of such solutions in very high-dimensional fluid flow problems.
One approach to computing equilibrium solutions is to consider a _root finding problem_. Equilibria of the dynamical system \(\partial u/\partial t=r(u)\) are, by definition, roots of the nonlinear operator governing the time evolution, \(r(u)=0\). Irrespective of the dynamical stability of the equilibrium solution, the root finding problem can be solved by Newton(-Raphson) iterations. Newton iterations are popular because of their locally quadratic convergence. However, employing Newton iterations for solving the root finding problem has two principal drawbacks: For a system described by \(N\) degrees of freedom, the update vector in each iteration is the solution to a linear system of equations whose coefficient matrix is the \(N\times N\) Jacobian. Solving this large system of equations and the associated quadratically scaling memory requirement are too costly for very high-dimensional, strongly coupled fluid flow problems. In addition to poor scaling, Newton iterations typically have a small radius of convergence, meaning that the algorithm needs to be initialised with an extremely accurate initial guess in order to converge successfully. Finding sufficiently accurate guesses is not simple even for weakly chaotic flows close to the onset of turbulence. Newton-GMRES-hookstep is the state-of-the-art matrix-free variant of the Newton method commonly used for computing invariant solutions of fluid flows. This method defeats the scaling drawback by employing the generalised minimal residual (GMRES) method and approximating the update vector in a Krylov subspace. In addition, the robustness of the convergence is improved via hook-step trust-region optimisation. Newton-GMRES-hookstep thereby enlarges the basin of convergence of Newton iterations. Yet, requiring an accurate initial guess is still a bottleneck of this method, and identifying unstable equilibria remains challenging.
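As an illustration of the matrix-free Newton-Krylov idea on a toy system (this is not the solver used in the present work, and the hookstep trust-region safeguard is omitted), SciPy's `newton_krylov` routine approximates Jacobian-vector products by finite differences and solves each Newton step with an iterative Krylov method:

```python
# Illustration only: matrix-free Newton-Krylov root finding for a small toy system.
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    # toy dynamical system r(u); its equilibria are (0, 0) and (+-1, 0)
    x, y = u
    return np.array([y, -0.1 * y + x - x**3])

u0 = np.array([0.8, 0.1])                      # initial guess
u_star = newton_krylov(residual, u0, f_tol=1e-10)
print(u_star)                                  # close to the equilibrium (1, 0) for this guess
```

For a two-dimensional example the Krylov machinery is of course unnecessary; the point is that the same call scales to large discretised systems because the Jacobian is never formed explicitly.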
An alternative to the root finding setup is to view the problem of computing an equilibrium solution as an _optimisation problem_. Deviation of a flow field from being an equilibrium solution can be penalised by the norm of the to-be-zeroed right-hand side operator, \(\|r(u)\|\). The absolute minima of this cost function, \(\|r(u)\|=0\), correspond to equilibrium solutions of the system. Therefore, the problem of finding equilibria can be recast as the minimisation of the cost function. A matrix-free method is crucial for solving this minimisation problem in very high-dimensional fluid flows. Farazmand (2016) proposed an adjoint-based minimisation technique to find equilibria and travelling waves of a 2D Kolmogorov flow. The adjoint calculations allow the gradient of the cost function to be constructed analytically as an explicit function of the current flow field. This results in a matrix-free gradient-descent algorithm whose memory requirement scales linearly with the size of the problem. The adjoint-based minimisation method is significantly more robust to inaccurate initial guesses in comparison to its alternatives based on solving a root finding problem using Newton iterations. This improvement, however, is obtained by sacrificing the quadratic convergence of the Newton iterations and exhibiting slow convergence. In the context
of fluid mechanics, the variational approach has been successfully applied to the 2D Kolmogorov flows (see Farazmand (2016); Parker & Schneider (2022)).
Despite the robust convergence and favourable scaling properties of the adjoint-based minimisation method, it has not been applied to 3D wall-bounded flows. Beyond the high-dimensionality of the 3D wall-bounded flows, the main challenge in the application of this method lies in handling the nonlinear, nonlocal pressure term. Constructing the pressure field associated with an instantaneous divergence-free velocity field is straightforward in a doubly periodic 2D (or triply periodic 3D) flow represented in Fourier basis. However, computing pressure in the presence of walls is far more complex. Thus, successfully implementing the adjoint-descent method for wall-bounded flows hinges on resolving the challenge of dealing with pressure.
We propose an algorithm for computing equilibrium solutions of wall-bounded flows using the adjoint-descent minimisation method. The proposed algorithm circumvents the explicit construction of pressure, thereby overcoming the inherent challenge of dealing with pressure in the application of the adjoint-descent method to wall-bounded flows. We construct equilibria of plane Couette flow, and discuss the application of the introduced method to other wall-bounded flows and other types of invariant solutions where the challenge of dealing with pressure exists analogously. To accelerate the convergence of the algorithm we propose a data-driven procedure which takes advantage of the almost linear behaviour of the adjoint-descent dynamics in the vicinity of an equilibrium solution. The acceleration technique approximates the linear dynamics using dynamic mode decomposition, and thereby approximates the asymptotic solution of the adjoint-descent dynamics. The large basin of convergence together with the improved convergence properties renders the adjoint-descent method a viable alternative to the state-of-the-art Newton method.
The remainder of the manuscript is structured as follows: The adjoint-based variational method for constructing equilibrium solutions is introduced in a general setting in §2. The adjoint-descent dynamics is derived for wall-bounded shear flows in §3, and an algorithm for numerically integrating the derived dynamics is presented in §4. The method is applied to plane Couette flow in §5 where the convergence of multiple equilibria is demonstrated. The data-driven procedure for accelerating the convergence is discussed in §6. Finally, the article is summarised and concluding remarks are provided in §7.
## 2 Adjoint-descent method for constructing equilibrium solutions
Consider a general autonomous dynamical system
\[\frac{\partial\mathbf{u}}{\partial t}=\mathbf{r}(\mathbf{u}), \tag{1}\]
where \(\mathbf{u}\) is an \(n\)-dimensional real-valued field belonging to an inner product space \(\mathcal{M}\subseteq\mathbb{R}^{n}\), defined over a \(d\)-dimensional spatial domain \(\mathbf{x}\in\Omega\subseteq\mathbb{R}^{d}\) and varying with time \(t\in\mathbb{R}\). The evolution of \(\mathbf{u}\) is governed by the smooth nonlinear operator \(\mathbf{r}\) subject to time-independent boundary conditions at \(\partial\mathbf{\Omega}\), the boundary of \(\Omega\). Equilibrium solutions of this dynamical system are \(\mathbf{u}^{*}\in\mathcal{M}\) for which
\[\mathbf{r}(\mathbf{u}^{*})=\mathbf{0}. \tag{2}\]
The residual of Equation (2) is not zero for non-equilibrium states \(\mathbf{u}\neq\mathbf{u}^{*}\). We thus penalise non-equilibrium states by the non-negative cost function \(J^{2}\) defined as
\[J^{2}=\left\langle\mathbf{r}(\mathbf{u}),\mathbf{r}(\mathbf{u})\right\rangle, \tag{3}\]
where \(\left\langle\cdot,\cdot\right\rangle\) denotes the inner product defined on \(\mathcal{M}\). The cost function takes zero value if and only if \(\mathbf{u}=\mathbf{u}^{*}\). We thereby recast the problem of finding equilibrium solutions \(\mathbf{u}^{*}\) as a minimisation
problem over \(\mathcal{M}\), and look for the global minima of \(J^{2}\) at which \(J^{2}=0\), following the arguments of Farazmand (2016).
In order to find minima of \(J^{2}\), we construct another dynamical system in \(\mathcal{M}\) whose evolution monotonically decreases the cost function \(J^{2}\). The objective is to define an evolution equation
\[\frac{\partial\mathbf{u}}{\partial\tau}=\mathbf{g}(\mathbf{u}), \tag{4}\]
where the choice of the operator \(\mathbf{g}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) guarantees
\[\frac{\partial J^{2}}{\partial\tau}\leqslant 0\,;\quad\forall\tau. \tag{5}\]
Here, \(\tau\) is a fictitious time that parametrizes the evolution governed by the constructed dynamics. The rate of change of \(J^{2}\) along trajectories of the dynamical system (4) is
\[\frac{\partial J^{2}}{\partial\tau}=2\left\langle\mathcal{L}(\mathbf{u}; \mathbf{g}),\mathbf{r}(\mathbf{u})\right\rangle, \tag{6}\]
where \(\mathcal{L}(\mathbf{u};\mathbf{g})\) is the directional derivative of \(\mathbf{r}(\mathbf{u})\) along \(\partial\mathbf{u}/\partial\tau=\mathbf{g}\):
\[\mathcal{L}(\mathbf{u};\mathbf{g})=\lim_{\epsilon\to 0}\frac{ \mathbf{r}(\mathbf{u}+\epsilon\mathbf{g})-\mathbf{r}(\mathbf{u})}{\epsilon}. \tag{7}\]
We can rewrite Equation (6) as
\[\frac{\partial J^{2}}{\partial\tau}=2\left\langle\mathcal{L}^{\dagger}( \mathbf{u};\mathbf{r}),\mathbf{g}(\mathbf{u})\right\rangle, \tag{8}\]
where \(\mathcal{L}^{\dagger}\) is the adjoint operator of the directional derivative \(\mathcal{L}\), with the following definition:
\[\left\langle\mathcal{L}(\mathbf{v};\mathbf{v}^{\prime}),\mathbf{v}^{\prime \prime}\right\rangle=\left\langle\mathcal{L}^{\dagger}(\mathbf{v};\mathbf{v}^{ \prime\prime}),\mathbf{v}^{\prime}\right\rangle;\quad\forall\ \mathbf{v},\mathbf{v}^{\prime},\mathbf{v}^{\prime\prime}\in\mathcal{M}. \tag{9}\]
To guarantee the monotonic decrease of \(J^{2}\) with \(\tau\) we choose
\[\mathbf{g}(\mathbf{u})=-\mathcal{L}^{\dagger}(\mathbf{u};\mathbf{r}). \tag{10}\]
This choice results in monotonic decrease of \(J^{2}\) along solution trajectories of the adjoint dynamical system (4):
\[\frac{\partial J^{2}}{\partial\tau}=-2\left\langle\mathcal{L}^{\dagger}( \mathbf{u};\mathbf{r}),\mathcal{L}^{\dagger}(\mathbf{u};\mathbf{r})\right\rangle \leqslant 0. \tag{11}\]
In summary, in order to find equilibria of \(\partial\mathbf{u}/\partial t=\mathbf{r}(\mathbf{u})\) the variational approach proposed by Farazmand (2016) constructs a globally contracting dynamical system \(\partial\mathbf{u}/\partial\tau=\mathbf{g}(\mathbf{u})\), that is essentially the gradient descent of the cost function \(J^{2}\). Every trajectory of the constructed dynamical system eventually reaches a stable equilibrium corresponding to a minimum of the cost function. Equilibria of the original dynamics are equilibria of the adjoint dynamics at which the cost function takes its global minimum value \(J^{2}=0\). However, the adjoint dynamics might have other equilibria that correspond to a local minimum of the cost function with \(J^{2}>0\), and are not equilibria of the original dynamics. This is schematically illustrated in Figure 1. Finding equilibria of \(\partial\mathbf{u}/\partial t=\mathbf{r}(\mathbf{u})\) requires integrating the adjoint dynamics \(\partial\mathbf{u}/\partial\tau=\mathbf{g}(\mathbf{u})\) forward in the fictitious time \(\tau\). The solutions obtained at \(\tau\rightarrow\infty\) for which \(J^{2}=0\) are equilibria of the original system. Otherwise, when the trajectory gets stuck in a local minimum of the cost function, the search fails and the adjoint dynamics should be integrated from another initial condition.
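Before specialising to wall-bounded flows, it may help to see the procedure in its simplest form. The following minimal sketch (ours, for illustration only) applies the adjoint-descent dynamics (4) with the choice (10) to a two-dimensional toy system, for which the adjoint of the directional derivative reduces to the transpose of the Jacobian matrix; the fictitious-time dynamics is integrated with explicit Euler steps.

```python
# Illustration only: adjoint-descent for a toy two-dimensional dynamical system.
import numpy as np

def r(u):
    # original dynamics du/dt = r(u); equilibria at (0, 0) and (+-1, 0)
    x, y = u
    return np.array([y, -0.1 * y + x - x**3])

def jacobian_T(u):
    # transpose of dr/du, i.e. the adjoint operator for this finite-dimensional example
    x, _ = u
    return np.array([[0.0, 1.0 - 3.0 * x**2],
                     [1.0, -0.1]])

u = np.array([0.4, -0.3])          # an inaccurate initial guess
dtau = 0.05                        # fictitious-time step
for _ in range(20000):
    g = -jacobian_T(u) @ r(u)      # descent direction g(u), eq. (10)
    u = u + dtau * g               # explicit Euler step in the fictitious time tau

# J^2 decreases monotonically along the descent; it vanishes if u has reached an
# equilibrium of r, and stalls at a positive local minimum otherwise.
print(u, r(u) @ r(u))
```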
## 3 Application to the wall-bounded shear flows
### Governing equations
We consider the flow in a three-dimensional rectangular domain \(\Omega\) of non-dimensional size \(x\in[0,L_{x})\), \(y\in[-1,+1]\) and \(z\in[0,L_{z})\). The domain is bounded in \(y\) between two parallel plates, and is periodic in the lateral directions \(x\) and \(z\). Incompressible, isotherm flow of a Newtonian fluid is governed by the Navier-Stokes equations (NSE). The non-dimensional, perturbative form of the NSE reads
\[\frac{\partial\mathbf{u}}{\partial t}=-\left[(\mathbf{u}_{b}\cdot \nabla)\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}_{b}+(\mathbf{u}\cdot \nabla)\mathbf{u}\right]-\nabla p+\frac{1}{Re}\Delta\mathbf{u}=:\mathcal{N}( \mathbf{u},p), \tag{1}\] \[\nabla\cdot\mathbf{u}=0. \tag{2}\]
Here, \(Re\) is the Reynolds number and \(\mathbf{u}_{b}\) is the laminar base flow velocity field. \(\mathbf{u}\) and \(p\) are the deviations of the total velocity and pressure from the base flow velocity and pressure fields, respectively. For all common driving mechanisms including the motion of walls in the \(xz\) plane, externally imposed pressure differences, or injection/suction through the walls, the laminar base flow satisfies the inhomogeneous boundary conditions (BCs) and absorbs body forces. Consequently, the perturbative Navier-Stokes equations (1) and (2) are subject to the boundary conditions
\[\mathbf{u}(x,y=\pm 1,z;t)=\mathbf{0}, \tag{3}\] \[\left[\mathbf{u},p\right](x=0,y,z;t)=\left[\mathbf{u},p\right](x =L_{x},y,z;t),\] (4) \[\left[\mathbf{u},p\right](x,y,z=0;t)=\left[\mathbf{u},p\right](x,y,z=L_{z};t). \tag{5}\]
The canonical wall-bounded shear flows such as plane Couette flow, plane Poiseuille flow and asymptotic suction boundary layer flow are governed by the incompressible NSE (1)-(5) where \(\mathbf{u}_{b}\) differentiates them from one another. We derive the adjoint-descent dynamics based on a general base flow velocity field \(\mathbf{u}_{b}\), and in §4 demonstrate the adjoint-based method for the specific case of plane Couette flow.
Figure 1: Replacing the original dynamics with the gradient descent of the cost function \(J=\|\mathbf{r}(\mathbf{u})\|\) by the adjoint-descent method. Panel (a) schematically shows the trajectories and two equilibria of the original system parametrized by the physical time \(t\), while panel (b) shows contours of \(J\) and sample trajectories of its gradient flow parametrized by the fictitious time \(\tau\). Trajectories of the adjoint-descent dynamics converge to a stable fixed point, that is either an equilibrium of the original dynamics, where the global minimum value of \(J=0\) is achieved, or a state at which \(J\) takes a local minimum value.
The incompressible NSE consists of one vector-valued evolution equation for the velocity and one constraint which implicitly governs the evolution of the pressure. Therefore, we extend the definition of the residual, and define the cost function such that residuals of both Equations (1) and (2) are included. Otherwise, the derivation follows §2.
### The search space
We define the inner product space of general flow fields as
\[\mathcal{P}=\left\{\begin{bmatrix}\mathbf{u}\\ p\end{bmatrix}\left|\begin{array}{l}\mathbf{u}:\Omega\to\mathbb{R}^{3}\\ p:\Omega\to\mathbb{R}\\ \mathbf{u}\text{ and }p\text{ are periodic in }x\text{ and }z\end{array}\right.\right\}, \tag{6}\]
where \(\mathbf{u}\) and \(p\) are sufficiently smooth functions of space. \(\mathcal{P}\) is endowed with the real-valued inner product
\[\left\langle\cdot,\cdot\right\rangle:\mathcal{P}\times\mathcal{P}\to\mathbb{R},\qquad\left\langle\mathbf{U},\mathbf{U}^{\prime}\right\rangle=\int_{\Omega}\left(\mathbf{u}\cdot\mathbf{u}^{\prime}+p\,p^{\prime}\right)\mathrm{d}\mathbf{x}, \tag{7}\]
where \(\mathbf{U}=[\mathbf{u},p]\) and \(\mathbf{U}^{\prime}=[\mathbf{u}^{\prime},p^{\prime}]\) are elements of \(\mathcal{P}\).
Here \(\cdot\) is the conventional Euclidean inner product in \(\mathbb{R}^{3}\). A physical incompressible velocity field is divergence-free, \(\nabla\cdot\mathbf{u}=0\), and satisfies the no-slip condition, \(\mathbf{u}=\mathbf{0}\), at the walls. The physical pressure associated with a physical velocity field ensures that under the NSE dynamics the velocity remains divergence-free,
\[\partial(\nabla\cdot\mathbf{u})/\partial t=\nabla\cdot\mathcal{N}(\mathbf{u}, p)=0, \tag{8}\]
and the no-slip boundary conditions \(\mathbf{u}(y=\pm 1)=\mathbf{0}\) are preserved,
\[\partial\mathbf{u}/\partial t\big{|}_{y=\pm 1}=\mathcal{N}(\mathbf{u},p)\big{|}_ {y=\pm 1}=\mathbf{0}. \tag{9}\]
Therefore, the space of physical flow fields is defined as
\[\mathcal{M}=\left\{\begin{bmatrix}\mathbf{u}\\ p\end{bmatrix}\in\mathcal{P}_{0}\left|\begin{array}{l}\nabla\cdot\mathbf{u}=0 \\ \nabla\cdot\mathcal{N}(\mathbf{u},p)=0\\ \mathcal{N}(\mathbf{u},p)\big{|}_{y=\pm 1}=\mathbf{0}\end{array}\right.\right\}, \tag{10}\]
where \(\mathcal{P}_{0}\) is the subset of \(\mathcal{P}\) whose vector-valued component satisfies the homogeneous Dirichlet BC at the walls:
\[\mathcal{P}_{0}=\left\{\begin{bmatrix}\mathbf{u}\\ p\end{bmatrix}\in\mathcal{P}\left|\begin{array}{l}\mathbf{u}(y=\pm 1)=\mathbf{0} \end{array}\right.\right\}. \tag{11}\]
Equilibrium solutions of the NSE are \([\mathbf{u}^{*},p^{*}]\in\mathcal{M}\) for which
\[\mathcal{N}(\mathbf{u}^{*},p^{*})=\mathbf{0}. \tag{12}\]
We aim to impose the zero-divergence constraint together with the defining property of an equilibrium solution via the variational minimisation discussed in §2. To that end, we consider an evolution in the space of general flow fields \(\mathbf{U}=[\mathbf{u},p]\in\mathcal{P}_{0}\) in which the velocity and the pressure component are evolved independently. A general flow field \(\mathbf{U}\in\mathcal{P}_{0}\) need not satisfy either the defining property of an equilibrium solution or the zero-divergence constraint. Therefore, we define the residual field \(\mathbf{R}\in\mathcal{P}\) associated with a general flow field as
\[\mathbf{R}=\begin{bmatrix}\mathbf{r}_{1}\\ r_{2}\end{bmatrix}=\begin{bmatrix}\mathcal{N}(\mathbf{u},p)\\ \nabla\cdot\mathbf{u}\end{bmatrix}, \tag{13}\]
and the cost function \(J^{2}\) as
\[J^{2}=\int_{\Omega}\left(\mathcal{N}^{2}(\mathbf{u},p)+(\nabla\cdot\mathbf{u} )^{2}\right)\mathrm{d}\mathbf{x}=\int_{\Omega}\left(\mathbf{r}_{1}\cdot \mathbf{r}_{1}+r_{2}^{2}\right)\mathrm{d}\mathbf{x}=\left\langle\mathbf{R}, \mathbf{R}\right\rangle. \tag{14}\]
At the global minima of the cost function, \(J^{2}=0\), the defining property of an equilibrium solution (3.12) and the incompressibility constraint (3.2) are both satisfied. The operator \(\mathbf{G}=[\mathbf{g}_{1},g_{2}]\) acting on general flow fields \(\mathbf{U}=[\mathbf{u},p]\in\mathcal{P}_{0}\) is constructed such that an equilibrium solution \([\mathbf{u}^{*},p^{*}]\) is obtained by evolving the variational dynamics
\[\frac{\partial\mathbf{U}}{\partial\tau}=\frac{\partial}{\partial\tau}\left[ \begin{matrix}\mathbf{u}\\ p\end{matrix}\right]=\left[\begin{matrix}\mathbf{g}_{1}\\ g_{2}\end{matrix}\right]. \tag{3.15}\]
The operator \(\mathbf{G}\) is derived following the adjoint-based method described in §2 to guarantee the monotonic decrease of the cost function along trajectories of the variational dynamics (3.15).
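To fix ideas, the same construction in a finite-dimensional setting reduces to a few lines of code. The following Python sketch (an illustration, not part of the paper's implementation; the toy system, step size and tolerance are arbitrary choices) descends the cost \(J=\|\mathbf{f}(\mathbf{x})\|\) of a two-dimensional system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) by evolving \(\mathrm{d}\mathbf{x}/\mathrm{d}\tau=-(\partial\mathbf{f}/\partial\mathbf{x})^{\top}\mathbf{f}(\mathbf{x})\), the finite-dimensional analogue of (3.15):

```python
import numpy as np

def f(x):
    # Toy dynamical system dx/dt = f(x); its equilibria satisfy f(x) = 0.
    return np.array([x[1], -np.sin(x[0]) - 0.2 * x[1]])

def jac(x):
    # Analytical Jacobian df/dx of the toy system.
    return np.array([[0.0, 1.0],
                     [-np.cos(x[0]), -0.2]])

def adjoint_descent(x, dtau=1e-2, tol=1e-10, max_steps=200_000):
    """Gradient flow of J^2 = |f(x)|^2: dx/dtau = -(df/dx)^T f(x)."""
    for _ in range(max_steps):
        r = f(x)                      # residual: right-hand side of the toy system
        if np.linalg.norm(r) < tol:   # J ~ 0, an equilibrium has been found
            break
        x = x - dtau * jac(x).T @ r   # forward Euler step of the descent dynamics
    return x

x_star = adjoint_descent(np.array([3.0, 0.2]))
print(x_star, np.linalg.norm(f(x_star)))  # converges to the nearby equilibrium (pi, 0)
```

The cost decreases monotonically along this flow for any differentiable \(\mathbf{f}\); the PDE setting of this section replaces the transposed Jacobian by the adjoint operator derived next.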
### Adjoint operator for the NSE
The variational dynamics (3.15) must ensure that the flow field \(\mathbf{U}\) remains within \(\mathcal{P}_{0}\), thus \(\mathbf{U}\) is periodic in \(x\) and \(z\), and its velocity component \(\mathbf{u}\) takes zero value at the walls for all \(\tau\). In order for these properties of \(\mathbf{U}\) to be preserved under the variational dynamics, the operator \(\mathbf{G}\) must be periodic in \(x\) and \(z\), and \(\mathbf{g}_{1}=\partial\mathbf{u}/\partial\tau\) must take zero value at the walls, meaning that \(\mathbf{G}\in\mathcal{P}_{0}\). In addition, we choose the residual \(\mathbf{R}\) to lie within \(\mathcal{P}_{0}\). The periodicity of \(\mathbf{R}\) in \(x\) and \(z\) automatically results from the spatial periodicity of \(\mathbf{U}\) in these two directions. However, we enforce the condition \(\mathbf{r}_{1}(\mathbf{u},p)=\mathcal{N}(\mathbf{u},p)=\mathbf{0}\) at the walls. With the choice of \(\mathbf{U},\mathbf{R},\mathbf{G}\in\mathcal{P}_{0}\), the flow field remains within \(\mathcal{P}_{0}\) as desired. Following this choice, all the boundary terms resulting from partial integrations in the derivation of the adjoint operator cancel out (see Appendix A), and the adjoint of the directional derivative of \(\mathbf{R}(\mathbf{U})\) along \(\mathbf{G}\) is obtained as
\[\mathcal{L}_{1}^{\dagger}=(\nabla\mathbf{r}_{1})\;(\mathbf{u}_{b} +\mathbf{u})-(\nabla(\mathbf{u}+\mathbf{u}_{b}))^{\top}\,\mathbf{r}_{1}+\frac {1}{Re}\Delta\mathbf{r}_{1}+r_{2}\mathbf{r}_{1}-\nabla r_{2}, \tag{3.16}\] \[\mathcal{L}_{2}^{\dagger}=\nabla\cdot\mathbf{r}_{1}. \tag{3.17}\]
Therefore, with \(\mathbf{G}=-\mathcal{L}^{\dagger}(\mathbf{U};\mathbf{R})\) the variational dynamics takes the form
\[\frac{\partial\mathbf{u}}{\partial\tau}= -\mathcal{L}_{1}^{\dagger}=-(\nabla\mathbf{r}_{1})\;(\mathbf{u}_ {b}+\mathbf{u})+(\nabla(\mathbf{u}+\mathbf{u}_{b}))^{\top}\,\mathbf{r}_{1}- \frac{1}{Re}\Delta\mathbf{r}_{1}-r_{2}\mathbf{r}_{1}+\nabla r_{2}, \tag{3.18}\] \[\frac{\partial p}{\partial\tau}= -\mathcal{L}_{2}^{\dagger}=-\nabla\cdot\mathbf{r}_{1}, \tag{3.19}\]
subject to the following BCs:
\[\mathbf{u}(x,y=\pm 1,z;\tau)=\mathbf{0}, \tag{3.20}\] \[[\mathbf{u},p](x=0,y,z;\tau)=[\mathbf{u},p](x=L_{x},y,z;\tau),\] (3.21) \[[\mathbf{u},p](x,y,z=0;\tau)=[\mathbf{u},p](x,y,z=L_{z};\tau),\] (3.22) \[\left[-\mathbf{u}_{b}\cdot\nabla\mathbf{u}-\nabla p+\frac{1}{Re }\Delta\mathbf{u}\right]_{y=\pm 1}=\mathbf{0}, \tag{3.23}\]
where the BCs (3.20)-(3.22) are properties of \(\mathbf{U}\) as an element of \(\mathcal{P}_{0}\), while the BC (3.23) is the choice of \(\mathbf{r}_{1}=\mathcal{N}(\mathbf{u},p)=\mathbf{0}\) at the walls obtained by substituting \(\mathbf{u}(y=\pm 1)=\mathbf{0}\) in the definition of \(\mathcal{N}(\mathbf{u},p)\). Note that in the absence of solid walls in a doubly periodic 2D or a triply periodic 3D domain, the BCs (3.20) and (3.23) do not apply. Instead, the fields are subject to periodic BCs only.
Numerically imposing the BCs (3.20)-(3.23) while evolving Equations (3.18)-(3.19) forward in the fictitious time is not straightforward. Consequently, instead of advancing the derived variational dynamics directly, we project the adjoint-descent dynamics on the space of physical flow fields \(\mathcal{M}\). This allows us to employ the influence matrix method (Kleiser & Schumann (1980)) to integrate the adjoint-descent dynamics.
### Handling pressure: Projection on the space of physical flow fields
To obtain a numerically tractable variational dynamics, we project the adjoint-descent dynamics (3.18)-(3.23) from \(\mathcal{P}_{0}\) onto the space of physical flow fields \(\mathcal{M}\subset\mathcal{P}_{0}\). Within \(\mathcal{M}\), pressure is no longer governed by an explicit evolution equation, but by a Poisson equation with a velocity-dependent source term. Let \(p=\mathcal{P}\left[\mathbf{u}\right]\) denote the solution of this Poisson equation, i.e. the pressure associated with an instantaneous divergence-free velocity \(\mathbf{u}\). To preserve the zero divergence of \(\mathbf{u}\), the evolution of the velocity, \(\partial\mathbf{u}/\partial\tau=\mathbf{g}_{1}\), is projected onto the space of divergence-free fields:
\[\frac{\partial\mathbf{u}}{\partial\tau}=\mathbb{P}\left\{-\left(\nabla \mathbf{r}_{1}\right)\left(\mathbf{u}_{b}+\mathbf{u}\right)+\left(\nabla( \mathbf{u}+\mathbf{u}_{b})\right)^{\top}\mathbf{r}_{1}-\frac{1}{Re}\Delta \mathbf{r}_{1}\right\}=:\mathbf{f}, \tag{3.24}\]
where \(\mathbb{P}\) denotes the projection operator. The argument of the operator \(\mathbb{P}\) is the right-hand side of Equation (3.18) with \(r_{2}=0\) and \(\nabla r_{2}=\mathbf{0}\), which result from the zero divergence of \(\mathbf{u}\). According to Helmholtz's theorem, a smooth 3D vector field can be decomposed into a divergence-free and a curl-free component. Thus, \(\mathbf{g}_{1}=\partial\mathbf{u}/\partial\tau\) is decomposed as \(\mathbf{g}_{1}=\mathbf{f}-\nabla\phi\), where \(\mathbf{f}=\mathbb{P}\left\{\mathbf{g}_{1}\right\}\) is the divergence-free component and \(\phi\) is the scalar potential whose gradient gives the curl-free component. Therefore, the evolution of the divergence-free velocity is governed by
\[\frac{\partial\mathbf{u}}{\partial\tau}=-\left(\nabla\mathbf{r}_{1} \right)\left(\mathbf{u}_{b}+\mathbf{u}\right)+\left(\nabla(\mathbf{u}+ \mathbf{u}_{b})\right)^{\top}\mathbf{r}_{1}+\nabla\phi-\frac{1}{Re}\Delta \mathbf{r}_{1}, \tag{3.25}\] \[\nabla\cdot\mathbf{u}=0, \tag{3.26}\]
subject to
\[\mathbf{u}(x,y=\pm 1,z;\tau)=\mathbf{0}, \tag{3.27}\] \[\mathbf{u}(x=0,y,z;\tau)=\mathbf{u}(x=L_{x},y,z;\tau),\] (3.28) \[\mathbf{u}(x,y,z=0;\tau)=\mathbf{u}(x,y,z=L_{z};\tau). \tag{3.29}\]
With the pressure given by \(p=\mathcal{P}\left[\mathbf{u}\right]\), the residual \(\mathbf{r}_{1}=\mathbf{r}_{1}(\mathbf{u},p)\) automatically satisfies the BC (3.23). The Helmholtz decomposition is orthogonal, \(\left\langle\mathbf{f},\nabla\phi\right\rangle=0\). Therefore, \(\left\langle\mathbf{f},\mathbf{g}_{1}\right\rangle=\left\langle\mathbf{f},\mathbf{f}\right\rangle+\left\langle\mathbf{f},-\nabla\phi\right\rangle=\left\|\mathbf{f}\right\|^{2}\geq 0\). Since \(\mathbf{f}\) makes an acute angle with the steepest descent direction \(\mathbf{g}_{1}\), evolution along \(\mathbf{f}\) guarantees the monotonic decrease of the cost function, as desired.
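For intuition only, the projection \(\mathbb{P}\) can be written down explicitly when all directions are periodic and there are no walls; there it amounts to removing the curl-free part \(\nabla\phi\) in Fourier space. The following NumPy sketch (an illustrative, assumption-laden example, not the wall-bounded treatment used in this paper, which instead relies on the influence matrix method of §4) projects a triply periodic velocity field onto divergence-free fields:

```python
import numpy as np

def leray_project(u, L=(2*np.pi, 2*np.pi, 2*np.pi)):
    """Project a triply periodic field u[3, Nx, Ny, Nz] onto divergence-free
    fields: return u - grad(phi) where lap(phi) = div(u)."""
    shape = u.shape[1:]
    k = [2*np.pi*np.fft.fftfreq(n, d=Li/n) for n, Li in zip(shape, L)]
    K = np.stack(np.meshgrid(*k, indexing="ij"))   # wavenumber vectors
    K2 = np.sum(K**2, axis=0)
    K2[0, 0, 0] = 1.0                              # avoid 0/0 for the mean mode
    u_hat = np.fft.fftn(u, axes=(1, 2, 3))
    div_hat = 1j * np.sum(K * u_hat, axis=0)       # Fourier coefficients of div(u)
    phi_hat = -div_hat / K2                        # solve the Poisson equation for phi
    u_df_hat = u_hat - 1j * K * phi_hat            # subtract grad(phi)
    return np.real(np.fft.ifftn(u_df_hat, axes=(1, 2, 3)))
```

In the presence of walls no such closed-form projection is available, which is precisely why the pressure and \(\phi\) are handled implicitly in the next section.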
The variational dynamics (3.25)-(3.29) is equivariant under continuous translations in the periodic directions \(x\) and \(z\). Furthermore, one can verify through simple calculations that this dynamics is also equivariant under the action of any reflection or rotation permitted by the laminar base velocity field \(\mathbf{u}_{b}\). Consequently, the symmetry group generated by translations, reflections and rotations in the obtained variational dynamics is identical to that of the NSE (3.1)-(3.5). Therefore, to construct equilibria within a particular symmetry-invariant subspace of the NSE, one can use initial conditions from the same symmetry-invariant subspace to initialise the variational dynamics, and the variational dynamics preserves the symmetries of the initial condition.
In the variational dynamics the scalar field \(\phi\) plays a role analogous to the pressure \(p\) in the incompressible NSE. The scalar fields \(\phi\) and \(p\) adjust themselves to the instantaneous physical velocity \(\mathbf{u}\) such that \(\nabla\cdot\mathbf{u}=0\) and \(\mathbf{u}(y=\pm 1)=\mathbf{0}\) are preserved under the evolution with the fictitious time \(\tau\) and the physical time \(t\), respectively. Similar to the pressure in the NSE, \(\phi\) satisfies a Poisson equation with a velocity-dependent source term. Solving the Poisson equation for \(\phi\) and \(p\) is a numerically challenging task in the present wall-bounded configuration (Rempfer (2006)). Therefore, instead of attempting to compute \(p\) and \(\phi\) and thereby advancing the variational dynamics (3.25), we formulate the numerical integration scheme based on the influence matrix method (Kleiser & Schumann (1980)) where the no-slip BC and zero divergence are precisely satisfied while the explicit construction of \(p\) and \(\phi\) is circumvented.
## 4 Numerical implementation
To advance the variational dynamics (3.25)-(3.29) without explicitly computing \(\phi\) and \(p\), we take advantage of the structural similarity between the variational dynamics and the NSE. In order to evaluate the right-hand side of Equation (3.25), we consider the following PDE for the residual field \(\mathbf{r}_{1}\):
\[\frac{\partial\mathbf{r}_{1}}{\partial\hat{\tau}}=-\left(\mathbf{N}(\mathbf{r} _{1})-\nabla\phi+\frac{1}{Re}\Delta\mathbf{r}_{1}\right), \tag{4.1}\]
subject to
\[\mathbf{r}_{1}(y=\pm 1)=\mathbf{0}, \tag{4.2}\] \[\nabla\cdot\mathbf{r}_{1}=0, \tag{4.3}\]
where \(\mathbf{N}(\mathbf{r}_{1})=\left(\nabla\mathbf{r}_{1}\right)\left(\mathbf{u}_{b}+\mathbf{u}\right)-\left(\nabla(\mathbf{u}+\mathbf{u}_{b})\right)^{\top}\mathbf{r}_{1}\) with both \(\mathbf{u}\) and \(\mathbf{u}_{b}\) being treated as constant fields. We use the dummy Equation (4.1) to evaluate the right-hand side of Equation (3.25) since the instantaneously evaluated right-hand sides of these two systems are identically equal. For brevity, we omit the periodic BCs in \(x\) and \(z\) since spatial periodicity can be enforced via spectral representation in an appropriate basis, such as a Fourier basis, that is periodic by construction. Equation (4.1) together with the BC (4.2) and the zero-divergence constraint (4.3) resembles the structure of the incompressible NSE:
\[\frac{\partial\mathbf{u}}{\partial t}=\mathbf{M}(\mathbf{u})-\nabla p+\frac{ 1}{Re}\Delta\mathbf{u}, \tag{4.4}\]
which is subject to
\[\mathbf{u}(y=\pm 1)=\mathbf{0}, \tag{4.5}\] \[\nabla\cdot\mathbf{u}=0, \tag{4.6}\]
with \(\mathbf{M}(\mathbf{u})=-(\mathbf{u}_{b}\cdot\nabla)\mathbf{u}-(\mathbf{u}\cdot\nabla)\mathbf{u}_{b}-(\mathbf{u}\cdot\nabla)\mathbf{u}\). The influence matrix (IM) algorithm has been developed to numerically advance this particular type of dynamical system, which has a Laplacian linear term and the gradient of a scalar on the right-hand side, and is subject to the zero-divergence constraint and homogeneous Dirichlet BCs at the walls. This algorithm enforces zero divergence and the homogeneous Dirichlet BCs within the time-stepping process while the scalar field is handled implicitly and is not resolved as a separate variable (Kleiser & Schumann (1980); Canuto _et al._ (2007), §3.4). We use the IM algorithm, and introduce the following five steps, which advance \(\mathbf{u}\) under the variational dynamics (3.25)-(3.29) for one time step of size \(\Delta\tau\) (a schematic sketch of the resulting update loop is given after the list):
1. The current velocity field \(\mathbf{u}\), that satisfies \(\nabla\cdot\mathbf{u}=0\) and \(\mathbf{u}(y=\pm 1)=\mathbf{0}\), is advanced under the NSE dynamics for one physical time step \(\Delta t\) using the IM algorithm. This yields the updated velocity \(\mathbf{u}^{\Delta t}\) where the IM algorithm ensures \(\nabla\cdot\mathbf{u}^{\Delta t}=0\) and \(\mathbf{u}^{\Delta t}(y=\pm 1)=\mathbf{0}\).
2. The residual field \(\mathbf{r}_{1}\), which is by definition the right-hand side of the NSE (3.1), is approximated via finite differences \[\mathbf{r}_{1}=\frac{\partial\mathbf{u}}{\partial t}\approx\frac{\mathbf{u}^{ \Delta t}-\mathbf{u}}{\Delta t}.\] (4.7) Since both \(\mathbf{u}\) and \(\mathbf{u}^{\Delta t}\) are divergence-free and satisfy homogeneous Dirichlet BCs at the walls, \(\nabla\cdot\mathbf{r}_{1}=0\) and \(\mathbf{r}_{1}(y=\pm 1)=\mathbf{0}\).
3. The current residual field \(\mathbf{r}_{1}\) is advanced under the dummy dynamics (4.1)-(4.3) for one time step \(\Delta\hat{\tau}\) using the IM algorithm, which yields \(\mathbf{r}_{1}^{\Delta\hat{\tau}}\). The IM algorithm ensures that \(\nabla\cdot\mathbf{r}_{1}^{\Delta\hat{\tau}}=0\) and \(\mathbf{r}_{1}^{\Delta\hat{\tau}}(y=\pm 1)=\mathbf{0}\).
4. The right-hand side of Equation (4.1) is approximated via finite differences \[\mathbf{f}=\frac{\partial\mathbf{r}_{1}}{\partial\hat{\tau}}\approx\frac{ \mathbf{r}_{1}^{\Delta\hat{\tau}}-\mathbf{r}_{1}}{\Delta\hat{\tau}}.\] (4.8)
Since both \(\mathbf{r}_{1}\) and \(\mathbf{r}_{1}^{\Delta\hat{\tau}}\) are divergence-free and satisfy homogeneous Dirichlet BCs at the walls, \(\nabla\cdot\mathbf{f}=0\) and \(\mathbf{f}(y=\pm 1)=\mathbf{0}\).
5. Having approximated \(\mathbf{f}\), which is the descent direction at the current fictitious time \(\tau\), we advance the velocity for one step of size \(\Delta\tau\) using \[\mathbf{u}^{\Delta\tau}=\mathbf{u}+\Delta\tau\,\mathbf{f}.\] (4.9) Since both \(\mathbf{u}\) and \(\mathbf{f}\) are divergence-free and take zero value at the walls, the updated velocity satisfies \(\nabla\cdot\mathbf{u}^{\Delta\tau}=0\) and \(\mathbf{u}^{\Delta\tau}(y=\pm 1)=\mathbf{0}\).
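A minimal, schematic Python rendition of this update loop is given below. Here `step_nse_im` and `step_dummy_im` stand for influence-matrix time steppers of the NSE (4.4)-(4.6) and of the dummy system (4.1)-(4.3); they are hypothetical placeholders for routines supplied by a solver such as _Channelflow 2.0_, not actual API calls.

```python
def adjoint_descent_step(u, dt, dtau_hat, dtau, step_nse_im, step_dummy_im):
    """One step of the variational dynamics for a divergence-free, no-slip field u,
    following steps 1-5 above."""
    # 1. advance u under the NSE for one physical time step (influence matrix)
    u_dt = step_nse_im(u, dt)
    # 2. finite-difference approximation of the residual r1 = du/dt
    r1 = (u_dt - u) / dt
    # 3. advance r1 under the dummy dynamics (4.1)-(4.3), with u held fixed
    r1_new = step_dummy_im(r1, u, dtau_hat)
    # 4. finite-difference approximation of the descent direction f
    f = (r1_new - r1) / dtau_hat
    # 5. forward Euler update of the velocity along the descent direction
    return u + dtau * f, r1

def converge_equilibrium(u, norm, tol=1e-12, **params):
    """Iterate the update until the cost function J = ||r1|| drops below tol."""
    while True:
        u, r1 = adjoint_descent_step(u, **params)
        if norm(r1) < tol:
            return u
```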
The finite differences (4.7) and (4.8) affect the accuracy of time-stepping the variational dynamics, but they do not interfere with imposing the boundary condition \(\mathbf{u}(y=\pm 1)=\mathbf{0}\) and the constraint \(\nabla\cdot\mathbf{u}=0\) within machine precision. The low accuracy of the first-order finite differences does not affect the accuracy of the obtained equilibrium solution since both \(\|\mathbf{r}_{1}\|\) and \(\|\mathbf{f}\|\) tend to zero when an equilibrium is approached. We are also not concerned about the low accuracy of the first-order forward Euler update rule (4.9) since the objective is to obtain the attracting equilibria of the adjoint-descent dynamics reached at \(\tau\to\infty\). Therefore, the introduced procedure is able to construct equilibrium solutions within machine precision.
We implement this procedure in _Channelflow 2.0_, an open-source software package for numerical analysis of the incompressible NSE in wall-bounded domains. In this software, an instantaneous divergence-free velocity field is represented by Chebyshev expansion in the wall-normal direction \(y\) and Fourier expansion in the periodic directions \(x\) and \(z\):
\[u_{j}(x,y,z)=\sum_{\begin{subarray}{c}m,p\in\mathbb{Z}\\ n\in\mathbb{W}\end{subarray}}\hat{u}_{m,n,p,j}T_{n}(y)e^{2\pi i(mx/L_{x}+pz/L_{z})}\ ;\quad j=1,2,3, \tag{4.10}\]
where \(T_{n}(y)\) is the \(n\)-th Chebyshev polynomial of the first kind, \(i\) is the imaginary unit, and indices \(1\) to \(3\) specify directions \(x\), \(y\) and \(z\), respectively. _Channelflow 2.0_ employs the influence matrix algorithm for time-marching the NSE (4.4)-(4.6). With modification for the nonlinear term \(\mathbf{N}(\mathbf{r}_{1})\), Equation (4.1) can also be advanced in time.
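As an illustration of the representation (4.10), the snippet below evaluates one velocity component at a point from a given coefficient array. It is a sketch only: the array layout, index ordering and normalisation are assumptions and do not reproduce the actual storage format of _Channelflow 2.0_.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def eval_component(coeffs, x, y, z, Lx, Lz):
    """Evaluate u_j(x,y,z) = sum_{m,n,p} c[m,n,p] T_n(y) exp(2*pi*i*(m x/Lx + p z/Lz)).
    coeffs[m, n, p] are complex spectral coefficients; m and p follow FFT ordering,
    and they are assumed to satisfy the reality (Hermitian symmetry) condition."""
    Nm, Nn, Np = coeffs.shape
    ms = np.fft.fftfreq(Nm, d=1.0/Nm)     # signed integer Fourier indices m
    ps = np.fft.fftfreq(Np, d=1.0/Np)     # signed integer Fourier indices p
    value = 0.0 + 0.0j
    for im, m in enumerate(ms):
        for ip, p in enumerate(ps):
            # Chebyshev series in the wall-normal coordinate y for this Fourier mode
            value += cheb.chebval(y, coeffs[im, :, ip]) * np.exp(2j*np.pi*(m*x/Lx + p*z/Lz))
    return value.real
```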
## 5 Application to plane Couette flow
We apply the introduced variational method to plane Couette flow (PCF), the flow between two parallel plates moving at equal and opposite velocities. PCF is governed by the general NSE (1)-(5) with the laminar base flow \(\mathbf{u}_{b}=[y,0,0]^{\top}\). Due to the periodicity in \(x\) and \(z\), PCF is equivariant under continuous translations in these directions:
\[\tau(\ell_{x},\ell_{z}):\ [u,v,w]\ (x,y,z)\mapsto[u,v,w]\ (x+\ell_{x},y,z+\ell_{z}), \tag{5.1}\]
where \(u\), \(v\) and \(w\) are the components of \(\mathbf{u}\) in the \(x\), \(y\) and \(z\) directions, respectively. In addition, PCF is equivariant under two discrete symmetries: rotation around the line \(x=y=0\):
\[\sigma_{1}:\ [u,v,w]\ (x,y,z)\mapsto[-u,-v,w]\ (-x,-y,z), \tag{5.2}\]
and reflection with respect to the plane \(z=0\):
\[\sigma_{2}:\ [u,v,w]\ (x,y,z)\mapsto[u,v,-w]\ (x,y,-z). \tag{5.3}\]
The variational dynamics (3.25)-(3.29) is easily verified to be equivariant under the same continuous and discrete symmetry operators. Therefore, the variational dynamics preserves these symmetries, if present in the initial condition. In the following, we demonstrate the convergence of multiple equilibrium solutions from guesses both within a symmetry-invariant subspace and outside.
### Results
We search for equilibria of PCF at \(Re=400\) within a domain of dimensions \(L_{x}=2\pi/1.14\) and \(L_{z}=2\pi/2.5\) (see §3.1). The flow field is discretised with \(N_{y}=31\) collocation points in the wall-normal direction and \(N_{x}=N_{z}=32\) points in the lateral directions. The adjoint-descent dynamics is numerically integrated by the forward Euler scheme (4.9) with \(\Delta\tau=0.03\), and \(\mathbf{r}_{1}\) and \(\mathbf{f}\) are approximated via finite differences (4.7) and (4.8) with the step size \(\Delta t=0.25\) and \(\Delta\hat{\tau}=0.25\), respectively (see §4).
To verify the scheme and its implementation, we converge the so-called 'Nagata's lower branch' equilibrium solution (Nagata (1990); Clever & Busse (1997)) at \(Re=400\). As initial guess, we take an equilibrium solution on the same branch but at a significantly different \(Re\). Nagata's lower branch solution at \(Re=400\) is available in the database on channelflow.org. We continue this equilibrium solution to \(Re=230\), and use the resulting solution to initialise both the adjoint-descent variational method and the standard Newton iterations at \(Re=400\). The standard Newton iterations, i.e. without optimisations such as hook steps, fail to converge. However, the adjoint-descent variational method successfully converges to the equilibrium solution at \(Re=400\) on the same branch.
Along the trajectory of the adjoint-descent dynamics, the cost function initially drops rapidly and subsequently decreases at an exponential rate, as shown in Figure 2. The exponential decrease of the cost function is explained by the dynamical system picture of the adjoint descent: the adjoint-descent dynamics converges to a stable fixed point, hence the evolution is dominated by the slowest eigenmode of the linearised dynamics in the vicinity of that fixed point. The sharp initial drop and the following exponential decay of the cost function are reflected in fast and slow traversal, respectively, of the trajectory within the state space. Figure 3 presents a 2D projection of the trajectory, with markers indicating that the majority of the trajectory is traversed quickly at the beginning of the integration, and the majority of the integration time is spent on the remaining, much shorter portion of the trajectory. For instance, the portion of the trajectory traversed during the first \(1.2\times 10^{6}\) fictitious time units, which decreases the cost function from \(J=5.9\times 10^{-3}\) to \(J=10^{-5}\), is considerably longer than the remaining portion, which takes over \(90\,\%\) of the integration time to be traversed. \(P_{1}\) and \(P_{2}\) in Figure 3 are the real parts of \(\hat{u}_{0,3,0,1}\) and \(\hat{u}_{0,5,0,1}\), i.e. the coefficients of the third and the fifth Chebyshev polynomial in the expansion of the mean
Figure 2: Convergence of the adjoint-descent variational method for constructing an equilibrium solution of the plane Couette flow. The minimisation of the cost function \(J\) evolves the initial guess towards a true equilibrium solution at which \(J=0\).
streamwise velocity in \(y\) (see Equation (4.10)). The visualisation of the trajectory in different projections of the state space yields a similar observation.
Nagata's lower branch equilibrium solutions are symmetric under shift-and-rotate symmetry \(s_{1}=\tau(L_{x}/2,L_{z}/2)\sigma_{1}\):
\[s_{1}\left[u,v,w\right](x,y,z)=[-u,-v,w]\left(-x+L_{x}/2,-y,z+L_{z}/2\right), \tag{5.4}\]
and shift-and-reflect symmetry \(s_{2}=\tau(L_{x}/2,0)\sigma_{2}\):
\[s_{2}\left[u,v,w\right](x,y,z)=[u,v,-w]\left(x+L_{x}/2,y,-z\right). \tag{5.5}\]
Therefore, the initial guess in the present example, namely Nagata's lower branch solution at \(Re=230\), is symmetric under \(s_{1}\) and \(s_{2}\), which are preserved by the adjoint-descent dynamics. The velocity field remains symmetric under \(s_{1}\) and \(s_{2}\) without explicitly enforcing them during the forward integration until the equilibrium solution on the same branch at \(Re=400\) is converged.
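On a collocation grid these symmetry operators reduce to index permutations and sign flips. The following NumPy sketch (an illustration under simplifying assumptions: fields stored as `f[ix, iy, iz]`, uniform periodic grids starting at the origin with even \(N_{x}\) and \(N_{z}\), and a wall-normal grid symmetric under \(y\to-y\), as Chebyshev points are) applies \(s_{1}\) and \(s_{2}\) to the three velocity components:

```python
import numpy as np

def shift_half(f, axis):
    # translate a periodic field by half the domain length (even grid size assumed)
    return np.roll(f, f.shape[axis] // 2, axis=axis)

def reflect_periodic(f, axis):
    # x -> -x on a uniform periodic grid x_i = i*L/N: index i -> (-i) mod N
    idx = (-np.arange(f.shape[axis])) % f.shape[axis]
    return np.take(f, idx, axis=axis)

def s2(u, v, w):
    """Shift-and-reflect: [u,v,w](x,y,z) -> [u,v,-w](x+Lx/2, y, -z)."""
    op = lambda f: shift_half(reflect_periodic(f, axis=2), axis=0)
    return op(u), op(v), -op(w)

def s1(u, v, w):
    """Shift-and-rotate: [u,v,w](x,y,z) -> [-u,-v,w](-x+Lx/2, -y, z+Lz/2)."""
    def op(f):
        f = reflect_periodic(f, axis=0)   # x -> -x
        f = np.flip(f, axis=1)            # y -> -y (symmetric wall-normal grid)
        f = shift_half(f, axis=0)         # additional half-domain shift in x
        return shift_half(f, axis=2)      # half-domain shift in z
    return -op(u), -op(v), op(w)

# A field is s2-symmetric if np.allclose((u, v, w), s2(u, v, w)) holds.
```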
To further investigate the robustness of the adjoint-descent variational method in successfully converging from inaccurate guesses, we initialise the method with guesses obtained from a direct numerical simulation. We construct a random divergence-free velocity field with \(L_{2}\)-norm \(\|\mathbf{u}\|=0.2\), and time-march the NSE along a turbulent trajectory until the flow laminarises. The initial condition and therefore the entire trajectory are not symmetric under any of the symmetries allowed by the PCF. We extract the local extrema of \(\|\mathbf{u}\|\) as a function of time \(t\), where \(\partial\|\mathbf{u}\|/\partial t=0\), as guesses for potential equilibrium solutions. Figure 4 shows \(\|\mathbf{u}\|\) plotted against \(t\), from which 26 guesses are extracted. The standard Newton iterations do not converge starting from any of the guesses. With hook-step optimisation, 5 of the searches converge within 50 Newton-GMRES-hookstep (NGh) iterations. The converged solutions include the trivial laminar solution \(\mathbf{u}=\mathbf{0}\) as well as two nontrivial solutions EQ1 and EQ3 (see Tables 1 and 2 for properties of the converged solutions). By integrating the adjoint-descent dynamics, 10 of the guesses converge to an equilibrium solution. These solutions include the trivial solution as well as five nontrivial equilibria EQ1 to EQ5 (see Tables 1 and 2). Snapshots that lead to a successful search
Figure 3: The trajectory of the adjoint-descent dynamics along which the cost function \(J\) decreases monotonically as shown in Figure 2. The projection shows \(P_{2}=\Re\{\hat{u}_{0,5,0,1}\}\) against \(P_{1}=\Re\{\hat{u}_{0,3,0,1}\}\). The majority of the trajectory is traversed rapidly at the beginning, as indicated by a sharp drop of \(J\) in Figure 2, followed by a slow traversal of the remaining portion towards the asymptotic solution, reflected in Figure 2 as an exponential decay of the cost function.
via either NGh iterations or the adjoint-descent algorithm are marked on Figure 4. The variational method succeeds in twice as many cases as the NGh method, and extracts three more non-trivial equilibria from a turbulent trajectory with a crude criterion for selecting guesses. This suggests that the basin of attraction to converge an equilibrium solution is typically larger for the adjoint-descent variational method compared to the NGh method. However, the larger basin of attraction does not necessarily contain the smaller one. Notice, for instance, that the NGh iterations and the adjoint-descent algorithm converge to different equilibrium solutions when initialised with snapshot 4, or that the NGh iterations converge when initialised with snapshot 5 while the adjoint-descent does not.
| snapshot | NGh iterations | NGh solution | adjoint-descent solution |
| :---: | :---: | :---: | :---: |
| 1 | 13 | EQ0 | EQ0 |
| 2 | 11 | EQ0 | EQ0 |
| 3 | - | - | EQ0 |
| 4 | 23 | EQ1 | EQ2 |
| 5 | 15 | EQ1 | - |
| 6 | - | - | EQ1 |
| 7 | 13 | EQ3 | EQ2 |
| 8 | - | - | EQ4 |
| 9 | - | - | EQ3 |
| 10 | - | - | EQ5 |
| 11 | - | - | EQ5 |
| 12 | - | - | EQ3 |

Table 1: The list of the equilibrium solutions converged by Newton-GMRES-hookstep (NGh) and the adjoint-descent variational method from the guesses marked in Figure 4. See Table 2 for properties of the equilibria EQ0 to EQ5.
Figure 4: The \(L_{2}\)-norm of the velocity field against the physical time \(t\) in direct numerical simulation from a random initial condition. The snapshots corresponding to the local extrema of \(\|\mathbf{u}\|\) are selected as guesses for an equilibrium solution. Table 1 summarises the result of the convergence from each guess using Newton-GMRES-hookstep and the adjoint-descent variational method.
## 6 Accelerating the convergence
The variational dynamics evolves along the gradient descent of the cost function. As a result, this dynamics is globally contracting, and almost all its trajectories eventually converge to a stable fixed point where the cost function takes a minimum value. When the trajectory of the adjoint-descent dynamics has got sufficiently close to its destination fixed point, the cost function is well represented by a quadratic function and its gradient flow is almost linear. The approximately linear behaviour of the variational dynamics in the vicinity of an asymptotic fixed point inspires the idea of the following data-driven technique for accelerating the slow convergence of the variational method.
Our acceleration technique aims to approximate the expected linear dynamics and thereby approximate the equilibrium solution of the adjoint-descent dynamics. Since the destination fixed point is not known a priori, linearisation around the unknown fixed point is obviously not possible. Instead, we employ dynamic mode decomposition (DMD) to approximate the linear dynamics based on the available portion of the trajectory that has been traversed. DMD is a regression framework that constructs the best-fit linear model over a series of snapshots (Schmid (2010, 2022)). The equilibrium solution of the adjoint-descent dynamics is approximated by letting the fictitious time go to infinity in the approximated linear system.
### Dynamic mode decomposition (DMD)
Suppose each instantaneous spatially resolved flow field \(\mathbf{u}(\mathbf{x};\tau)\) is represented by an \(N\)-dimensional real-valued column vector \(\psi(\tau)\). \(M\) snapshots \(\psi_{k}=\psi(\tau_{k})\); \(k=1,\ldots,M\) along a single trajectory can be related to the snapshots taken \(\delta\tau\) later along the same trajectory, \(\psi_{k}^{\prime}=\psi(\tau_{k}+\delta\tau)\), via the following linear relation:
\[\psi_{k}^{\prime}=\mathbf{A}\psi_{k}+e_{k};\quad k=1,\ldots,M, \tag{6.1}\]
where \(e_{k}\) is the error in approximating \(\psi_{k}^{\prime}\) by the linear map \(\psi_{k}\mapsto\mathbf{A}\psi_{k}\). DMD constructs the \(N\times N\) linear operator \(\mathbf{A}\) which minimises the sum of squares of the elements of \(e_{k}\) over all \(M\) snapshot pairs:
\[\mathbf{A}:=\Psi^{\prime}\Psi^{+}, \tag{6.2}\]
where \(\Psi:=\begin{bmatrix}\psi_{1}&\psi_{2}&\ldots&\psi_{M}\end{bmatrix}\), \(\Psi^{\prime}:=\begin{bmatrix}\psi_{1}^{\prime}&\psi_{2}^{\prime}&\ldots&\psi_{M}^{\prime}\end{bmatrix}\), and the superscript \(+\) denotes the Moore-Penrose pseudo-inverse. The dimensionality of the system can be prohibitively large for constructing \(\mathbf{A}\) directly as defined in Equation (6.2), which is typically the case in a fluid dynamics problem. Therefore, we instead use a rank-reduced representation of this matrix. For this, the data matrix \(\Psi\) is factorised via singular value decomposition (SVD) as \(\Psi\approx\mathbf{U}\Sigma\mathbf{V}^{\top}\) with truncation rank \(r\). The \(r\times r\) projection of \(\mathbf{A}\) on the POD modes \(\mathbf{U}\) is
\[\mathbf{\tilde{A}}=\mathbf{U}^{\top}\mathbf{A}\mathbf{U}=\mathbf{U}^{\top}\Psi^{\prime}\mathbf{V}\Sigma^{-1}. \tag{6.3}\]
| solution | \(\lVert\mathbf{u}\rVert\) | \(D/D_{\text{lam}}\) |
| :---: | :---: | :---: |
| EQ0 | 0 | 1 |
| EQ1 | 0.385858 | 3.04427 |
| EQ2 | 0.268277 | 1.76302 |
| EQ3 | 0.240519 | 1.60348 |
| EQ4 | 0.168131 | 1.45374 |
| EQ5 | 0.328654 | 2.37353 |

Table 2: Properties of the equilibrium solutions converged by Newton-GMRES-hookstep and the adjoint-descent variational method (see Table 1 and Figure 4).
The dynamic modes and their temporal behaviour are constructed from the eigendecomposition of \(\tilde{\mathbf{A}}\): Dynamic modes are \(\phi_{q}=\left(\Psi^{\prime}\mathbf{V}\Sigma^{-1}\right)v_{q}\) with \(q=1,\ldots,r\), where \(v_{q}\) are eigenvectors of \(\tilde{\mathbf{A}}\); and the dynamic mode \(\phi_{q}\) evolves as \(e^{\omega_{q}\tau}\) where \(\omega_{q}=\ln(\lambda_{q})/\delta\tau\) and \(\lambda_{q}\) is the eigenvalue of \(\tilde{\mathbf{A}}\) associated with \(v_{q}\). Finally, the linear evolution of \(\psi(\tau)\) is approximated as
\[\psi(\tau)\approx\sum_{q=1}^{r}b_{q}\phi_{q}e^{\omega_{q}\tau}, \tag{6.4}\]
where \(b_{q}\) are the amplitudes of the dynamic modes at a reference time, for instance at \(\tau_{M}\). Based on this linear model we approximate the asymptotic equilibrium solution of the variational dynamics as follows.
### Numerical implementation
Suppose the dynamic modes are sorted in increasing order of \(|\omega_{q}|\). For a low truncation rank \(r\), all the exponents \(\omega_{q}\) are real, \(\omega_{1}\) is significantly closer to zero than the rest, and \(\omega_{2},\ldots,\omega_{r}\) are negative, which is consistent with the expected linear behaviour in the vicinity of the stable equilibria of the gradient flow. By assuming \(\omega_{1}\approx 0\), the linear model (6.4) can be expressed as the superposition of the steady state \(\psi_{s}:=b_{1}\phi_{1}\), and the decaying terms \(b_{q}\phi_{q}\exp(\omega_{q}\tau)\); \(q=2,\ldots,r\). The steady state \(\psi_{s}\) approximates the equilibrium solution of the almost linear adjoint-descent dynamics. The state vector \(\psi_{s}\) is mapped back to the corresponding flow field, from where the integration of the adjoint-descent dynamics is restarted. Let \(r^{*}\) denote the largest truncation rank for which \(\omega_{1},\ldots,\omega_{r^{*}}\in\mathbb{R}\). Then, the truncation rank \(r\leqslant r^{*}\) is chosen such that the cost function associated with the approximated equilibrium is the smallest. In the following, we demonstrate the acceleration of the first test case presented in §5.
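The DMD regression of §6.1 and the extrapolation just described can be condensed into a few lines of NumPy. The sketch below is illustrative only: snapshot matrices are assumed to be stored column-wise, the amplitudes are referenced to the last snapshot, and the rank-selection loop relies on a user-supplied residual evaluation `cost`.

```python
import numpy as np

def dmd(Psi, Psi_prime, dtau_snap, r):
    """Rank-r exact DMD for snapshot pairs Psi[:, k] -> Psi_prime[:, k]
    separated by dtau_snap in fictitious time."""
    U, S, Vh = np.linalg.svd(Psi, full_matrices=False)
    U, S, V = U[:, :r], S[:r], Vh[:r, :].conj().T            # rank-r truncation
    A_tilde = U.conj().T @ Psi_prime @ V @ np.diag(1.0/S)    # projected operator (6.3)
    lam, W = np.linalg.eig(A_tilde)
    Phi = Psi_prime @ V @ np.diag(1.0/S) @ W                 # dynamic modes
    omega = np.log(lam.astype(complex)) / dtau_snap          # continuous-time exponents
    b = np.linalg.lstsq(Phi, Psi[:, -1], rcond=None)[0]      # amplitudes at the last snapshot
    return Phi, omega, b

def extrapolate(Phi, omega, b):
    """Approximate the fixed point by the non-decaying component b_1 * phi_1,
    assuming the mode with the smallest |omega| is (numerically) steady."""
    q = np.argmin(np.abs(omega))
    return np.real(b[q] * Phi[:, q])

def best_extrapolation(Psi, Psi_prime, dtau_snap, r_star, cost):
    # try all truncation ranks r <= r* and keep the state with the smallest J
    candidates = [extrapolate(*dmd(Psi, Psi_prime, dtau_snap, r))
                  for r in range(1, r_star + 1)]
    return min(candidates, key=cost)
```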
The snapshot vectors \(\psi\) are the (real-valued) state vectors containing the minimum number of independent variables required for describing a divergence-free velocity field in the Fourier-Chebyshev-Fourier spectral representation (4.10). The vector \(\psi\) has \(N=20\,218\) elements for the discretisation used in §5. Initially, we integrate the adjoint-descent dynamics and let the cost function drop to \(\log(J)=-4.5\) before performing the first DMD extrapolation. The linear model is constructed using \(M=100\) snapshots uniformly spaced over an interval of \(2\times 10^{4}\) time units (\(\delta\tau=200\)). The next DMD extrapolations are performed using the same number of snapshots \(M\) and the same spacing \(\delta\tau\), while the adjoint dynamics is integrated forward in time for \(15\times 10^{4}\) time units before starting to collect new snapshots. The acceleration technique achieves the convergence criterion \(J=10^{-12}\) after \(\tau=7.36\times 10^{5}\) time units of total forward integration, whereas without acceleration it takes \(\tau=1.38\times 10^{7}\) time units, which is almost 19 times longer (see Figure 5, compare with Figure 2). The time required for performing the extrapolation is negligible compared to the time required for the forward integration of the adjoint-descent dynamics. The first DMD extrapolation has resulted in a slight increase in the value of \(J\). The 2D projection of the state space, displayed in Figure 6, shows that the first extrapolated state is significantly closer to the destination fixed point, despite being located on a higher level of \(J\). By restarting the integration from the extrapolated state, the trajectory gets quickly attracted to the dominating eigendirection of the linearised dynamics, resulting in a rapid drop in \(J\) (see Figures 5 and 6).
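The resulting alternation between forward integration and extrapolation can be organised as a simple driver loop. In the sketch below the numerical settings are those reported above, while `advance`, `collect_snapshots`, `dmd_extrapolate` and `cost` are hypothetical placeholders for solver routines (e.g. wrapping the previous snippets), not actual library calls.

```python
def accelerated_descent(u, advance, collect_snapshots, dmd_extrapolate, cost,
                        J_first=10**-4.5, J_tol=1e-12, gap=1.5e5, M=100, dtau_snap=200.0):
    """Alternate adjoint-descent integration with DMD-based extrapolations."""
    u = advance(u, until_cost=J_first)               # plain descent until J is small enough
    while cost(u) > J_tol:
        snaps = collect_snapshots(u, M, dtau_snap)   # M snapshots spaced dtau_snap apart
        u = dmd_extrapolate(snaps)                   # jump towards the fixed point
        if cost(u) > J_tol:
            u = advance(u, duration=gap)             # integrate before collecting again
    return u
```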
Exploiting the linear behaviour of the variational dynamics, the acceleration technique typically achieves more than an order of magnitude speed-up in converging equilibria of PCF. The linear behaviour in the vicinity of an equilibrium solution at sufficiently large \(\tau\) is a generic characteristic of the adjoint-descent variational method. Therefore, the introduced DMD-based acceleration technique is system-independent and, provided snapshot vectors of the variational dynamics are available, can be applied directly to any other problem.
## 7 Summary and concluding remarks
The unstable invariant solutions embedded within the chaotic attractor of the Navier-Stokes equations underpin the dynamics of a turbulent flow. Despite the significance of invariant solutions for a dynamical description of chaotic flows, the identification of these solutions remains a computational challenge, demanding robust algorithms. In this work, we have presented a matrix-free, adjoint-based variational method for computing equilibrium solutions of wall-bounded shear flows. We have applied the introduced method to plane Couette flow, and demonstrated the
Figure 5: Acceleration of the convergence of the adjoint-descent variational method by successive DMD-based extrapolations. The extrapolation employs DMD to construct a best-fit linear model for the dynamics in the vicinity of an equilibrium, and approximates the asymptotic solution of the adjoint-descent dynamics by the asymptotic solution of the linear model. The acceleration technique reduces the total duration of the forward integration by 95% in this example. The jumps in the state space associated with the first two extrapolations, \(E_{1}\) and \(E_{2}\), are shown in Figure 6.
Figure 6: The trajectory of the accelerated adjoint-descent dynamics in the same 2D projection of Figure 3. DMD-based extrapolations allow jumping to a state closer to the destination fixed point while avoiding integration of the adjoint-descent dynamics. The inset displays 225 times magnification of the area around the asymptotic solution.
convergence of multiple equilibrium solutions. The variational method outperforms the state-of-the-art Newton iterations in successfully converging from inaccurate initial guesses, which suggests a larger basin of attraction.
The present method employs the norm of the right-hand side of the evolution equation as a cost function to penalise the deviation of a flow field from the equilibrium state. Thereby, the problem of finding an equilibrium solution is recast as the minimisation of the cost function. To solve the minimisation problem, we adopted the variational approach of Farazmand (2016) where the gradient of the cost function is constructed analytically via adjoint calculations, and thereby a matrix-free gradient descent method is utilised. The cost function decreases monotonically along trajectories of the gradient descent dynamics until a minimum value is obtained. The global minima of the cost function, taking zero value, correspond to the equilibrium solutions of the flow. If a local minimum is obtained, the search for an equilibrium solution has failed. However, a local minimum of the cost function corresponds to the locally slowest state with respect to the chosen norm. This provides a means of characterising the so-called 'ghost' of a saddle-node bifurcation (Strogatz (2018)), which may influence the emerging spatiotemporal structures in chaotic flows (see, for example, Reetz _et al._ (2020), §3.1).
The present work describes two key contributions: First, we apply the adjoint-based variational method to 3D wall-bounded flows. Previously, the variational approach had only been successfully applied to a 2D Kolmogorov flow in a doubly periodic domain without walls (Farazmand (2016)). The primary challenge in extending the variational method for computing equilibria to wall-bounded flows lies in handling the nonlinear, nonlocal pressure in the presence of solid walls. To overcome this challenge, we have formulated the variational dynamics in a way that an explicit computation of pressure is avoided, allowing for application to 3D wall-bounded flows. We demonstrated the variational method specifically for plane Couette flow. However, the variational dynamics has been derived for the deviation of the velocity field from the laminar base flow. Consequently, an identical formulation and implementation directly translates to other canonical shear flows such as plane Poiseuille and asymptotic suction boundary layer flows, as only the respective laminar velocity profile in the variational dynamics (3.25)-(3.29) needs to be adapted. It can also be easily verified that the variational dynamics preserves the symmetries of plane Poiseuille flow and the asymptotic suction boundary layer, as well as plane Couette flow.
The second contribution is addressing the slow convergence of the adjoint-based variational method, which poses a challenge in practically utilising this method for the 3D Navier-Stokes equations. We propose a data-driven technique for accelerating the convergence by extrapolating the asymptotic fixed point of the variational dynamics based on the traversed portion of its trajectory. Since any trajectory of the variational dynamics converges to a stable fixed point, the dynamics behaves almost linearly when the trajectory has got close enough to the asymptotic solution. The extrapolation technique takes advantage of this predictability, and approximates the best-fit linear dynamics using dynamic mode decomposition (DMD). The asymptotic solution of the approximated linear system approximates the asymptotic solution of the variational dynamics. This results in an order-of-magnitude speed-up in the overall duration of the forward integration required to converge to a solution within machine accuracy. The proposed acceleration technique is based on the generic properties of gradient descent minimisation, and is therefore independent of the physical system of study.
The advantages of the adjoint-based variational method have inspired its application in computing other invariant sets, such as periodic orbits (Azimi _et al._ (2022); Parker & Schneider (2022)) and connecting orbits (Ashtari & Schneider (2023)). These methods view the identification of a periodic or connecting orbit as a minimisation problem in the space of space-time fields with prescribed behaviour in the temporal direction. They then employ a similar adjoint-based technique to solve the minimisation problem. The robust convergence of these extensions has so far only been demonstrated in 2D flows in a doubly periodic domain and for 1D model systems.
Like in computing equilibria, dealing with pressure is the key challenge in formulating the adjoint-based variational method for computing periodic or connecting orbits in 3D wall-bounded flows. In our ongoing research, the next step is to extend the introduced algorithm to the computation of more complex invariant solutions in wall-bounded flows via extensions of the adjoint-based variational method.
## Acknowledgements
The authors would like to thank Sajjad Azimi, Jeremy P. Parker, Moritz Linkmann, and Matthias Engel for insightful discussions. This research has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 865677).
## Appendix A Derivation of the adjoint operator
### Directional derivative of the residual
Using indicial notation to specify the \(x\), \(y\) and \(z\) components of vector quantities by the indices \(i=1,2,3\), respectively, we write the residual of the momentum and continuity equations as
\[r_{1,i}=-u_{b,j}\frac{\partial u_{i}}{\partial x_{j}}-u_{j}\frac{\partial u_{b,i}}{\partial x_{j}}-u_{j}\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial p}{\partial x_{i}}+\frac{1}{Re}\frac{\partial^{2}u_{i}}{\partial x_{j}\partial x_{j}}, \tag{A.1}\] \[r_{2}=\frac{\partial u_{j}}{\partial x_{j}}, \tag{A.2}\]
where repeated indices imply Einstein summation convention. The directional derivative of the residual components, \(r_{1,i}\) and \(r_{2}\), along \(\mathbf{G}=[\mathbf{g_{1}},g_{2}]\) is found directly from the definition:
\[\begin{split}\mathcal{L}_{1,i}(\mathbf{U};\mathbf{G})=\lim_{\epsilon\to 0}\frac{r_{1,i}(\mathbf{U}+\epsilon\mathbf{G})-r_{1,i}(\mathbf{U})}{\epsilon}=&-u_{b,j}\frac{\partial g_{1,i}}{\partial x_{j}}-g_{1,j}\frac{\partial u_{b,i}}{\partial x_{j}}-g_{1,j}\frac{\partial u_{i}}{\partial x_{j}}\\ &-u_{j}\frac{\partial g_{1,i}}{\partial x_{j}}-\frac{\partial g_{2}}{\partial x_{i}}+\frac{1}{Re}\frac{\partial^{2}g_{1,i}}{\partial x_{j}\partial x_{j}},\end{split} \tag{A.3}\]
\[\mathcal{L}_{2}(\mathbf{U};\mathbf{G})=\lim_{\epsilon\to 0}\frac{r_{2}(\mathbf{U}+\epsilon\mathbf{G})-r_{2}(\mathbf{U})}{\epsilon}=\frac{\partial g_{1,j}}{\partial x_{j}}. \tag{A.4}\]
### The adjoint operator
To derive the adjoint operator of the directional derivative of the residual, \(\mathcal{L}(\mathbf{U};\mathbf{G})\), we expand the inner product of \(\mathcal{L}(\mathbf{U};\mathbf{G})\) and the residual \(\mathbf{R}\) as follows:
\[\begin{split}\langle\mathcal{L}(\mathbf{U};\mathbf{G}),\mathbf{R} \rangle&=\int_{\Omega}\left(\mathcal{L}_{1}\cdot\mathbf{r}_{1}+ \mathcal{L}_{2}r_{2}\right)\mathrm{d}\mathbf{x}\\ &=\int_{\Omega}\left[\left(-u_{b,j}\frac{\partial g_{1,i}}{ \partial x_{j}}-g_{1,j}\frac{\partial u_{b,i}}{\partial x_{j}}-g_{1,j}\frac{ \partial u_{i}}{\partial x_{j}}-u_{j}\frac{\partial g_{1,i}}{\partial x_{j}} \right.\right.\\ &\qquad\qquad\left.-\frac{\partial g_{2}}{\partial x_{i}}+\frac{ 1}{Re}\frac{\partial^{2}g_{1,i}}{\partial x_{j}\partial x_{j}}\right)r_{1,i}+ \left(\frac{\partial g_{1,j}}{\partial x_{j}}\right)r_{2}\right]\mathrm{d} \mathbf{x}.\end{split}\]
Integrating by parts we have
\[\int_{x_{j,\min}}^{x_{j,\max}}u_{b,j}\frac{\partial g_{1,i}}{\partial x_{j}}r_ {1,i}\mathrm{d}x_{j}=u_{b,j}g_{1,i}r_{1,i}\Big{|}_{x_{j}=x_{j,\min}}^{x_{j, \max}}-\int_{x_{j,\min}}^{x_{j,\max}}\frac{\partial(u_{b,j}r_{1,i})}{\partial x _{j}}g_{1,i}\mathrm{d}x_{j},\]
\[\int_{x_{j,\min}}^{x_{j,\max}}u_{j}\,\frac{\partial g_{1,i}}{\partial x_{j}}r_{1,i} \mathrm{d}x_{j}=u_{j}g_{1,i}r_{1,i}\Big{|}_{x_{j}=x_{j,\min}}^{x_{j,\max}}-\int_ {x_{j,\min}}^{x_{j,\max}}\frac{\partial(u_{j}r_{1,i})}{\partial x_{j}}g_{1,i} \mathrm{d}x_{j},\]
\[\int_{x_{i,\min}}^{x_{i,\max}}\frac{\partial g_{2}}{\partial x_{i}}r_{1,i}\mathrm{d}x_{i}=g_{2}r_{1,i}\Big{|}_{x_{i}=x_{i,\min}}^{x_{i,\max}}-\int_{x_{i,\min}}^{x_{i,\max}}\frac{\partial r_{1,i}}{\partial x_{i}}g_{2}\mathrm{d}x_{i},\]
\[\int_{x_{j,\min}}^{x_{j,\max}}\frac{\partial^{2}g_{1,i}}{\partial x_{j}\partial x _{j}}r_{1,i}\mathrm{d}x_{j}=\left[\frac{\partial g_{1,i}}{\partial x_{j}}r_{1,i}-g_{1,i}\frac{\partial r_{1,i}}{\partial x_{j}}\right]_{x_{j}=x_{j,\min}}^{ x_{j,\max}}+\int_{x_{j,\min}}^{x_{j,\max}}\frac{\partial^{2}r_{1,i}}{\partial x _{j}\partial x_{j}}g_{1,i}\mathrm{d}x_{j},\]
\[\int_{x_{j,\min}}^{x_{j,\max}}\frac{\partial g_{1,j}}{\partial x_{j}}r_{2} \mathrm{d}x_{j}=g_{1,j}r_{2}\Big{|}_{x_{j}=x_{j,\min}}^{x_{j,\max}}-\int_{x_{j,\min}}^{x_{j,\max}}\frac{\partial r_{2}}{\partial x_{j}}g_{1,j}\mathrm{d}x_{j}.\]
For \(\mathbf{U},\mathbf{R},\mathbf{G}\in\mathcal{P}_{0}\), the following boundary terms cancel out either due to the periodicity of \(\mathbf{U}\), \(\mathbf{R}\) and \(\mathbf{G}\) in \(x\) and \(z\), or due to \(\mathbf{g}_{1}(y=\pm 1)=\mathbf{0}\):
\[u_{b,j}g_{1,i}r_{1,i}\Big{|}_{x_{j}=x_{j,\min}}^{x_{j,\max}}=0,\]
\[g_{1,i}\frac{\partial r_{1,i}}{\partial x_{j}}\Big{|}_{x_{j}=x_{j,\min}}^{x_{ j,\max}}=0,\]
\[g_{1,j}r_{2}\Big{|}_{x_{j}=x_{j,\min}}^{x_{j,\max}}=0.\]
Similarly, the other two boundary terms cancel out either due to the periodicity of \(\mathbf{R}\) and \(\mathbf{G}\) in \(x\) and \(z\), or due to \(\mathbf{r}_{1}(y=\pm 1)=\mathbf{0}\):
\[g_{2}r_{1,i}\Big{|}_{x_{i}=x_{i,\min}}^{x_{i,\max}}=0,\]
\[\frac{\partial g_{1,i}}{\partial x_{j}}r_{1,i}\Big{|}_{x_{j}=x_{j,\min}}^{x_{ j,\max}}=0.\]
We now rewrite the inner product as
\[\begin{split}\langle\mathcal{L}(\mathbf{U};\mathbf{G}),\mathbf{R}\rangle=\int_{\Omega}&\left[\left(\frac{\partial(u_{b,j}r_{1,i})}{\partial x_{j}}-r_{1,j}\frac{\partial u_{b,j}}{\partial x_{i}}-r_{1,j}\frac{\partial u_{j}}{\partial x_{i}}+\frac{\partial(u_{j}r_{1,i})}{\partial x_{j}}\right.\right.\\ &\qquad\left.\left.+\frac{1}{Re}\frac{\partial^{2}r_{1,i}}{\partial x_{j}\partial x_{j}}-\frac{\partial r_{2}}{\partial x_{i}}\right)g_{1,i}+\left(\frac{\partial r_{1,i}}{\partial x_{i}}\right)g_{2}\right]\mathrm{d}\mathbf{x},\end{split}\]
that can be written in the vector form as
\[\langle\mathcal{L}(\mathbf{U};\mathbf{G}),\mathbf{R}\rangle=\int_{\Omega}\left((\nabla\mathbf{r}_{1})\;(\mathbf{u}_{b}+\mathbf{u})-(\nabla(\mathbf{u}+\mathbf{u}_{b}))^{\top}\;\mathbf{r}_{1}+\frac{1}{Re}\Delta\mathbf{r}_{1}+r_{2}\mathbf{r}_{1}-\nabla r_{2}\right)\cdot\mathbf{g}_{1}\mathrm{d}\mathbf{x}\] \[+\int_{\Omega}(\nabla\cdot\mathbf{r}_{1})\;g_{2}\mathrm{d}\mathbf{x}.\]
By definition
\[\left\langle\mathcal{L}(\mathbf{U};\mathbf{G}),\mathbf{R}\right\rangle=\left\langle \mathbf{G},\mathcal{L}^{\dagger}(\mathbf{U};\mathbf{R})\right\rangle=\int_{ \Omega}\left(\mathcal{L}_{1}^{\dagger}\cdot\mathbf{g}_{1}+\mathcal{L}_{2}^{ \dagger}g_{2}\right)\!\mathrm{d}\mathbf{x},\]
therefore, the components of \(\mathcal{L}^{\dagger}(\mathbf{U};\mathbf{R})\) are obtained as
\[\mathcal{L}_{1}^{\dagger}=\left(\nabla\mathbf{r}_{1}\right)\left(\mathbf{u}_{b}+\mathbf{u}\right)-\left(\nabla(\mathbf{u}+\mathbf{u}_{b})\right)^{\top}\mathbf{r}_{1}+\frac{1}{Re}\Delta\mathbf{r}_{1}+r_{2}\mathbf{r}_{1}-\nabla r_{2}, \tag{A.5}\] \[\mathcal{L}_{2}^{\dagger}=\nabla\cdot\mathbf{r}_{1}. \tag{A.6}\]
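The defining pairing \(\left\langle\mathcal{L}(\mathbf{U};\mathbf{G}),\mathbf{R}\right\rangle=\left\langle\mathbf{G},\mathcal{L}^{\dagger}(\mathbf{U};\mathbf{R})\right\rangle\) is straightforward to check numerically. The sketch below is a deliberately simplified 1D periodic illustration (only the advection-diffusion part of the operator, constant advection velocity, no walls, so all boundary terms vanish by periodicity); it is not a test of the full wall-bounded operator above.

```python
import numpy as np

# Adjoint pair for the advection-diffusion piece on a periodic domain:
#   L  g = -U dg/dx + (1/Re) d^2g/dx^2
#   L+ r =  U dr/dx + (1/Re) d^2r/dx^2   (obtained by integration by parts)
N, L, U, Re = 256, 2*np.pi, 1.3, 100.0
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

def ddx(f):   return np.real(np.fft.ifft(1j*k*np.fft.fft(f)))
def d2dx(f):  return np.real(np.fft.ifft(-(k**2)*np.fft.fft(f)))
def inner(a, b): return np.sum(a*b)*L/N       # discrete L2 inner product

rng = np.random.default_rng(0)
lowpass = np.abs(k) < 8                        # band-limited random test fields
g = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * lowpass))
r = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * lowpass))

Lg     = -U*ddx(g) + d2dx(g)/Re
Ladj_r =  U*ddx(r) + d2dx(r)/Re
print(abs(inner(Lg, r) - inner(g, Ladj_r)))    # ~ machine precision
```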
|
2309.14795 | Assessing the alignment accuracy of state-of-the-art deterministic
fabrication methods for single quantum dot devices | The realization of efficient quantum light sources relies on the integration
of self-assembled quantum dots (QDs) into photonic nanostructures with high
spatial positioning accuracy. In this work, we present a comprehensive
investigation of the QD position accuracy, obtained using two marker-based QD
positioning techniques, photoluminescence (PL) and cathodoluminescence (CL)
imaging, as well as using a marker-free in-situ electron beam lithography
(in-situ EBL) technique. We employ four PL imaging configurations with three
different image processing approaches and compare them with CL imaging. We
fabricate circular mesa structures based on the obtained QD coordinates from
both PL and CL image processing to evaluate the final positioning accuracy.
This yields final position offset of the QD relative to the mesa center of
$\mu_x$ = (-40$\pm$58) nm and $\mu_y$ = (-39$\pm$85) nm with PL imaging and
$\mu_x$ = (-39$\pm$30) nm and $\mu_y$ = (25$\pm$77) nm with CL imaging, which
are comparable to the offset $\mu_x$ = (20$\pm$40) nm and $\mu_y$ =
(-14$\pm$39) nm obtained using the in-situ EBL method. We discuss the possible
causes of the observed offsets, which are significantly larger than the QD
localization uncertainty obtained from simply imaging the QD light emission
from an unstructured wafer. Our study highlights the influences of the image
processing technique and the subsequent fabrication process on the final
positioning accuracy for a QD placed inside a photonic nanostructure. | Abdulmalik A. Madigawa, Jan N. Donges, Benedek Gaál, Shulun Li, Martin Arentoft Jacobsen, Hanqing Liu, Deyan Dai, Xiangbin Su, Xiangjun Shang, Haiqiao Ni, Johannes Schall, Sven Rodt, Zhichuan Niu, Niels Gregersen, Stephan Reitzenstein, Battulga Munkhbat | 2023-09-26T09:42:56Z | http://arxiv.org/abs/2309.14795v2 | Assessing the alignment accuracy of state-of-the-art deterministic fabrication methods for single quantum dot devices
###### Abstract
The realization of efficient quantum light sources relies on the integration of self-assembled quantum dots (QDs) into photonic nanostructures with high spatial positioning accuracy. In this work, we present a comprehensive investigation of the QD position accuracy, obtained using two marker-based QD positioning techniques, photoluminescence (PL) and cathodoluminescence (CL) imaging, as well as using a marker-free in-situ electron beam lithography (in-situ EBL) technique. We employ four PL imaging configurations with three different image processing approaches and compare them with CL imaging. We fabricate circular mesa structures based on the obtained QD coordinates from both PL and CL image processing to evaluate the final positioning accuracy. This yields final position offset of the QD relative to the mesa center of \(\mu_{x}\) = (-40\(\pm\)58) nm and \(\mu_{y}\) = (-39\(\pm\)85) nm with PL imaging and \(\mu_{x}\) = (-39\(\pm\)30) nm and \(\mu_{y}\) = (25\(\pm\)77) nm with CL imaging, which are comparable to the offset \(\mu_{x}\) = (20\(\pm\)40) nm and \(\mu_{y}\) = (-14\(\pm\)39) nm obtained using the in-situ EBL method. We discuss the possible causes of the observed offsets, which are significantly larger than the QD localization uncertainty obtained from simply imaging the QD light emission from an unstructured wafer. Our study highlights the influences of the image processing technique and the subsequent fabrication process on the final positioning accuracy for a QD placed inside a photonic nanostructure.
## I Introduction
Solid-state single-photon emitters are crucial building blocks for developing efficient quantum light sources and on-chip quantum circuits for quantum information processing platforms. Self-assembled semiconductor quantum dots (QDs) are among the most promising solid-state quantum emitters for realizing quantum communication networks,[1; 2; 3] and photonic quantum computation,[4; 5; 6] which will enable many applications in photonic quantum technologies.[7] Here, one uses the fact that a single QD efficiently emits single photons due to its quantized two-level electronic structure. However, extracting and collecting the emitted photons for practical applications is rather challenging, and in a simple planar device geometry, most of the photons remain trapped in the semiconductor matrix due to total internal reflection. This problem is usually tackled by incorporating the QD within an engineered photonic nanostructure.[8; 9; 10; 11; 12; 13; 14; 15] In cavity-based quantum light sources, this device configuration allows for efficient coupling into only one single photonic mode by the Purcell effect, maximizing the photon extraction efficiency. Importantly, maximum Purcell enhancement is attained when the QD is spatially positioned at the mode's electric field maximum and in spectral resonance with the cavity mode. While there has been success in designing high-performance nanophotonic components for efficient light extraction, one very challenging aspect is the accurate spatial and spectral positioning of a single QD within the structures. In fact, the underlying self-assembled growth process results in random positions of the QDs. In addition, the QDs have different shapes and material compositions, leading to variations in their emission wavelengths. The described randomness poses a significant challenge in fabricating quantum light sources with spatially and spectrally resonant QDs, which is crucial for optimum device performance and for scaling up to larger quantum photonic systems such as large-scale integrated quantum photonic circuits.
Over the years, different technology platforms have been developed to select and determine the position and spectrum of target QDs and to integrate them deterministically into nanophotonic structures acting as high-performance quantum light sources. Methods like in-situ photolithography,[16] in-situ e-beam lithography,[17] and photoluminescence imaging[18; 19; 20; 21; 22; 23] have been developed and employed very successfully for deterministic QD-device processing. The field was pioneered by the development of in-situ optical lithography in 2008, which involves the QD localization (determining the QD location) with photoluminescence (PL) mapping and subsequent optical lithography of the photonic structure. This method can achieve localization accuracy within \(\pm\)50 nm and has been employed in developing highly efficient micropillar single-photon sources.[24] However, in-situ optical lithography can only be applied to specific geometries like micropillars. Moreover, the fabrication is limited to structures with feature sizes achievable within the optical
diffraction limit. On the other hand, the in-situ electron beam lithography (in-situ EBL) approach that combines low-temperature cathodoluminescence (CL) mapping for spatial and spectral QD selection with high-resolution electron beam lithography (EBL) has proven to be very powerful in terms of lithography resolution and geometrical flexibility. The pixel size and resolution can be picked freely and are not fixed by a camera sensor. Furthermore, the high electron energy enables efficient excitation independent of the semiconductor bandgap and does not overlap with the QD luminescence signal. The in-situ EBL is known to feature a high alignment accuracy of 30-40 nm and was initially employed for the development of different high-performance single-photon sources.[14] Quite naturally, this advanced nanotechnology platform can also be used to define complex patterns, such as integrated quantum photonic circuits.[13; 25; 26] However, it is technically rather complex as it requires the combination of low-temperature CL spectroscopy with low-temperature EBL in a single system. As a further deterministic nanofabrication technology, the PL imaging approach has become very popular in recent years. It involves the QD localization and the structure fabrication in separate process steps. In this approach, a high QD preselection accuracy as low as 5 nm has been reported.[27] However, the process flow is more complex than in the in-situ lithography techniques. This is explained by the fact that it is a marker-based localization approach and involves the lithography step in a different setup, which potentially results in larger errors in aligning the nanophotonic structures to the preselected QDs. Several studies report high localization accuracy from PL image analysis, but so far, no detailed analysis of the final alignment uncertainty has been performed for any of these approaches, including the aforementioned in-situ lithography techniques.
Here, we report on a systematic and comparative study of the final alignment accuracy of a target QD in a photonic structure using marker-based PL and CL imaging techniques. Furthermore, we compare the two marker-based techniques with the marker-free in-situ EBL technique, with the aim of uncovering the distinct advantages and disadvantages associated with each method. Our approach starts with the determination of the QD position using the different QD imaging techniques through a series of image processing steps. We then fabricate a circular mesa test structure with a suitably large diameter around the localized QD. Finally, we extract the final position of the QDs within the mesa structures by mapping the QD emission. We study the mean offsets (deviations from the center of the mesa) and uncertainties in the QD position for the different techniques. All approaches exhibit notable alignment offsets and uncertainties much larger than those obtained from the simple image-processing fit uncertainties, which are often used as a measure of the alignment accuracy of deterministic QD device processing. We analyze the errors in each approach and propose strategies to improve the QD positioning accuracy.
## II QD localization
The QD heterostructure sample we use in this study is grown using molecular beam epitaxy (MBE) and consists of low-density InAs QDs embedded in a GaAs membrane. An array of alignment markers is fabricated on the planar sample using EBL, followed by Ti/Au deposition and a lift-off process. The alignment markers are a set of four square-cross marks with an arm length of 30 \(\mu\)m and a width of 2 \(\mu\)m. They serve as reference markers for extracting the global coordinates of the QDs for further EBL processing. To compare the different positioning techniques, QDs from the same sample regions are selected, and the QD coordinates are determined using the described techniques, including marker-based PL and CL and in-situ EBL. The determined coordinates are then used in fabricating mesa structures around the QDs (Figure 1), with the QDs divided between the different techniques to evaluate the accuracy of each separately.
Figure 1: Procedure for positioning nanostructures around preselected QDs. Images are taken using either PL or CL imaging systems and processed using rigorous image analysis algorithms to precisely extract the location of the target QDs relative to alignment markers. The determined coordinates are then used to fabricate nanostructures (here a circular mesa) around the QDs using EBL. The in-situ EBL approach combines CL imaging for marker-free QD localization with EBL structuring in one process setup.
**QD localization via PL imaging**
First, we use the PL imaging technique to determine the center position of selected QDs with reference to the alignment marks. The setup employed for the sample imaging is based on the two-color PL imaging technique developed in Ref. [19] (see Supporting Information). The sample is mounted in a closed-cycle cryostat operating at 4 K. The cryostat is equipped with a piezoelectric positioning stage and a low-temperature microscope objective (magnification = 60\(\times\), NA = 0.82) located inside the cryostat. The PL setup utilizes two different color light-emitting diodes (LEDs) for the excitation and imaging of the QDs and alignment markers. A 470 nm LED is used for QD excitation, while a 1050 nm LED is used for imaging the alignment markers. The wavelengths of the LEDs are carefully chosen to achieve optimal contrast for both the markers and the QDs while minimizing emission and reflection from the sample background. To extract the QDs location coordinates, the QD PL and alignment marker images are taken simultaneously using
Figure 2: Determining the location of QDs with reference to alignment markers using QD imaging techniques. (a)-(d) displays the two-color PL method (Single-image 1) and (e)-(h) the CL method. Image obtained using (a) PL imaging setup and (e) CL mapping setup. (b) Intensity line cut profile (x-axis) of QD PL (blue dots) and its Gaussian fit (red line, with one standard deviation peak position error) and (c) intensity line cut profile (x-axis) of cross-correlated marker image along the located center, of the image in (a). (d) Histogram of the uncertainties in the QDs location, alignment markers location, and the combined uncertainties of the QD and marker (QD+marker) of the image in (a), measured from the line cuts from 15 images (taken at different field regions on the sample). (f) and (g) Intensity line cut (x-axis) of QD CL and marker image, respectively, of the image in (e). (h) Histogram of the uncertainties in the QD location, alignment marker location, and the combined uncertainties of the QD and marker (QD + marker) of the image in (e). The location uncertainties of the QDs were extracted from the 2D Gaussian fit of the QDs profiles for both techniques. The location uncertainties of the alignment markers in the PL technique were extracted from the polynomial fit of a cropped region of interest (20 x 20-pixel area) around the cross-correlation center (with a 68% confidence interval), while the CL marker uncertainties were determined from a straight line fit through the marker center. Combined QD+ alignment marker uncertainties are obtained by propagating the uncertainties of the QDs and alignment markers.
a CMOS camera (2048 pixels x 2040 pixels resolution) within an \(\approx\) (86 x 86) \(\mu\)m\({}^{2}\) field of view with an image acquisition time of 1 s. The images are processed using an image analysis program developed in Python (see Supporting Information for details). The program utilizes a combination of the cross-correlation algorithm to locate the center of the four markers,[29] and the Gaussian blob detection with maximum likelihood estimator (MLE) algorithm to locate the QDs.[30] The center coordinates of the markers and the QDs are retrieved in pixel units and transformed into local coordinates, with the top-left marker serving as the origin. The local coordinates are then transformed into global EBL coordinates for the subsequent fabrication of nanostructures. Furthermore, after the mapping step, the PL spectrum of each selected QD is recorded and analyzed to confirm the presence of a single QD. The 2D intensity profiles of the QDs are fitted with a Gaussian function using a nonlinear least-squares approach, and the position uncertainties are extracted from the peak position error of the fit, represented as one standard deviation. The position uncertainty of the markers is extracted from the polynomial fits of a line cut along the cross-correlation maximum (with a 68% confidence interval).
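The core of this pipeline can be summarized in a few lines of Python. The sketch below only illustrates the kind of processing described above (cross-correlation against an ideal marker template, and a 2D Gaussian fit with a one-standard-deviation peak-position error); the pixel scale, the synthetic test spot, and all function names are assumptions made for the example and are not taken from the actual analysis program referenced in the Supporting Information.

```python
# Illustrative sketch of the image-analysis steps (not the actual program):
# cross-correlation locates the markers, a 2D Gaussian fit locates the QD.
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import curve_fit

PIXEL_NM = 86_000 / 2048   # assumed pixel scale: ~86 um field over 2048 pixels

def locate_marker(image, template):
    """Return the pixel position of the cross-correlation maximum between
    the image and an ideal marker template."""
    corr = fftconvolve(image, template[::-1, ::-1], mode="same")
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    return ix, iy

def gauss2d(coords, x0, y0, sx, sy, amp, off):
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2))) + off
    return g.ravel()

def locate_qd(image):
    """2D Gaussian fit of a QD spot; returns the peak position and its
    one-standard-deviation uncertainty from the fit covariance."""
    y, x = np.indices(image.shape)
    p0 = [image.shape[1] / 2, image.shape[0] / 2, 3, 3, image.max(), image.min()]
    popt, pcov = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
    perr = np.sqrt(np.diag(pcov))
    return (popt[0], popt[1]), (perr[0], perr[1])

# synthetic QD spot, purely for demonstration
y, x = np.indices((64, 64))
spot = 200 * np.exp(-((x - 30.3)**2 + (y - 28.7)**2) / (2 * 2.5**2)) \
       + np.random.default_rng(0).normal(10, 2, (64, 64))
(qx, qy), (ex, ey) = locate_qd(spot)
print(f"QD at ({qx * PIXEL_NM:.1f}, {qy * PIXEL_NM:.1f}) nm, "
      f"1-sigma uncertainty ({ex * PIXEL_NM:.2f}, {ey * PIXEL_NM:.2f}) nm")
```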
To identify the most effective approach for accurate localization of the QDs in relation to the markers, we acquire and analyze four sets of images obtained using different imaging settings and configurations. Two sets of images are acquired using the two-color PL imaging approach, each with different contrast settings (Single-image 1 and Single-image 2). The image contrast of the QDs and alignment markers plays a critical role in achieving precise localization of the QDs during image processing. Optimal contrast is achieved by carefully adjusting the illumination and excitation LED powers. Figure 2(a) shows an image taken with the power levels of the 1050 nm and 470 nm LEDs adjusted to make the QD brighter than the alignment markers (Single-image 1). This adjustment effectively reduces the background reflection from the 1050 nm LED, improving the QD contrast. An analysis of the uncertainty for the QD and marker locations, as depicted in Figure 2(d), reveals a mean uncertainty of (1.38 \(\pm\) 1.07) nm for the QD location and (4.62 \(\pm\) 1.65) nm for the marker location, leading to a combined uncertainty (QD+marker) of (4.51 \(\pm\) 1.25) nm. The QD + marker uncertainty is obtained by adding the QD and marker uncertainties using the error propagation formula. The low uncertainty for both the markers and the QDs can be attributed to the enhanced signal-to-noise ratio achieved by utilizing a low 1050 nm LED illumination power, which is crucial for minimizing fitting errors. The QD uncertainties depicted here are extracted from a 2D Gaussian fit of the QD intensity profile. The uncertainty extracted from the MLE method is as low as 0.5 nm. However, this value is unreliable as it depends strongly on the QD pixel intensity, which varies from image to image (see Supporting Information).
The effect of the marker and QD contrast in the overall uncertainty of the QD position for the PL imaging approach is studied with different imaging configurations (see Table 1). Single-image 2 is acquired using the same imaging configuration as Single-image 1, with the power level of the 1050 nm LED adjusted to capture a bright marker image and enhance the marker contrast (see Supporting Information for images and histograms of uncertainties). The improvement in marker contrast results in better accuracy in the marker location as compared to Single-image 1. However, an increase in the uncertainty of the QD location is observed. This is attributed to the low QD's luminescence contrast caused by the increased background reflection of the illumination LED. In addition, two sets of images are also acquired using a single-color approach (one LED at a time). The first set (Merged-images) involves acquiring two separate images (marker with 1050 nm LED and QD with 470 nm LED), which are later merged during image processing (see Supporting Information for images and histograms of uncertainties). With this approach, the contrast of the QDs and markers can be optimized independently to reduce the uncertainty in the final QD position. Consequently, the QD location uncertainty reduces significantly. However, the marker contrast is still limited by the background reflection of the illumination LED. In addition, one issue with
\begin{table}
\begin{tabular}{c c c c} & QD location & Marker location & QD + marker \\ & uncertainty (nm) & uncertainty (nm) & uncertainty (nm) \\ \hline PL imaging: Single-image 1 & 1.38 \(\pm\) 1.07 & 4.62 \(\pm\) 1.65 & 4.51 \(\pm\) 1.25 \\ \hline PL imaging: Single-image 2 & 2.57 \(\pm\) 0.76 & 4.16 \(\pm\) 0.21 & 4.92 \(\pm\) 0.42 \\ \hline PL imaging: Merged-images & 0.93 \(\pm\) 0.54 & 5.12 \(\pm\) 1.45 & 5.80 \(\pm\) 1.86 \\ \hline PL imaging: Dark-marker images & 2.76 \(\pm\) 1.57 & 5.75 \(\pm\) 0.89 & 6.41 \(\pm\) 1.08 \\ \hline CL imaging & 8.09 \(\pm\) 3.90 & 2.05 \(\pm\) 0.54 & 8.45 \(\pm\) 3.75 \\ \hline In-situ e-beam lithography & 8.68 \(\pm\) 2.86 & - & - \\ \hline In-situ e-beam lithography (Ref.[28]) & 25 & - & - \\ \hline In-situ photolithography (Ref.[16]) & 50 & - & - \\ \end{tabular}
\end{table}
Table 1: Localization uncertainties for different positioning approaches.
this approach is that it is susceptible to image drift errors caused by the transition between two LEDs of different colors, which could potentially lead to larger localization errors. Alternatively, a second set of images (dark marker) is acquired with only the 470 nm LED to capture the bright QDs with a dark marker image. This approach involves leveraging the PL emission of the wetting layer to generate a dark image of the Au markers combined with the QD PL emission (see Supporting Information for images and histograms of uncertainties). This is achieved using a 780 nm LED for QD excitation to maximize the wetting layer emission. The enhanced wetting layer emission increases the contrast of the alignment markers, resulting in a dark image representation of the markers. Consequently, this approach achieves good localization accuracy for the markers. However, the decrease in the signal-to-noise ratio caused by the wetting layer emission increases the uncertainty in the QD location. Our analysis reveals that the Single-image 1 configuration has the best balance in terms of the accuracy of both QD and marker locations. However, it is noteworthy that the effectiveness of different image-acquisition approaches may vary depending on the design structure of the QD sample. The reflectivity contrast between the sample and the marker under the illumination wavelengths plays a significant role in determining the marker imaging quality, which subsequently influences marker location accuracy. Therefore, conducting reflectivity measurements for both the sample and the marker across a broad spectral range is advisable. This can help in selecting the optimal wavelength for the illumination LED to maximize the
Figure 3: Finding the accuracy of the determined QD location after nanostructure fabrication. (a) 3D sketch illustration of the QD misalignment relative to the center of the mesa structure. (b) SEM image of fabricated mesa around determined QDs coordinates. (c) CL map and SEM of QD in mesa structure, showing the QD emission profile within the mesa structure (QD center obtained from 2D Gaussian fit, and the mesa edges fitted with an ellipse). QD offset distribution around the mesa center of CL imaging method in (d), and PL imaging (Single-image 1) method (shaded region) using cross-correlation, edge detection marker, and auto-cross correlation localization approaches in (e), (f), and (g), respectively. (h) Histogram of the offsets’ distribution of all the methods with their mean offsets and standard deviation (uncertainty). N10, O10, L11, and J11 are the fields where the QDs have been located.
imaging contrast and, consequently, achieve the best accuracy in both QD and marker localization.
**QD localization via CL imaging**
Similarly, we use the CL imaging technique to determine the coordinates of the same set of QDs as in the PL technique. The setup used for the CL imaging of the combined marker and QD fields is based on a Raith eLine Plus EBL system (see Methods section and Supporting Information). This state-of-the-art system can, in addition to performing high-resolution EBL, simultaneously detect secondary electrons and spectra pixel by pixel. Therefore, it enables us to obtain a clear SEM picture whilst also detecting the corresponding CL emission spectra of the sample. This results in a perfect overlap between the SEM picture and the obtained CL map (Figure 2(e)). We would like to note that the CL map shows an intensity maximum in the top center, which is not caused by the lateral distribution of the QD emission intensity. This maximum is due to an efficiency optimum of our CL-mirror adjustment and is visible in all CL maps. The location of the QDs in the CL map is then determined via a 2D Gaussian fit (Figure 2(f)), analogous to the PL method. Based on a nonlinear least-squares approach, the peak position and one standard deviation as the fitting error are extracted. This way, it is possible to select the wavelength range for each QD individually and optimize the 2D fit accuracy. Afterward, the QD coordinates are uploaded into a Python program, which extracts the center of the markers and transforms the QD coordinates into local coordinates, again with the top left marker as the origin. The center of the markers is hereby determined by a line scan across the arms of the markers followed by a straight line fit through the center of the arm. The resulting intersection of the two lines is regarded as the center. The fit is performed several times, and the error is the standard deviation based on this fitting series. Due to the positioning of the detector for the secondary electrons at the side of our system, the detection of these electrons is not uniform, leading to a brighter edge of the marker arm on one side and a darker edge or shadow on the other. The result is an asymmetry in the line scan profile, easily visible in Figure 2(g), and, based on this, a higher fit uncertainty in the X direction (Figure 2(h)). The measurements are performed at low temperatures of 20 K under an acceleration voltage of 20 kV. In each CL map, an area of (80 x 80) \(\mu\)m\({}^{2}\) is scanned by the electron beam with a pixel size of 500 nm and an exposure time per pixel of only 20 ms, which highlights the high light throughput; the corresponding spatial SE image resolution is 25 nm. Under those conditions, we achieve a mean uncertainty for the QD position of (8.09 \(\pm\) 3.90) nm and (2.05 \(\pm\) 0.54) nm for the alignment marker position, leading to a combined uncertainty (QD+marker) of (8.45 \(\pm\) 3.75) nm (Table 1).
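As a rough illustration of the marker-center extraction just described, the following sketch fits straight lines through the arm centers found in a few line scans and takes their intersection as the marker center; the placeholder input arrays stand in for the measured scan profiles and are not actual data.

```python
# Sketch of the CL marker-center extraction (placeholder data, not measurements)
import numpy as np

def fit_arm(scan_coord, arm_center):
    """Least-squares straight line through the arm-center positions found in
    the individual line scans; returns (slope, intercept)."""
    return np.polyfit(scan_coord, arm_center, 1)

def marker_center(vert_arm, horz_arm):
    """Intersection of the two fitted arm axes:
    vertical arm x = a1*y + b1, horizontal arm y = a2*x + b2 (pixel units)."""
    (a1, b1), (a2, b2) = vert_arm, horz_arm
    x = (a1 * b2 + b1) / (1.0 - a1 * a2)
    return x, a2 * x + b2

# placeholder arm centers extracted from four line scans per arm
y_scan = np.array([10., 20., 30., 40.]); x_vert = np.array([50.2, 50.1, 49.9, 50.0])
x_scan = np.array([35., 45., 55., 65.]); y_horz = np.array([24.9, 25.1, 25.0, 25.2])
print("marker center (px):",
      np.round(marker_center(fit_arm(y_scan, x_vert), fit_arm(x_scan, y_horz)), 2))
```

Repeating such line fits over several scans then yields the spread that is quoted above as the marker-position uncertainty.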
## III QD-Mesa final alignment analysis
Given that the fit uncertainties merely provide an estimate of the QD localization process accuracy and do not precisely reflect the true accuracy of the QD location, we proceed to fabricate mesa structures (1.4 - 4.0 \(\mu m\) in diameter) around the determined QD locations. This allows us to investigate the QD's true position uncertainty and its variance with the different positioning techniques. The position of the QD within the mesa is obtained by recording the SEM picture of the mesa and the QD CL map simultaneously. Due to the rather large dimensions of the mesa structures, the QDs are distant enough from the edges to prevent unwanted interactions between emitters and the etched surface. This way, we can precisely visualize the QD position within the mesa structure to retrieve the true location accuracy. In the resulting overlapped image (Figure 3 (b) and (c)), the edges of the mesa are fitted with an ellipse to identify the center of the mesa, and a Gaussian fit to the QD emission allows us to determine its position relative to the center of the mesa structure. Both fits deliver separate fitting errors, which, through Gaussian propagation, lead to an error for the final position offset. The mean value of this error for all mesa structures is (2.94 \(\pm\) 0.90) nm and therefore represents the precision of our fitting method. The small error shows the overall consistency of our fitting method. The offset of the QDs from the center of the mesa for the CL and PL imaging techniques is shown in Figure 3 (d) & (e). The result shows an offset tendency for both methods toward the -X axis; however, the PL method has a larger offset along -X and an offset tendency toward the -Y axis. The CL method shows data points more evenly distributed around \(\pm\)Y. However, it should be noted that there are fewer data points for the CL compared to the PL method. Therefore, it is not possible to conclude with certainty that the CL has no tendency toward a particular direction in Y. The tendency of the CL method to have a shift in the -X direction could be due to the previously discussed unequal illumination of the marker and the resulting asymmetry in the marker scan in the X direction. This could lead to a line fit that is not well located in the center of the marker arm and, therefore, introduces a shift towards the negative X direction. Moreover, these offsets contain contributions from both the EBL fabrication error and the individual localization errors. Therefore, their origin is determined by obtaining the fabrication alignment error and analyzing the errors in the localization processes.
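For clarity, the offset extraction and the error propagation described above amount to the following small computation; the numbers are made up for illustration, whereas in the study the two centers come from the QD Gaussian fit and the mesa-edge ellipse fit.

```python
# Sketch of the final-offset extraction with Gaussian error propagation
import numpy as np

def offset_with_error(qd, qd_err, mesa, mesa_err):
    dx, dy = qd[0] - mesa[0], qd[1] - mesa[1]
    ex = np.hypot(qd_err[0], mesa_err[0])   # propagate the two fit errors
    ey = np.hypot(qd_err[1], mesa_err[1])
    return (dx, dy), (ex, ey)

(dx, dy), (ex, ey) = offset_with_error(
    qd=(712.4, 698.1), qd_err=(2.1, 2.3),       # nm, from the QD Gaussian fit
    mesa=(700.0, 700.0), mesa_err=(1.8, 1.7))   # nm, from the mesa edge fit
print(f"offset = ({dx:+.1f} +/- {ex:.1f}, {dy:+.1f} +/- {ey:.1f}) nm")
```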
To investigate why the PL method has larger offsets and why it tends more toward the -X/-Y direction, we reanalyze our images with the different image processing algorithms we employed and compare the final alignment offsets (see Supporting Information for details of the different methods). First, we observe that for a QD that is not far off from a Gaussian profile, there is a small offset difference between the initial Gaussian blob detection and the subsequent MLE process (see Supporting
Information). This suggests the presence of a small error source and indicates a lower contribution of the QD localization error to the overall alignment offset. This implies that the large offset values could be due to errors in the marker detection process. We investigate this by analyzing the QD offsets using different marker localization approaches, as shown in Figure 3(f)-(g). The large offset difference between the various marker localization approaches indicates the presence of a larger error source in the process. To identify the underlying problem, we first analyze the QDs with the largest offset for the cross-correlation method and observe that the large offsets in the negative X direction (mainly from field O10) result from a QD located along the marker edges (see Supporting Information). The cross-correlation algorithm employed is highly sensitive to the quality of the marker itself. This is because the overlap between the ideal marker and the marker image depends on the individual pixels within the cropped image area, meaning that any bright spot (QD) or artifact along the marker image area will result in an error that translates to an offset in the final alignment position. An alternative approach using the edge detection technique utilizes a line detection algorithm to locate the center of the marker. This method reduces the reliance on individual pixels and minimizes the impact of bright spots or artifacts, providing a less sensitive and more robust marker localization approach. This is evident from the lower final alignment offsets, as seen in Figure 3(f). Nonetheless, sharp marker edges are also critical for accurate localization using this approach. Another approach based on the cross-correlation method reported in [27] involves taking the difference between the auto-correlation of the ideal marker and the cross-correlation of the ideal marker with the marker image. This approach shows a lower uncertainty in the QD position; however, it has a larger offset along the X axis, as shown in Figure 3(g). The offset distributions of all the techniques are summarized in Figure 3(h). Each of these approaches shows a different offset tendency. The reason for this difference is not obvious and requires a deeper investigation into each approach. The CL imaging technique shows slightly lower offset and uncertainty values than the PL imaging techniques. This could be partly due to the better-resolved marker image in the CL map, which results in more accurate marker localization.
While we can see different offset values for the different techniques, it is unclear how much is either from the localization error or the fabrication alignment. To evaluate this, we characterize seven etched holes that were patterned simultaneously with the mesas. These holes are strategically positioned at the centers of optical fields in various corners of the sample area. The EBL error is determined by measuring the deviation between the actual position of each hole and the center of the corresponding markers' field using the same image processing approach (see Supporting Information). We find that each hole has a different deviation due to the rotational misalignment during the alignment procedure. A mean offset of \(\approx\) (17 \(\pm\) 22) nm and (-35 \(\pm\) 14) nm from the corners within the located QDs' fields are obtained in the X and Y axes, respectively. That is, on average, the EBL alignment process tends to move the mesa position away from the QD by 17 nm in -X and 35 nm in +Y (QD center as the origin). It is noteworthy to point out that these values are estimates of the actual EBL offset to help us investigate the contribution of the EBL misalignment to the final position accuracy. A more detailed analysis of the EBL alignment rotation is needed for an accurate description. Figure 4 (a) shows the offset and uncertainty of both approaches after the EBL misalignment is compensated. The result implies that for both techniques, the offsets
Figure 4: Comparing the accuracy of marker-based positioning technique with in-situ EBL technique. (a) Comparison plot of the QDs offset distribution around the mesa center for marker-based PL (edge detection) and CL imaging after EBL error is compensated (to remove user-dependent error). (b) QDs offset distribution around the mesa center for the in-situ EBL technique.
in -X result from errors in the localization process. For the PL imaging, almost half of the offset in -Y results from EBL misalignment. However, for the CL imaging technique, the offset tendency towards -Y results from the large EBL offset in -Y, and in fact, the localization process results in an offset tendency towards +Y values. Furthermore, the uncertainty in the offset values of all the techniques suggests a lower contribution of the EBL error in the final position uncertainty.
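To illustrate how a small rotational misalignment during the EBL alignment translates into field-dependent offsets, the following toy calculation rotates a few assumed field centers by an assumed angle; neither the angle nor the coordinates are the measured values.

```python
# Toy illustration: a small rotation about the alignment origin displaces
# distant fields by tens of nanometres (assumed values, not measurements)
import numpy as np

def rotation_offset(point_um, angle_mrad):
    """Displacement (nm) of a point (x, y), given in micrometres, under a
    small rotation about the alignment origin."""
    x, y = point_um
    t = angle_mrad * 1e-3
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return (rot @ np.array([x, y]) - np.array([x, y])) * 1e3   # um -> nm

for field in [(200., 200.), (400., -300.), (-500., 100.)]:
    dx, dy = rotation_offset(field, angle_mrad=0.1)
    print(f"field at {field} um: offset ({dx:+.1f}, {dy:+.1f}) nm")
```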
## IV QD localization and sample structuring via in-situ electron beam lithography
Furthermore, to compare the marker-based approach with the marker-free approach, we use the in-situ EBL technique to simultaneously locate the QDs and structure mesas around the QDs. In this case, the selected QDs are on a different sample field than those used for the marker-based techniques. By integrating CL spectroscopy and EBL within a single setup (detailed in the methods section), in-situ EBL offers a much simpler and time-efficient alternative to the imaging-based procedures since no additional alignment markers are required, and potential uncertainties due to the marker fitting process are eliminated. Moreover, possible imaging errors in the optical system are also eliminated. However, the process is more susceptible to drifts of the cryostat's cold finger (and thus the sample) at low temperatures. An additional challenge lies in the requirement for the sample to be spin-coated with an EBL-resist during QD preselection via CL mapping. This introduces restrictions for the exposure time and the overall handling of the sample, an issue which, however, can be tackled by using machine learning enhanced in-situ EBL.[31] The so-achieved uncertainty for the QD position is \(8.68\) nm based on the same 2D Gaussian fit process described for the CL imaging.
The alignment accuracy of the in-situ EBL technique is analyzed identically to the marker-based methods, and the results are shown in Figure 4 (b). Notably, the in-situ EBL technique demonstrated smaller offsets and uncertainty values in comparison to the EBL-compensated marker-based methods. This can be attributed to its marker-free nature, effectively mitigating the substantial errors originating from the localization process. Nonetheless, an inclination towards an offset in the +X and -Y directions is observed. The source of this offset is attributed to the drift of the cryostat's cold finger at low temperatures. The whole process of CL mapping, QD fitting, and EBL takes up to \(3\) minutes, which is time enough to lead to a \(20-30\) nm drift (as we determined independently) and, therefore, can explain a majority of the offset. For future considerations, a strategic approach to achieve better alignment accuracy could involve actively monitoring and compensating for this cryostat drift during the in-situ EBL nanofabrication procedure.
## V Discussion
To the best of our knowledge, and despite the fact that the alignment accuracy is a key parameter of deterministic nanoprocessing, prior to our work only Pregnolato et al. reported the final alignment offsets and uncertainties after QD integration into nanostructures using the PL imaging approach.[22] However, their structures were smaller than the QD emission profile, and the optical diffraction limit constrained the positioning accuracy. In our systematic study, we implemented a more robust approach by fabricating mesa structures of sufficient size to contain the QD emission profile and utilized a high-resolution CL mapping technique to extract the accurate positions of the QDs within the mesas. We investigated and compared the final alignment accuracy of the QDs to the mesa structures using marker-based PL and CL imaging techniques. Both methods yielded an average localization uncertainty of \(<10\) nm, consistent with previous studies. However, we encountered significant alignment offsets and uncertainties following the alignment of mesa nanostructures to the localized QDs. Notably, both approaches exhibited tendencies for offsets along specific axis directions, with a final alignment uncertainty of \(<100\) nm for both techniques. The considerable discrepancy between the localization uncertainty and the final alignment uncertainty, which is the crucial figure for the later device performance, suggests the presence of larger unaccounted errors arising from the localization and fabrication processes. This finding raises concerns about the reliability of the localization fit uncertainties obtained in this work and prior studies. We found that the main contributor to the large mean offset in the PL imaging technique was the marker localization process. Thus, the low uncertainty obtained from the fitting functions does not equate to the QD location accuracy, as the uncertainty is also limited by the imaging system's resolution (pixel size). The CL imaging approach shows better accuracy due to the higher resolution of the mapping system. Therefore, a highly resolved image of the marker is needed for more accurate localization. Moreover, alternative marker geometries with a new image-processing approach could be employed to detect the position of the markers with high accuracy, even with low-quality markers. Table 2 summarizes the offsets and uncertainties of the different QD positioning approaches. While the marker-based approaches are simpler to implement, the errors that arise from the marker localization result in a lower position accuracy with offset tendencies toward certain directions. In addition, a more precise fabrication alignment is necessary for accurate positioning. Though the accuracy varies from user to user, a more automated approach would be more reliable for high reproducibility.
For the in-situ EBL process, we see an improvement in the alignment accuracy, which we attribute to the simpler process flow without the need for alignment markers. We assume that the offset here is caused mainly by temperature-induced drifts of the cryostat's cold finger, and therefore, compensating for those drifts could be an easy approach to further improve the alignment accuracy in the future. Although we obtained comparable results for all approaches, they each have unique advantages and challenges. While the marker-based approaches require marker fabrication steps and extensive pre-characterization, the in-situ EBL technique enables a simple pre-selection of QDs during a prior CL mapping step within a single coordinate system. In return, the dwell time per pixel during the mapping procedure is strongly constrained since the sample is already coated with a resist, a problem that can be mitigated by using in-situ EBL with machine learning.[31] Still, the flexibility in terms of illumination time during optical imaging provides an essential advantage for the marker-based methods, especially in the case of darker QDs (e.g. in the telecom O- and C-band), and additionally, the PL-based approach is more flexible in the choice of different excitation schemes.
## VI Conclusion
In conclusion, we have studied and compared the overall alignment offset and uncertainty of a QD integrated into a circular mesa structure using PL imaging, CL imaging, and in-situ EBL positioning techniques. Our results revealed that the localization accuracy of the marker-based techniques, given by the fit errors of the QD position, does not fully represent the accuracy of QD integration process. This conclusion is drawn from the observed mean offsets and large uncertainty in the final QD position within the mesa structures, which are significantly larger than the accuracy of the QD position obtained in the preselection. These inaccuracies primarily stem from the presence of unaccounted errors during the marker localization process and the EBL fabrication alignment. On the other hand, the in-situ EBL technique demonstrated comparable accuracy of the QD position but better final QD alignment accuracy with the mesa structure, primarily because it does not rely on markers for positioning. Our study is a crucial step in understanding the optimal approach for high-throughput fabrication of highly efficient single-QD-based quantum devices, which is essential for the advancement of scalable photonic quantum information technologies. By shedding light on the limitations and strengths of different positioning techniques, our research contributes to the further development of advanced QD integration techniques for QD-based photonic structures.
## VII Methods
**Sample preparation:**
The QD sample is grown on an undoped GaAs wafer using molecular beam epitaxy (MBE). It consists of low-density InAs QDs embedded in a GaAs membrane with a thickness of 242.4 nm, followed by a 1.5 \(\mu\)m thick Al\({}_{0.9}\)GaAs grading layer and a 300 nm buffer layer. The alignment markers are fabricated through standard EBL and lift-off processing. The sample is spin-coated with a positive electron-beam resist (CSAR AR-P 6200.09), which is exposed with an EBL machine (JEOL 9500) and developed with n-amyl acetate (ZED). A 5/50 nm titanium/gold layer is then deposited using an electron-beam evaporator. The Ti/Au and resist in the unexposed area are lifted off using Microposit Remover 1165 with gentle sample agitation and then rinsed with acetone, IPA, and DI water.
**QD localization: PL**
After the fabrication of the alignment markers, the sample is mounted in a closed-cycle cryostat operating at 4 K (Attodry). The cryostat is equipped with a piezoelectric positioning stage and a low-temperature microscope objective (magnification = 60\(\times\), \(\mathrm{NA}=0.82\)) located inside the cryostat. Low-temperature PL imaging of the QDs is performed to determine the spectral and spatial position of the individual QDs. The micro-PL setup is based on the two-color PL imaging technique, which utilizes two LEDs at 470 nm and 1050 nm to image the QD PL and marker, respectively (see Supporting Information for setup sketch). All the sets of images are acquired and analyzed with an image analysis program (developed with Python, details in Supporting Information), and the individual QDs' positions with respect to the alignment markers are extracted. Different image acquisition approaches are
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Positioning & Positioning & Reference \\ & offset in X & offset in Y & \\ \hline PL imaging & \((-9\pm 46)\) nm & \(-\) & Ref.[22] \\ & \((-24\pm 54)\) nm & \((-73\pm 84)\) nm & (This work) \\ \hline CL imaging & \((-22\pm 20)\) nm & \((-9\pm 76)\) nm & (This work) \\ \hline In-situ EBL & \((20\pm 40)\) nm & \((-14\pm 39)\) nm & (This work) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the QDs positioning mean offsets and uncertainties for the different positioning techniques
employed to investigate the most accurate technique. Cross-correlation and edge detection algorithms are employed for the marker localization, while Gaussian blob detection combined with a maximum likelihood estimation algorithm is employed for the QD localization. The uncertainties in locating the actual position of the QDs for the subsequent mesa fabrication are obtained from the fit uncertainties of the QD and marker intensity profiles.
**QD localization: CL**
The setup used to perform the CL measurements is based on a Raith eLine Plus electron beam lithography system equipped with a CL extension. It includes a He-flow cryostat to enable low-temperature measurements, and the CL emission is collected by a parabolic mirror (NA = 0.88) and directed into a spectrometer. A Si-CCD and a 1D InGaAs diode array are used to cover the spectral range of 300 - 1700 nm. Whilst scanning a marker area, we simultaneously obtain CL and SEM data through a secondary electron detector. As the next step, the position of the QDs in the CL map is determined via a 2D Gaussian fit (LabVIEW), and the coordinates are uploaded into an image analysis program (Python), which fits the center of the markers through line scans and transforms the QD coordinates into local coordinates. All uncertainties are obtained through the fit uncertainties of the CL and SEM images.
**Mesa patterning:**
After the central position of the QDs is determined, the coordinates are transformed into the global sample coordinates, and mesa structures are fabricated around the transformed preselected coordinates. The mesa is fabricated through EBL and an etching process. The sample is spin-coated with a positive-tone electron-beam resist (CSAR AR-P). The exposure is done with a 100 keV EBL machine (JEOL 9500) with a 6 nA current. The mesa structures are aligned to the sample using the JEOL EBL automatic alignment procedure with two diagonal P and Q markers. The resist is developed with AR-600 developer, and the pattern is defined through ICP-RIE etching down to a depth of approximately 230 nm.
**EBL alignment error:**
The EBL error is measured by characterizing seven etched holes patterned in the same lithography step as the mesas. The holes are positioned at the centers of the optical fields in various corners of the sample. To determine the spatial variation in the EBL error (taken as the deviation of the hole's actual position from the center of the field), an SEM image of the different fields is taken. The center locations of the markers and holes are found using the same image analysis program. The misalignment offset of the holes from the center of the four-marker field is taken as the EBL error (see Supporting Information for details).
**QD positioning: in-situ EBL**
For the QD positioning through in-situ EBL we use the same setup as for the CL marker method. The sample is first spin-coated in a clean room facility with the resist CSAR 6200.13 at 6000 rpm. This is followed by the main process consisting of three consecutive steps. First, a CL map scan of a plain area on the sample is performed with a map size of (20 x 20) \(\mu\)m\({}^{2}\), a pixel size of 500 nm and an exposure time of 30 ms. Second, the position of a desired QD is determined through a 2D Gaussian fit (LabVIEW). Third, the preselected QD is integrated via EBL into a mesa structure by utilizing the negative-tone regime of the EBL resist.
## Associated content
**Supporting Information**
The Supporting Information is available free of charge at [https://pubs.acs.org/doi/10.1021/acsphotonics.xxxxx](https://pubs.acs.org/doi/10.1021/acsphotonics.xxxxx).
Figure S1: PL and In-situ/CL imaging setups sketch. Figure S2: Markers localization image analysis procedures. Figure S3: Position uncertainties obtained from the maximum-likelihood estimator (MLE) technique. Figures S4-S6: Camera images and position uncertainties of the other PL imaging approaches (Single-image 2, Merged images, and Dark marker image). Figure S7: Final position uncertainties from mesas fabricated using coordinates obtained from Single-image 2, Merged images, and Dark marker image. Figure S8: Processed markers crop images of the different optical fields. Figure S9: QD offsets distribution from QDs localized using Gaussian blob detection only. Figure S10: EBL offset at different sample regions, showing the rotation offset during the EBL manual alignment.
The Python script used for the PL image analysis is available at 10.5281/zenodo.xxxxxx.
## Funding
The authors acknowledge the European Research Council (ERC-CoG "Unity", grant no.865230) and support from the Independent Research Fund Denmark (Grant DFF-9041-00046B). N.G. acknowledges support from the European Union's Horizon 2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement no. 861097. B.M. also acknowledges support from the European Research Council (ERC-StG "TuneTMD", grant no. 101076437), and the Villum Foundation (grant no. VIL53033). S.L., H.L., D.D., X.Su., X.Sh., H.N., and Z.N. acknowledge support from National Key Technologies R&D Program of China, 2018YFA0306101, Key-Area Research and Development Program of Guangdong Province (Grant No.
2018B030329001) and National Natural Science Foundation of China, 62035017, 61505196. J.D., S.L., J.S. and S.R. acknowledge funding support from the German Research Foundation (Nos. Re2974/25-1 and INST 131/795-1 320 FUGG), and via the SEQUME project (20FUN05) from the EMPIR program cofinanced by the Participating States and from the European Union's Horizon 2020 research and innovation program.
## Notes
The authors declare no competing financial interest.
## Acknowledgement
The authors thank Jonas Winther and Kasper Steinmuller for their helpful contributions to the image processing program.
## Author contributions
A.M. and J.D. contributed equally to this work. S.L., H.L., D.D., X.Su., X.Sh., H.N., and Z.N. grew the QD samples. A.M. and B.M. performed photoluminescence imaging experiments. A.M., B.G., and B.M. contributed to image analysis and data processing. J.D., J.S., and S.Ro. performed the cathodoluminescence imaging and in-situ EBL experiments, including image analysis and data processing. N.G., S.R., and B.M. conceived the idea and coordinated the project. All authors wrote the paper.
|
2309.05698 | Restoring Naturalness via Conjugate Fermions | We propose a novel mechanism for cancelling the leading order contribution to
the potential in composite Higgs scenarios. The mechanism relies on the
splitting of a real representation of the global symmetry into a complex
representation and its conjugate of the unbroken group. We identify two cosets
one of which includes a custodial symmetry. A numerical analysis is performed
in a phenomenological three-site model and the resulting fine-tuning is
analysed. The cancelling of the leading order potential results in a drastic
reduction of the fine-tuning. For a symmetry breaking scale of the strong
sector as high as $f=1600$ GeV, fine-tuning can be as good as $10\%$ or even
better. We discuss a possible interpretation in the 5D holographic dual. Unique
signatures of the model include quarks with baryon number $B=2/3$ with highly
distinctive decays which can be looked for at the LHC. | Andrei Angelescu, Andreas Bally, Florian Goertz, Maya Hager | 2023-09-11T18:00:01Z | http://arxiv.org/abs/2309.05698v1 | # Restoring Naturalness via Conjugate Fermions
###### Abstract
We propose a novel mechanism for cancelling the leading order contribution to the potential in composite Higgs scenarios. The mechanism relies on the splitting of a real representation of the global symmetry into a complex representation and its conjugate of the unbroken group. We identify two cosets one of which includes a custodial symmetry. A numerical analysis is performed in a phenomenological three-site model and the resulting fine-tuning is analysed. The cancelling of the leading order potential results in a drastic reduction of the fine-tuning. For a symmetry breaking scale of the strong sector as high as \(f=1600\) GeV, fine-tuning can be as good as 10% or even better. We discuss a possible interpretation in the 5D holographic dual. Unique signatures of the model include quarks with baryon number \(B=2/3\) with highly distinctive decays which can be looked for at the LHC.
## I Introduction
The origin of the weak scale, parametrised by the Higgs field, remains unknown. Due to its quadratic sensitivity to UV scales, an unnaturally large tuning seems necessary to separate the weak scale from, for instance, the Planck scale. Understanding if and how the solution to this hierarchy problem (HP) can be found at the CERN LHC is one of the most important objectives of high energy physics.
The HP can be elegantly addressed in the framework of Composite Higgs (CH) [1; 2; 3], in which the Higgs is not an elementary scalar anymore. Quadratically sensitive loop corrections are tamed above the scale of compositeness, \(m_{\star}\), at which the Higgs resolves into its more fundamental constituents. In such models, the weak scale is no longer an input but dynamically generated by a new force which condenses at a symmetry breaking scale \(f\), therefore producing the Higgs as a pseudo-Nambu-Goldstone boson (pNGB) of a spontaneous breaking of a flavor symmetry of this new sector, \(G/H\), akin to the pions of QCD (see [4; 5; 6] for reviews). However, signs of compositeness are lacking at the LHC, requiring the symmetry breaking scale to reside at larger values, which increases the generic fine-tuning of CH models. In particular, light top partners at the symmetry breaking scale, necessary to generate a large mass for the top quark, are an ubiquitous prediction of CH models and a driving force behind the increased fine-tuning [7; 8; 9; 10; 11; 12; 13; 14; 15]. This has resulted in model-building addressing the anomalously light top partners [11; 14; 16; 17; 18] and the tuning [17; 18; 19; 20; 21] (see also [23; 24; 25] for top Yukawa-based solutions). Moreover, the situation is significantly worsened in generic CH due to the feature that the radiative Higgs potential generates the quartic interaction at subleading order with respect to the quadratic, the so-called double-tuning problem [11].
In this paper, we propose a novel mechanism that reduces the tuning by cancelling the leading order contribution to the Higgs potential, so that both quadratic and quartic arise only at fourth order in couplings. We show that the tuning is even less than the conventional minimal estimate and furthermore explain why the lightest composite resonances have not yet been observed. The mechanism is explained in Sec. II.1, while Sec. II.2 details a holographic completion. For the numerical analysis within the three-site model in Sec. II.4 we focus on the two cosets \(SU(6)/SU(5)\) and \(SO(11)/SO(10)\) explained in Sec. II.3. Lastly, in Sec. III expected signatures are laid out, some of which are still unexplored at the LHC. In Sec. IV we conclude.
## II Mirror Fermions
### Mechanism
_The quadratic contribution of a chiral fermion \(\psi\) to the pNGB potential of a coset \(G/H\) is cancelled when a new chiral fermion \(\psi^{\prime}\) with conjugated gauge quantum numbers is added, called mirror fermion, if the fermions talk to the same composite operator in a real representation_ **R** _of the group \(G\) which decomposes as \(\mathbf{R}\to\mathbf{C}\oplus\bar{\mathbf{C}}\) under \(H\), with \(\mathbf{C}\) a complex representation and \(\bar{\mathbf{C}}\) its complex conjugate._
The statement is proven by considering the general coset \(G/H\). In addition to the spontaneous breaking \(G/H\), \(G\) is explicitly broken by partial compositeness [26; 27; 28] of the Standard Model (SM) fields - a linear mixing of strength \(\lambda\) between elementary fields \(\psi\) and composite operators \(\mathcal{O}^{\mathbf{R}}\)
\[\mathcal{L}_{\mathrm{PC}}=\lambda\,\bar{\psi}\Delta\,\mathcal{O}^{\mathbf{R}}+ \lambda^{\prime}\,\bar{\psi}^{\prime}\Delta^{\prime}\,\mathcal{O}^{\mathbf{R}}+ \ \mathrm{h.c.}\,. \tag{1}\]
Here, the spurion \(\Delta\) (\(\Delta^{\prime}\)) parametrises the incomplete embedding of \(\psi\) (\(\psi^{\prime}\)) in \(G\), see below. In the following, Roman letters will denote an index transforming under \(\mathbf{R}\), whereas undotted (dotted) Greek letters correspond to an index transforming under \(\mathbf{C}\) (\(\bar{\mathbf{C}}\)). Using curly brackets such as \(\{\alpha\}\) implies that the indices only
span a subset of the full representation. The spurion \(\Delta\) (\(\Delta^{\prime}\)) for an elementary field \(\psi\) (\(\psi^{\prime}\)) in \(\mathbf{C}\) (\(\bar{\mathbf{C}}\)) takes the form
\[\Delta^{(\prime)i}=\begin{cases}1&i\in\{\alpha\}\quad(i\in\{\dot{\alpha}\})\\ 0&\text{otherwise}\end{cases}\quad, \tag{2}\]
suppressing the elementary-field index. We employ the Callan-Coleman-Wess-Zumino construction [29] to calculate the contribution to the Goldstone potential. The spurions are dressed with the Goldstone matrix \(U\), which is the exponential of the broken generators \(T^{\dot{a}}\) of \(G\), each of which corresponds to an NGB degree of freedom \(\Pi_{\dot{a}}\) (see, for example, [6])
\[U=\exp\big{(}i\Pi_{\dot{a}}T^{\dot{a}}\big{)}. \tag{3}\]
After dressing, the spurions decompose under the \(H\) representations \(\mathbf{C}\) and \(\bar{\mathbf{C}}\) as \(U^{\dagger}\Delta\equiv(\Delta^{\mathbf{C}}_{D},\Delta^{\bar{\mathbf{C}}}_{D})\), and analogously for \(\Delta^{\prime}\), where we use the subscript \(D\) to differentiate the dressed spurions from their undressed counterparts.
We note that, after contracting the \(H\) indices, the product of spurions parametrising the embedding of the field \(\psi\) in the representation \(\bar{C}\) is the same as the one parametrising the embedding of the conjugate field \(\psi^{\prime}\) in the conjugate representation \(C\), i.e.
\[(\Delta^{\bar{\mathbf{C}}}_{D})^{\dagger}\Delta^{\bar{\mathbf{C}}}_{D}=(\Delta^{\prime\,\mathbf{C}}_{D})^{\dagger}\Delta^{\prime\,\mathbf{C}}_{D}. \tag{4}\]
Now, we compute the lowest order Feynman diagrams that contribute to the potential and show how they cancel. These diagrams can be seen in Fig. 1, where in the first (second) diagram a \(\psi\) (\(\psi^{\prime}\)) runs in the loop. Moreover, the loop is closed on the composite side via either a \(\mathbf{C}\) or \(\bar{\mathbf{C}}\) two-point function. We will only show the cancellation for the \(\mathbf{C}\) diagram; the \(\bar{\mathbf{C}}\) diagram follows analogously. Summing both diagrams gives the following combination of spurions, which contains the pNGB dependence
\[V^{\mathbf{C}}\propto\lambda^{2}(\Delta^{\mathbf{C}}_{D})^{\dagger}\Delta^{\mathbf{C}}_{D}+\lambda^{\prime 2}(\Delta^{\prime\,\mathbf{C}}_{D})^{\dagger}\Delta^{\prime\,\mathbf{C}}_{D}, \tag{5}\]
whereas the dependence on the composite sector factorises out. Using Eq. (4) one can rewrite the above expression as
\[V^{\mathbf{C}}\propto\lambda^{2}(\Delta^{\mathbf{C}}_{D})^{\dagger}\Delta^{ \mathbf{C}}_{D}+\lambda^{\prime 2}(\Delta^{\bar{\mathbf{C}}}_{D})^{\dagger}\Delta^{ \bar{\mathbf{C}}}_{D}. \tag{6}\]
If \(\lambda=\lambda^{\prime}\), the pNGB dependence drops out due to the unitarity of the Goldstone matrix
\[V^{\mathbf{C}}\propto\lambda^{2}\Delta^{\dagger}UU^{\dagger}\Delta=\lambda^{ 2}N \tag{7}\]
and we are left with a contribution to the vacuum energy proportional to the fermionic degrees of freedom N. We emphasise that the origin of this cancellation mechanism is inherent in the decomposition of the real representation \(\mathbf{R}=\mathbf{C}\oplus\bar{\mathbf{C}}\) into a complex and its conjugate under the unbroken group \(H\).
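The unitarity step in Eqs. (6)-(7) is easy to verify numerically. The following NumPy sketch splits a dressed spurion into its \(\mathbf{C}\) and \(\bar{\mathbf{C}}\) blocks for random unitary "Goldstone" matrices and confirms that the sum of the two quadratic forms carries no pNGB dependence; it does not check the group-theoretic identity of Eq. (4), which requires the actual real representation, and the 20-dimensional matrix is only a stand-in.

```python
# Numerical check of the unitarity argument, Eqs. (6)-(7): the C and C-bar
# blocks of U^dagger Delta always add up to Delta^dagger Delta = identity.
import numpy as np

rng = np.random.default_rng(1)
dim, half = 20, 10

def random_unitary(n):
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

# spurion embedding two elementary components into the C slots (first half)
Delta = np.zeros((dim, 2), dtype=complex)
Delta[0, 0] = Delta[1, 1] = 1.0

for _ in range(3):                                  # three random "Goldstone" matrices
    U = random_unitary(dim)
    dressed = U.conj().T @ Delta
    C, Cbar = dressed[:half], dressed[half:]
    total = (C.conj().T @ C + Cbar.conj().T @ Cbar).real
    print(np.round(total, 12))                      # always the 2x2 identity
```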
In realistic scenarios it is clearly not feasible to add a new massless chiral fermion. Therefore, it becomes necessary to introduce the opposite-chirality fermion \(\tilde{\psi}^{\prime}\) and an elementary Dirac mass \(m_{E}\) between them. We assume that the opposite-chirality fermion does not talk to the composite sector, since otherwise there could be additional quadratic contributions to the Higgs potential. Still, a quadratic contribution remains, which is however suppressed by \(\left(\frac{m_{E}^{2}}{m_{*}^{2}}\right)\), shown in Fig. 2. Here, \(m_{*}=g_{*}f\gg f\) is the resonance scale, with \(g_{*}\) the coupling in the strong sector.
### Holographic Completion
Although a light Dirac mass for the mirror fermion is technically natural and thus no hierarchy problem is introduced by its presence, it does beg the question: What should be its natural scale? The cancellation mechanism requires \(m_{E}\lesssim m_{*}\), which could introduce a _coincidence_ problem. It turns out \(m_{E}\) has a very elegant origin once possible UV completions for the above model are considered. In the holographic dual of these models, inspired by the AdS/CFT correspondence [30; 31], where the pNGBs arise as the fifth component of a five-dimensional gauge field in warped space with a UV (IR) brane at \(z=R\) (\(z=R^{\prime}\)) [27; 28], the partial compositeness hypothesis from Eq. (1) is equivalent to embedding the elementary fermions within 5D bulk fermions transforming under \(\mathbf{R}\)[32]. In contrast, the opposite-chirality fermion
Figure 2: Remaining quadratic contribution in the presence of a Dirac mass for the mirror fermion.
Figure 1: Cancellation mechanism of the quadratic contribution in terms of Feynman diagrams.
\(\tilde{\psi}^{\prime}\) does not talk to the composite sector and therefore corresponds to a UV brane-localised fermion in the holographic dual. Then, the Dirac mass corresponds to a UV brane-localised mass mixing between the brane fermion \(\tilde{\psi}^{\prime}\) and the bulk fermion \(\psi^{\prime}\):
\[\int\mathrm{d}^{4}x\frac{M_{UV}}{\sqrt{R}}\,\bar{\tilde{\psi}}^{ \prime}(x)\psi^{\prime}(x,z=R)+\text{ h.c.}, \tag{8}\]
with \(M_{UV}\sim\mathcal{O}(1)\). Due to the 5D nature of \(\psi^{\prime}(x,z)\), the resulting 4D mass depends on its localisation along the extra dimension, which is commonly parametrised by the dimensionless constant \(c\) entering its 5D mass \(m\equiv c/R\). We find two regimes depending on whether the bulk fermion is UV-localised (\(c>0.5\)) or IR-localised (\(c<0.5\)):
\[m_{E}\sim\frac{M_{\text{UV}}}{R}\times\begin{cases}1&(c>0.5)\\ (R^{\prime}/R)^{c-1/2}\,(1-c)&(c<0.5)\end{cases}, \tag{9}\]
and we see that for an IR-localised bulk fermion, one can recover exponentially smaller masses than the natural expectation of \(\sim 1/R\) for UV masses. Furthermore, we expect an IR-localisation since the heavy SM fields are IR-localised, and it is the contribution of those to the Higgs potential that we wish to cancel.
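A quick numerical illustration of Eq. (9) is given below; the scale choices \(1/R=10^{16}\) TeV, \(1/R^{\prime}=1.6\) TeV and \(M_{UV}=1\) are assumptions made only for display, not values fixed by the model.

```python
# Illustration of Eq. (9): the UV-brane mixing yields an exponentially
# suppressed 4D Dirac mass for an IR-localised bulk fermion (assumed scales).
inv_R, inv_Rp, M_UV = 1e16, 1.6, 1.0        # TeV: UV scale, IR scale, O(1) mass
warp = inv_R / inv_Rp                        # R'/R

def m_E(c):
    if c > 0.5:
        return M_UV * inv_R
    return M_UV * inv_R * warp ** (c - 0.5) * (1 - c)

for c in (0.6, 0.4, 0.2, 0.0, -0.3, -0.5):
    print(f"c = {c:+.1f}:  m_E ~ {m_E(c):.2e} TeV")
```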
### Concrete Model
Two minimal cosets fulfilling the above criteria include the color gauge group \(SU(3)_{c}\) as part of the flavor symmetry \(G\). These models, motivated by charge quantisation as in 4D Grand Unified Theories [33; 34] (GUTs), are known as composite GUTs [35; 36; 37] or their 5D warped duals of gauge-Higgs grand unification [38; 39; 40; 41; 42; 43; 44; 45]. Interestingly, these models predict extra colored pseudo-Nambu-Goldstone bosons. The non-custodial coset \(SU(6)/SU(5)\) provides a minimal realisation, with the pseudoreal representation
\[\mathbf{20}\to\mathbf{10}\oplus\mathbf{\bar{I0}} \tag{10}\]
where the decomposition of the \(\mathbf{10}\) of \(SU(5)\) under the SM gauge group is
\[\mathbf{10}\to\left(\mathbf{3,2}\right)_{\mathbf{1/6}}\oplus\left(\mathbf{3}^ {*},\mathbf{1}\right)_{-\mathbf{2/3}}\oplus\left(\mathbf{1},\mathbf{1} \right)_{\mathbf{1}}. \tag{11}\]
However, the model is constrained by large corrections to the \(T\) parameter. Instead, the custodial coset \(SO(11)/SO(10)\), with the pseudoreal representation
\[\mathbf{32}\to\mathbf{16}\oplus\mathbf{\bar{16}}, \tag{12}\]
also satisfies the criteria but does not generate a T parameter at tree-level.
Since the biggest source of explicit \(G\)-breaking stems from the top quark, we can focus on the right-handed top singlet \(t_{R}\) and the left-handed quark doublet \(q_{L}\) in the following analysis, whose contribution will be cancelled by two mirror fermions \(\omega_{R}\) and \(\theta_{L}\) respectively. As the \(\mathbf{16}\) contains a \(\mathbf{10}\) of \(SU(5)\), all four fields fit into the \(\mathbf{20}\) of \(SU(6)\) or the \(\mathbf{32}\) of \(SO(11)\). The linear mixing strength \(\lambda_{R/L}\) in the IR is expected to depend on the scaling dimension \(d_{L/R}\) of the composite operator \(\mathcal{O}^{\mathbf{R}}_{L/R}\) and its UV value \(\lambda_{\text{UV}}\)[4; 6; 27; 28; 32]
\[\left(\lambda_{\text{IR}}\right)_{R/L}\sim\left(\lambda_{\text{UV} }\right)_{R/L}\left(\frac{\Lambda_{\text{IR}}}{\Lambda_{\text{UV}}}\right)^{d_ {L/R}-5/2}. \tag{13}\]
If the partial compositeness Lagrangian in the UV is generated with the same strength for the SM field and its mirror fermion, and the right-handed (left-handed) fields couple to the same left-handed (right-handed) composite operator, in line with the full global \(G\) symmetry, we can safely assume that the IR values will also be the same. Then, the Lagrangian becomes
\[\mathcal{L}_{\text{PC}}= \lambda_{R}\left(\bar{t}_{R}\Delta^{t_{R}}+\bar{\omega}_{R} \Delta^{\omega_{R}}\right)\mathcal{O}^{\mathbf{R}}_{L}+\text{h.c.} \tag{14}\] \[+\,\lambda_{L}\left(\bar{q}_{L}\Delta^{q_{L}}+\bar{\theta}_{L} \Delta^{\theta_{L}}\right)\mathcal{O}^{\mathbf{R}}_{R}+\text{h.c.}\] \[+\,m_{\omega}\,\bar{\omega}\omega+m_{\theta}\,\bar{\theta}\theta,\]
including the Dirac masses for the mirror fermions.
Gauge boson contributions will be neglected as they are subleading. Moreover, the numerical analysis in the three-site model in the next section II.4 for the fermion sector is identical for both considered cosets, fully determined by the symmetry properties of the real representation, namely \(\mathbf{R}\to\mathbf{C}\oplus\mathbf{\bar{C}}\).
For a complete modelling of the third generation of quarks (the lighter two generations can be modelled similarly), one must include the right-handed bottom quark \(b_{R}\) in the partial compositeness Lagrangian with an associated composite operator \(\mathcal{O}^{\mathbf{R}^{\prime}}_{L}\) in a representation \(\mathbf{R}^{\prime}\). Although its contribution to the Higgs potential is negligible due to the small bottom mass, the associated composite operator will mix with the ones of the top and exotic sector therefore impacting their mass spectrum. In order for the \(b_{R}\) to connect to the \(q_{L}\), the representation should decompose as \(\mathbf{R}^{\prime}\to\mathbf{C}\oplus...\) under \(H\). Once \(G\) is spontaneously broken, the composite operators \(\mathcal{O}^{\mathbf{R}}_{R}\) and \(\mathcal{O}^{\mathbf{R}^{\prime}}_{L}\) mix and induce a mass for the bottom quark.
For the coset \(SO(11)/SO(10)\), the minimal choice for \(\mathbf{R}^{\prime}\) is another \(\mathbf{32}\). If we attempted to use the same \(\mathbf{32}\) for the bottom-right and top-right, there would be a degeneracy in their masses. For the \(SU(6)/SU(5)\) the minimal option is a \(\mathbf{15}\) which decomposes as \(\mathbf{10}\oplus\mathbf{5}\) under \(SU(5)\).
As mentioned, the cosets contain more broken generators besides the ones of the Higgs doublet \(\left(\mathbf{1,2}\right)_{\mathbf{1/2}}\), and there will be more pNGBs. Both scenarios predict a scalar leptoquark \(\left(\mathbf{3,1}\right)_{-\mathbf{1/3}}\), and in \(SU(6)/SU(5)\) there is an additional scalar singlet, whose generator corresponds to an unbroken global symmetry. Therefore, it remains massless unless the symmetry is broken by a different mechanism, i.e. by introducing a Majorana neutrino sector. For the leptoquark potential the cancellation in
the fermion sector proceeds identically, and neither \(q_{L}\) nor \(t_{R}\) generate the potential at leading order. However, the gauging of the strong sector is a large source for the potential of the leptoquark. Using NDA [46], the leading potential can be estimated as [44]
\[V(S)\approx m_{*}^{4}\,\frac{3\times 5}{64\pi^{2}}\,\frac{g_{s}^{2}}{g_{*}^{2}}\sin^{2}\left(\frac{\sqrt{2S^{\dagger}S}}{f}\right). \tag{15}\]
The resulting mass for the leptoquark is then \(m_{S}=(15\alpha_{s}/8\pi)^{1/2}m_{*}\approx 0.25\,m_{*}\).
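As a quick numerical cross-check of this prefactor (a minimal sketch; the value of \(\alpha_{s}\) at the composite scale is an assumption taken near its TeV-scale value):

```python
import math

# Cross-check of m_S = (15 * alpha_s / (8 * pi))^(1/2) * m_*.
# alpha_s ~ 0.09 is an assumed value of the strong coupling around the TeV scale.
alpha_s = 0.09
ratio = math.sqrt(15.0 * alpha_s / (8.0 * math.pi))
print(f"m_S / m_* ~ {ratio:.2f}")  # ~0.23, consistent with the quoted ~0.25 m_*
```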
### Numerical Analysis
We proceed with a numerical analysis of the above setup in a multi-site model [47; 48]. These phenomenological models are inspired by dimensional deconstruction [49] and by 5D models [27], retaining the useful features of finiteness of the Higgs potential while being computationally easier.
We will work in the three-site model, in which the first site models the elementary sector while the other two sites represent the composite sector. It is the lowest site-model in which the Higgs potential is fully calculable. The Higgs potential is determined with the Coleman-Weinberg formula
\[V_{i}(h)=-\frac{2N_{c}}{8\pi^{2}}\int\mathrm{d}p\,p^{3}\log\left(\det\left(M_{i}^{\dagger}(h)M_{i}(h)+p^{2}\mathbb{1}\right)\right), \tag{16}\]
where \(N_{c}=3\), and \(M_{i}\), with \(i=T,E\), the mass matrices for the top and exotic sector respectively. As exotics we denote all mass eigenstates arising through the mirror fermions.
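For illustration, the integral in Eq. (16) can be evaluated numerically once a mass matrix is specified. The Python sketch below subtracts the \(h\)-independent piece and uses a finite momentum cutoff as a stand-in for the convergence that the three-site structure guarantees; the toy \(2\times 2\) mass matrix and its parameter values are purely illustrative and are not the model's actual \(M_{T}\) or \(M_{E}\).

```python
import numpy as np

def cw_potential(mass_matrix, h, n_c=3, p_max=20.0, n_points=2000):
    """Numerical evaluation of the Coleman-Weinberg potential of Eq. (16),
    with the h-independent contribution subtracted and a finite cutoff p_max."""
    p = np.linspace(1e-3, p_max, n_points)

    def logdet(hval, pval):
        m = mass_matrix(hval)
        a = m.conj().T @ m + pval**2 * np.eye(m.shape[0])
        return np.linalg.slogdet(a)[1]

    integrand = np.array([pp**3 * (logdet(h, pp) - logdet(0.0, pp)) for pp in p])
    # simple trapezoidal integration over the momentum grid
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))
    return -2.0 * n_c / (8.0 * np.pi**2) * integral

# Toy 2x2 top/top-partner mass matrix (illustrative only, TeV units assumed):
f, lam, m_psi = 1.6, 1.0, 2.0
toy_M = lambda h: np.array([[0.0, lam * f * np.sin(h / f)],
                            [0.0, m_psi]])
print(cw_potential(toy_M, h=0.2))
```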
The numerical analysis is performed by scanning the composite masses over a range of \([-5f,5f]\), for a symmetry breaking scale of \(f=1600\,\mathrm{GeV}\). We assume \(\lambda_{L}=\lambda_{R}\equiv\lambda\) for simplicity and match to the correct top mass \(m_{t}(f)\approx 150\,\mathrm{GeV}\).
The results for the lightest top-partner mass, \(m_{T}^{\mathrm{min}}\), plotted against the Higgs mass are shown in Fig. 3. The current LHC limit \(m_{T}\gtrsim 1500\) GeV ([50; 51; 52; 53; 54]) is indicated by the red region. The spectrum of the lightest exotic, \(m_{E}^{\mathrm{min}}\), can be seen in Fig. 4, while in Fig. 5 the correlation with the top-partner mass is shown. We observe that the exotic is strictly lighter than the top partner, providing an attractive collider target (see Sec. III).
The necessary fine-tuning to achieve the correct Higgs potential is assessed by employing the Barbieri-Giudice measure [55]
\[\Delta_{\mathrm{BG}}=\max_{i}\left|\frac{\partial\log O(x_{i})}{\partial\log x _{i}}\right|\,, \tag{17}\]
i.e. the maximal sensitivity of observable \(O\) to parameters \(x_{i}\). We choose the Higgs mass and vacuum expectation value (vev) as observables to fully characterise the potential and take the maximum. We will compare the
Figure 3: The lightest top–partner mass, \(m_{T}^{\mathrm{min}}\), versus the Higgs mass \(m_{h}\). The shaded blue region is highlighting the correct Higgs mass \(m_{h}\in(125\pm 15)\) GeV, whereas the shaded red region shows current experimental limits on the lightest top partner, \(m_{T}\gtrsim 1500\) GeV ([50; 51; 52; 53; 54]).
Figure 4: The lightest exotic mass, \(m_{E}^{\mathrm{min}}\), versus the Higgs mass \(m_{h}\). The shaded blue region is highlighting the correct Higgs mass \(m_{h}\in(125\pm 15)\) GeV.
obtained tuning, \(\Delta_{\rm BG}\), with the so-called minimal tuning [11], \(\Delta_{\rm min}=f^{2}/v^{2}\), which is the minimal tuning in composite Higgs models that do not feature a double-tuning problem. In Fig. 6 the tuning is plotted against the mass of the lightest top partner, while filtering for the correct Higgs mass and vev, showing \(\Delta_{\rm min}\sim 42\) for \(f=1600\,\)GeV as a dashed black line. There is a clustering of points with \(\Delta_{\rm BG}\sim 10\)–\(20\), comparable to minimal-tuning composite Higgs models, which however only reach this level at the much lower (and phenomenologically problematic) scale of \(f=800\) GeV. Furthermore, we see that heavy top partners are rather uncorrelated with the amount of tuning.
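For reference, Eq. (17) can be evaluated with simple finite differences once a map from model parameters to the observables is available. The sketch below is only an illustration of the measure itself; the toy potential used in the usage example is an assumption and not the three-site model.

```python
import numpy as np

def barbieri_giudice(observables, params, eps=1e-3):
    """Fine-tuning measure of Eq. (17): maximal logarithmic sensitivity of the
    observables (here the Higgs mass and vev) to the input parameters,
    estimated with central finite differences."""
    sensitivities = []
    for name, x in params.items():
        up, dn = dict(params), dict(params)
        up[name] = x * (1.0 + eps)
        dn[name] = x * (1.0 - eps)
        o_up, o_dn = observables(up), observables(dn)
        for key in o_up:
            dlog = (np.log(o_up[key]) - np.log(o_dn[key])) / (2.0 * eps)
            sensitivities.append(abs(dlog))
    return max(sensitivities)

# Toy usage with a stand-in potential V = -mu2 h^2 + lam h^4 (illustrative only):
def toy_observables(p):
    vev = np.sqrt(p["mu2"] / (2.0 * p["lam"]))
    mh = np.sqrt(4.0 * p["lam"]) * vev
    return {"mh": mh, "vev": vev}

print(barbieri_giudice(toy_observables, {"mu2": 0.015, "lam": 0.13}))
```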
## III Phenomenology
Both \(SO(11)/SO(10)\) and \(SU(6)/SU(5)\) feature a global symmetry that corresponds to baryon number. This property stems from the incomplete filling of elementary fermions into \(G\) multiplets in the framework of partial compositeness, as opposed to 4D GUTs in which the perfect filling of fermion representations breaks baryon number (see e.g. [38; 43]). As a consequence, the proton remains stable, while the exotic fermions carry the unusual baryon number \(B=2/3\).
Due to its peculiar baryon number, the exotic possesses unusual decay channels. In general, since it is lighter than both pNGB and vector LQs, the leading branching ratio corresponds to a three-body decay proceeding through an off-shell scalar or vector LQ. By imposing baryon number and electromagnetic charge conservation, the two possible 3-body decay channels are \(\omega\to tb\tau^{-}\) and \(\omega\to bb\nu\), where we expect decays to the more elementary 1st and 2nd generation of SM fermions to be suppressed. Moreover, since all three generations of lepton doublets are expected to be elementary, we envisage the \(\omega\to bb\nu\) decay width to be suppressed by at least \(m_{\tau}^{2}/m_{top}^{2}\) with respect to \(\omega\to tb\tau^{-}\). Therefore, we can safely take BR\((\omega\to tb\tau^{-})=1\).
The main production mechanism of the exotics at the LHC is through QCD pair production, \(pp\to\omega\bar{\omega}\), leading to a peculiar \(t\bar{t}b\bar{b}\tau^{+}\tau^{-}\) final state. To the best of our knowledge, there is no dedicated search of the LHC collaborations for such a process, which is why the exotic can be much lighter than the conventional \(B=1/3\) top partners. Nevertheless, from Fig. 5 we observe that it could also be as heavy as \(1500\,\)GeV, while keeping the tuning small.
Another potential signature of CH models is a change to Higgs production and decay. We note that for \(f=1600\,\)GeV these are in general safely below experimental limits, with the potential exceptions of new contributions due to (light) exotics. Importantly, we find that the latter do not spoil the gluon-fusion cross section because the opposite-chirality partners \(\tilde{\psi}^{\prime}\) are elementary and do not interact directly with the Higgs.
## IV Discussion and Conclusions
In this letter we proposed a novel mechanism for generating the Higgs potential at subleading order in the fermion contributions by using a remarkable property of group representations. In contrast to twin Higgs models [56] (see [57; 58; 59] for composite scenarios) the quadratic contribution is cancelled via colored partners, which however carry a different global charge, therefore resulting in a different phenomenology. The cancellation relies on the decomposition of the real representation \({\bf R}\to{\bf C}\oplus\bar{\bf C}\) under \(H\).
We analysed the setup in a three-site model showing a large reduction in fine-tuning to the \(\sim 10\%\) level in comparison with the naive expectation which is at the percent level, or, in models that feature double-tuning, even worse. By virtue of the reduction in fine-tuning we could double the symmetry breaking scale to \(f=1600\,\)GeV and thus evade all top partner bounds, while keeping the fine-tuning comparable to minimal-tuning CH models with \(f=800\) GeV. As a consequence of the unusual baryon number \(B=2/3\) of the exotics, the expected signature of their decay is a six particle final state, which has not yet been targeted at the LHC. The search for signatures of natural models of electroweak symmetry breaking continues at the collider frontier.
###### Acknowledgements.
We are grateful to Yi Chung, Lucia Masetti, Alvaro Pastor Gutierrez, Aika Tada, and Stefan Tapprogge for useful discussion and comments.
Figure 6: The lightest top–partner mass \(m_{T}^{\rm min}\) versus the amount of tuning \(\Delta_{\rm BG}\) (Eq. 17). The dashed black line shows the expected value for conventional minimal tuning \(\Delta_{\rm min}\sim 42\). |
2309.08394 | Muons for cultural heritage | Non-destructive subsurface imaging methods based on the absorption or
scattering of photons or neutrons are becoming increasingly popular in cultural
asset conservation. However, these techniques are limited by physical and
practical issues: their penetration depth may be insufficient for large items,
and they usually necessitate transferring the objects of interest to
specialised laboratories. The latter issue is recently being addressed by the
development of portable sources, but artificial radiation can be harmful and is
thus subjected to strict regulation. Muons are elementary particles that are
abundantly and freely created in the atmosphere by cosmic-ray interactions.
Their absorption and scattering in matter are respectively dependent on the
density and elemental composition of the substance they traverse, suggesting
that they could be used for subsurface remote imaging. This novel technique,
dubbed "muography", has been used in applications ranging from geophysics to
archaeology, but has remained largely unexplored for a wide range of cultural
heritage objects that are small by muography standards but whose size and
density are too large for conventional imaging methods. This document outlines
the general arguments and some early simulation studies that aim at exploring
the low-size limit of muography and its relevance for cultural heritage
preservation. | Marwa Moussawi, Andrea Giammanco, Vishal Kumar, Maxime Lagrange | 2023-09-15T13:39:24Z | http://arxiv.org/abs/2309.08394v1 | # Muons for cultural heritage
###### Abstract
Non-destructive subsurface imaging methods based on the absorption or scattering of photons or neutrons are becoming increasingly popular in cultural asset conservation. However, these techniques are limited by physical and practical issues: their penetration depth may be insufficient for large items, and they usually necessitate transferring the objects of interest to specialised laboratories. The latter issue is recently being addressed by the development of portable sources, but artificial radiation can be harmful and is thus subjected to strict regulation. Muons are elementary particles that are abundantly and freely created in the atmosphere by cosmic-ray interactions. Their absorption and scattering in matter are respectively dependent on the density and elemental composition of the substance they traverse, suggesting that they could be used for subsurface remote imaging. This novel technique, dubbed "muography," has been used in applications ranging from geophysics to archaeology, but has remained largely unexplored for a wide range of cultural heritage objects that are small by muography standards but whose size and density are too large for conventional imaging methods. This document outlines the general arguments and some early simulation studies that aim at exploring the low-size limit of muography and its relevance for cultural heritage preservation.
_Proceedings of the Muon4Future Conference, 29–31 May 2023, Venice, Italy. Submitted to Proceedings of Science._
CP3-23-50
## 1 Introduction
Imaging methods based on X-rays have been widely used in the context of cultural heritage preservation [1] due to their ability to penetrate various materials.
However, X-ray imaging has limitations when dealing with large or dense objects like compact stone or metal, since X-rays do not penetrate deep enough. Alternative radiation types, such as MeV-range X-rays and neutrons, offer some improvement but face challenges in transporting valuable objects to specialized imaging facilities due to size, weight, and preservation concerns. Various portable setups, such as X-ray fluorescence analysis (XRF) [2, 3] and portable X-ray computed tomography (CT) systems [4], are available for cultural heritage studies, but they have limitations in depth penetration and radiation hazards. Neutron sources [5] offer greater depth [6], but raise concerns about material activation. A recent advancement using a portable proton accelerator [7] shows promise but also suffers from radiation hazard concerns. In contrast, muography [8], which utilizes muons (\(\mu\)), elementary particles naturally generated by cosmic-ray interactions in the atmosphere, represents a promising solution. Cosmogenic muons have remarkable penetrating capabilities, making them ideal for sub-surface imaging in a variety of contexts including, as we argue in this paper, cultural heritage applications. This technique includes two main methods: scattering-based and absorption-based, sketched in Fig. 1 and Fig. 2 respectively. Absorption-based muography measures the muon absorption rate within materials, providing insights into their density and composition, while scattering-based muography utilizes the diffusion of muons to discriminate elements in multi-material objects [9].
Muography has proven to be highly effective in investigating cultural heritage sites. The ScanPyramids project, utilizing absorption-based muography, made the headlines by revealing an unexpected low-density anomaly deeply inside Khufu's Great Pyramid in 2017 [10], and then precisely characterizing a previously unknown corridor in 2023 [11], which has been confirmed by visual inspection via an endoscope. In another example, density anomalies (potentially posing safety hazards) have been found in a rampart of a defensive wall of Xi'an (China) in 2022 [12]. Furthermore, scattering-based muography has been
proposed to search for iron chains within the brickwork of the Florence cathedral's dome in Italy [13], and a proof-of-principle test on a mock-up wall was successfully conducted to demonstrate the conceptual validity of the method.
While most examples so far are applications to very large volumes of interest, this paper advocates for the adoption of portable and safe muography as a promising imaging approach for cultural heritage studies in a regime that is new for muography (relatively low size) while being beyond reach for methods based on other radiation sources. A preliminary simulation study using Geant4 [14] illustrates the potential applications and limitations of muography.
## 2 Simulated case studies
Each of the two muography techniques has its own sensitivity, applicability, and limits.
In absorption muography, a single muon tracker is able to measure the 2D projection of matter density, and the combination of measurements from different viewpoints can give a 3D density map. However, it provides no material discrimination apart from density, and small-size or low-density objects do not stop enough muons to provide sufficient contrast.
In scattering muography, at least two muon trackers are needed, upstream and downstream of the object of interest, to reconstruct the \(\mu\) trajectory before and after passing through it. This method naturally yields 3D information, and is sensitive to elemental composition because the width of the scattering angle distribution is a function of atomic number Z. However, it is impractical for human-sized statues, as the object of interest must fit between the two trackers.
Either the object of interest is moved inside the set-up, or a rather complex installation of the detectors must be performed around the object. Therefore, this method is appropriate for relatively low-size objects.
We perform a Monte Carlo simulation using CRY [15] to generate muons and a Geant4 [14] model of an African statue (Fig. 3) made of hardwood. This object, 40 cm tall, has been studied with X-rays but its size nears the limit of that method, while it is very small by muography standards. To investigate the potential of muography for material identification, we scale the statue's size by factors two and four and introduce hidden cylinders of different materials within its internal structure, as summarized in Table 1. In this first exploratory study, we model an ideal detector (i.e. with 100% efficiency and perfect resolution) made up of nine planes, as illustrated in Fig. 4. Among these planes, the six indicated in green surround the target and are used for scattering reconstruction, while the remaining three (in blue) are used in the absorption reconstruction study.
**Scattering reconstruction**
Scattering muography is based on the measurement of muon deflections when passing through an object. The deflection angle is measured by extrapolating the incoming and outgoing trajectories observed by the two trackers and determining their point of closest approach (POCA). This approach relies on the interpretation of a POCA point as the actual place where the muon had a single high-energy elastic interaction with a nucleus, neglecting the occurrence of other electromagnetic interactions along its trajectory, which is a rough approximation of reality but has proven to be effective in many applications. Figure 5 shows the distribution of POCA points obtained in the three simulated scenarios denoted as I (a, b, c) in Table 1. These plots are based on 5 million muons, roughly corresponding to an acquisition time of \(\sim 8\) hours, and they show how challenging it is to find a cavity within this kind of statue, as opposed to finding a high-density insertion.
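For concreteness, the POCA of a pair of reconstructed tracks can be computed with the standard closest-approach construction for two straight lines; the Python sketch below illustrates this single step and is not the actual analysis code behind Fig. 5.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between the incoming and outgoing muon tracks.
    Each track is given by a point p and a direction d; returns the midpoint of
    the shortest segment joining the two lines and the scattering angle (rad)."""
    d_in = d_in / np.linalg.norm(d_in)
    d_out = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b
    if denom < 1e-12:                      # (nearly) parallel tracks
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    midpoint = 0.5 * ((p_in + s * d_in) + (p_out + t * d_out))
    angle = np.arccos(np.clip(d_in @ d_out, -1.0, 1.0))
    return midpoint, angle

# Toy example: a downward-going muon slightly deflected inside the target
point, theta = poca(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.05, -1.0]),
                    np.array([0.0, 0.08, -1.0]), np.array([0.0, 0.12, -1.0]))
print(point, np.degrees(theta))
```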
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Scenario** & **Statue size [cm\({}^{3}\)]** & **Cylinder material** & **Cylinder radius [cm]** \\ \hline I (a) & \(80\times 30\times 30\) & / & / \\ \hline I (b) & \(80\times 30\times 30\) & Air & 5 \\ \hline I (c) & \(80\times 30\times 30\) & Bronze bar & 5 \\ \hline II & \(160\times 60\times 60\) & Bronze bar & 10 \\ \hline \end{tabular}
\end{table}
Table 1: Different simulation scenarios.
The output of the muon-scattering reconstruction algorithm is a 3D distribution of POCA points, each associated to a scattering angle. Based on those raw data, some clustering algorithms can be used in order to discriminate between different material densities and elements. At present two methods are employed to analyze the object's content: DBSCAN [16] and a Neighborhood Sum [17] algorithm. DBSCAN only relies on the density of the POCA points, while the Neighborhood Sum method can consider both the density of POCA points and the scattering angle of the tracks. We apply DBSCAN in two steps, to first remove noise points and then separate the volumes corresponding to different materials using tighter clustering criteria; the result is shown in Fig. 6. We applied the Neighborhood Sum without (Fig. 7 left) and with (Fig. 7 right) considering the additional information from the scattering angles, and we obtain in both cases a good discrimination of the two materials. This discrimination is not as precise as DBSCAN, however this method is more appropriate for scenarios with low exposure times, where POCA points are scarce, and the quantitative results it provides are still reliable.
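Both strategies can be illustrated with standard library implementations (scikit-learn's DBSCAN and a KD-tree neighbourhood search), as sketched below; the clustering radii, thresholds and the toy POCA distribution are placeholders and not the values used in our analysis.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import cKDTree

# Toy stand-in for reconstructed POCA points (cm) and their scattering angles.
rng = np.random.default_rng(0)
points = rng.normal(scale=10.0, size=(5000, 3))                            # diffuse background
points[:500] = rng.normal(loc=(0.0, 0.0, 5.0), scale=1.0, size=(500, 3))   # dense insert
angles = rng.exponential(scale=0.02, size=len(points))

# Step 1: loose DBSCAN pass to reject isolated (noise) POCA points.
loose = DBSCAN(eps=2.0, min_samples=10).fit(points)
kept = points[loose.labels_ != -1]

# Step 2: tighter DBSCAN pass to separate volumes of different materials.
tight = DBSCAN(eps=0.8, min_samples=20).fit(kept)
n_clusters = len(set(tight.labels_)) - (1 if -1 in tight.labels_ else 0)
print("clusters found:", n_clusters)

# Neighborhood Sum variant: sum the scattering angles of the neighbours of each
# POCA point within a fixed radius; large sums flag dense or high-Z regions.
tree = cKDTree(points)
neighbours = tree.query_ball_point(points, r=1.5)
score = np.array([angles[idx].sum() for idx in neighbours])
print("maximum neighbourhood score:", score.max())
```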
Figure 5: Distribution of POCA points, projected to a 2D plane for clarity, in three simulated scenarios: (left) actual wooden sculpture, (middle) with a cylindrical cavity, (right) with a cylindrical bronze rod.
**Absorption reconstruction**
With the scenario II described in Table 1, we explore a challenging regime in which the statue is very big for scattering muography and very small for absorption muography. It is customary in this method, when the volume of interest is very distant from the detector (e.g. when imaging the summit of a volcano), to approximate the latter with a point, meaning that only the zenith and azimuth angles \((\theta,\phi)\) are important while the entry point of the muon in the detector is not. However, to study human-sized sculptures we have in general the possibility to position the detectors very close to the statue, in order to maximize the resolution within the object, and this approximation is no longer valid. For this study we develop a custom back-projection reconstruction algorithm inspired by the methods of Refs. [18, 19]. As illustrated in Fig.8, we extrapolate each muon track onto a voxelized volume, and we count the number of times a voxel is hit by this backprojected trajectory. Figure 9, based on the equivalent of two hours of data acquisition, shows the 3D transmission map slice by slice in the voxelized volume after selecting only muons with \(E<800\) MeV, assuming that the detector setup also contains a way to discriminate the muons above and below this energy threshold. Energy discrimination can be achieved cheaply by introducing a passive absorber before the last detector layer, used as a veto for energetic muons, or more precisely by combining absorption, scattering, time of flight, or other variables.
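The voxel-counting core of this back-projection can be sketched as follows; the sampling step, voxel grid and toy tracks are illustrative choices, and a real transmission map would in addition be normalised to the open-sky muon flux and to the detector acceptance.

```python
import numpy as np

def backproject(tracks, grid_min, grid_max, n_vox=(40, 40, 80), step=0.5):
    """Count how many back-projected muon tracks cross each voxel.
    tracks is a list of (point, direction) pairs measured by the detector."""
    grid_min = np.asarray(grid_min, float)
    grid_max = np.asarray(grid_max, float)
    n_vox = np.asarray(n_vox)
    counts = np.zeros(tuple(n_vox), dtype=int)
    voxel_size = (grid_max - grid_min) / n_vox
    half_span = np.linalg.norm(grid_max - grid_min)
    for p, d in tracks:
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        ts = np.arange(-half_span, half_span, step)
        samples = np.asarray(p, float) + ts[:, None] * d
        idx = np.floor((samples - grid_min) / voxel_size).astype(int)
        inside = np.all((idx >= 0) & (idx < n_vox), axis=1)
        for i, j, k in np.unique(idx[inside], axis=0):
            counts[i, j, k] += 1
    return counts

# Toy usage: near-vertical tracks entering from above a 40 x 40 x 40 cm volume
rng = np.random.default_rng(1)
toy_tracks = [((rng.uniform(-20, 20), rng.uniform(-20, 20), 50.0),
               (rng.normal(0, 0.1), rng.normal(0, 0.1), -1.0)) for _ in range(1000)]
hit_map = backproject(toy_tracks, (-20, -20, 0), (20, 20, 40))
print(hit_map.sum())
```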
## 3 Conclusion and prospects
In this paper, we outlined the strengths and limitations of muography for cultural heritage applications, and detailed a preliminary simulation study with both scattering and absorption muography for the imaging of statues with different size and containing different hidden materials. The next step will be a systematic comparison of several more scenarios, in terms of material and size of the statue and of the hidden volume. We will also take into account various realistic scenarios for the detector resolution and the geometry of the setup (e.g. distances between planes), to identify the best trade-off between cost and statistical identification power, in preparation for actual measurements with test
objects.
Muography is inexpensive and portable; thanks to the muon penetration power, it is complementary to other imaging methods. Absorption and scattering have complementary strengths and weaknesses, but some limitations are in common for both: long acquisition times are necessary, due to the relatively low natural rate, and muon direction and energy cannot be controlled.
At the workshop, we were invited to comment on what an artificial muon beam could do for this kind of studies. One could, indeed, overcome the aforementioned muography drawbacks by using a muon beam where both muon energy and direction could be controlled. Even a modest precision in these two variables and a modest beam luminosity, by accelerator standards, would allow to do much better than with muons from cosmic rays. The beam energy could be optimized a priori based on the size and on the main material of the object; if the inner composition is completely unknown, particularly interesting is the possibility to scan the same object with beams at various energies. We would benefit most if a transportable artificial muon source became accessible. However, this would bring radiological hazards, like all methods based on an artificial particle source, because of the byproducts of the collisions needed in order to produce muons and antimuons.
## Acknowledgements
We are indebted to Tim De Kock of the Antwerp Cultural Heritage Sciences (ARCHES) department at the University of Antwerp, Judy De Roy and Sam Huysmans of the Royal Institute for Cultural Heritage (KIK-IRPA), and Matthieu Boone of the Ghent University Centre for X-ray Tomography at the University of Ghent, for their guidance on the definition of the targets of interest for this potential application of muography. We thank the Africa Museum of Tervuren and the project TOCOWO ([https://tocowo.ugent.be/](https://tocowo.ugent.be/)) for the model of a wooden statue (Figure 3). This work was partially supported by the Fonds de la Recherche Scientifique - FNRS under Grants No. T.0099.19 and J.0070.21, and by the EU Horizon 2020 Research and Innovation Programme under the Grant Agreements No. 822185 ("INTENSE") and No. 101021812 ("SilentBorder").
|
2306.17431 | Defense against Adversarial Cloud Attack on Remote Sensing Salient
Object Detection | Detecting the salient objects in a remote sensing image has wide applications
for the interdisciplinary research. Many existing deep learning methods have
been proposed for Salient Object Detection (SOD) in remote sensing images and
get remarkable results. However, the recent adversarial attack examples,
generated by changing a few pixel values on the original remote sensing image,
could result in a collapse for the well-trained deep learning based SOD model.
Different with existing methods adding perturbation to original images, we
propose to jointly tune adversarial exposure and additive perturbation for
attack and constrain image close to cloudy image as Adversarial Cloud. Cloud is
natural and common in remote sensing images, however, camouflaging cloud based
adversarial attack and defense for remote sensing images are not well studied
before. Furthermore, we design DefenseNet as a learn-able pre-processing to the
adversarial cloudy images so as to preserve the performance of the deep
learning based remote sensing SOD model, without tuning the already deployed
deep SOD model. By considering both regular and generalized adversarial
examples, the proposed DefenseNet can defend the proposed Adversarial Cloud in
white-box setting and other attack methods in black-box setting. Experimental
results on a synthesized benchmark from the public remote sensing SOD dataset
(EORSSD) show the promising defense against adversarial cloud attacks. | Huiming Sun, Lan Fu, Jinlong Li, Qing Guo, Zibo Meng, Tianyun Zhang, Yuewei Lin, Hongkai Yu | 2023-06-30T07:06:13Z | http://arxiv.org/abs/2306.17431v2 | # Defense against Adversarial Cloud Attack on Remote Sensing
###### Abstract
Detecting the salient objects in a remote sensing image has wide applications for the interdisciplinary research. Many existing deep learning methods have been proposed for Salient Object Detection (SOD) in remote sensing images and get remarkable results. However, the recent adversarial attack examples, generated by changing a few pixel values on the original remote sensing image, could result in a collapse for the well-trained deep learning based SOD model. Different with existing methods adding perturbation to original images, we propose to jointly tune adversarial exposure and additive perturbation for attack and constrain image close to cloudy image as Adversarial Cloud. Cloud is natural and common in remote sensing images, however, camouflaging cloud based adversarial attack and defense for remote sensing images are not well studied before. Furthermore, we design DefenseNet as a learn-able pre-processing to the adversarial cloudy images so as to preserve the performance of the deep learning based remote sensing SOD model, without tuning the already deployed deep SOD model. By considering both regular and generalized adversarial examples, the proposed DefenseNet can defend the proposed Adversarial Cloud in white-box setting and other attack methods in black-box setting. Experimental results on a synthesized benchmark from the public remote sensing SOD dataset (EORSSD) show the promising defense against adversarial cloud attacks.
## 1 Introduction
The cross-domain research of computer vision and remote sensing has wide applications in the real world, such as hyperspectral image classification [1, 2], cross-view geolocation [3, 4], scene classification [5, 6], change detection [7, 8], aerial-view object detection [9, 10], and so on. Salient Object Detection (SOD) in remote sensing images is to extract the salient objects in a satellite or drone image, which might benefit many research works mentioned above.
Some existing methods have been proposed for the SOD task in remote sensing images [11, 12] using Convolutional Neural Network (CNN) based network architectures, whose efforts are mainly focused on multi-scale feature aggregation [11] and representative context feature learning [12]. However, in some scenarios, these deep learning based remote sensing SOD models might suffer from adversarial-example attacks on deep neural networks. Recent research [13] shows that adversarial noise can be added to fool the deep learning based SOD models, leading to low SOD performance. For example, by adding a small portion of adversarial noise to the original remote sensing image between the image acquisition and data processing, _e.g._, during the communication, the salient objects in the remote sensing image might be hidden or missed to some extent by the deep SOD model. This kind of malicious attack exposes a potential security threat to remote sensing.
Many studies have been proposed for adversarial-example based attack and defense in deep learning [14, 15, 16, 17]. Meanwhile, some attack and defense studies have been proposed for remote sensing tasks, such as remote sensing scene classification [18]. Different from existing methods that add perturbations to the original image, we propose to generate an Adversarial Cloud as an attack on the deep learning based remote sensing SOD model. Clouds are common in remote sensing images [19]. However, cloud based adversarial attack and defense for remote sensing images have not been well studied. The proposed Adversarial Cloud has a realistic appearance close to a normal cloud, which might be difficult to perceive but malicious in remote sensing applications.
In this paper, we propose a novel DefenseNet to defend against the proposed Adversarial Cloud attack and preserve the advanced SOD performance. In general, the adversarial attack and defense networks are trained with adversarial deep learning by iteratively training the Adversarial Cloud and the DefenseNet. However, the already deployed deep remote sensing SOD model is kept unchanged to simplify the real-world setting. Thus, the proposed DefenseNet is designed as a learn-able pre-processing technique to preserve the SOD performance. Specifically, the adversarial examples go through the DefenseNet to become clean examples as the input to SOD models. Based on the public remote sensing SOD dataset (EORSSD [12]), we build a benchmark by synthesizing the Adversarial Cloud to test the performance of attack and defense for the SOD problem in remote sensing images. As shown in Fig. 1 (b), our proposed method can defend against different adversarial attack methods. Experimental results on the built benchmark show the effectiveness and accuracy of the proposed method. The contributions of this paper are summarized as follows.
* This paper proposes a novel attack method that jointly tunes adversarial exposure and additive perturbation while constraining the image to be close to a cloudy image, termed Adversarial Cloud, for the SOD in remote sensing images.
* This paper proposes a novel DefenseNet as a learn-able pre-processing against the adversarial cloud attack for the safety-ensured SOD in remote sensing images, without tuning the already deployed deep learning based SOD model.
* By considering both regular and generalized adversarial examples, the proposed DefenseNet can defend against the proposed Adversarial Cloud in the white-box setting and against other attack methods in the black-box setting.
## 2 Related Work
### Salient Object Detection for Remote Sensing
Salient object detection (SOD) is to automatically extract the salient objects in an image. Many existing methods have been proposed for SOD in natural images, while the SOD in optical remote sensing images is more challenging due to the unique, complex and diverse environments [11]. SOD in satellite or drone images has wide applications in remote sensing, such as building extraction [20], Region-of-Interest extraction [21], airport detection [22], oil tank detection [23], ship detection [24], _etc_.
Some traditional methods have been proposed for SOD in remote sensing images by employing bottom-up SOD models [25, 26, 27, 28, 21]. Recently, more deep learning based SOD methods have been proposed for optical remote sensing images [11, 12, 29, 30, 31]. The efforts of these deep learning based methods are mainly focused on multi-scale feature aggregation, _e.g._, [11], and representative context feature learning, _e.g._, [12]. Different from the existing methods that improve the SOD performance on remote sensing images, this paper focuses on the adversarial attack and defense of the deep learning based SOD models.
### Adversarial Attack
There are two types of adversarial attacks: _white-box_ attacks, where the adversary has full access to the target model, including its parameters, _i.e._, the model is transparent to the adversary, and _black-box_ attacks, where the adversary has little knowledge of the target model. As the white-box attacks are usually more destructive than black-box ones in practice, the literature focuses more on white-box attacks. Among these white-box attacks, Szegedy _et al_. [32] used a box-constrained L-BFGS method to generate effective adversarial attacks for the first time. After that, the fast gradient sign method (FGSM) [14] used the sign of the gradient to generate attacks, with an \(\ell_{\infty}\)-norm bound. As a multi-step attack method, the projected gradient descent (PGD) was proposed in [33]. Carlini and Wagner [34] proposed the so-called CW attack which is a margin-based attack. More recently, Croce _et al_. introduced a parameter-free attack named AutoAttack [35], which is an ensemble of four diverse attacks, including two proposed variants of PGD attacks and two existing complementary attacks, _i.e._, FAB [36] and Square Attack [37]. Besides perturbation-based ones, attacks can also take the form of small geometric transformations [38, 39] or designed adversarial patches [40, 41].
Figure 1: (a) Illustration of the proposed defense against the adversarial cloud attacks for remote sensing salient object detection. (b) Performance (F Measure) of the proposed DefenseNet against different adversarial cloud attacks. Bigger area means better defense.
### Adversarial Defense
With the development of adversarial examples, studies on how to defend against those attacks and improve the robustness of the neural networks have emerged. Among them, the most effective and widely used defense model is adversarial training (AT), although the most straightforward way is simply by attaching a detection network to detect and reject adversarial examples [42]. AT based models, which aim to minimize the loss function to the strongest adversarial attacks within a constraint, were first proposed by [14]. After that, a number of defending methods [17, 43, 44, 45, 46, 47, 48] based on adversarial training were proposed. For example, [43] and [44] built a triplet loss to enforce that a clean image and its corresponding adversarial example have a short distance in feature space. TRADES [46] optimized the trade-off between robustness and accuracy. In addition to focusing on the on-training model that utilizes adversarial examples, [48] proposed to explore the information from the model trained on clean images by using an attention guided knowledge distillation. Besides adversarial training, a number of other defense models have also been designed. For example, Xie _et al_. [49] proposed feature denoising models by adding denoise blocks into the architecture to defend against adversarial attacks, while Cohen _et al_. [50] proposed to use randomized smoothing to improve adversarial robustness. Several methods aimed to reconstruct the clean image by using a generative model [51, 52, 53].
## 3 Methodology
### Cloud Synthesizing for Remote Sensing
Given a clean remote sensing color image \(\mathbf{I}\in\mathds{R}^{H\times W\times 3}\), we aim to simulate a cloudy image via \(\hat{\mathbf{I}}=\text{Cloud}(\mathbf{I},\mathbf{E},\mathbf{M})\), where \(\mathbf{E}\in\mathds{R}^{H\times W\times 1}\) is an exposure matrix to define exposure degree, \(\mathbf{M}\in\mathds{R}^{H\times W\times 1}\) is a cloud mask to simulate clouds, and \(\text{Cloud}(\cdot)\) represents the cloudy image synthesis function. The cloud mask \(\mathbf{M}\) can be synthesized via a summation of multi-scale random noises, and is defined as
\[\mathbf{M}=\sum_{s}\mathbf{R}\left(\mathbf{f}(2^{s})\right)/2^{s}, \tag{1}\]
where \(\mathbf{f}\) represents a randomizing function, \(\mathbf{R}\) denotes a resize process and \(s\) is a scale factor. \(\mathbf{f}\) produces random noises with the image size \(2^{s}\) followed by being resized by \(\mathbf{R}\). \(s\) is a natural number with range \(\in\) [1, \(\text{log}_{2}N\)], where \(N=H\times W\) is the image size. Given a clean image \(\mathbf{I}\), exposure matrix \(\mathbf{E}\), and cloud mask \(\mathbf{M}\), we could synthesize a cloudy image \(\hat{\mathbf{I}}\) via
\[\hat{\mathbf{I}}=\text{Cloud}(\mathbf{I},\mathbf{E},\mathbf{M})=\mathbf{I} \odot\mathbf{E}\odot(1-\mathbf{M})+\mathbf{M}, \tag{2}\]
where \(\odot\) denotes pixel-wise multiplication.
With this cloudy image synthesis, we could study the effects of cloud from the viewpoint of adversarial attack by tuning the exposure matrix \(\mathbf{E}\) and cloud mask \(\mathbf{M}\) to render the synthesized cloudy images to fool the deep learning based SOD models. Later, we also employ these adversarial examples, obtained by the proposed attack method, to study the defense performance.
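A minimal NumPy sketch of Eqs. (1) and (2) is given below; the nearest-neighbour resizing, the per-dimension scale range, and the final normalisation of the mask are implementation choices not fixed by the equations.

```python
import numpy as np

def cloud_mask(h, w, rng=None):
    """Multi-scale random-noise cloud mask M of Eq. (1), normalised to [0, 1]."""
    rng = rng or np.random.default_rng()
    m = np.zeros((h, w))
    for s in range(1, int(np.log2(min(h, w))) + 1):
        size = 2 ** s
        noise = rng.random((size, size))
        rows = np.minimum((np.arange(h) * size) // h, size - 1)
        cols = np.minimum((np.arange(w) * size) // w, size - 1)
        m += noise[rows][:, cols] / (2 ** s)        # resize R and weight 1/2^s
    return np.clip(m / m.max(), 0.0, 1.0)

def synthesize_cloudy(img, exposure, mask):
    """Cloudy image of Eq. (2): I_hat = I * E * (1 - M) + M, applied per pixel."""
    e = exposure[..., None]
    mm = mask[..., None]
    return img * e * (1.0 - mm) + mm

# Toy usage on a random "clean" image with values in [0, 1]
rng = np.random.default_rng(0)
clean = rng.random((256, 256, 3))
E0 = np.ones((256, 256))                             # exposure initialised to 1
M0 = cloud_mask(256, 256, rng)
cloudy = synthesize_cloudy(clean, E0, M0)
print(cloudy.shape, float(cloudy.min()), float(cloudy.max()))
```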
### Network Architecture
In this section, we show in Fig. 2 the whole pipeline of the adversarial cloud attack (AdvCloud) and the DefenseNet, as the attack and defense stages, to fully explore the cloud effects on a deployed deep SOD model. In the attack stage, given a clean image \(\mathbf{I}\), an exposure matrix \(\mathbf{E}\), a cloud mask \(\mathbf{M}\), a pre-trained deep remote sensing SOD model \(\phi(\cdot)\), and a well-trained discriminator \(\mathcal{D}(\cdot)\), we aim to generate adversarial cloudy image examples via the proposed AdvCloud. Then, we analyze how the synthetic adversarial cloudy images
Figure 2: Structure of the proposed Adversarial Cloud (AdvCloud) based attack and the proposed DefenseNet as the defense against the AdvCloud for the remote sensing Salient Object Detection (SOD). \(\mathcal{N}_{\mathbf{E}},\mathcal{N}_{\mathbf{M}}\) are Gaussian noises for the Attack Generalizion Module (AGM). Given a clean image \(\mathbf{I}\) multiplied by Exposure matrix \(\mathbf{E}\) and summation of cloud mask \(\mathbf{M}\), the synthesized cloudy image \(\hat{\mathbf{I}}\) could be obtained. The DefenseNet is a learn-able pre-processing for the SOD network.
hurt the SOD performance. As the other main step of the pipeline, we perform the defense process, DefenseNet, as a pre-processing stage for the adversarial images to generate cloud-removed images as a defense for the SOD model. The proposed DefenseNet avoids retraining the deep SOD model and makes the salient object detection process adaptive to cloudy images. For optimization, the proposed pipeline aims to maximize the detection loss of the SOD model and minimize the adversarial loss of the discriminator in the attack stage to generate adversarial cloudy images that are close to normal cloudy images, while minimizing the detection loss of the salient object detector by predicting a clean image in the defense stage to maintain the accuracy of the SOD model.
### Adversarial Cloud based Attack
In general, adversarial attack fails a deep model by adding an imperceptible noise-like perturbation to an image under the guidance of the deep model. In this work, we propose a novel adversarial attack method, AdvCloud, to generate adversarial cloudy remote sensing images that can fool the SOD model to verify the robustness of the SOD model.
By intuition, we can tune \(\mathbf{E}\) and \(\mathbf{M}\) to generate adversarial cloudy images. Specifically, given \(\mathbf{I}\), \(\mathbf{E}\), \(\mathbf{M}\), and a pre-trained SOD detector \(\phi(\cdot)\), we aim to tune the \(\mathbf{E}\) and \(\mathbf{M}\) under a norm ball constraint by
\[\operatorname*{arg\,max}_{\mathbf{E},\mathbf{M}}\mathcal{J}(\phi(\text{Cloud}(\mathbf{I},\mathbf{E},\mathbf{M})),y),\quad\text{subject to }\|\mathbf{M}-\mathbf{M}_{0}\|_{p}\leq\epsilon_{\text{M}},\ \|\mathbf{E}-\mathbf{E}_{0}\|_{p}\leq\epsilon_{\text{E}}, \tag{3}\]
where \(\mathcal{J}(\cdot)\) is the loss function of the SOD model \(\phi(\cdot)\) under the supervision of the annotation label \(y\). We set \(\epsilon_{\text{E}}\) and \(\epsilon_{\text{M}}\) as the ball bounds under the \(L_{p}\) norm around the initializations (_i.e._, \(\mathbf{E}_{0}\) and \(\mathbf{M}_{0}\)) of the parameters \(\mathbf{E}\) and \(\mathbf{M}\) to avoid the clean image \(\mathbf{I}\) being changed significantly.
Similar to existing perturbation based adversarial attacks (_e.g._, [33]), the objective function, Eq. (3), can be optimized by gradient descent-based methods. Specifically: (1) we initialize \(\mathbf{E}_{0}\) as a mask with all elements equal to 1 and set \(\mathbf{M}_{0}\) via Eq. (1); (2) we synthesize the initial cloudy image by Eq. (2); (3) we feed the synthesized image to the SOD model \(\phi(\cdot)\) and calculate the SOD loss \(\ell\); (4) we perform back-propagation to obtain the gradients of \(\mathbf{E}\) and \(\mathbf{M}\) with respect to the loss function; (5) we update the variables \(\mathbf{E}\) and \(\mathbf{M}\) by multiplying the sign of their gradients with the corresponding step sizes for the next iteration, which is formulated as
\[\ell=\mathcal{J}(\phi(\text{Cloud}(\mathbf{I},\mathbf{E}_{i},\mathbf{M}_{i})),y),\] \[\mathbf{M}_{i+1}=\mathbf{M}_{i}+\alpha_{\text{M}}\cdot\text{sign}(\nabla_{\mathbf{M}_{i}}(\ell)),\] \[\mathbf{E}_{i+1}=\mathbf{E}_{i}+\alpha_{\text{E}}\cdot\text{sign}(\nabla_{\mathbf{E}_{i}}(\ell)), \tag{4}\]
where \(\alpha_{\text{M}}\) and \(\alpha_{\text{E}}\) represent the step sizes, and \(i\in\{0,1,\dots,K-1\}\) is the iteration number. We generate a new adversarial cloudy image and loop over steps (2)–(5) for \(K\) iterations.
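A minimal PyTorch sketch of this iterative update (without the discriminator term introduced below in Eqs. (5)-(7)) is shown next; the projection of \(\mathbf{E}\) and \(\mathbf{M}\) back into their \(L_{\infty}\) balls via clamping, and the extra clipping of \(\mathbf{M}\) to \([0,1]\), are assumptions about the implementation.

```python
import torch

def advcloud_attack(img, y, sod_model, sod_loss, mask0, k=10,
                    eps_m=0.03, eps_e=0.06, alpha_m=0.003, alpha_e=0.015):
    """Sign-gradient updates of E and M following Eq. (4). img: clean images in
    [0, 1] with shape (B, 3, H, W); mask0: initial cloud masks (B, 1, H, W);
    sod_model / sod_loss are the frozen detector and its training loss."""
    E = torch.ones_like(mask0)
    M = mask0.clone()
    for _ in range(k):
        E = E.detach().requires_grad_(True)
        M = M.detach().requires_grad_(True)
        cloudy = img * E * (1.0 - M) + M                       # Eq. (2)
        loss = sod_loss(sod_model(cloudy), y)                  # maximise the SOD loss
        grad_e, grad_m = torch.autograd.grad(loss, (E, M))
        with torch.no_grad():
            E = E + alpha_e * grad_e.sign()
            M = M + alpha_m * grad_m.sign()
            E = torch.clamp(E, 1.0 - eps_e, 1.0 + eps_e)       # stay in the norm balls
            M = torch.clamp(M, mask0 - eps_m, mask0 + eps_m).clamp(0.0, 1.0)
    return (img * E * (1.0 - M) + M).detach()
```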
To make the adversarial cloudy image \(\hat{\mathbf{I}}\) visually close to a normal cloudy image, we also incorporate a discriminator \(\mathcal{D}\) to align the distributions of normal cloudy images and adversarial cloudy images and avoid artifacts that might be introduced by Eq. (3). The inputs of the discriminator are an adversarial cloudy image \(\hat{\mathbf{I}}\) and a normal cloudy image \(\mathbf{I}_{c}\), obtained by \(\mathbf{I}_{c}=\text{Cloud}(\mathbf{I},\mathbf{M})=\mathbf{I}\odot(1-\mathbf{M})+\mathbf{M}\); the adversarial training loss of the discriminator \(\mathcal{D}\) is then
\[\mathcal{L}_{\mathcal{D}}(\hat{\mathbf{I}},\mathbf{I}_{c})=\mathbf{E}_{\mathbf{I}_{c}\sim\mathbf{X}_{c}}[\log(\mathcal{D}(\mathbf{I}_{c}))]+\mathbf{E}_{\hat{\mathbf{I}}\sim\hat{\mathbf{X}}}[\log(1-\mathcal{D}(\hat{\mathbf{I}}))], \tag{5}\]
where \(\mathbf{I}_{c}\) and \(\hat{\mathbf{I}}\) are instances from normal cloudy images set \(\mathbf{X}_{c}\) and adversarial cloudy images set \(\hat{\mathbf{X}}\), respectively.
The whole attack pipeline, incorporating AdvCloud and discriminator \(\mathcal{D}\), is trained on the training set of the remote sensing SOD dataset EORSSD [12]. The above setting has
```
0: Clean images from the training set of EORSSD, \(\epsilon_{\text{M}}=0.03\), \(\epsilon_{\text{E}}=0.06\), iteration \(K=10\), \(\alpha_{\text{M}}=0.003\), \(\alpha_{\text{E}}=0.015\), a pre-trained remote sensing SOD detector \(\phi(\cdot)\) [12], and a pre-trained discriminator \(\mathcal{D}(\cdot)\) obtained by pre-processing on the training set. Output: Adversarial Cloudy Images, parameter \(\theta\) for DefenseNet.
1:repeat
2:Attack Step:
3:\(\bullet\) Initial cloudy image synthesizing by Eq. (2) with \(\mathbf{E}_{0}\) and \(\mathbf{M}_{0}\).
4:\(\bullet\) Solve Eq. (6) via Eq. (7) to obtain optimal \(\mathbf{E}\) and \(\mathbf{M}\) with \(K\) iterations for each image to learn the corresponding adversarial cloudy image \(\hat{\mathbf{I}}\).
5:Defense Step:
6:\(\bullet\) Obtain the generalized adversarial cloudy image \(\hat{\mathbf{I}}_{g}\) via Eq. (8).
7:\(\bullet\) Solve Eq. (9) via AdamW optimizer [54] to obtain optimal \(\theta\) by fixed \(\mathbf{E}\) and \(\mathbf{M}\) (, an adversarial cloudy image \(\hat{\mathbf{I}}\), the generalized adversarial cloudy image \(\hat{\mathbf{I}}_{g}\)).
8:until convergence or maximum epochs reached.
```
**Algorithm 1** **Defense** algorithm against the Adversarial Cloud based attack for remote sensing SOD.
Figure 3: Structure of the proposed DefenseNet.
an assumption that a reliable discriminator \(\mathcal{D}\) is available ahead of the following inference stage. Specifically, we alternately freeze either the adversarial parameters \(\mathbf{E}\), \(\mathbf{M}\) or the discriminator \(\mathcal{D}\) and optimize the other one to obtain a reliable discriminator \(\mathcal{D}\) on the training set of EORSSD\({}_{c}\) before the following inference stage.
For the inference stage of the proposed AdvCloud attack, we attack the testing set of EORSSD guided by the pre-trained discriminator \(\mathcal{D}(\cdot)\) and the SOD detector \(\phi(\cdot)\). Given a clean image \(\mathbf{I}\) from the testing set of EORSSD, exposure matrix \(\mathbf{E}\) and cloud mask \(\mathbf{M}\), a well-trained discriminator \(\mathcal{D}(\cdot)\), and a SOD detector \(\phi(\cdot)\), we tune \(\mathbf{E}\) and \(\mathbf{M}\) for \(K\) iterations based on back-propagation, while the optimization function Eq. (3) is reformulated to
\[\operatorname*{arg\,max}_{\mathbf{E},\mathbf{M}}(\mathcal{J}( \phi(\text{Cloud}(\mathbf{I},\mathbf{E},\mathbf{M})),y)-\mathcal{L}_{\mathcal{ D}}(\hat{\mathbf{I}},\mathbf{I}_{c})),\] \[\text{subject to }\|\mathbf{M}-\mathbf{M}_{0}\|_{\text{p}}\leq \epsilon_{\text{M}},\|\mathbf{E}-\mathbf{E}_{0}\|_{\text{p}}\leq\epsilon_{ \text{E}}, \tag{6}\]
which means the adversarial cloudy image \(\hat{\mathbf{I}}\) could fail the SOD detector and have the realistic cloud appearance and pattern close to normal cloudy images. Then, the updating process of variables \(\mathbf{E}\) and \(\mathbf{M}\), in Eq. (4), is reformulated to
\[\ell =\mathcal{J}(\phi(\text{Cloud}(\mathbf{I},\mathbf{E}_{i},\mathbf{ M}_{i})),y),\] \[\mathbf{M}_{i+1} =\mathbf{M}_{i}+\alpha_{\mathbf{M}}\cdot\text{sign}(\nabla_{ \mathbf{M}_{i}}(\ell-\mathcal{L}_{\mathcal{D}}(\hat{\mathbf{I}},\mathbf{I}_{c }))),\] \[\mathbf{E}_{i+1} =\mathbf{E}_{i}+\alpha_{\text{E}}\cdot\text{sign}(\nabla_{ \mathbf{E}_{i}}(\ell-\mathcal{L}_{\mathcal{D}}(\hat{\mathbf{I}},\mathbf{I}_{c })))). \tag{7}\]
After obtaining the updated \(\mathbf{E}\) and \(\mathbf{M}\) for each image from the testing set of EORSSD, we can get the corresponding adversarial cloudy images via Eq. (2).
### Defense against Adversarial Cloud
The proposed AdvCloud attack can easily hurt the SOD performance, while performing defense against adversarial attack is an effective way to alleviate such performance drop. In this section, we propose a DefenseNet as a learnable pre-processing for adversarial cloudy images to acquire cloud-removed images for SOD models to improve the robustness. The proposed DefenseNet contains the two following branches as the inputs.
**Vanilla AdvCloud Branch.** Given the updated adversarial attacks \(\mathbf{E}\) and \(\mathbf{M}\), we can obtain an adversarial cloudy image \(\hat{\mathbf{I}}\). Then, it is the first-branch input to the DefenseNet to perform the reconstruction for adversarial cloud removal. This is a simple white-box defense setting to make DefenseNet see the proposed AdvCloud attack so as to defend it.
**Generalized AdvCloud Branch.** To benefit a black-box defense making DefenseNet robust to other cloud based adversarial examples generated by different attack methods which are never seen before, we design an Attack Generalization Module (AGM) to include the generalized AdvCloud images. We use two different levels of Gaussian noise to
Figure 4: Defense against the remote sensing salient object detection attacks. From top to bottom: normal cloudy image, attacked cloudy images by FGSM [14], PGD [33], and the proposed AdvCloud. From left to right: cloudy images, defense images by JPG Compression [55], FFA-Net [56], proposed DefenseNet, each of which is followed by its corresponding SOD result.
simulate the changes produced by the gradient-based learned exposure matrix (\(\mathbf{E}\)) and cloud mask (\(\mathbf{M}\)) under a specified budget. Specifically, we add Gaussian noise \(\mathcal{N}_{\text{E}}\) = \(\omega_{\text{E}}\cdot\mathcal{N}(\cdot)\) and \(\mathcal{N}_{\text{M}}\) = \(\omega_{\text{M}}\cdot\mathcal{N}(\cdot)\) to \(\mathbf{E}\) and \(\mathbf{M}\) respectively to obtain \(\mathbf{E}_{g}\) and \(\mathbf{M}_{g}\) so as to extend the distribution space of parameters around the gradient direction, where \(\mathcal{N}(\cdot)\) is a standard Gaussian random noise generation function in the range of [-1, 1]. Then, we could acquire a generalized adversarial cloudy image \(\hat{\mathbf{I}}_{g}\) with the generalized \(\mathbf{E}_{g}\) and \(\mathbf{M}_{g}\) via Eq. (2), _i.e_.,
\[\hat{\mathbf{I}}_{g}=\text{Cloud}(\hat{\mathbf{I}},\mathbf{E}_{g},\mathbf{M}_{ g}), \tag{8}\]
as the second-branch input to the DefenseNet.
**DefenseNet Loss.** We feed the adversarial cloudy images \(\hat{\mathbf{I}}\) and \(\hat{\mathbf{I}}_{g}\) to the DefenseNet to output the cloud-removed images \(\mathbf{I^{\prime}}=\text{DefenseNet}(\hat{\mathbf{I}};\theta)\) and \(\mathbf{I^{\prime}}_{g}=\text{DefenseNet}(\hat{\mathbf{I}}_{g};\theta)\), respectively, where \(\theta\) denotes the parameters of the DefenseNet. In the defense stage, the output cloud-removed images are optimized by the image reconstruction loss function \(L_{r}\) and the regularization loss term \(L_{reg}\). The objective function is shown below:
\[\mathcal{L}=L_{r}(\mathbf{I^{\prime}},\mathbf{I})+L_{r}(\mathbf{I^{\prime}} _{g},\mathbf{I})+wL_{reg}(\mathbf{I^{\prime}},\mathbf{I^{\prime}}_{g}), \tag{9}\]
where \(\mathbf{I}\) is the clean image for \(\hat{\mathbf{I}}\) and \(\hat{\mathbf{I}}_{g}\), and \(w\) is the balance weight which is set to 0.1. \(L_{r}\) and \(L_{reg}\) loss functions are both implemented as \(L_{1}\) loss.
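A minimal PyTorch sketch of Eq. (9) is given below, where the generalized input \(\hat{\mathbf{I}}_{g}\) is assumed to be produced beforehand via Eq. (8); the stand-in identity network in the usage line only checks shapes.

```python
import torch
import torch.nn.functional as F

def defense_loss(defense_net, adv_img, adv_img_gen, clean_img, w=0.1):
    """Objective of Eq. (9): L1 reconstruction of the clean image from both the
    regular and the generalized adversarial inputs, plus an L1 consistency
    term between the two restored outputs, weighted by w = 0.1."""
    restored = defense_net(adv_img)            # I'
    restored_gen = defense_net(adv_img_gen)    # I'_g
    return (F.l1_loss(restored, clean_img)
            + F.l1_loss(restored_gen, clean_img)
            + w * F.l1_loss(restored, restored_gen))

# Shape check with an identity network as a placeholder for the DefenseNet
dummy = torch.rand(2, 3, 64, 64)
print(defense_loss(torch.nn.Identity(), dummy, dummy, dummy))
```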
The whole algorithm flow for the defense against the Adversarial Cloud based attack for remote sensing salient object detection is summarized in Algorithm 1.
### Structure of Proposed DefenseNet
For implementation, we design the proposed DefenseNet shown in Fig. 3. DefenseNet consists of 6 basic residual blocks, where each block includes 2 convolution layers, one ReLU layer, and one Batch Normalization layer. The first four stages are adopted from ResNet, but the first convolution layer has 64 filters with a size of \(3\times 3\) and a stride of 1. This ensures that the early feature map has the same resolution as the input image, which can lead to a bigger receptive field. There is also a bottleneck stage after the encoder part, which consists of three convolution layers with 512 dilated \(3\times 3\) filters, and all these convolution layers are also followed by a batch normalization and a ReLU activation function. There is a residual connection from the input to the output, making the network focus on residual learning.
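A simplified PyTorch stand-in for this architecture is sketched below; the dilated bottleneck and the exact ResNet stage layout are omitted, so the code is illustrative rather than a faithful reproduction of Fig. 3.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic block: two 3x3 convolutions with Batch Normalization and ReLU,
    plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(torch.relu(self.bn(self.conv1(x))))

class DefenseNetSketch(nn.Module):
    """Stride-1 stem of 64 filters, six residual blocks, and a global residual
    connection so that the network learns the cloud residual to be removed."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, stride=1, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(6)])
        self.head = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x):
        return x + self.head(self.blocks(torch.relu(self.stem(x))))

print(DefenseNetSketch()(torch.rand(1, 3, 256, 256)).shape)   # quick shape check
```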
## 4 Experiments
### Experimental Setting
**Benchmark Datasets:** To evaluate the salient object detection in remote sensing images, we use the public EORSSD dataset [12] to perform experiments. It has 2,000 remote sensing satellite images and corresponding pixel-level labeled salient object detection ground truth, which includes 1,400 images for training and 600 images for testing. The EORSSD dataset includes the objects of Aircraft, Building, Car, Island, Road, Ship, Water, None, and Other in the satellite images. This dataset is quite challenging with complicated scene types, complex object attributes, comprehensive real-world satellite circumstances, and some small-size objects, therefore it is more difficult than the normal salient object detection datasets with natural images. Using each clean image in EORSSD dataset, we generate its corresponding image with the normal cloud, leading to a new synthetic dataset named EORSSD\({}_{c}\). Similarly, adding the proposed Adversarial Cloud (AdvCloud) to each clean image of EORSSD dataset, we could generate a new synthetic dataset named EORSSD\({}_{adv}\). Figure 5 shows some example images of the datasets EORSSD, EORSSD\({}_{c}\), and EORSSD\({}_{adv}\).
**Evaluation Metrics:** We evaluate the remote sensing salient object detection performance using F-measure (F\({}_{\beta}\)), Mean Absolute Error (MAE) score and S-measure (S\({}_{m}\)), same as those in [12]. The larger F-measure, S-measure values and lower MAE score mean the better remote sensing SOD performance. Based on these metrics, we could also compare the performance of attack and defense for the remote sensing SOD task.
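For reference, MAE and a fixed-threshold F-measure can be computed as sketched below; \(\beta^{2}=0.3\) is the value commonly used in the SOD literature, the thresholding protocol is a simplification of the evaluation in [12], and the S-measure (which involves a structural similarity term) is omitted.

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a saliency map and the ground truth, both in [0, 1]."""
    return float(np.abs(pred - gt).mean())

def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """F-measure at a single fixed threshold."""
    binary = pred >= thresh
    positive = gt > 0.5
    tp = np.logical_and(binary, positive).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(positive.sum(), 1)
    denom = beta2 * precision + recall
    return float((1 + beta2) * precision * recall / denom) if denom > 0 else 0.0

# Toy saliency map and binary ground truth
rng = np.random.default_rng(0)
gt = (rng.random((256, 256)) > 0.8).astype(float)
pred = np.clip(gt + rng.normal(0.0, 0.2, gt.shape), 0.0, 1.0)
print(mae(pred, gt), f_measure(pred, gt))
```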
**Comparison Methods:** For the attack experiment, we compare the proposed AdvCloud method with five additive perturbation based white-box attack methods on the EORSSD\({}_{c}\) dataset, _i.e_., FGSM [14], MIFGSM [57], PGD [33], VMIFGSM [58], and NIFGSM [59]. The maximum perturbation for these comparison methods is set to be 8 with pixel values in [0, 255]. These comparison attack methods are applied on the testing images of EORSSD\({}_{c}\).
For the defense experiment, we compare our proposed DefenseNet with JPEG Compression [55], FFA-Net [56], and Defense\({}_{FFA}\) (using FFA-Net as the backbone). **The defense methods are all trained on EORSSD\({}_{adv}\), which is generated by attacking DAFNet, and aim to remove the adversarial attack to recover the clean image.**
For evaluating the generalization ability of the proposed
Figure 5: Example images of remote sensing datasets EORSSD, EORSSD\({}_{c}\), EORSSD\({}_{adv}\). (a) clean image of EORSSD, (b) synthesized normal cloud, (c) clean image with normal cloud leading to EORSSD\({}_{c}\), (d) proposed adversarial cloud, and (e) clean image with proposed adversarial cloud leading to EORSSD\({}_{adv}\).
attack and defense methods, we additionally employ three SOD detectors, _i.e_., BasNet [60], U\({}^{2}\)Net [61], and RRNet [31]. All SOD models are trained on EORSSD dataset until convergence.
Since the proposed AdvCloud examples are generated based on clouds, to ensure fairness in evaluating the effectiveness of attacking and defending the different SOD (Salient Object Detection) models with these adversarial examples, **the performance of the four SOD models is evaluated with EORSSD\({}_{c}\), rather than EORSSD, as the starting point for the attack.**
**Implementation Details:** The SOD Network to be attacked is the deep learning based remote sensing salient object detection network DAFNet [12] pre-trained on the clean training images of EORSSD dataset. For the proposed AdvCloud attack, we set \(\epsilon_{\text{M}}=0.03\), \(\epsilon_{\text{E}}=0.06\), and the generalization random noise range of \(\omega_{\text{M}}\), \(\omega_{\text{E}}\) are 0.05 and 0.1, respectively. The input image is resized to \(256\times 256\). We use the AdamW optimization algorithm [54] for the network training with the following hyper parameters: learning rate as 0.0001, batch size as 8, and training epoch as 80. All the experiments were run on a single NVIDIA RTX 3090 GPU card (24G). We use PyTorch to implement the proposed method.
### Experimental Results
**Attack Result.** Table 1 shows the quantitative SOD performance for the baseline attack. When the dataset is clean, _i.e_., no cloud is added, the target SOD network, DAFNet [12], achieves a 0.9049 overall F-measure on the EORSSD dataset. After normal clouds are added to the EORSSD dataset, the F-measure decreases to 0.8253. When the proposed AdvCloud is added to the EORSSD dataset, the SOD network is misled by the adversarial examples and the F-measure is 0.2572. This demonstrates that the proposed AdvCloud severely reduces the performance of the SOD network. Furthermore, we compare the proposed AdvCloud with other attack methods, as shown in Table 1. It shows that each attack method can effectively reduce the SOD performance. Moreover, the white-box attacks on DAFNet are also effective against other SOD detectors, with varying degrees of performance decline.
Fig. 4 shows the qualitative comparisons among different attack methods and their corresponding SOD maps. Due to the attack, some objects predicted by the SOD model are ignored (a, b, d) or misidentified (c) in Fig. 4. As we can observe, the image attacked by the proposed method is much closer to a normal cloud in human perception than those from other attack methods. Visible defects and moiré patterns can be seen on the images attacked by other methods in Fig. 6. Therefore, the proposed AdvCloud is visually closer to a normal cloud while retaining very competitive attack performance.
**Defense Result.** Table 4 shows the defense remote sensing SOD performance under different attack methods. It shows that the defense methods effectively improve the SOD performance when applied to the adversarial examples generated by the attack strategies in Table 1. Fig. 7 shows the comprehensive defense results on all of the attack strategies. We can clearly see that the proposed defense method, used as a pre-processing step, achieves better F\({}_{\beta}\) and S\({}_{m}\) gains compared with FFA-Net. The proposed DefenseNet can not only effectively defend against the proposed AdvCloud attack (_i.e_., white-box defense) but also effectively
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Attack Performance**} & \multicolumn{3}{c|}{DAFNet [12]} & \multicolumn{3}{c|}{BasNet [60]} & \multicolumn{3}{c|}{U\({}^{2}\)Net [61]} & \multicolumn{3}{c}{RRNet [31]} \\ \cline{2-13} & MAE \(\uparrow\) & F\({}_{\beta}\)\(\downarrow\) & S\({}_{m}\)\(\downarrow\) & MAE \(\uparrow\) & F\({}_{\beta}\)\(\downarrow\) & S\({}_{m}\)\(\downarrow\) & MAE \(\uparrow\) & F\({}_{\beta}\)\(\downarrow\) & S\({}_{m}\)\(\downarrow\) & MAE \(\uparrow\) & F\({}_{\beta}\)\(\downarrow\) & S\({}_{m}\)\(\downarrow\) \\ \hline Clean Image & 0.0060 & 0.9049 & 0.9058 & 0.0162 & 0.8071 & 0.8871 & 0.0157 & 0.7890 & 0.8516 & 0.0077 & 0.9086 & 0.925 \\ Normal cloud & 0.0126 & 0.8253 & 0.8540 & 0.0295 & 0.7270 & 0.8323 & 0.0395 & 0.6170 & 0.7410 & 0.0100 & 0.8345 & 0.8917 \\ \hline FGSM & 0.0432\({}^{*}\) & 0.2880\({}^{*}\) & 0.5773\({}^{*}\) & 0.0381 & 0.5974 & 0.7488 & 0.0441 & 0.5027 & 0.6743 & 0.0022 & 0.6815 & 0.7937 \\ MIFGSM & 0.0497\({}^{*}\) & 0.1292\({}^{*}\) & 0.5247\({}^{*}\) & 0.0452 & 0.5176 & 0.7063 & 0.0641 & 0.4666 & 0.6611 & 0.0208 & 0.6344 & 0.7695 \\ PGD & 0.0680\({}^{*}\) & 0.1376\({}^{*}\) & 0.5166\({}^{*}\) & 0.0401 & 0.5860 & 0.7478 & 0.0426 & 0.5142 & 0.6869 & 0.0169 & 0.7026 & 0.8060 \\ VMIFGSM & 0.0497\({}^{*}\) & 0.1325\({}^{*}\) & 0.5267\({}^{*}\) & 0.0463 & 0.4924 & 0.6952 & 0.0463 & 0.4564 & 0.6561 & 0.0245 & 0.5807 & 0.7416 \\ NIFGSM & 0.0472\({}^{*}\) & 0.1519\({}^{*}\) & 0.5360\({}^{*}\) & 0.0439 & 0.5176 & 0.7108 & 0.0456 & 0.4698 & 0.6623 & 0.0213 & 0.6354 & 0.7735 \\ AdvCloud w/o Noise & 0.0256\({}^{*}\) & 0.6583\({}^{*}\) & 0.7565\({}^{*}\) & 0.0311 & 0.7080 & 0.8198 & 0.0373 & 0.5903 & 0.7286 & 0.0120 & 0.8018 & 0.8671 \\ AdvCloud w/o Exposure Matrix & 0.0484\({}^{*}\) & 0.4265\({}^{*}\) & 0.6453\({}^{*}\) & 0.0317 & 0.7026 & 0.8145 & 0.0379 & 0.5953 & 0.7265 & 0.0116 & 0.8103 & 0.8765 \\ AdvCloud & 0.0714\({}^{*}\) & 0.2572\({}^{*}\) & 0.5609\({}^{*}\) & 0.0361 & 0.6396 & 0.7771 & 0.0404 & 0.5504 & 0.7072 & 0.0143 & 0.7484 & 0.8370 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Baseline remote sensing SOD performance before and after the proposed adversarial cloud (AdvCloud) attack. The budget for the perturbation cloud/noise is 8 pixels. We mark white-box attacks with * and highlight the best performance in red. The gray part indicates black-box attacks.
Figure 6: Visualization of normal cloudy image and attacked cloudy examples by different attack methods.
defend against other attack methods (_i.e._, black-box defense). As shown in Table 4, the F\({}_{\beta}\) performance gains of the proposed DefenseNet and DefenseNet\({}_{FFA}\) generalize well under each attack method. **Although the proposed defense method never sees adversarial images created by the other attack methods during training, the defense trained on AdvCloud still achieves strong generalization performance in defending against those attacks**, with the help of the proposed Attack Generalization Module (AGM), as shown in Table 2.
**Ablation Study for the Proposed DefenseNet.** The proposed DefenseNet has two input branches, _i.e._, a regular attack image branch and a generalized attack image branch. Table 2 shows that both the regular attack branch and the generalized attack branch contribute to the final defense SOD performance, and the best defense performance is obtained when the two branches are combined. Removing the generalized attack branch leads to a more significant drop in defense performance. The DefenseNet with the AGM module therefore provides a promising and effective solution for generalizable defense against different adversarial attacks.
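For illustration only, a minimal sketch of how the two input branches might be combined during defense training is shown below; the branch modules, the averaging fusion, and the L1 reconstruction loss toward the clean image are assumptions made for this sketch rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class TwoBranchDefense(nn.Module):
    """Illustrative two-branch purifier: one branch receives the vanilla
    adversarial image, the other a generalized (re-noised) copy; outputs are fused."""

    def __init__(self, vanilla_branch: nn.Module, generalized_branch: nn.Module):
        super().__init__()
        self.vanilla_branch = vanilla_branch          # regular attack image branch
        self.generalized_branch = generalized_branch  # generalized attack image branch

    def forward(self, x_adv: torch.Tensor, x_adv_gen: torch.Tensor) -> torch.Tensor:
        # Simple average fusion of the two restored images (an assumption).
        return 0.5 * (self.vanilla_branch(x_adv) + self.generalized_branch(x_adv_gen))

def defense_loss(defense: TwoBranchDefense, x_adv, x_adv_gen, x_clean):
    # Reconstruction objective toward the clean image (assumed L1 loss).
    return nn.functional.l1_loss(defense(x_adv, x_adv_gen), x_clean)
```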
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \multirow{2}{*}{Attack Methods} & \multicolumn{3}{c|}{DefenseNet\({}^{\ddagger}\)} & \multicolumn{3}{c|}{DefenseNet\({}^{\dagger}\)} & \multicolumn{3}{c}{DefenseNet} \\ \cline{2-10} & MAE \(\downarrow\) & F\({}_{\beta}\uparrow\) & S\({}_{m}\uparrow\) & MAE \(\downarrow\) & F\({}_{\beta}\uparrow\) & S\({}_{m}\uparrow\) & MAE \(\downarrow\) & F\({}_{\beta}\uparrow\) & S\({}_{m}\uparrow\) \\ \hline FGSM [14] & 0.0373 & 0.4734 & 0.6652 & 0.0279 & 0.6161 & 0.7395 & 0.0260 & 0.6468 & 0.7548 \\ MIFGSM [57] & 0.0554 & 0.3144 & 0.5966 & 0.0600 & 0.4010 & 0.6399 & 0.0569 & 0.4534 & 0.6651 \\ PGD [33] & 0.0400 & 0.5256 & 0.6986 & 0.0267 & 0.6770 & 0.7783 & 0.0213 & 0.7244 & 0.8039 \\ VMIFGSM [58] & 0.0659 & 0.2271 & 0.5535 & 0.0754 & 0.2844 & 0.5760 & 0.0762 & 0.3268 & 0.5917 \\ NIFGSM [59] & 0.0517 & 0.3187 & 0.6004 & 0.0553 & 0.4027 & 0.6386 & 0.0516 & 0.4698 & 0.6689 \\ Proposed AdvCloud & 0.0249 & 0.7033 & 0.8011 & 0.0182 & 0.7477 & 0.8227 & 0.0128 & 0.8226 & 0.8572 \\ \hline Mean & 0.0459 & 0.4271 & 0.6526 & 0.0439 & 0.5215 & 0.6992 & 0.0408 & 0.5740 & 0.7236 \\ \hline \end{tabular}
\end{table}
Table 2: Ablation study for the defense SOD performance of proposed DefenseNet under different attack methods. DefenseNet\({}^{\ddagger}\): DefenseNet w/o Generalized AdvCloud, DefenseNet\({}^{\dagger}\): DefenseNet w/o Vanilla AdvCloud. The white-box defense is highlighted in red color.
\begin{table}
\begin{tabular}{c|c c c} \hline Methods & MAE \(\downarrow\) & F\({}_{\beta}\uparrow\) & S\({}_{m}\uparrow\) \\ \hline Clean Image & 0.0060 & 0.9049 & 0.9058 \\ Normal Cloud & 0.0126 & 0.8253 & 0.8540 \\ \hline JPEG Compression [55] & 0.0139 & 0.7913 & 0.8367 \\ DefenseNet & 0.0171 & 0.7747 & 0.8315 \\ FFA-Net [56] & 0.0144 & 0.8079 & 0.8492 \\ DefenseNet\({}_{FFA}\) & **0.0126** & **0.8320** & **0.8620** \\ \hline \end{tabular}
\end{table}
Table 3: Defense remote sensing SOD performance on normal cloudy images of EORSSD\({}_{c}\) with the SOD detector DAFNet.
Figure 7: Visualization of defense performance across various SOD detection methods, including DAFNet, BasNet, U\({}^{2}\)Net, and RRNet, in which each column represents the mean testing performance under different attack and defense scenarios on EORSSD\({}_{c}\). The EORSSD and EORSSD\({}_{c}\) columns represent each detector's performance on clean and cloudy images (both without attack); the Attack column shows the mean performance under the FGSM, MIFGSM, PGD, VMIFGSM, NIFGSM, and AdvCloud attacks (generated on DAFNet); and the subsequent columns show the mean defense results when applying the JPEG, FFA-Net, DefenseNet, and DefenseNet\({}_{FFA}\) methods, respectively. The gray stripes indicate black-box defenses directly applied to attacked images without training.
**Discussion about Defense on Normal Cloudy Images.**
The defense performance of DefenseNet\({}_{FFA}\) on remote sensing SOD was assessed using the normal cloudy images of EORSSD\({}_{c}\). The results in Table 3 indicate that the proposed defense mechanism can effectively defend against unseen types of attacks while maintaining strong performance on normal images. This suggests that our defense method is reliable and effective in both attack and non-attack scenarios.
**Discussion about Visual Quality.** The image quality comparison results are shown in Table 5. The proposed AdvCloud yields better image quality after the attack. We use an 8-pixel budget for the perturbation attack noise \(M\), the same as all comparison methods. Combined with the observation in Fig. 4, although our proposed attack method does not achieve the best attack performance, the AdvCloud attack is more imperceptible than the other attack methods.
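The quality numbers in Table 5 are standard full-reference metrics between an attacked image and its reference (the cloudy image of EORSSD\({}_{c}\) or the clean image of EORSSD). A sketch of how such metrics can be computed with scikit-image is given below; the exact normalization of the L2 distance used in the table is not specified in the text, so the raw Euclidean norm here is an assumption.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_quality(attacked: np.ndarray, reference: np.ndarray):
    """attacked/reference: HxWx3 float arrays scaled to [0, 1]."""
    ssim = structural_similarity(attacked, reference, channel_axis=-1, data_range=1.0)
    psnr = peak_signal_noise_ratio(reference, attacked, data_range=1.0)
    l2 = float(np.linalg.norm((attacked - reference).ravel()))  # unnormalized L2
    return ssim, psnr, l2
```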
## 5 Conclusion
In this paper, we proposed a new adversarial cloud to attack deep-learning-based remote sensing salient object detection models, together with a new DefenseNet that purifies the input image as a pre-processing defense without tuning the deployed remote sensing deep SOD model. To study this research problem, we synthesized new benchmarks, EORSSD\({}_{c}\) with normal cloud and EORSSD\({}_{adv}\) with the proposed adversarial cloud, from the existing remote sensing SOD dataset EORSSD. Extensive experiments on 4 SOD networks show that the proposed DefenseNet can effectively pre-process the attacked cloudy images to defend against different adversarial attack methods without changing the deployed remote sensing deep SOD model, while the SOD performance on remote sensing normal cloudy images without attack remains promising.
\begin{table}
\end{table}
Table 4: Defense performance on the EORSSD\({}_{c}\) dataset. DefenseNet\({}_{FFA}\) means the proposed DefenseNet using FFA-Net as the backbone. The gray part indicates black-box defenses.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Compare with EORSSD\({}_{c}\)} & \multicolumn{3}{c}{Compare with EORSSD} \\ & SSIM\(\uparrow\) & PSNR\(\uparrow\) & L2\(\downarrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & L2\(\downarrow\) \\ \hline Normal Cloud & 1 & - & 0.00 & 0.64 & 10.01 & 331.85 \\ \hline FGSM & 0.63 & 30.25 & 181.57 & 0.44 & 9.96 & **330.46** \\ MIFGSM & 0.70 & 31.45 & 137.95 & 0.47 & 9.99 & 330.87 \\ PGD & 0.79 & 33.54 & 85.49 & 0.53 & 9.99 & 331.15 \\ VMIFGSM & 0.70 & 31.37 & 121.55 & 0.47 & 9.99 & 330.79 \\ NIFGSM & 0.69 & 31.24 & 137.26 & 0.47 & 9.99 & 330.78 \\ AdvCloud & **0.88** & **36.24** & **46.91** & **0.58** & **10.00** & 331.32 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Image quality comparison of different cloudy attack methods with DAFNet as the SOD detector. EORSSD\({}_{c}\): normal cloudy images, EORSSD: original clean images. |
2307.16360 | Probabilistically robust conformal prediction | Conformal prediction (CP) is a framework to quantify uncertainty of machine
learning classifiers including deep neural networks. Given a testing example
and a trained classifier, CP produces a prediction set of candidate labels with
a user-specified coverage (i.e., true class label is contained with high
probability). Almost all the existing work on CP assumes clean testing data and
there is not much known about the robustness of CP algorithms w.r.t
natural/adversarial perturbations to testing examples. This paper studies the
problem of probabilistically robust conformal prediction (PRCP) which ensures
robustness to most perturbations around clean input examples. PRCP generalizes
the standard CP (cannot handle perturbations) and adversarially robust CP
(ensures robustness w.r.t worst-case perturbations) to achieve better
trade-offs between nominal performance and robustness. We propose a novel
adaptive PRCP (aPRCP) algorithm to achieve probabilistically robust coverage.
The key idea behind aPRCP is to determine two parallel thresholds, one for data
samples and another one for the perturbations on data (aka
"quantile-of-quantile" design). We provide theoretical analysis to show that
aPRCP algorithm achieves robust coverage. Our experiments on CIFAR-10,
CIFAR-100, and ImageNet datasets using deep neural networks demonstrate that
aPRCP achieves better trade-offs than state-of-the-art CP and adversarially
robust CP algorithms. | Subhankar Ghosh, Yuanjie Shi, Taha Belkhouja, Yan Yan, Jana Doppa, Brian Jones | 2023-07-31T01:32:06Z | http://arxiv.org/abs/2307.16360v1 | # Probabilistically Robust Conformal Prediction
###### Abstract
Conformal prediction (CP) is a framework to quantify uncertainty of machine learning classifiers including deep neural networks. Given a testing example and a trained classifier, CP produces a prediction set of candidate labels with a user-specified coverage (i.e., true class label is contained with high probability). Almost all the existing work on CP assumes clean testing data and there is not much known about the robustness of CP algorithms w.r.t natural/adversarial perturbations to testing examples. This paper studies the problem of probabilistically robust conformal prediction (PRCP) which ensures robustness to most perturbations around clean input examples. PRCP generalizes the standard CP (cannot handle perturbations) and adversarially robust CP (ensures robustness w.r.t worst-case perturbations) to achieve better trade-offs between nominal performance and robustness. We propose a novel adaptive PRCP (aPRCP) algorithm to achieve probabilistically robust coverage. The key idea behind aPRCP is to determine two parallel thresholds, one for data samples and another one for the perturbations on data (aka "_quantile-of-quantile_" design). We provide theoretical analysis to show that aPRCP algorithm achieves robust coverage. Our experiments on CIFAR-10, CIFAR-100, and ImageNet datasets using deep neural networks demonstrate that aPRCP achieves better trade-offs than state-of-the-art CP and adversarially robust CP algorithms.
## 1 Introduction
Deep learning has shown significant success in diverse real-world applications. However, to deploy these deep models in safety-critical applications (e.g, autonomous driving and medical diagnosis), we need uncertainty quantification (UQ) tools to capture the deviation of the prediction from the ground-truth output. For example, producing a subset of candidate labels referred to as _prediction set_ for classification tasks. Conformal prediction (CP) (Vovk et al., 1999, 2005; Shafer and Vovk, 2008) is a framework for UQ that provides formal guarantees for a user-specified _coverage_: ground-truth output is contained in the prediction set with a high probability \(1-\alpha\) (e.g., 90%). There are two key steps in CP. First, in the prediction step, we use a black-box classifier (e.g., deep neural network) to compute _(non-)conformity_ scores which measure similarity between calibration examples and a testing input. Second, in the calibration step, we use the conformity scores on a set of calibration examples to find a threshold to construct prediction set which meets the coverage constraint (e.g., \(1-\alpha\)=90%). The _efficiency_ of CP (Sadinle et al., 2019) is measured in terms of size of the prediction set (the smaller the better) which is important for human-ML collaborative systems (Rastogi et al., 2022).
In spite of the recent successes of CP (Vovk et al., 2005), there is little known about the robustness of CP to adversarial perturbations of clean inputs. Most CP methods (Cauchois et al., 2020; Gibbs and Candes, 2021; Tibshirani et al., 2019; Podkopaev and Ramdas, 2021; Guan and Tibshirani, 2022) are brittle as they assume clean input examples and cannot handle _any_ perturbations. The recent work on adversarially robust CP (Gendler et al., 2022) ensures robustness to _all_ perturbations bounded by a norm ball with radius \(r\). However, this conservative approach of dealing with _worst-case_ perturbations can degrade the nominal performance (evaluation on only clean inputs) of the CP method. For example, the prediction set size can be large even for clean and easy-to-classify inputs, which increases the burden of human expert in human-ML collaborative systems (Cai et al., 2019; Rastogi et al., 2022). The main research question of this paper is: _how can we develop probably correct CP algorithms for ensuring robustness to most perturbations for (pre-trained) deep classifiers?_1
Footnote 1: Equal contribution by first two authors
To answer this question, we present a general notion of probabilistically robust coverage that balances the standard conformal coverage and the adversarial (worst-case) coverage. To address this challenge, we develop the adaptive PRCP algorithm (aPRCP), which is based on the principle of a "_quantile-of-quantile_" design consisting of two parallel quantiles, as illustrated in Figure 1: one defined in the perturbed noise space (see (8)) and the other in the data space (see (9)). Our analysis fixes one quantile probability as a given hyper-parameter and finds the other one to achieve the target probabilistically robust coverage. We provide theoretical analysis for the probabilistic correctness of aPRCP at the population level and for the approximation error of empirical quantiles as a function of the number of samples. As a result, aPRCP achieves improved trade-offs between nominal performance (evaluation on clean inputs) and robust performance (evaluation on perturbed inputs) in both probabilistic and worst-case settings, as illustrated in Figure 2, which is analogous to the recent work on probabilistically robust learning Robey et al. (2022).
**Contributions.** The key contribution of this paper is the development, theoretical analysis, and empirical evaluation of the aPRCP algorithm. Our specific contributions include:
* A general notion of probabilistically robust coverage for conformal prediction against perturbations of clean input examples.
* Development of the adaptive PRCP algorithm based on the principle of "_quantile-of-quantile_" design.
* Theory to show that aPRCP algorithm achieves probabilistically robust coverage for adversarial examples.
* Experimental evaluation of aPRCP method on classification benchmarks using deep models to demonstrate its efficacy over prior CP methods on CIFAR-10, CIFAR-100, and ImageNet.
## 2 Background and Problem Setup
We consider the problem of uncertainty quantification (UQ) of pre-trained deep models for classification tasks in the presence of adversarial perturbations. Suppose \((X,Y)\) is a data sample where \(X\) is an input from the space \(\mathcal{X}\) and \(Y\in\mathcal{Y}\) is the corresponding ground-truth output. For classification tasks, \(\mathcal{Y}\) is a set of \(C\) discrete class-labels \(\{1,2,\cdots,C\}\). Let \(\epsilon\) denote the \(l_{2}\)-norm bounded noise, i.e., \(\epsilon\in\mathcal{E}_{r}=\{\epsilon\in\mathcal{X}:\|\epsilon\|_{2}\leq r\}\), that is independent of the data sample \((X,Y)\). Let \(\mathcal{P}_{X,Y}\) and \(\mathcal{P}_{\epsilon}\) denote the underlying distributions of \((X,Y)\) and \(\epsilon\), respectively. We also define \(Z=(X,Y,\epsilon)\) as the joint random variable and the perturbed input example \(\widetilde{X}=X+\epsilon\) for notational simplicity.
**Uncertainty Quantification.** Let \(\mathcal{D}_{\text{tr}}\) and \(\mathcal{D}_{\text{cal}}\) correspond to sets of training and calibration examples drawn from a target distribution \(\mathcal{P}_{X,Y}\). We assume the availability of a pre-trained deep model \(F_{\theta}:\mathcal{X}\mapsto\mathcal{Y}\), where \(\theta\) stands for the parameters of the deep model. For a given testing input \(\widetilde{X}\), we want to compute UQ of the deep model \(F_{\theta}\) in the form of a prediction set \(\mathcal{C}(\widetilde{X})\), a subset of candidate class-labels \(\{1,2,\cdots,C\}\). The performance of UQ for clean data samples (i.e., \(\epsilon\)=0) is measured using two metrics. First, the (marginal) _coverage_ is defined as the probability that the ground-truth output \(Y\) is contained in \(\mathcal{C}(X)\) for a testing example \((X,Y)\) from the same data distribution \(\mathcal{P}_{X,Y}\), i.e., \(\mathbb{P}(Y\in\mathcal{C}(X))\). The empirical coverage Cov is measured over a given set of testing examples \(\mathcal{D}_{\text{test}}\). Second, _efficiency_, denoted by Eff, measures the cardinality of the prediction set \(\mathcal{C}(X)\). Smaller prediction set means higher efficiency. It is easy to achieve the desired coverage (say 90%) by always outputting \(\mathcal{C}(X)\)=\(\mathcal{Y}\) at the expense of poor efficiency.
**Conformal Prediction (CP).** CP is a framework that allows us to compute UQ for any given predictor through a conformalization step. The key element of CP is a score function
Figure 1: Conceptual illustration of the adaptive PRCP setting. The goal is to improve the robustness of the CP framework to handle perturbations \(\epsilon\) bounded by \(r\) for every input \(X\in\mathcal{X}\). The robust quantile corresponding to 1-\(\tilde{\alpha}\) region (blue circle around \(X\)) is computed by accounting for most of the perturbed data \(X+\epsilon\) (see (8)). \(s\) is a conservativeness parameter for the robust quantile that can be varied to achieve the target marginal coverage \(1-\alpha+s\) (see (9)). Adaptive PRCP can find a trade-off between the marginal coverage on feature space \((X,Y)\) and the robustness for perturbation \(\epsilon\) by changing the value of \(\tilde{\alpha}\) and \(s\) to achieve probabilistically robust coverage (See Definition 3).
\(S\) that computes the _conformity_ (or _non-conformity_) score, which measures similarity between labeled examples and is used to compare a given testing input to the calibration set \(\mathcal{D}_{\text{cal}}\). Since any non-conformity score can be intuitively converted to a conformity measure [25], we use the non-conformity measure for ease of technical exposition. Let \(S(X,Y)\) denote the non-conformity score function of data sample \((X,Y)\). For a sample \((X_{i},Y_{i})\) from the calibration set \(\mathcal{D}_{\text{cal}}\), we use \(S_{i}=S(X_{i},Y_{i})\) as a shorthand notation for its non-conformity score.
A typical method based on split conformal prediction uses a threshold \(\tau\) to compute UQ in the form of a prediction set for a given testing input \(X\) and deep model \(F_{\theta}\). A small set of calibration examples \(\mathcal{D}_{\text{cal}}\) is used to select the threshold \(\tau\) for achieving the given coverage \(1-\alpha\) (say 90%) empirically on \(\mathcal{D}_{\text{cal}}\). Let \(Q(\alpha):=\min\{t:\mathbb{P}_{X,Y}\{S(X,Y)\leq t\}\geq 1-\alpha\}\) be the true quantile of the conformity score for \((X,Y)\). Let \(\mathcal{D}_{\text{cal}}=\{(X_{i},Y_{i})\}_{i=1}^{n}\) denote a calibration set with \(n\) exchangeably drawn random samples from the underlying distribution \(\mathcal{P}_{X,Y}\). We denote the (\(1-\alpha\))-quantile derived from \(\{S_{i}\}_{i=1}^{n}\) by \(Q(\alpha;\{S_{i}\}_{i=1}^{n})=S_{(\lceil(1-\alpha)(n+1)\rceil)}\). The prediction set for a new testing input \(X\) is given by \(\mathcal{C}(X)=\{y:S(X,y)\leq\tau\}\) using the threshold \(\tau\). CP provides a valid guarantee that \(\mathcal{C}(X)\) has coverage \(1-\alpha\) on future examples drawn from the same distribution \(\mathcal{P}_{X,Y}\).
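As a minimal illustration of the split-conformal calibration step described above (not code from the paper), the empirical quantile \(S_{(\lceil(1-\alpha)(n+1)\rceil)}\) and the resulting prediction set can be computed as follows.

```python
import numpy as np

def conformal_threshold(cal_scores: np.ndarray, alpha: float) -> float:
    """Empirical (1-alpha)-quantile S_(ceil((1-alpha)(n+1))) of calibration scores."""
    n = len(cal_scores)
    k = int(np.ceil((1 - alpha) * (n + 1)))     # 1-based order-statistic index
    return float(np.sort(cal_scores)[min(k, n) - 1])

def prediction_set(test_scores: np.ndarray, tau: float) -> np.ndarray:
    """Labels whose non-conformity score is below the calibrated threshold tau."""
    return np.where(test_scores <= tau)[0]
```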
For classification, several non-conformity scores can be employed. The homogeneous prediction sets (HPS) score is defined [25, 11] as follows:
\[S^{\text{HPS}}(X,y)=1-F_{\theta}(X)_{y}, \tag{1}\]
where \(F_{\theta}(X)_{y}\in[0,1]\) is the probability corresponding to the true class \(y\) using the deep model \(F_{\theta}\). Recent work has proposed the adaptive prediction sets (APS) [12] score that is based on ordered probabilities. The score function of APS is defined as follows:
\[S^{\text{APS}}(X,y)=\sum_{y^{\prime}\in\mathcal{Y}}F_{\theta}(X)_{y^{\prime}}\mathds{1}\left\{F_{\theta}(X)_{y^{\prime}}>F_{\theta}(X)_{y}\right\}+u\cdot F_{\theta}(X)_{y}, \tag{2}\]
where \(u\) is a random variable uniformly distributed over \([0,1]\) and \(\mathds{1}\) is the indicator function.
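Both scores can be computed directly from the softmax probabilities \(F_{\theta}(X)\); a short sketch (ours, for illustration) is shown below.

```python
import numpy as np

def hps_score(probs: np.ndarray, y: int) -> float:
    """HPS non-conformity score, Eq. (1): one minus the true-class probability."""
    return 1.0 - probs[y]

def aps_score(probs: np.ndarray, y: int, rng=np.random) -> float:
    """APS non-conformity score, Eq. (2): mass of classes ranked above y plus a
    uniformly randomized share of the true-class probability."""
    u = rng.uniform()
    return probs[probs > probs[y]].sum() + u * probs[y]
```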
**Problem Definition.** The high-level goal of this paper is to study methods to improve the robustness of the standard CP framework to adversarial/noisy examples of the form \(\widetilde{X}=X+\epsilon\), where \(\epsilon\) is the additive perturbation from \(\mathcal{E}_{r}=\{\epsilon\in\mathbb{R}^{d}:\|\epsilon\|_{p}\leq r\}\). Specifically, we propose a novel adaptive probabilistically robust conformal prediction (aPRCP) algorithm which accounts for \((1-\tilde{\alpha})\) (see \(\tilde{\alpha}\) for robust quantile in (8)) fraction of perturbations in \(\mathcal{E}_{r}\) for each data \((X,Y)\). Setting \(\tilde{\alpha}=0\) as an extreme case makes aPRCP handle all perturbations (i.e., worst-case), similar to RSCP [10]. We theoretically and empirically analyze aPRCP to demonstrate improved trade-offs between nominal performance (evaluation on clean inputs) and robust performance (evaluation on perturbation inputs). Figure 1 conceptually illustrates the PRCP problem setting.
## 3 Robust Conformal Prediction
This section describes our proposed adaptive probabilistically robust conformal prediction (aPRCP) algorithm. First, we introduce the notion of adversarially robust coverage and extend it to probabilistically robust coverage. Next, we motivate the significance of aPRCP algorithm and study the theoretical connection between aPRCP and adversarially robust CP setting [10] in terms of probabilistically robust coverage and prediction set size. Finally, we analyze the gap between empirical and population level
Figure 2: Results on CIFAR100 dataset using a ResNet model to illustrate the trade-offs between nominal performance (evaluation on clean data) and robust performance (evaluation on adversarial examples) for Vanilla CP, RSCP, and variants of the aPRCP algorithm. (a) and (c) show the evaluation against clean examples and their corresponding noisy samples (i.e., \(\widetilde{X}=X+\epsilon;||\epsilon||_{2}\leq r\)) w.r.t probabilistic robustness. (b) and (d) show the evaluation against clean examples and their corresponding bounded adversarial examples. aPRCP(worst-adv) is the variant of aPRCP that works for worst adversarial data. Vanilla CP fails to achieve coverage for worst-case adversarial data. RSCP achieves a robust coverage much higher than the target (nominal) coverage, resulting in large prediction sets. aPRCP achieves better results (tighter coverage and smaller prediction set size) than vanilla CP and RSCP in terms of the joint performance on clean, noisy, and worst-adversarial data.
quantiles in terms of the number of data samples.
### Probabilistically robust coverage
This section extends the notion of an inflation condition on the conformity scoring function from the worst-case adversarial robustness setting to the more general probabilistic robustness setting. We start with the following definitions, originally introduced for the ARCP setting [10], which capture the inflation property of the score function used to derive adversarial robustness.
**Definition 1**.: _(Adversarially robust coverage) A prediction set \(\mathcal{C}(\widetilde{X})\) provides (\(1-\alpha\))-adversarially robust coverage if for a desired coverage probability \(1-\alpha\in(0,1)\):_
\[\mathbb{P}_{X,Y}\{Y\in\mathcal{C}(\widetilde{X}=X+\epsilon),\forall\epsilon \in\mathcal{E}_{r}\}\geq 1-\alpha. \tag{3}\]
**Definition 2**.: _(\(M_{r}\)-adversarially inflated score function) \(S:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}\) is an \(M_{r}\)-adversarially inflated score function if the following inequality holds:_
\[S(X+\epsilon,Y)\leq S(X,Y)+M_{r},\\ \forall X\in\mathcal{X},Y\in\mathcal{Y}\text{ and }\epsilon\in \mathcal{E}_{r}. \tag{4}\]
The strategy of RSCP algorithm [1] for the ARCP setting is to directly add an inflated quantity \(M_{r}\) to the quantile determined from the clean data \((X,Y)\),
\[\tau^{\text{AR}}(\alpha):=Q(\alpha)+M_{r}, \tag{5}\]
and construct a prediction set with \(\mathcal{C}^{\text{AR}}(X)=\{y\in\mathcal{Y}:S(X+\epsilon,y)\leq\tau^{\text{AR }}(\alpha)\}\). To this end, since \(Q(\alpha)\) provides \((1-\alpha)\) marginal coverage on clean data \((X,Y)\), \(\tau^{\text{AR}}(\alpha)\) thus guarantees \((1-\alpha)\)-adversarially robust coverage on adversarial data \((X+\epsilon,Y)\).
This result is summarized in the following proposition.
**Proposition 1**.: _(Adversarially robust coverage of RSCP, Theorem 1 in [1]) Assume the score function \(S\) is \(M_{r}\)-adversarially inflated. Let \(\mathcal{C}^{\text{AR}}(X)=\{y\in\mathcal{Y}:S(\widetilde{X},y)\leq\tau^{ \text{AR}}(\alpha)\}\) be the prediction set for a testing sample \(\widetilde{X}\). Then RSCP achieves (\(1-\alpha\))-adversarially robust coverage._
Now we extend the notion of adversarially robust coverage to the more general and relaxed condition, i.e., probabilistically robust coverage, by introducing the definition below.
**Definition 3**.: _(Probabilistically robust coverage) A prediction set \(\mathcal{C}(\widetilde{X})\) provides (\(1-\alpha\))-probabilistically robust coverage if for a desired coverage probability \(1-\alpha\in(0,1)\):_
\[\mathbb{P}_{X,Y,\epsilon}\{Y\in\mathcal{C}(\widetilde{X}=X+\epsilon)\}\geq 1 -\alpha. \tag{6}\]
We highlight that the key difference between adversarially robust coverage (Definition 1) and probabilistically robust coverage (Definition 3) is whether the distribution of the perturbation \(\epsilon\) is involved in the comparison with the target probability \(1-\alpha\): probabilistically robust coverage goes through the joint distribution involving \(\epsilon\), i.e., \(\mathbb{P}_{X,Y,\epsilon}\{\cdot\}\) in (6) instead of \(\mathbb{P}_{X,Y}\{\cdot,\forall\epsilon\in\mathcal{E}_{r}\}\) in (3). Based on this understanding, we can see that a conformal prediction method can achieve (\(1-\alpha\))-probabilistically robust coverage if it can satisfy (\(1-\alpha\))-adversarially robust coverage. For the same target probability \((1-\alpha)\), adversarially robust coverage is more difficult to achieve than probabilistically robust coverage. Hence, the notion of probabilistic robustness for CP is more general and relaxed.
Naturally, we now extend the definition of the uniform inflated score function (Definition 2) to the following one.
**Definition 4**.: _(\(M_{r,\eta}\)-probabilistically inflated score function) \(S:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}\) is an \(M_{r,\eta}\)-probabilistically inflated score function if the following inequality holds for \(\eta\in[0,\alpha]\):_
\[\mathbb{P}_{Z}\big{\{}S(X+\epsilon,Y)\leq S(X,Y)+M_{r,\eta}\big{\}}\geq 1-\eta. \tag{7}\]
The above definition regarding the inflation of the score function is general and includes (4) given in Definition 2 as a special case: By simply setting \(\eta=0\), we get \(\mathbb{P}_{Z}\{S(X+\epsilon,Y)\leq S(X,Y)+M_{r,0}\}\geq 1\), i.e., \(M_{r,0}=M_{r}\).
Again, we highlight that the above condition involves the joint distribution on \(Z\), as in Definition 3.
Based on the extension from adversarial to probabilistic robustness setting, it is easy to develop a similar principle on the _inflated_ score function to derive probabilistically robust coverage, which we refer to as inflated probabilistically robust conformal prediction (iPRCP). To this end, let
\[\tau^{\text{iPR}}(\alpha;\eta):=Q(\alpha^{*}_{\text{iPR}})+M_{r,\eta},\]
where \(\alpha^{*}_{\text{iPR}}=1-(1-\alpha)/(1-\eta)\). \(\tau^{\text{iPR}}(\alpha;\eta)\) is the threshold determined by iPRCP that treats \(\eta\) from probabilistically inflated score function as a hyper-parameter. We use \(\alpha^{*}_{\text{iPR}}\) as the probability for deriving the quantile on clean data, as (5) in ARCP.
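For example, with a target miscoverage \(\alpha=0.1\) and \(\eta=0.05\), we get \(\alpha^{*}_{\text{iPR}}=1-0.9/0.95\approx 0.053\), so iPRCP calibrates a \(\approx 94.7\%\) quantile on clean data before adding the inflation \(M_{r,0.05}\); multiplying the two coverage levels, \((1-\alpha^{*}_{\text{iPR}})(1-\eta)=0.9\), recovers the target coverage.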
**Proposition 2**.: _(Probabilistically robust coverage of iPRCP) Assume the score function \(S\) is an \(M_{r,\eta}\)-probabilistically inflated. Let \(\mathcal{C}^{\text{iPR}}(\widetilde{X})=\{y\in\mathcal{Y}:S(\widetilde{X},y) \leq\tau^{\text{iPR}}(\alpha;\eta)\}\) be the prediction set for a testing sample \(\widetilde{X}=X+\epsilon\). Then iPRCP achieves (\(1-\alpha\))-probabilistically robust coverage._
This result shows that we can guarantee the (\(1-\alpha\))-probabilistically robust coverage if we use \(\tau^{\text{iPR}}(\alpha;\eta)\) to construct the prediction set \(\mathcal{C}^{\text{iPR}}\). While the idea is simple and follows the inflation quantile used in the ARCP setting,
it implies that we _have to know_ \(M_{r,\eta}\), the inflated quantity on the clean quantile. This requires us to know the score function very well. Otherwise, we have to design a score function that satisfies the desired condition, similar to how the randomly smoothed score function was designed by the RSCP algorithm for the ARCP setting [10]. That score function was carefully designed to offer uniform Lipschitz continuity, at the cost of requiring an additional set of Gaussian random samples. This design may introduce additional restrictions, since extra samples are required every time the score function is applied, including for each calibration and testing sample. Therefore, we would like to address the following question: _Can we design an adaptive algorithm that fits the underlying distribution without any prior knowledge or special design of the score function?_
### Adaptive PRCP Algorithm
This section presents our adaptive algorithm for achieving probabilistically robust coverage (aPRCP). We summarize it in Algorithm 1 and elaborate it below. First, we define the \((1-\tilde{\alpha})\)-_robust quantile_ for a given \(X\) as follows
\[Q^{\text{rob}}(X,Y;\tilde{\alpha})\\ :=\min\{t:\mathbb{P}_{\epsilon}\{S(\widetilde{X},Y)\leq t\}\geq 1- \tilde{\alpha}\}. \tag{8}\]
Given \((X,Y)\) and \(\tilde{\alpha}\), \(Q^{\text{rob}}(X,Y;\tilde{\alpha})\) returns the quantile from all randomly perturbed \(\widetilde{X}=X+\epsilon\) over \(\epsilon\in\mathcal{E}_{r}\). It acquires the inflated quantity from a local region of \(X\) as \(\tilde{\alpha}\) indicates how conservative this inflation can be. We denote the empirical robust quantile (in Line 5 of Algorithm 1) by \(\widehat{Q}^{\text{rob}}\).
Next, we define the threshold of the proposed adaptive PRCP (aPRCP) for a hyper-parameter \(s\in[0,\alpha]\) as follows.
\[\tau^{\text{aPR}}(\alpha;s)=\min\{t:\\ \mathbb{P}_{X,Y}\{Q^{\text{rob}}(X,Y;\alpha_{\text{aPR}}^{*}) \leq t\}\geq 1-\alpha+s\}, \tag{9}\]
where \(\alpha_{\text{aPR}}^{*}=1-(1-\alpha)/(1-\alpha+s)\) is a conservativeness parameter for the robust quantile in (8) that depends on the target probability \(\alpha\) and the hyper-parameter \(s\). In practice, the empirical threshold \(\widehat{\tau}^{\text{aPR}}=\widehat{Q}^{\text{rob}}_{(\lceil(n+1)(1-\alpha+s)\rceil)}\) is selected from the empirical robust quantiles \(\{\widehat{Q}^{\text{rob}}_{i}\}_{i=1}^{n}\) (in Line 6 of Algorithm 1). Our aPRCP algorithm is adaptive since it finds \(\alpha_{\text{aPR}}^{*}\) adaptively to the underlying distribution of \((X,Y)\) as long as \(\alpha\) and \(s\) are fixed a priori. The following formal result guarantees the probabilistically robust coverage of the aPRCP algorithm.
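A minimal sketch of the empirical procedure (Lines 5-6 of Algorithm 1) is given below. For each calibration sample, the robust quantile in (8) is approximated by sampling perturbations within the \(l_{2}\) ball of radius \(r\), and the aPRCP threshold in (9) is then the empirical \((1-\alpha+s)\)-quantile of those robust quantiles; the number of sampled perturbations and the particular perturbation-sampling scheme below are illustrative choices, not fixed by the text.

```python
import numpy as np

def empirical_quantile(values: np.ndarray, alpha: float) -> float:
    """Order statistic V_(ceil((n+1)(1-alpha))) used throughout the paper."""
    n = len(values)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(values)[min(k, n) - 1])

def aprcp_threshold(score_fn, cal_X, cal_Y, alpha, s, r, n_noise=128, seed=0):
    """Quantile-of-quantile calibration of aPRCP (sketch of Algorithm 1)."""
    rng = np.random.default_rng(seed)
    alpha_star = 1.0 - (1.0 - alpha) / (1.0 - alpha + s)  # conservativeness level
    robust_quantiles = []
    for x, y in zip(cal_X, cal_Y):
        # Sample perturbations with ||eps||_2 <= r (random direction and radius).
        eps = rng.normal(size=(n_noise,) + x.shape)
        flat_norm = np.linalg.norm(eps.reshape(n_noise, -1), axis=1)
        eps /= flat_norm.reshape((n_noise,) + (1,) * x.ndim)
        eps *= rng.uniform(0.0, r, size=(n_noise,) + (1,) * x.ndim)
        noisy_scores = np.array([score_fn(x + e, y) for e in eps])
        robust_quantiles.append(empirical_quantile(noisy_scores, alpha_star))  # Eq. (8)
    return empirical_quantile(np.array(robust_quantiles), alpha - s)           # Eq. (9)
```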
**Theorem 1**.: _(Probabilistically robust coverage of aPRCP) Let \(\mathcal{C}^{\text{aPR}}(\widetilde{X}=X+\epsilon)=\{y\in\mathcal{Y}:S( \widetilde{X},y)\leq\tau^{\text{aPR}}(\alpha;s)\}\) be the prediction set for a testing sample \(\widetilde{X}\). Then aPRCP achieves \((1-\alpha)\)-probabilistically robust coverage._
**Remark 1**.: In fact, \(\tau^{\text{aPR}}(\alpha;s)\) is the \((1-\alpha+s)\)-th quantile (going through \((X,Y)\)) of the \((1-\alpha_{\text{aPR}}^{*})\)-robust quantiles (going through \(\epsilon\)). One benefit of aPRCP is the transfer of the inflation from the score function to the specified probability (i.e., an \(s\) increase in probability). Therefore, it is not required to have a prior knowledge of either \(M_{r}\) as in ARCP or \(M_{r,\eta}\) as in iPRCP. Instead, aPRCP requires finding a feasible and a good value for \(\alpha_{\text{aPR}}^{*}\) by treating \(s\) as a hyper-parameter, though it inflates the specified probability, i.e., \(1-\alpha+s\geq 1-\alpha\), and \(1-\alpha_{\text{aPR}}^{*}\geq 1-\alpha\).
**Theorem 2**.: _(Probabilistically robust coverage of aPRCP for cross-domain noise) Let \(\mathcal{P}_{\epsilon}^{test}\) and \(\mathcal{P}_{\epsilon}^{cal}\) denote different distributions of \(\epsilon\) during the testing and calibration phases, respectively. Assume \(\mathbb{P}_{\epsilon\sim\mathcal{P}_{\epsilon}^{cal}}\{\epsilon\}-\mathbb{P}_{ \epsilon\sim\mathcal{P}_{\epsilon}^{test}}\{\epsilon\}\leq d\) for all \(\|\epsilon\|\leq r\). Set \(\alpha_{\text{aPR}}^{*}=1-d-(1-\alpha)/(1-\alpha+s)\) in (9). Let \(\mathcal{C}^{\text{aPR}}(\widetilde{X}=X+\epsilon)=\{y\in\mathcal{Y}:S( \widetilde{X},y)\leq\tau^{\text{aPR}}(\alpha;s)\}\) be the prediction set for a testing sample \(\widetilde{X}\). Then aPRCP achieves \((1-\alpha)\)-probabilistically robust coverage._
**Remark 2**.: The key assumption we make is \(\mathbb{P}_{\epsilon\sim\mathcal{P}_{\epsilon}^{cal}}\{\epsilon\}-\mathbb{P}_{ \epsilon\sim\mathcal{P}_{\epsilon}^{test}}\{\epsilon\}\leq d\), which is analogous to \(L^{1}\)-distance used in the domain adaptation literature [11, 12]. One can interpret it as the maximal gap of the density probability between the calibration and testing distributions when fixing \(\epsilon\). As per our analysis, when this gap can be bounded by a sufficiently small constant \(d\), with an inflated nominated coverage in the robust quantile (i.e., setting \(\alpha_{\text{aPR}}^{*}=1-d-(1-\alpha)/(1-\alpha+s)\) in (9)), we can guarantee probabilistically robust coverage for aPRCP.
### Connection Between ARCP and PRCP
Although ARCP algorithm can achieve adversarially robust coverage, we can still connect ARCP and PRCP in the sense of _probabilistically robust coverage_ and understand their performance in terms of _efficiency_. Recall that efficiency of conformal prediction algorithms refers to the measured size of prediction sets for testing samples when some desired coverage is achieved. For example, for the same target
probability \(1-\alpha\), a smaller threshold indicates better efficiency. The following result shows the possibly improved efficiency of iPRCP and aPRCP when compared to ARCP, after their hyper-parameters are tuned properly (i.e., \(\eta\) for iPRCP and \(s\) for aPRCP).
**Corollary 3**.: _To achieve the same (\(1-\alpha\))-probabilistically robust coverage on \(Z\), the following inequalities hold:_
\[\min_{\eta\in[0,\alpha]}\tau^{iPR}(\alpha;\eta)\leq\tau^{AR}(\alpha),\ \ \min_{s\in[0,\alpha]}\tau^{aPR}(\alpha;s)\leq\tau^{AR}(\alpha).\]
When all three algorithms achieve (\(1-\alpha\))-probabilistically robust coverage, smaller thresholds yield better efficiency, i.e., iPRCP and aPRCP are at least as efficient as ARCP. The idea behind the above result is that setting \(\eta=0\) and \(s=0\) makes iPRCP and aPRCP degenerate to ARCP, resulting in the same threshold; minimizing over \(\eta\) and \(s\) can therefore only lower the threshold. For aPRCP with \(s=0\), we have \(\alpha^{*}_{\text{aPR}}=0\), i.e., the \(1\)-robust quantile (covering all perturbations) is used for each \((X,Y)\), which recovers ARCP.
### Approximation Error of Empirical Quantiles
In the above sections, we presented algorithms and their analysis directly in the population sense, including the true quantile \(Q(\alpha)\) and \(Q^{\text{rob}}(X;\alpha)\). However, when executing a given conformal prediction method on exchangeable samples \(\mathcal{D}_{\text{cal}}\), we employ empirical quantiles in practice. To close this gap between theory and practice, we additionally discuss the concentration inequalities for empirical approximation to these quantities (i.e., the gap between empirical and true quantiles) as a function of the number of samples.
**Proposition 3**.: _(Concentration inequality for quantiles) Let \(Q(\alpha)=\min\{t:\mathbb{P}_{V}\{V\leq t\}\geq 1-\alpha\}\) be the true quantile of a random variable \(V\) given \(\alpha\), and \(\widehat{Q}_{n}(\alpha)=V_{(\lceil(n+1)(1-\alpha)\rceil)}\) be the empirical quantile estimated from a set of \(n\) random samples \(\{V_{i}\}_{i=1}^{n}\). Then with probability at least \(1-\delta\), we have \(\widehat{Q}_{n}(\alpha+\tilde{O}(1/\sqrt{n}))\leq Q(\alpha)\leq\widehat{Q}_{n}(\alpha-\tilde{O}(1/\sqrt{n}))\) where \(\tilde{O}\) hides the logarithmic factor._
The above result shows that more data samples from the underlying distribution for \((X,Y)\) or \(\epsilon\) will help in improving the approximation of empirical quantiles on score function \(S\) at a rate of \(\tilde{O}(1/\sqrt{n})\), where \(n\) is number of samples. Note that we only use this proposition to fill the gap between empirical and true quantiles. Some prior work also studied similar concentration results [20].
## 4 Experiments and Results
In this section, we present the empirical evaluation of our proposed aPRCP algorithm along different dimensions.
### Experimental Setup
**Classification Datasets.** We consider three benchmark datasets for evaluation: CIFAR10 [13], CIFAR100 [13], and ImageNet [11] using the standard training and test split.
**Deep Neural Network Models.** We consider ResNet-110 [12] as the main model architecture for CIFAR10 and CIFAR100 and ResNet-50 for ImageNet in our experiments. We provide results on additional deep neural networks in the Appendix due to space constraints noting that we find similar patterns. We train each model using two different approaches : _1) Standard training:_ The training is only performed using clean training examples; and _2) Gaussian augmented training:_ The training procedure employs Gaussian augmented examples [1] parameterized by a given standard deviation \(\sigma=0.125\).
**Methods and Baselines.** We consider two relevant state-of-the-art CP algorithms as our baselines. First, we employ Vanilla CP[12] designed for clean input examples. Second, we use randomly smooth conformal prediction (RSCP) [1] which is designed to handle worst-case adversarial examples. We employ the publicly available implementations of Vanilla CP2 and RSCP3 using the best settings suggested by their authors.
Footnote 2: [https://github.com/mseisa/arc](https://github.com/mseisa/arc)
Footnote 3: [https://github.com/Asafgendler/RSCP](https://github.com/Asafgendler/RSCP)
We consider different configurations of our proposed adaptive probabilistically robust CP (aPRCP) algorithm. aPRCP(worst-adv) refers to the configuration where the evaluation of aPRCP is performed over adversarial examples generated using an adversarial attack algorithm. aPRCP(\(\tilde{\alpha}\)) refers to the configuration where the evaluation is performed over noisy examples with a bounded perturbation on the test data. We provide additional results using different values for \(\tilde{\alpha}\) in the Appendix.
**Adversarial Attack Algorithms.** To generate adversarial examples, we employ the white-box PGD attack algorithm [1] to evaluate the Vanilla CP algorithm. For RSCP and aPRCP(worst-adv), we employ an adapted PGD algorithm for smoothed classifiers as proposed in Salman et al. (2019). We provide additional results using different adversarial algorithms in the Appendix.
**Evaluation Methodology.** We present all our experimental results for desired coverage as \((1-\alpha)\)=90%. We report the average metrics (coverage and prediction set size) over 50 different runs for all datasets. We consider two different evaluation settings at the inference time as described below.
(a) **Probabilistic robustness evaluation**: We randomly sample \(n_{s}=128\) examples for each clean testing input: \(X^{j}=X+\epsilon_{j}\) (_j_=1 to \(n_{s}\)), where \(||\epsilon_{j}||_{2}\leq r=0.125\) for the CIFAR data and \(||\epsilon_{j}||_{2}\leq r=0.25\) for the ImageNet
data. For a better span during the sampling procedure for each clean testing input, we sample two perturbations \(\epsilon_{j}\) for each \(r^{(k)}\) in \(0<r^{(1)}<\cdots<r^{(k)}\leq r\) such that \(\|\epsilon_{j}\|_{2}=r^{(k)}\).
We define both coverage and prediction set size metrics to adapt to the probabilistic robustness setting as follows: _Coverage_: fraction of examples for which prediction set contains the ground-truth output.
\[\text{Coverage}=\frac{1}{n_{s}}\sum_{j=1}^{n_{s}}\mathbb{1}[Y_{n+1}\in\tilde{ C}(X_{n+1}+\epsilon_{j})]. \tag{10}\]
_Efficiency_: average prediction set size, small values mean high efficiency.
\[\text{Prediction Set Size}=\frac{1}{n_{s}}\sum_{j=1}^{n_{s}}|\tilde{C}(X_{n+1 }+\epsilon_{j})|, \tag{11}\]
where \(||\epsilon_{j}||_{2}\leq r=0.125\) for CIFAR dataset, and \(||\epsilon_{j}||_{2}\leq r=0.25\) for the ImageNet dataset. These re-defined metrics allow us to evaluate aPRCP(\(\tilde{\alpha}\)) with different values of probability parameters \(\tilde{\alpha}\) for probabilistic robustness. We provide additional results explaining the impact of the choice of the sampling distributions in the Appendix.
(b) **Worst-case evaluation:** We employ adversarial attack algorithms as mentioned above to create one worst-case adversarial example (\(\tilde{X}\)) for each clean testing input (\(X\)). We define both metrics for this setting as follows:
\[\text{Coverage}=\mathbb{1}[Y_{n+1}\in\tilde{C}(\tilde{X}_{n+1})]. \tag{12}\] \[\text{Prediction Set Size}=|\tilde{C}(\tilde{X}_{n+1})|. \tag{13}\]
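The two evaluation modes in (10)-(13) can be sketched as follows; `pred_set` stands for any calibrated set-valued predictor (e.g., thresholding the non-conformity scores), and the \(n_{s}\) bounded perturbations are assumed to be generated by the caller as described above.

```python
import numpy as np

def probabilistic_metrics(pred_set, x_clean, y_true, perturbations):
    """Eqs. (10)-(11): mean coverage and set size over sampled perturbations."""
    covered, sizes = [], []
    for eps in perturbations:          # n_s perturbations with ||eps||_2 <= r
        C = pred_set(x_clean + eps)
        covered.append(y_true in C)
        sizes.append(len(C))
    return float(np.mean(covered)), float(np.mean(sizes))

def worst_case_metrics(pred_set, x_adv, y_true):
    """Eqs. (12)-(13): coverage indicator and set size for one adversarial input."""
    C = pred_set(x_adv)
    return float(y_true in C), len(C)
```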
### Results and Discussion
**Probabilistic Robust Coverage Performance.** Figure 3 shows the probabilistic robustness performance (in terms of coverage and prediction set size) obtained by Vanilla CP, RSCP, and aPRCP(\(\tilde{\alpha}=0.1\)) for all three datasets using standard training. We make the following observations. 1) The Vanilla CP algorithm fails to achieve the target probabilistically robust coverage. 2) The RSCP algorithm achieves the desired probabilistic coverage, but its empirical coverage is significantly larger than 90%. This yields very large prediction sets. Using APS, RSCP yields on average a prediction set of 30 labels for CIFAR100 and 60 for ImageNet. 3) aPRCP(\(\tilde{\alpha}=0.1\)) produces smaller prediction sets while keeping the actual coverage close to the target coverage. aPRCP(\(\tilde{\alpha}=0.1\)) reduces the prediction set by an average of 20 labels for CIFAR100 and ImageNet compared to the RSCP method using either of the two non-conformity scores.
**Adversarially Robust Coverage Performance.** Figure 4 shows the robust coverage and prediction set size obtained by Vanilla CP, RSCP, and aPRCP(worst-adv) on the worst-case examples for the three datasets using Gaussian augmented training. We observe similar patterns as for the probabilistic robust coverage results. 1) Vanilla CP fails to achieve the target coverage empirically. For all datasets, it achieves empirical coverage lower than 80%. 2) Similar to the probabilistic robustness results, the RSCP method achieves an empirical coverage larger than 95% for all datasets, yielding significantly larger prediction sets for all datasets. 3) aPRCP(worst-adv) produces smaller prediction sets while keeping the actual coverage close to the target coverage (within a margin of 2%) on worst-case adversarial examples.
Figure 3: Probabilistic robust coverage (top) and prediction set size (bottom) constructed by Vanilla CP, RSCP, and aPRCP(\(\tilde{\alpha}=0.1\)) using HPS and APS scoring functions (target coverage is \(90\%\)). Results are reported over 50 runs.
aPRCP(worst-adv) reduces the prediction set by more than 10 labels for CIFAR100 and ImageNet compared to the RSCP method using either of the two non-conformity scores (HPS and APS).
## 5 Related Work
**Conformal Prediction.** CP is a general framework for uncertainty quantification that provides marginal coverage guarantees without any assumptions on the underlying data distribution [21]. CP can be used for regression [22, 23, 24, 25, 26, 27, 28, 29, 21] to produce prediction intervals and for classification [23, 24, 25, 26, 27] to produce prediction sets. Prior work has also considered instantiations of the CP framework to handle differences between training and test distributions caused by long-term distribution shift [24], covariate shift [25], and label-distribution shift [22]. However, none of these existing works focuses on the robustness setting where the distributional shift is caused by a bounded adversarial perturbation. While using adversarial training seems an intuitive way to mitigate this problem, it was shown that vanilla CP cannot achieve the target coverage on adversarial data [22].
**Robust Conformal Prediction.** CP methods that provide robust coverage under natural or adversarial perturbations are a new line of research that requires theoretical and empirical analysis. Very few works have proposed variants of CP to handle adversarially robust settings. The work on cautious deep learning [23] proposed a CP-based prediction set construction that accounts for adversarial examples. However, this method does not provide any theoretical guarantees. Recently, randomly smoothed conformal prediction (RSCP) [22] was proposed as a generalization to adversarial examples using randomized smoothing. This generalization is achieved by introducing a constant inflation condition that adjusts the CP quantile to adversarial perturbations. This adjustment is proportional to the potential adversarial perturbations that can affect the test data. Hence, RSCP is prone to producing large prediction sets along with overly high marginal coverage in order to achieve robustness.
We study the general setting of probabilistically robust CP and develop probably correct algorithms to achieve improved trade-offs for nominal and robust performance over vanilla CP and RSCP. The key differences between our work (aPRCP) and RSCP are: 1) aPRCP uses a _quantile-of-quantile_ design and does not require finding a score inflation constant like RSCP. 2) RSCP requires the design of a specialized scoring function while aPRCP can employ any existing score function. 3) aPRCP does not have test-time overhead unlike RSCP due to the generation of samples.
## 6 Summary and Future Work
This paper studied the novel problem of probabilistic robustness for conformal prediction (PRCP) based uncertainty quantification of deep classifiers. We developed the adaptive PRCP (aPRCP) algorithm based on the principle of quantile-of-quantile design and theoretically analyzed its
Figure 4: Adversarially robust coverage (top) and prediction set size (bottom) constructed by Vanilla CP, RSCP, and aPRCP(worst-adv) using HPS and APS scoring functions (target coverage is \(90\%\)). Results are reported over 50 runs.
effectiveness to achieve improved trade-offs between performance on clean data and robustness to adversarial examples. Our experiments on multiple image datasets using deep classifiers demonstrated the effectiveness of aPRCP over vanilla CP methods and adversarially robust CP methods. Future work should study and analyze end-to-end PRCP algorithms.
## Acknowledgements
This research is supported in part by Proofpoint Inc. and the AgaID AI Institute for Agriculture Decision Support, supported by the National Science Foundation and United States Department of Agriculture - National Institute of Food and Agriculture award #2021-67021-35344. The authors would like to thank the feedback from anonymous reviewers who provided suggestions to improve the paper.
|
2304.00052 | Effects of Thermal Modification on the Flexure Properties, Fracture
Energy, and Hardness of Western Hemlock | This study investigates the effect of thermal modification on the flexural
properties, transverse fracture energy, and hardness of western hemlock, a
material which is finding increasing applications in construction. Flexure
tests on specimens featuring longitudinal and transverse grains showed that
thermal modification at 167C slightly improves the flexural modulus and
strength and leads to less statistical variability compared to unmodified
samples. On the other hand, the fracture and Janka hardness tests revealed a
more pronounced brittleness of the thermally modified samples. In fact, the
total mode I fracture energy of modified Single Edge Notch Bending (SENB)
samples was about 47% lower for radial-longitudinal systems and 60% lower for
tangential-longitudinal systems. Similarly, the average Janka hardness in the
tangential, radial, and transverse planes was 8.5%, 3.9%, and 9.4% lower in the
modified specimens, respectively. The results presented in this work show that
thermal modification can have a significant effect on the fracturing behavior
of western hemlock and its energy dissipation capabilities. For design, this
must be taken into serious consideration as these properties significantly
influence the damage tolerance of this wood in the presence of stress
concentrations such as e.g., those induced in bolted joints and cut outs.
Fracture energy and hardness are also strongly correlated to ballistic
performance. | Troy Nakagawa, Erik Poulin, Talbot Rueppel, Zhisong Chen, Juliet Swinea, Mark O'Brien, Guy Houser, Geoffrey Wood, Malloree Weinheimer, Pouria Bahmani, Peter Stynoski, Marco Salviato | 2023-03-31T18:14:21Z | http://arxiv.org/abs/2304.00052v1 | **Effects of Thermal Modification on the Flexure Properties, Fracture Energy, and Hardness of Western Hemlock**
###### Abstract
This study investigates the effect of thermal modification on the flexural properties, transverse fracture energy, and hardness of western hemlock, a material which is finding increasing applications in construction. Flexure tests on specimens featuring longitudinal and transverse grains showed that thermal modification at 167\({}^{\circ}\)C slightly improves the flexural modulus and strength and leads to less statistical variability compared to unmodified samples. On the other hand, the fracture and Janka hardness tests revealed a more pronounced brittleness of the thermally modified samples. In fact, the total mode I fracture energy of modified Single Edge Notch Bending (SENB) samples was about 47% lower for radial-longitudinal systems and 60% lower for tangential-longitudinal systems. Similarly, the average Janka hardness in the tangential, radial, and transverse planes was 8.5%, 3.9%, and 9.4% lower in the modified specimens, respectively.
The results presented in this work show that thermal modification can have a significant effect on the fracturing behavior of western hemlock and its energy dissipation capabilities. For design, this must be taken into serious consideration as these properties significantly influence the damage tolerance of this wood in the presence of stress concentrations such as e.g., those induced in bolted joints and cut outs. Fracture energy and hardness are also strongly correlated to ballistic performance.
Thermal Modification, Fracture Energy, Janka Hardness, Work of Fracture
## 1 Introduction
Wood is a widely used natural material in various construction and design applications due to its desirable mechanical properties and aesthetic appearance (Fridley 2002). However, wood is also prone to deterioration over time due to environmental factors such as moisture,
UV radiation, and biological attack (Reinprecht 2016). To enhance the durability of wood and increase its resistance to decay, thermal modification has emerged as a promising treatment method. This process involves subjecting wood to high temperatures, usually between 160 and 240\({}^{\circ}\)C, in the absence of oxygen, which causes changes in its chemical and physical structure (Hill, Alteng and Rautkari 2021, Militz and Alteng 2014, Sandberg and Kutnar 2016). These changes lead to improvements in properties such as dimensional stability, and microbial resistance while also reducing wood's moisture absorption (Hill, Alteng and Rautkari 2021). Several studies have investigated the effects of thermal modification on wood mechanical properties and durability showing that results can vary depending on the species of wood, temperature and duration of the treatment, as well as other factors (Bourgois and Guyonnet 1988, Hillis 1984, Kubojima, Okano and Ohta 2000, Lekounougou, et al. 2011).
Hakkou et al. reported that thermal modification of beech wood increased its dimensional stability, hardness, and decay resistance (Hakkou, et al. 2006). Similarly, a study by Yildiz et al. showed that thermal modification of Spruce wood resulted in improved dimensional stability and resistance to fungal decay (Yildiz, Gezer and Yildiz 2006).
Pleschberger et al. (Pleschberger, et al. 2014) studied the fracture behavior of ash modified at a temperature of 200, 210, and 220 \({}^{\circ}\)C in radial/longitudinal and tangential/longitudinal direction at 65% air relative humidity (RH). They reported an increase in brittleness of the material along with a reduction in fracture energy. Similar results were also reported on the same material by Majano-Majano et al. (Majano-Majano, Hughes and Fernandez-Cabo 2010) and on spruce wood by Murata et al. (Murata, Watanabe and Nakano 2013).
Standfest and Zimmer (G. and B. 2008), reported an increase in Brinell hardness of ash in the longitudinal direction while in the tangential and radial direction the hardness decreased. On the other hand, Govorcin et al. (S., T. and R. 2009) showed a reduction of ash hardness in the principal anatomical directions after heat treatment at a temperature of 200 \({}^{\circ}\)C. They also reported a decrease in Modulus of Rupture (MOR) and compression strength in the longitudinal direction.
Roszyk et al (Roszyk, et al. 2020) investigated the moisture-dependent strength anisotropy of thermally modified European ash in compression. They showed that thermal treatment kept the intrinsic anisotropy of wood mechanical properties. It decreased wood hygroscopicity, which resulted in improved strength and elasticity measured for wet wood when compared to untreated and treated samples.
Nhacila et al. (Nhacila, et al. 2020) studied the effects of thermal modification on the physical and mechanical properties of Mozambique _Brachystegia spiciformis_ and _Julbernardia globiflora_ wood. For B. _spiciformis_, they showed that the Modulus of Elasticity (MOE) decreased by 10.2%, the Modulus of Rupture (MOR) by 50.8%, compression strength parallel to the grain by 29.2% and Brinell hardness by 23.5%. Timber of _J. globiflora_ followed the same trend with an MOE decrease by 6.9%, an MOR decrease by 53.2% and a decrease in compression strength parallel to the grain by 21.9%.
Boonstra et al. (Boonstra, Van Ackerb and Tjeerdsmac 2007) investigated the effects of heat treatment on a number of softwoods including Radiata pine, Scots pine, and Norway spruce. They showed that, in general, heat treatment in these softwoods leads to a large decrease in the tensile strength parallel to the grain whereas the compressive strength parallel to the fibers increased. They also found a quite significant reduction in impact strength.
While several studies have been focused on the effect of heat treatment on a number of wood species, far less attention has been devoted to the study of western hemlock, especially when it comes to its fracture energy and hardness which are important indicators of its damage tolerance and ballistic performance. A preliminary study has been published recently by Nourian and Avramidis (Nourian and Avramidis, 2021) who investigated the effects of commercial thermal modification on western hemlock by performing evaluations of basic density, hygroscopicity, water absorption, anti-swelling efficiency, color change, Janka hardness, and dynamic modulus of elasticity. Their results revealed that basic density, hygroscopicity, and water absorption decreased at higher treatment temperatures, while dimensional stability considerably increased. On the other hand, the mechanical behavior was not significantly affected by the thermal treatment. However, the results did not cover the fracture behavior which is an important aspect for design with this type of wood. The goal of the present article is to take a step in filling this knowledge gap by providing an extensive investigation on the effect of thermal modification on the longitudinal and transverse flexural behavior, fracture energy, and hardness in western hemlock.
Thermal modification offers potentially tremendous benefits to building envelopes and building science. But with the low volume of available thermally modified lumber worldwide, producers have typically focused on value applications and not made serious attempts towards structural applications. In the course of this development at the Composite Recycling Technology Center (CRTC), but outside the detail of this study, CLT panels were manufactured from thermally modified coastal western hemlock (_Tsuga heterophylla_) and machined to a final dimension of 0.86 m width and 3.35 m in length, with a square-edge tongue and groove (t&g) feature along the long edges. The clearances for the side of the t&g interlock were net 1 mm, so equivalent to 0.5 mm between each face of the tongue and the groove with no taper applied. The CNC machined panels were stored indoors but in an uncontrolled environment with temperatures varying between 7\({}^{\circ}\)C and 30\({}^{\circ}\)C, and relative humidity estimated to vary between 50% and 80%. Storage time was 6 to 9 months for these panels. Coastal western hemlock typically has significant issues in dimensional stability caused by internal stresses, inconsistent drying and pockets of moisture (Song, 2019), and exhibits warping, cupping, and twisting on milling to final lumber. Application to cross-laminated timber (CLT) is limited due to these process-induced defects, and tight framing is quite difficult to achieve. One would expect that, after precise machining and an extended storage time with varying environmental conditions, 0.5 mm of clearance would be problematic, but the results were quite encouraging.
After this storage duration, a 22 sqm demonstration structure was assembled at the CRTC with 22 interlocking t&g panels. The dimensional stability of the thermally modified coastal western hemlock (CWH) was such that panels slid together without interference or any coercion required. This structure is shown in Figure 1. It's unlikely that this would have been possible with traditional CWH lumber, even kiln-dried, as hygrothermal dimensional changes in the overall panel as well as in the t&g clearance dimensions would have caused misalignment resulting in either binding or loose fit. This capability enables a tight and durable building envelope seal, as well as a consistent glue line thickness and higher-performance assembly.
The results of the present article not only provide useful design guidelines to account for the effect of thermal modification on the mechanical behavior of coastal western hemlock, but also represent a first step towards the construction of rich databases to calibrate and validate computational design tools.
This research seeks to act as a reference for integrating a thermally modified undervalued US based wood species into structural design standards and computational models. Advances in the structural relevance of thermally modified wood through the complete understanding of mechanical behavior would positively impact the use of the material in mass timber elements like CLT.
## 2 Materials and Methods
### Materials and Preparation
Sample material for this study was acquired from a small harvest of coastal Western Hemlock from the Makah reservation, which is located on the northwestern tip of the Olympic Peninsula in Washington state. The forest is designated as a site class 3, low
Figure 1: 22 sqm demonstration structure machined and assembled at the CRTC with thermally modified CWH CLT panels. Panels were manufactured at the Composite Materials & Engineering Center at WSU. Note that the tight 1mm of clearance is between the side faces of the t&g joint (circled) to promote panel alignment. A larger gap was intentionally designed between the end faces of the t&g joint to account for assembly tolerances.
elevation, with a mean annual air temperature of 8.89\({}^{\circ}\)C. The forest is characteristic of a Pacific Northwest climate with large amounts of rainfall in the winter months, and a mean annual precipitation of 203.2-304.8 cm. This timber was grown intentionally as part of a commercial timber harvest program, and the forest was managed for productive growth. A 55.88 cm diameter, 396 cm log was milled with a Weyerhaeuser mobile sawmill (Weyerhaeuser inc. 2023) into a variety of board sizes, to meet various testing needs.
For this study, two 3.175\(\times\)40.64\(\times\)396 cm boards were cut tangentially from the middle section of the log's radius. All longitudinal and transverse bending specimens were acquired from the outermost board, and all longitudinal and transverse Single Edge Notch Bend (SENB) specimens were acquired from the adjacent board. One 6.35\(\times\)18\(\times\)396 cm board was cut tangentially near the pith of the tree for samples to test Janka hardness. Figure 2 shows the location of the two boards designated for these testing specimens.
Each 396 cm board was cut into 3 equal 132 cm lengths. All 6 lengths were conditioned at approximately 21.11\({}^{\circ}\)C and 65% relative humidity for 2 weeks before an accelerated drying regime. Due to time constraints, the wood was dried in a Wisconsin oven for up to 4 days with temperatures not exceeding 48.89\({}^{\circ}\)C. Moisture content measurements were taken daily on a freshly planed surface with an Orion 930 moisture meter. Boards were removed from the drying regime once their moisture content dropped below 13%. Some cracking and shrinkage effects were observed, but effects were minimal, with the ability to produce straight crack free samples unaffected. One length of each bending, SENB, and Janka hardness board was set aside and allowed to remain at ambient conditions until the time of individual specimen preparation. These boards constitute the un-modified (UM) samples.
After drying, the other two lengths of bending and size effect boards were sent to Therna Wood Technologies in Poulson, Montana for a thermal modification treatment. The wood underwent a standard production cycle, with a 7½ hour total run time. Temperature was increased to 167.78\({}^{\circ}\)C over 3½ hours, dwelled at 167.78\({}^{\circ}\)C for 1½ hours, and then cooled to 70\({}^{\circ}\)C. The pressure profile is proprietary but designed to complement the thermal cycle and expedite the modification process. These boards constitute the thermally modified (TM) samples.
Figure 2: Test sample board locations.
### Specimen Preparation
#### 2.2.1 Longitudinal and Transverse Bending Specimens
Longitudinal and transverse bending specimens of UM and TM wood were prepared as per ASTM D143 (ASTM D143 2022) for secondary method specimens. Boards were planed on a Grizzly planer down to 25 mm thickness. Then, 25\(\times\)25\(\times\)410mm longitudinal specimens were ripped parallel to the grain on a Sawstop table saw, sampled from the edge farthest from the pith of the tree. They were run back through the planer at 25 mm to reduce dimensional variability from the table saw prior to being cut to length. Transverse specimens spanned the entire width of the board and were ripped perpendicular to the grain. All samples were cut to length on a Dewalt miter saw. Cross sections of grain structure for UM and TM longitudinal bending samples are shown in Figure 3. 25 UM and TM samples were produced for each grain orientation.
#### 2.2.2 Single Edge Notch Bending (SENB) specimens
SENB specimens were prepared for fracture testing in a similar fashion to bending specimens. The planer was used to achieve more precise dimensions whenever possible. The
Figure 3: Grain structure of (\(a\)) TM and (\(b\)) UM longitudinal bending specimens. Loading direction is down from the top of the image.
edge notch was prepared by marking out the length of the crack and using a small kerf, 0.254 mm thick blade hand saw run through a miter box to cut the crack. It is worth noting that while this implies that the initial crack has a finite width, extensive research has shown that this does not affect the fracture behavior provided that the crack tip radius is smaller than Irwin's characteristic length (Salviato, et al. 2016, Bazant, Le and Salviato 2021, Ko, Davey, et al. 2019, Ko, Yang, et al. 2019, Li, et al. 2021, Qiao and Salviato 2019, Kumagai, et al. 2020).
This will be the case in the following sections.
For the fracture tests, two configurations were tested. As shown in Figure 4, the first configuration, called Radial-Longitudinal (RL), sees the crack propagating parallel to the grain in plane LT. The second configuration, called Tangential-Longitudinal (TL), sees the crack propagating parallel to the grain in plane RL. Since the micro/mesostructure in front of the crack tip is different for the two configurations, the fracture energies are going to be different. Hence both configurations were tested. 8 samples were produced for each size and configuration.
Previous investigations on spruce woods (Murata, Watanabe and Nakano 2013) showed that the RL system typically features larger fracture energies and Fracture Process Zones (FPZs) compared to the TL system. When the size of the FPZ is not negligible compared to the structure size, size effects occur, which might lead to the measurement of a size-dependent fracture energy if not properly accounted for (Bazant, Le and Salviato 2021). To make sure that the FPZ was fully developed and a size-independent fracture energy could be measured, two sizes of the RL system were tested. Since this system was the most prone to size effects, once
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \(D\) [mm] & \(w\) [mm] & \(L\) [mm] & \(a\) [mm] \\ \hline \multirow{2}{*}{**RL system**} & Size 1 & 12 & 20 & 80 & 6 \\ & Size 2 & 24 & 20 & 160 & 12 \\
**TL system** & & 36 & 20 & 240 & 18 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dimensions of the SENB specimens for the two configurations investigated.
Figure 4: Configurations considered for the Single Edge Notch Bending (SENB) tests: (a) Radial-Longitudinal (RL) system and (b) Tangential-Longitudinal (TL) system. Dimensions are reported in Table 1.
it was verified that the measured fracture energy was not affected by size it was decided to test only one size for the TL system. A summary of all the specimen sizes is provided in Table 1 while Figure 4 shows a schematic representation of the configurations investigated in this work.
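For reference, the geometries in Table 1 and the corresponding ligament areas \((D-a)w\), which enter the fracture energy calculation of Section 3.2, can be collected as in the following sketch; the variable and key names are ours, not the authors', and are purely illustrative.

```python
# SENB geometries from Table 1 (all dimensions in mm); the names are
# illustrative only and not part of the original study.
SENB_GEOMETRIES_MM = {
    ("RL", "size 1"): {"D": 12.0, "w": 20.0, "L": 80.0, "a": 6.0},
    ("RL", "size 2"): {"D": 24.0, "w": 20.0, "L": 160.0, "a": 12.0},
    ("TL", "size 1"): {"D": 36.0, "w": 20.0, "L": 240.0, "a": 18.0},
}

def ligament_area_mm2(geom):
    """Un-cracked area ahead of the notch, (D - a) * w, used later in Eq. (1)."""
    return (geom["D"] - geom["a"]) * geom["w"]

for key, geom in SENB_GEOMETRIES_MM.items():
    print(key, ligament_area_mm2(geom), "mm^2")   # 120, 240 and 360 mm^2
```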
Dimension and weight measurements were taken, and samples were speckled for digital image correlation. Average moisture content for UM wood was 12.5% and average moisture content for TM wood was 9%.
#### 2.2.3 Janka hardness specimens
Twenty UM and twenty TM 50.8\(\times\)50.8\(\times\)27 mm wood blocks were cut from the same log for Janka hardness testing, as shown in Figure 5. The annual growth ring patterns were very similar between the two configurations and each specimen exhibited 7-8 rings having wide earlywood and thin latewood bands. The densities of the specimens were obtained at Equilibrium Moisture Content (EMC) prior to testing.
### Testing
#### 2.3.1 Flexure and fracture tests
The flexure and the Single Edge Notch Bending (SENB) tests were performed on a Test Resources 316 series UTM with a 22kN load cell with a sampling rate of 10Hz (the typical test setup is shown in Figure 6). A displacement rate of 5.08 mm/min was utilized during the tests. Samples were prepared for Digital Image Correlation (DIC) with a thin coat of spray paint primer and fine random distribution of black speckles using black spray paint. Images were taken with a Nikon D5600 DSLR camera with a Nikon DX VR lens and a sampling rate of 1Hz. Thanks to DIC it was possible to characterize the whole strain field and strain redistributions at damage locations. Through DIC it was also possible to estimate the compliance of the fixture and the machine and verify that the crosshead displacement measured by the load frame was sufficient for deflection measurements.
Figure 5: UM (left) and TM (right) 50.8\(\times\)50.8\(\times\)27mm Western hemlock blocks for Janka hardness testing.
#### 2.3.2 SEM Imaging
After fracture tests were completed, one RL and one TL specimen of the two configurations were taken for SEM imaging of the fracture surface. A Phantom ProX SEM was used with an electron beam voltage of 15 kV and a chamber pressure of 60 Pa. A razor blade was used to cut sections of the fractured surface. Images taken were 5-15 mm away from the crack tip and at least 1 mm away from the edges to make sure that no damage seen in the images is from the sample preparation.
#### 2.3.3 Janka hardness tests
A variation of a Janka hardness indenter was machined to an 11.3 mm diameter according to ASTM D143 (ASTM D143 2022). A drawing of the indenter is shown in Figure 7 and Figure 8 shows the fixture resting on a wood block. The Janka hardness indenter was fastened to a United SFM-300KN Electro-Mechanical Series Universal Testing Machine and data was
Figure 6: Test setup for SENB and longitudinal/transverse bending testing. SENB tests used 10mm rollers (shown) and longitudinal/transverse bending used 30mm rollers.
recorded via the United Datum5i software. The specimen was loaded at 6.35 mm/min to a maximum specimen penetration depth of 5.65 mm (half the diameter of the fixture). Six hardness values were obtained from each of the 40 specimens - two from the tangential face, two from the radial face, and two from the transverse face or cross section of the wood. Average values were then calculated for each face of each sample and ultimately each face for each of the two configurations.
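The averaging hierarchy described above (two indentations per face, three faces per specimen, twenty specimens per configuration) can be summarized with the following sketch; the function names and data layout are assumptions made purely for illustration.

```python
import statistics

def per_face_means(specimen_readings_kN):
    """One specimen: {face: [reading_1, reading_2]} -> {face: mean of the two}."""
    return {face: statistics.mean(vals) for face, vals in specimen_readings_kN.items()}

def configuration_means(specimens):
    """One configuration (UM or TM): list of per-specimen dicts -> per-face means."""
    faces = ("tangential", "radial", "transverse")
    per_specimen = [per_face_means(s) for s in specimens]
    return {f: statistics.mean(s[f] for s in per_specimen) for f in faces}
```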
## 3 Results and Discussion
### Longitudinal and Transverse Bending Tests
#### 3.1.1 Failure modes
Unmodified (UM) longitudinal bending samples failed in tension, although they exhibited several types of tensile failures. In accordance with the types of static bending failures described in ASTM D143 (ASTM D143 2022), UM longitudinal samples mostly failed in simple and splintering tension except for a few which failed in a cross-grain manner. Thermally Modified (TM) longitudinal bending samples failed in similar fashion according to the ASTM D143 failure types. It's notable that although the results of the bending test produced visually similar failures, the UM wood failures were typically slower and characterized by larger deformations compared to the TM wood. Part of the increased deformation in the UM samples was from crushing at the loading head. The TM wood failures were quicker, failing completely rather than slowly cracking and re-loading as seen with the UM wood. Additionally, there was less crushing in the TM wood at the loading head. In general, the TM wood behaved in a more brittle manner, while the UM wood withstood more deformation and cracking prior to ultimate failure. Figure 9 shows a typical fracture surface of the longitudinal flexure specimen.
UM and TM transverse bending samples exhibited brittle, fragile failures. All samples broke suddenly, with a complete cleavage through the cross section resulting in both halves shooting off the fixture. There was no visible or audible cracking prior to ultimate failure. The fracture surface was smooth, running along the grain.
#### 3.1.2 Flexural stiffness and strength
Modulus of rupture (MOR) values were calculated based on the maximum load measured in the tests. Bending strain and modulus of elasticity (MOE) were calculated based on cross head displacement of the loadframe during the elastic response of the wood prior to any crushing or other non-elastic behavior. MOE, MOR, and coefficient of variation (COV) values are presented in Table 2.
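The article does not spell out the beam formulas used; a minimal sketch assuming the standard three-point bending relations (simply supported span \(L\), width \(b\), depth \(d\), peak load \(P_{max}\), and elastic load-deflection slope \(dP/d\delta\)) would read as follows.

```python
def flexural_properties(p_max_N, slope_N_per_mm, span_mm, width_mm, depth_mm):
    """
    Sketch of MOR/MOE for a three-point bending test, assuming the standard
    beam formulas (not stated explicitly in the article):
        MOR = 3 * P_max * L / (2 * b * d**2)
        MOE = (dP/d_delta) * L**3 / (4 * b * d**3)
    With loads in N and lengths in mm, both results are in MPa.
    """
    mor = 3.0 * p_max_N * span_mm / (2.0 * width_mm * depth_mm**2)
    moe = slope_N_per_mm * span_mm**3 / (4.0 * width_mm * depth_mm**3)
    return mor, moe
```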
As shown in Figure 10a, TM wood yielded 7% higher MOE values
Figure 9: Typical fracture of a) UM and b) TM longitudinal flexure specimen, where the fracture path follows the grains of the specimen.
than UM wood for longitudinal specimens. This is consistent with the noticeably more brittle and abrupt failures exhibited by TM samples. Longitudinal MOR values were also slightly higher (+5%) in TM wood compared to UM wood (Fig. 10a). MOR and MOE values for UM and TM wood varied from those reported in the Wood Handbook (US Dept. of Agriculture 2010) for western Hemlock. Longitudinal MOE values were 24% and 32% lower for TM and UM wood respectively, but MOR values were 8% and 4% higher for TM and UM wood respectively. The variation from the properties reported in the wood handbook may be attributed to differences in grain structures, differences in density, faster tree growth, and other forest characteristics present within wood from the Makah reservation. COV values for UM and TM longitudinal bending results were consistent with a study by the FPL on mechanical properties of young growth western Hemlock collected from a similar region in Washington state (Langum, Yadama and Lowell 2009). This study reported results for flexural properties from different vertical positions within the tree and different radial sections of the tree. The trees sampled in their investigation were about 40 years younger than trees sampled for this study. They also reported lower MOR and MOE values than the wood handbook and attributed the difference to lack of maturity of the trees they used. Their reported COV values for similar sampling locations used in this study were between 9.8-15% for MOE and 10.8-12.5% for MOR. The COV values for longitudinal bending tests conducted in this study exhibited similar variability as seen in Table 2.
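As a quick numerical check of the longitudinal comparisons quoted above, the percentage changes can be recomputed directly from the mean values reported in Table 2:

```python
# Mean values from Table 2 (MPa).
moe_tm, moe_um = 9922.0, 9287.0
mor_tm, mor_um = 88.4, 84.0

print(f"MOE increase: {100 * (moe_tm - moe_um) / moe_um:.1f} %")   # ~6.8 %
print(f"MOR increase: {100 * (mor_tm - mor_um) / mor_um:.1f} %")   # ~5.2 %
```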
For transverse bending samples there was more variation within the results. Additionally, the strength of the wood was much lower than the capacity of the load cell which adds a degree of uncertainty. As shown in Figures 10b and 11b, transverse strength of the UM wood was higher than the TM wood while transverse stiffness was slightly lower. The MOR, MOE, and COV values are presented in Table 2. Transverse bending strength is not typically
\begin{table}
\begin{tabular}{l c c c c c} \hline Orientation and sample type & Number of tests & MOR [MPa] & CoV (\%) & MOE [MPa] & CoV (\%) \\ \hline TM Longitudinal bending & 25 & 88.4 & 11.0 & 9922 & 9.6 \\ UM Longitudinal bending & 15 & 84.0 & 12.2 & 9287 & 12.7 \\ TM Transverse bending & 13 & 3.6 & 15.5 & 236 & 14.9 \\ UM Transverse bending & 15 & 5.0 & 17.7 & 221 & 10.4 \\ Wood Handbook–West. Hem. & & 81 & & 12300 & \\ \hline \end{tabular}
\end{table}
Table 2: Longitudinal and transverse bending test results. Wood handbook results are taken from (US Dept. of Agriculture 2010).
Figure 10: Comparison between the MOE of thermally modified and unmodified specimens in (a) longitudinal and (b) transverse directions.
studied and there are limited published results on transverse, perpendicular to grain bending strength or stiffness and no reported values in the wood handbook. The most similar mechanical property that is commonly reported is perpendicular to grain tensile strength. This is a measure of the wood's resistance to forces acting across the grain that often cause splitting (US Dept. of Agriculture 2010). Wood handbook values for perpendicular to grain tensile strength of western hemlock is 2.3 MPa. MOR values are sometimes used as conservative or low estimate of tensile strength (US Dept. of Agriculture 2010), so a higher tested MOR value than reported tensile strength could suggest a conservatively higher than average MOR in UM and TM woods tested in this study.
### Fracture tests
Single Edge Notch Bending (SENB) specimens were tested to characterize the fracture energy of the material. The focus was on the fracturing behavior in the transverse direction which is deemed to be significantly affected by the thermal modification. As described in Figure 4, one set of specimens featured a Radial-Longitudinal (RL) configuration whereas a second set featured a Tangential-Longitudinal (TL) system. While both configurations are characterized by grains parallel to the plane of crack propagation, the different orientation of the growth ring leads to different micro/mesostructures in front of the crack tip. In turn, this leads to different damage mechanisms in the Fracture Process Zone (FPZ), leading to different energy dissipation. Since it is possible that thermal modification affects different microstructural features in different ways, it was important to characterize the fracture energy in both configurations. This also provides very useful information for the development of computational models since any anisotropic progressive damage model would require the characterization of the fracture energy in both configurations.
An interesting feature of most of the specimens tested in this work is that they exhibited stable crack propagation. This is due to the relatively high fracture energy combined with the relatively low stiffness of the samples in the transverse direction compared to the loadframe which prevented any snap-back instability (Cedolin and Z. 1991). The stable crack propagation enabled the use of the work of fracture to estimate both the initial and total fracture energy of the material (Bazant, Le and Salviato 2021). Following RILEM
Figure 11: Comparison between the MOR of thermally modified and unmodified specimens in (a) longitudinal and (b) transverse directions.
recommendation for mortar and concrete (RILEM, 1985), the fracture energy was estimated by dividing the work of fracture \(W_{\text{f}}\) by the ligament area. This leads to the following formula:
\[G_{F}=\frac{W_{f}}{(D-a)w}=\frac{mg\delta_{0}+\int_{0}^{\delta_{0}}P\,d\delta}{(D-a)w} \tag{1}\]
where, following Figure 4, \(D-a\) is the ligament length and \(w\) is the width of the specimen. Furthermore, \(m\) is the total mass of the sample, \(\delta\) is the vertical displacement at the loading pin (with \(\delta_{0}\) its final value), and \(g\) is the acceleration of gravity.
Eq (1) is only an approximation. The main source of error is that, near the notch tip (and also near the end of the ligament), the energy to create the crack is not the same as it is in stationary propagation (stationarity is required for the fracture energy \(G_{\text{F}}\) to be equal to the \(J\)-integral). However, using the work of fracture allows one to overcome a number of difficulties related to the estimation of the fracture energy from the rate of the elastic potential or using the \(J\)-integral (Bazant, Le and Salviato, 2021). In fact, in wood samples it is very difficult to guarantee a consistent grain orientation throughout the sample, especially if the specimen is large to allow for the FPZ to develop fully. One can have a relatively good control in the region surrounding the crack but not on the whole sample. So, assuming a uniform orientation would lead to significant errors in the calculation of the energy release rate. To properly calculate the fracture energy using such approaches, a real digital twin of the sample would have to be simulated. This would make the analysis complicated and time consuming. The use of the work of fracture, on the other hand, only requires the measurement of the vertical load and vertical displacement.
Thanks to the stable crack propagation, it was possible to estimate both the initial fracture energy, \(G_{\text{f}}\), and the total fracture energy, \(G_{\text{F}}\), of the material. The initial fracture energy drives the failure in small samples where the FPZ cannot fully develop. It represents the area under the initial tangent of the traction-separation law of the material (Fig. 12). The total fracture energy represents the total energy dissipation per crack area down to complete failure. It drives the behavior of very large notched structures for which the FPZ can develop fully prior to failure (Bazant, Le and Salviato, 2021). Figure 12 shows a typical traction-separation law
Figure 12: Typical traction-separation of a quasibrittle material (Bazant, Le and Salviato, 2021) showing the initial and total fracture energies. The initial fracture energy drives the failure of relatively small structure which inhibit the full development of the Fracture Process Zone (FPZ) prior to failure. The total fracture energy drives the failure of sufficiently large structures for which the FPZ is fully developed at incipient failure.
for quasibrittle materials exhibiting an initial and total fracture energy. \(G_{\mathrm{F}}\) can be accurately determined only when the load softens down to zero. This is usually difficult to achieve because either the test would have to run for a very long time or the measured reaction force would become indiscernible from the inherent noise in the testing machine. Since the measured softening curve did not extend to zero, the stress was extrapolated. The extrapolation was assumed to be an exponential decay function \(Y=Ae^{-Bx}\) which is integrable up to \(\infty\). Here \(Y\)= force, \(x\)= vertical displacement of the beam, and \(A\),\(B\)= constants to be calibrated by fitting the lower portion of the softening load-displacement diagram. Figure 12 shows an example of the fitting obtained by this equation and shows the calculation of the initial and total fracture energies.
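A sketch of how Eq. (1) and the exponential extrapolation can be evaluated numerically is given below. The portion of the softening branch used for the exponential fit (here, the last 20% of the recorded points) is our assumption and is not specified in the article.

```python
import numpy as np
from scipy.optimize import curve_fit

def total_fracture_energy(delta_mm, load_N, mass_kg, D_mm, a_mm, w_mm,
                          tail_fraction=0.2):
    """Work-of-fracture estimate of G_F following Eq. (1), in N/mm."""
    g = 9.81  # m/s^2

    # Work measured up to the last recorded displacement delta_0.
    measured_work = np.trapz(load_N, delta_mm)                     # N*mm
    delta_0 = delta_mm[-1]

    # Fit Y = A*exp(-B*x) to the lower part of the softening branch and
    # integrate the extrapolated tail to infinity: (A/B)*exp(-B*delta_0).
    n_tail = max(int(tail_fraction * len(delta_mm)), 3)
    x_tail, y_tail = delta_mm[-n_tail:], load_N[-n_tail:]
    (A, B), _ = curve_fit(lambda x, A, B: A * np.exp(-B * x),
                          x_tail, y_tail, p0=(float(y_tail[0]), 0.1))
    tail_work = (A / B) * np.exp(-B * delta_0)                     # N*mm

    # Self-weight contribution m*g*delta_0 (force in N, displacement in mm).
    self_weight_work = mass_kg * g * delta_0                       # N*mm

    W_f = measured_work + tail_work + self_weight_work
    return W_f / ((D_mm - a_mm) * w_mm)
```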
#### 3.2.1 Unmodified wood
The fracture tests of the unmodified western hemlock exhibited similar crack propagation for both RL and TL systems. In both cases, the plane of crack propagation was always orthogonal to the neutral axis of the sample, notwithstanding the difference in micro/mesostructures. In general, the fracture behavior was always relatively brittle with relatively clean fracture surfaces. Figures 13 and 14 show the max principal strain right before crack propagation, the grain pattern at the bottom surface of the specimen, and the SEM image of the fracture surface of the RL and TL specimens, respectively. The strain fields of both cases look similar, with strain concentrations at the crack tip and at the grains. However, the fracture surfaces look different. The RL specimen exhibits damage in only one layer of the vertical tracheid cells. However, the TL specimen exhibits more damage, where about 5-10 layers of the tracheid cells fractured. Additionally, the ray cells are now in plane with the crack propagation, whereas the ray cells of the RL specimen are perpendicular. This results in delamination and fracturing of the ray cells, while the RL specimen only exhibits fracturing of the ray cells.
Figure 13: a) Max principal strain of the RL specimen right before crack propagation. b) Grain pattern of the specimen tested. c) SEM image of the fracture surface.
For the Radial-Longitudinal (RL) system, two specimen sizes were tested. For the small size, all the fracture tests exhibited stable crack propagation with load softening visible in the load-displacement curves. Figure 15 shows the curves measured for this configuration. The plots also show the load deflection curves after the elastic displacement is subtracted. As can be noticed, the decay function described in the foregoing section fits the shifted curves well and allows the extrapolation of the total fracture energy. For all the tests, it was possible to get a good estimation of both \(G_{\mathrm{f}}\) and \(G_{\mathrm{F}}\).
For size 2, the unmodified samples exhibited a more brittle behavior, characterized by snap-back instability for all the specimens tested (Fig. 16). In this case, the dynamic crack propagation did not allow the calculation of the fracture energy. However, this was not a problem since the results on the modified wood showed that the fracture energy calculated from Size 1 and 2 are very consistent. It is fair to assume that a similar conclusion could be made for unmodified specimens as well. This means that already for Size 1 the FPZ could develop fairly well and size effects on the fracture energy for the experiments presented in this work can be considered minimal.
Figure 14: a) Max principal strain of the TL specimen right before crack propagation. b) Grain pattern of the specimen tested. c) SEM image of the fracture surface.
Figure 16: Load as a function of the vertical displacement for unmodified size 2 samples of the RL system. All the tests exhibited snap-back instability with dynamic crack propagation which hindered the calculation of the work of fracture.
Figure 15: Load as a function of the vertical displacement for unmodified size 1 samples of the RL system. The plots also show the shifted load-displacement curves with fitted extrapolation function used to calculate the initial fracture energy and the total fracture energy.
The Tangential-Longitudinal (TL) system exhibited stable crack propagation, notwithstanding the larger size compared to both the RL systems. Fewer valid tests were available compared to the RL systems due to the presence of knots. In two samples the knots were exactly located at the crack tip effectively blunting the crack and reducing the stress intensity. Those two tests exhibited significantly larger load at failure and were discarded. A third sample featured a knot away from the crack. However, the knot was sufficiently weak that a crack initiated from it before the initial notch could start propagating. Such test was discarded as well. In any case, as Figure 17 clearly shows, the fracture tests were very consistent and allowed a full estimate of both the initial and total fracture energy for this system.
A summary of the fracture energies measured for the two unmodified systems is presented in Table 3. As can be noted, both the initial and total fracture energies of the RL system are significantly larger than the corresponding values for the TL system. This result is in agreement with similar tests on Spruce Wood (Murata, Watanabe and Nakano 2013). It is worth noting that the results are extremely consistent. The maximum CoV on the fracture energy is only 10.1%.
#### 3.2.2 Thermally modified wood
The fracture tests of the thermally modified western hemlock exhibited similar behavior to the unmodified wood. In both RL and TL systems, the plane of crack propagation was always orthogonal to the neutral axis of the sample, notwithstanding the difference in micro/mesostructures. In general, the fracture behavior was always relatively brittle with relatively clean fracture surfaces. Figures 18 and 19 show the max principal strain right
Figure 17: Load as a function of the vertical displacement for the unmodified TL system. The plots also show the shifted load-displacement curves with fitted extrapolation function used to calculate the initial fracture energy and the total fracture energy.
\begin{table}
\begin{tabular}{l c c c c} \hline & \multicolumn{2}{c}{RL system} & \multicolumn{2}{c}{TL system} \\ \cline{2-5} & Mean & CoV (\%) & Mean & CoV (\%) \\ \hline \(G_{\text{f}}\) (N/mm) & 0.179 & 7.9 & 0.143 & 10.1 \\ \(G_{\text{F}}\) (N/mm) & 0.257 & 6.4 & 0.188 & 6.4 \\ \hline \end{tabular}
\end{table}
Table 3: Initial and total fracture energies for both the tangential-longitudinal and radial-longitudinal configurations of unmodified wood.
before crack propagation, the grain pattern at the bottom surface of the specimen, and the SEM image of the fracture surface of the RL and TL specimen respectively. The strain fields of both cases look similar, where there are strain concentrations at the crack tip and at the grains. However, the fracture surface looks different. The RL specimen exhibits damage in only one layer of the vertical tracheid cells. However, the TL specimen exhibits more damage, where about 2-3 layers of the tracheid cells fractured, which is much less than the UM specimen. Additionally, the ray cells now are in plane with the crack propagation, whereas the ray cells of the RL specimen are perpendicular. This results in delamination and fracturing of the ray cells, while the RL specimen only exhibits fracturing of the ray cells.
Figure 19: a) Max principal strain of the TL specimen right before crack propagation. b) Grain pattern of the specimen tested. c) SEM image of the fracture surface.
Figure 18: a) Max principal strain of the RL specimen right before crack propagation. b) Grain pattern of the specimen tested. c) SEM image of the fracture surface.
For the Radial-Longitudinal (RL) system, two specimen sizes were tested. For the small size, all the fracture tests exhibited stable crack propagation with load softening visible in the
Figure 21: Load as a function of the vertical displacement for thermally modified size 2 samples of the RL system. The plots also show the shifted load-displacement curves with fitted extrapolation function used to calculate the initial fracture energy and the total fracture energy.
Figure 20: Load as a function of the vertical displacement for thermally modified size 1 samples of the RL system. The plots also show the shifted load-displacement curves with fitted extrapolation function used to calculate the initial fracture energy and the total fracture energy.
load-displacement curves. All the tests but one allowed the characterization of both the total and initial fracture energies (Figure 20). For the larger size, crack propagation was rather stable although only two tests allowed the full characterization of the total fracture energy (Fig. 21).
The Tangential-Longitudinal (TL) system displayed a similar behavior to the unmodified wood. For all the tests, crack propagation was stable. As can be noted from Fig. 22 the load-displacement curves were very consistent and allowed a fairly accurate calculation of the fracture energies.
Table 4 presents a summary of the fracture energies of the thermally modified wood. As can be noted, even in this case the RL system features a larger fracture energy compared to the TL system, consistent with the unmodified wood. However, the values of the fracture energies are significantly lower than for the unmodified wood, as will be discussed next.
\begin{table}
\begin{tabular}{l c c c c} \hline & \multicolumn{2}{c}{RL system} & \multicolumn{2}{c}{TL system} \\ \cline{2-5} & Mean & CoV (\%) & Mean & CoV (\%) \\ \hline \(G_{\text{f}}\) (N/mm) & 0.106 & 11.8 & 0.069 & 27.3 \\ \(G_{\text{F}}\) (N/mm) & 0.136 & 9.1 & 0.075 & 18.4 \\ \hline \end{tabular}
\end{table}
Table 4: Initial and total fracture energies for both the tangential-longitudinal and radial-longitudinal configurations of thermally modified wood.
Figure 22: Load as a function of the vertical displacement for the thermally modified TL system. The plots also show the shifted load-displacement curves with fitted extrapolation function used to calculate the initial fracture energy and the total fracture energy.
#### 3.2.3 Comparison between thermally modified and unmodified wood
The accurate calculations of the initial and total fracture energies presented in the foregoing sections give an opportunity to evaluate the effect of thermal modification on these important properties. As shown by Figure 23, thermal modification leads to quite a significant embrittlement in the RL system. The initial fracture energy is reduced by 41% while the total fracture energy is reduced by 47%.
Similar conclusions can be drawn for the TL system. As Figure 24 shows, in this case the reduction is even more dramatic: 52% for the initial fracture energy and 60% for the total fracture energy.
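These reductions can be reproduced directly from the mean fracture energies reported in Tables 3 and 4:

```python
# Mean fracture energies (N/mm) from Table 3 (UM) and Table 4 (TM).
energies = {
    "RL": {"UM": (0.179, 0.257), "TM": (0.106, 0.136)},   # (G_f, G_F)
    "TL": {"UM": (0.143, 0.188), "TM": (0.069, 0.075)},
}
for system, vals in energies.items():
    for label, um, tm in zip(("G_f", "G_F"), vals["UM"], vals["TM"]):
        print(f"{system} {label}: {100 * (um - tm) / um:.0f} % reduction")
# Output: RL 41 % and 47 %, TL 52 % and 60 %, matching the values quoted above.
```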
Figure 24: Comparison between the initial and total fracture energy of modified and unmodified wood for the Tangential-Longitudinal (TL) configuration
Figure 23: Comparison between the initial and total fracture energy of modified and unmodified wood for the Radial-Longitudinal (RL) configuration
There are a few factors that could be causing the change in fracture energy: 1) change in moisture content, 2) change in degree of polymerization, and 3) degradation of the hemicellulose in the wood. Previous work on other wood species, on cellulose nanocrystal films, and on other polymer composites has shown that the moisture content has a significant effect on the mechanical properties of the material (Hou, et al. 2020, Arnold 2010, Wang and Wang 1999, Nakagawa, et al. 2022). These works showed that the water molecules can act as a plasticizer, allowing the polymer chains to slide more easily and increasing energy absorption. It has been seen in nanocellulose films that the degree of polymerization of the cellulose has an impact on the strength of the material (Fang, et al. 2020). However, with wood, the change in the degree of polymerization of the cellulose is species dependent and may be limited at the low-temperature thermal modification used in this work (Kubovsky, Kacikova and Kacik 2020). Finally, the degradation of wood due to thermal and chemical modification has been studied previously, and it has been shown that the degree of degradation is species and process dependent (Sikora, et al. 2018, LeVan, Ross and Winandy 1990). However, when degradation is significant, it is typically seen in the hemicellulose. This degradation would cause a poor transfer of stress between the cells and a decrease in fracture energy.
### Janka hardness tests
Figure 25 and Table 5 contain the results of the Janka hardness tests for each configuration and wood face. The average hardness values are higher in the UM specimens for all wood faces, and thermal modification ultimately caused a statistically significant decrease in surface hardness of the tangential and transverse wood planes. These findings are consistent with previous work, which found that surface hardness generally decreased with thermal modification, especially at increased temperatures (Nourian and Avramidis 2021, S. Nourian 2018). Although the TM specimens in this study underwent a relatively mild TM temperature, the difference is still apparent. Clearly, the effect and degree of thermal modification on Western hemlock hardness must be considered when designing with this wood species.
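For reference, the relative hardness differences implied by the mean values in Table 5 (consistent, up to rounding, with the reductions quoted in the abstract) are:

```python
# Mean Janka hardness (kN) from Table 5: face -> (UM, TM).
janka_kN = {
    "tangential": (2.43, 2.22),
    "radial":     (2.01, 1.93),
    "transverse": (3.62, 3.28),
}
for face, (um, tm) in janka_kN.items():
    print(f"{face}: {100 * (um - tm) / um:.1f} % lower after modification")
# Roughly 8.6 %, 4.0 % and 9.4 % for the tangential, radial and transverse faces.
```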
## 4 Conclusions
This article investigated the effect of thermal modification on the longitudinal and transverse flexure properties, fracture energy, and hardness of Western Hemlock. Based on the results of this work, the following conclusions can be elaborated:
1. Thermal modification has significant effects on the mechanical properties of Western Hemlock, especially in the direction transverse to the grains;
2. In the longitudinal direction, thermal modification led to slight increases in Modulus of Elasticity (MOE) and Modulus of Rupture (MOR). In fact, the MOE increased by 6.8% while the MOR increased by 5.2%;
3. In the transverse direction, thermal modification led to slight increases in Modulus of Elasticity (MOE), which increased by 6.7%. However, the transverse MOR saw a significant drop by 28%;
4. The fracture energy of both Radial-Longitudinal (RL) and Tangential-Longitudinal (TL) systems showed a dramatic reduction due to thermal modification. In the RL system, the initial and total fracture energies decreased by 41% and 47% respectively. For the TL system, the initial and total fracture energies decreased by 52% and 60%;
5. The Janka hardness values were reduced on average due to the thermal modification treatment and this reduction was statistically significant for the tangential and transverse wood planes.
The foregoing results are very important for the design of western hemlock structures. The reduction of the fracture energy and Janka hardness induced by thermal modification must be taken into serious consideration since they correlate strongly to the capacity of the material to be damage tolerant and to dissipate energy upon crushing. Ballistic performance is known to be strongly correlated to these properties as well.
## 5 Acknowledgments
This material is based upon work supported by the US Army Engineer Research and Development Center (ERDC) under contract W9132T22C0008. The tests described and the resulting data presented herein, unless otherwise noted, are supported under PE 0603119A, Project BO3 'Military Engineering Technology Demonstration (CA)', Task 'Program Increase - Cross-Laminated Timber and Recycled Carbon Fiber Materials'.
\begin{table}
\begin{tabular}{l c c c c} \hline & \multicolumn{2}{c}{UM system} & \multicolumn{2}{c}{TM system} \\ \cline{2-5} & Mean [kN] & CoV (\%) & Mean [kN] & CoV (\%) \\ \hline Tangential & 2.43 & 14.95 & 2.22 & 13.10 \\ Radial & 2.01 & 17.55 & 1.93 & 14.50 \\ Transverse & 3.62 & 9.79 & 3.28 & 13.44 \\ \hline \end{tabular}
\end{table}
Table 5: Janka hardness of UM and TM Western Hemlock on the tangential, radial, and transverse faces. |
2306.17343 | On the nonlinear Schrödinger-Poisson systems with positron-electron
interaction | We study the Schr\"{o}dinger-Poisson type system: \begin{equation*} \left\{
\begin{array}{ll} -\Delta u+\lambda u+\left( \mu _{11}\phi _{u}-\mu _{12}\phi
_{v}\right) u=% \frac{1}{2\pi }\int_{0}^{2\pi }\left\vert u+e^{i\theta
}v\right\vert ^{p-1}\left( u+e^{i\theta }v\right) d\theta & \text{ in
}\mathbb{R}^{3}, \\ -\Delta v+\lambda v+\left( \mu _{22}\phi _{v}-\mu _{12}\phi
_{u}\right) v=% \frac{1}{2\pi }\int_{0}^{2\pi }\left\vert v+e^{i\theta
}u\right\vert ^{p-1}\left( v+e^{i\theta }u\right) d\theta & \text{ in
}\mathbb{R}^{3},% \end{array}% \right. \end{equation*}% where $1<p<3$ with
parameters $\lambda ,\mu_{ij}>0$. Novel approaches are employed to prove the
existence of a positive solution for $1<p<3$ including, particularly, the
finding of a ground state solution for $2\leq p<3$ using established linear
algebra techniques and demonstrating the existence of two distinct positive
solutions for $1<p<2.$ The analysis here, by employing alternative techniques,
yields additional and improved results to those obtained in the study of Jin
and Seok [Calc. Var. (2023) 62:72]. | Ching-yu Chen, Yueh-cheng Kuo, Tsung-fang Wu | 2023-06-30T00:09:15Z | http://arxiv.org/abs/2306.17343v1 | # On the nonlinear Schrodinger-Poisson system with positron-electron interaction
###### Abstract
We study the Schrodinger-Poisson type system:
\[\left\{\begin{array}{ll}-\Delta u+\lambda u+\left(\mu_{11}\phi_{u}-\mu_{12} \phi_{v}\right)u=\frac{1}{2\pi}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p-1 }\left(u+e^{i\theta}v\right)d\theta&\mbox{ in }\mathbb{R}^{3},\\ -\Delta v+\lambda v+\left(\mu_{22}\phi_{v}-\mu_{12}\phi_{u}\right)v=\frac{1}{ 2\pi}\int_{0}^{2\pi}\left|v+e^{i\theta}u\right|^{p-1}\left(v+e^{i\theta}u \right)d\theta&\mbox{ in }\mathbb{R}^{3},\end{array}\right.\]
where \(1<p<3\) with parameters \(\lambda,\mu_{ij}>0\). Novel approaches are employed to prove the existence of a positive solution for \(1<p<3\) including, particularly, the finding of a ground state solution for \(2\leq p<3\) using established linear algebra techniques and demonstrating the existence of two distinct positive solutions for \(1<p<2.\) The analysis here, by employing alternative techniques, yields additional and improved results to those obtained in the study of Jin and Seok [Calc. Var. (2023) 62:72].
**Keywords:** variational method.
**2010 Mathematics Subject Classification:** 35J20, 35J61, 35A01, 35B40.
## 1 Introduction
In this paper, we study the Schrodinger-Poisson type systems:
\[\left\{\begin{array}{ll}-\Delta u+\lambda u+\left(\mu_{11}\phi_{u}-\mu_{12}\phi_{v}\right)u=\frac{1}{2\pi}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p-1}\left(u+e^{i\theta}v\right)d\theta&\mbox{ in }\mathbb{R}^{3},\\ -\Delta v+\lambda v+\left(\mu_{22}\phi_{v}-\mu_{12}\phi_{u}\right)v=\frac{1}{2\pi}\int_{0}^{2\pi}\left|v+e^{i\theta}u\right|^{p-1}\left(v+e^{i\theta}u\right)d\theta&\mbox{ in }\mathbb{R}^{3},\end{array}\right.\tag{E}\]
where \(u,v:\mathbb{R}^{3}\rightarrow\mathbb{R},\)\(\lambda,\mu_{ij}>0\) for \(i,j=1,2\) and \(1<p<3\) with the function \(\phi_{w}\in D^{1,2}(\mathbb{R}^{3})\) given by
\[\phi_{w}(x)=\int_{\mathbb{R}^{3}}\frac{w^{2}(y)}{|x-y|}dy. \tag{1.1}\]
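For context (this identity is classical and not stated explicitly above), the function \(\phi_{w}\) defined in (1.1) is the Newtonian potential of \(w^{2}\); since \(\frac{1}{4\pi|x|}\) is the fundamental solution of \(-\Delta\) in \(\mathbb{R}^{3}\), it satisfies

\[-\Delta\phi_{w}=4\pi w^{2}\ \mbox{ in }\mathbb{R}^{3},\qquad\phi_{w}(x)\to 0\ \mbox{ as }|x|\to\infty,\]

which is why \((E)\) is referred to as a Schrodinger-Poisson type system.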
This system is variational and its solutions are critical points of the corresponding energy functional \(J:H\rightarrow\mathbb{R}\) defined as
\[J(u,v) = \frac{1}{2}\int_{\mathbb{R}^{3}}|\nabla u|^{2}+\lambda u^{2}dx+ \frac{1}{2}\int_{\mathbb{R}^{3}}|\nabla v|^{2}+\lambda v^{2}dx\] \[+\frac{1}{4}\int_{\mathbb{R}^{3}}\mu_{11}\phi_{u}u^{2}+\mu_{22} \phi_{v}v^{2}-2\mu_{12}\phi_{v}u^{2}dx\] \[-\frac{1}{2\pi\left(p+1\right)}\int_{\mathbb{R}^{3}}\int_{0}^{2 \pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx,\]
where \(H:=H^{1}\left(\mathbb{R}^{3}\right)\times H^{1}\left(\mathbb{R}^{3}\right).\) Note that \((u,v)\in H\) is a solution of system \((E)\) if and only if \((u,v)\) is a critical point of \(J.\) The couple \((u,v)\) is called a ground state solution of system \((E),\) if \((u,v)\) is a solution of the system and a minimum among all nontrivial solutions.
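As a sketch of why critical points of \(J\) solve \((E)\) (a routine computation, included here only for the reader's convenience), the Gateaux derivative of the nonlocal part of \(J\) can be computed using the symmetry of the kernel \(|x-y|^{-1}\):

\[\frac{d}{dt}\Big|_{t=0}\int_{\mathbb{R}^{3}}\phi_{u+t\varphi}\left(u+t\varphi\right)^{2}dx=4\int_{\mathbb{R}^{3}}\phi_{u}u\varphi dx,\qquad\frac{d}{dt}\Big|_{t=0}\int_{\mathbb{R}^{3}}\phi_{v}\left(u+t\varphi\right)^{2}dx=2\int_{\mathbb{R}^{3}}\phi_{v}u\varphi dx,\]

and similarly \(\frac{d}{dt}\big|_{t=0}\int_{\mathbb{R}^{3}}\phi_{v+t\psi}u^{2}dx=2\int_{\mathbb{R}^{3}}\phi_{u}v\psi dx\). Together with the factor \(\frac{1}{4}\) in \(J\), these terms reproduce exactly the coefficients \(\mu_{11}\phi_{u}u-\mu_{12}\phi_{v}u\) and \(\mu_{22}\phi_{v}v-\mu_{12}\phi_{u}v\) appearing in system \((E)\).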
The system \((E)\) stems from the study of the nonlinear Maxwell-Klein-Gordon equation in the limit of infinite light speed where the decomposition of the wave functions results in the following system
\[\left\{\begin{array}{l}2i\,\dot{v}_{+}-\Delta v_{+}+\left(\mu_{11}\phi_{v_{+ }}-\mu_{12}\phi_{v_{-}}\right)v_{+}=\frac{1}{2\pi}\int_{0}^{2\pi}g(v_{+}+e^{i \theta}\bar{v}_{-})d\theta,\\ 2i\,\dot{v}_{-}-\Delta v_{-}+\left(\mu_{22}\phi_{v_{-}}-\mu_{12}\phi_{v_{+}} \right)v_{-}=\frac{1}{2\pi}\int_{0}^{2\pi}g(v_{-}+e^{i\theta}\bar{v}_{+})d \theta,\end{array}\right.\]
with \(v_{+}\) and \(v_{-}\) being the decomposed positron and electron part of the wave solutions respectively and \(g\) the potential. For more detailed description on the physical background and the derivation of the system, we refer the interested readers to the paper by Jin and Seok [14] and the references therein. Further assumptions of separable forms of solutions for \(v_{+}\) and \(v_{-},\) namely,
\[v_{+}=u(x)e^{i\frac{\lambda}{2}t}\quad\mbox{and}\quad v_{-}=v(x)e^{i\frac{ \lambda}{2}t},\]
give rise to system \((E)\) where a standard power function of \(g(u)=|u|^{p-1}u\) is assumed. The system has been carefully studied by Jin and Seok in [14] and, since the focus of our study is to extend and improve on their analysis, for the paper to be self-contained, a brief account of their results will be given below; but first we define some concepts of triviality and positiveness of a vector function \((u,v)\).
**Definition 1.1**: _A vector function \((u,v)\) is said to be \((i)\) nontrivial if either \(u\neq 0\) or \(v\neq 0;\)\((ii)\) semi-trivial if it is nontrivial but either \(u=0\) or \(v=0;\)\((iii)\) vectorial if both of \(u\) and \(v\) are not zero; \((iv)\) nonnegative if \(u\geq 0\) and \(v\geq 0;\)\((v)\) positive if \(u>0\) and \(v>0.\)_
Jin and Seok in [14] considered two cases of system \((E),\) namely, when the potential function \(g(u)\) is set to zero and when \(g(u)=|u|^{p-1}u,\) and put their results of the coupled system in comparison respectively with those of a Hartree equation,
\[-\Delta u+\lambda u+\mu\phi_{u}u=0, \tag{1.2}\]
when \(g=0\) and with those of a single nonlinear Schrodinger-Poisson equation, i.e.
\[-\Delta u+\lambda u+\mu\phi_{u}u=|u|^{p-1}u, \tag{1.3}\]
when \(g(u)=|u|^{p-1}u\) is the given power function. These equations appear in the study of semiconductor theory and have been investigated by many; see for example [1, 2, 4, 10, 16, 18, 19, 20, 21, 22, 25, 26]. The nonlinear term \(|u|^{p-1}u\) (or a more general form of \(g(u)\)) has been used conventionally in the Schrodinger-Poisson equation to model the interaction among particles (possibly nonradial). It is known that equation (1.2) admits a unique radial solution when \(\mu<0\) and, if \(\mu\geq 0,\) only the trivial solution is permitted. Jin and Seok [14] likened the role of \(\mu\) of equation (1.2) to that of \(\det(\mu_{ij})\) (i.e. \(\mu_{11}\mu_{22}-\mu_{12}^{2}\)) in system \((E)\) when the RHS vanishes, as they demonstrated that system \((E)\) similarly admits only the trivial solution when \(\det(\mu_{ij})\geq 0\) and a unique positive vector solution exists if \(\det(\mu_{ij})<0.\)
With the standard power functions assumed on the RHS of equation (1.3), the solution structure varies within different range of \(p\). In the sub-linear and super-critical range of \(p\in(0,1)\cup[5,\infty)\), there is no nontrivial solution for equation (1.3); similarly, Jin and Seok [14] proved that only the trivial solution is permitted for system \((E)\) but unlike in equation (1.3), additional conditions on \(\mu_{ij}\), i.e. \(\det(\mu_{ij})\geq 0\) is imposed.
In the range of \(p\in(1,5)\), as with the single Schrodinger-Poisson equation case, the solution structure of system \((E)\) changes at around \(p=2\). Jin and Seok [14] proved their results by considering the Mose index of the critical point of the energy functional and the existence of solutions were obtained subject to various additional conditions, for clarity, we itemise their results below.
* For \(1<p\leq 2\) and \(\lambda\geq 2\), \(\mu_{ij}>0\) for \(i,j=1,2\), system \((E)\) permits only the trivial solution when \(\mu_{11}>4\) and \(\left(\mu_{11}-4\right)\left(\mu_{22}-4\right)>\mu_{12}^{2}\).
* For \(1<p\leq 2\) and \(\lambda,\mu_{11},\mu_{22}>0\) fixed. A positive solution exists for system \((E)\) provided \(\mu_{12}>\mu_{0}\) for some constant \(\mu_{0}>0\) (i.e. \(\mu_{11}\mu_{22}-\mu_{12}^{2}<0\)).
* For \(1<p<2\) and \(\lambda,\mu_{ij}>0\) for \(i,j=1,2\). At least two positive radial solutions are permitted for system \((E)\) when \(\mu_{11}\mu_{22}-\mu_{12}^{2}>0\) and \(\mu_{ij}\) is sufficiently small, where \(i,j=1,2\).
* For \(\frac{\sqrt{73}-2}{3}\leq p<5\) and parameters \(\lambda,\mu_{ij}>0\) for \(i,j=1,2\), system \((E)\) has a positive ground state solution. If \(\mu_{11}=\mu_{22}\) is imposed, then the range for the existence of a positive ground state solution is extended to \(2<p<5\).
Result \((i)\) indicates that, for \(1<p\leq 2\), a positive solution must lie within the L-shaped region of \(0<\mu_{11}<4\) or \(0<\mu_{22}<4\) and from result \((ii)\) the existence of a positive solution is only permitted if \(\mu_{12}\) is sufficiently large, namely, \(\mu_{11}\mu_{22}-\mu_{12}^{2}<0.\) For the existence of a positive ground state solution in \((iv)\), the result falls short of \(2<p<5\) unless a much stronger constraint of \(\mu_{11}=\mu_{22}\) is imposed. Jin and Seok [14], however, suggested that the shorter range of \(\frac{\sqrt{73}-2}{3}\leq p<5\) instead of \(2<p<5\) and the constraint of \(\mu_{11}=\mu_{22}\) are likely to be technical issues. Questions naturally arise as to whether we can improve on these results by demonstrating the existence of positive (ground state) solutions under weaker conditions of the parameter values of \(\lambda\) and \(\mu_{ij}\)'s.
We begin by looking for solutions in the L-shaped region of \(\mu_{11}\) and \(\mu_{22}\) without the intention of imposing any constraint on \(\mu_{12}\). As a result, we are able to identify a threshold number \(\Lambda_{0}\) for the parameter values of \(\mu_{11}\) and \(\mu_{22}\) such that a positive solution is always permitted provided \(\min\{\mu_{11},\mu_{22}\}<\Lambda_{0}\), as stated in Theorem 1.2 below. In Theorem 1.3, the existence of a positive ground state solution is extended to \(2\leq p<\frac{\sqrt{73}-2}{3}\), including \(p=2\), where a lesser constraint on the parameter values is required and, similarly, in Theorem 1.5, no additional condition on \(\mu_{ij}\) is imposed for the existence of at least two positive solutions.
In order to achieve these results, new ideas and techniques have been explored. To begin, we define
\[\Lambda_{0}:=\frac{3\sqrt{3}\left(p-1\right)\pi\lambda^{\frac{3}{2}}}{32(3-p) A\left(p\right)}\left(\frac{3-p}{2S_{p+1}^{p+1}}\right)^{2/(p-1)}>0\ \mbox{for}\ 1<p<3,\]
where
\[A\left(p\right)=\left\{\begin{array}{ll}\left(\frac{3-p}{2}\right)^{1/(p-1)},&\mbox{if}\ 1<p\leq 2,\\ \frac{1}{2},&\mbox{if}\ 2<p<3.\end{array}\right.\]
For simplicity, we have assumed \(\mu_{11}\leq\mu_{22}\) to facilitate further conditions being imposed on the smaller of the two below. However, the role of \(\mu_{11}\) and \(\mu_{22}\) can be interchanged while the results remain unchanged. Then we have the following theorems.
**Theorem 1.2**: _Suppose that \(1<p<3\) and \(\lambda,\mu_{ij}>0\). If \(0<\mu_{11}<\Lambda_{0},\) then System \((E)\) has a positive solution \((u_{0},v_{0})\) with positive energy and_
\[\left\|(u_{0},v_{0})\right\|_{H}\to 0\text{ as }\mu_{22}\to\infty.\]
In the proof of Theorem 1.2, we will find critical points by introducing a novel constraint, applying the fibering method while adopting new analytical techniques.
The next theorem describes our results on the existence of a ground state solution.
**Theorem 1.3**: _Suppose that \(2\leq p<3\) and \(\lambda,\mu_{ij}>0\). Let \((u_{0},v_{0})\) be positive solution of System \((E)\) as in Theorem 1.2. Then we have \((i)\) if \(2<p<3,0<\mu_{ii}<\Lambda_{0}\) and \(\mu_{11}\mu_{22}-\mu_{12}^{2}\geq 0,\) then \((u_{0},v_{0})\) is a positive ground state solution of System \((E)\); \((ii)\) if \(p=2\) and \(0<\mu_{ii}<\Lambda_{0}\) then \((u_{0},v_{0})\) is a positive ground state solution of System \((E).\)_
Note that in setting out the argument for the proof of Theorem 1.3, the integral identities (the Nehari and Pohozaev identities) and the required conditions are conveniently written as a linear system of equations with nonlinear constraints. This formulation allows us to apply straightforward linear algebra techniques for the otherwise complicated analysis.
The next two theorems cover the case \(\mu_{11}\mu_{22}-\mu_{12}^{2}>0\). We will see that, unlike the case \(\mu_{11}\mu_{22}-\mu_{12}^{2}<0,\) the System \((E)\) does not admit any nontrivial solution when \(\det\left(\mu_{ij}\right)=\mu_{11}\mu_{22}-\mu_{12}^{2}\) satisfies suitable conditions.
**Theorem 1.4**: _Suppose that \(1<p\leq 2\) and \(\lambda,\mu_{ij}>0.\) If_
\[\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}>\left\{\begin{array}{ll}\frac{(p-1)^{2}}{4}\left[\frac{2^{p}(2-p)^{2-p}}{\lambda^{2-p}}\right]^{2/(p-1)},&\text{ if }1<p<2,\\ 4,&\text{ if }p=2,\end{array}\right.\]
_then System \((E)\) has only the trivial solution._
Finally, we give the results on the existence of multiple positive solutions.
**Theorem 1.5**: _Suppose that \(1<p<2\) and \(\lambda,\mu_{ij}>0.\) If \(0<\mu_{11}<\Lambda_{0}\) and \(\mu_{11}\mu_{22}-\mu_{12}^{2}>0,\) then System \((E)\) has at least two different positive solutions._
The key point in the proof of Theorem 1.5 lies in establishing Lions-type inequalities in the context of vector-valued functions (see [14, 18]). Using these inequalities in conjunction with Strauss's inequality in \(H_{r}:=H_{r}^{1}\left(\mathbb{R}^{3}\right)\times H_{r}^{1}\left(\mathbb{R}^{3}\right)\) and a comparison of energy levels, we are able to demonstrate the existence of two different positive solutions.
The rest of this paper is organized as follows. After introducing some preliminary results in Section 2, we prove Theorem 1.2 in Section 3. The proof of Theorem 1.3 is given in Section 4, the proof of Theorem 1.4 in Section 5, and the proof of Theorem 1.5 in Section 6.
## 2 Preliminaries
We first establish the following estimates on the nonlinearity.
**Lemma 2.1**: _Suppose that \(1<p<2\) and \(\lambda,d>0\) are given. Let \(f_{d}\left(s\right)=\lambda-2^{p}s^{p-1}+ds\) for \(s>0.\) Then there exist \(d_{\lambda}:=\left(p-1\right)\left[\frac{2^{p}\left(2-p\right)^{2-p}}{\lambda^{2-p}}\right]^{1/\left(p-1\right)}>0\) and \(s_{0}\left(d\right):=\left(\frac{2^{p}\left(p-1\right)}{d}\right)^{1/\left(2-p\right)}>0\) satisfying \(\left(i\right)\)\(f_{d}^{\prime}\left(s_{0}\left(d\right)\right)=0\) and \(f_{d}\left(s_{0}\left(d_{\lambda}\right)\right)=0;\)\(\left(ii\right)\) for each \(d<d_{\lambda}\) there exist \(\eta_{d},\xi_{d}>0\) such that \(\eta_{d}<s_{0}\left(d\right)<\xi_{d}\) and \(f_{d}\left(s\right)<0\) for all \(s\in\left(\eta_{d},\xi_{d}\right);\)\(\left(iii\right)\) for each \(d>d_{\lambda},\)\(f_{d}\left(s\right)>0\) for all \(s>0.\)_
**Proof.** By a straightforward calculation, we can show that the results hold. \(\square\)
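For the reader's convenience, here is a brief sketch of that calculation. Since \(f_{d}^{\prime}\left(s\right)=-2^{p}\left(p-1\right)s^{p-2}+d\) and \(1<p<2\), the function \(f_{d}\) is strictly convex on \((0,\infty)\) and attains its global minimum at \(s_{0}\left(d\right)\), with
\[f_{d}\left(s_{0}\left(d\right)\right)=\lambda-2^{p}\left(2-p\right)s_{0}\left(d\right)^{p-1}.\]
Since \(s_{0}\left(d\right)\) is decreasing in \(d\), this minimum value is increasing in \(d\) and vanishes exactly at \(d=d_{\lambda}\); statements \(\left(i\right)\)-\(\left(iii\right)\) follow at once.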
We need the following results.
**Lemma 2.2**: _Suppose that \(1<p<3\) and \(\mu_{ij}>0.\) Let \(g\left(s\right)=\mu_{11}s^{2}+\mu_{22}\left(1-s\right)^{2}-2\mu_{12}s\left(1-s\right)\) for \(s\in\left[0,1\right].\) Then there exists \(0<s_{\min}=\frac{\mu_{22}+\mu_{12}}{\mu_{11}+\mu_{22}+2\mu_{12}}<1\) such that \(\min_{s\in\left[0,1\right]}g\left(s\right)=g\left(s_{\min}\right)=\frac{\mu_{11 }\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}+2\mu_{12}}<\mu_{ii}\) for \(i=1,2.\)_
**Proof.** Since
\[g\left(s\right) = \mu_{11}s^{2}+\mu_{22}\left(1-s\right)^{2}-2\mu_{12}s\left(1-s\right)\] \[= \left(\mu_{11}+\mu_{22}+2\mu_{12}\right)s^{2}-2\left(\mu_{22}+ \mu_{12}\right)s+\mu_{22}\]
and
\[g^{\prime}\left(s\right)=2\left(\mu_{11}+\mu_{22}+2\mu_{12}\right)s-2\left( \mu_{22}+\mu_{12}\right),\]
we conclude that there exists
\[0<s_{\min}=\frac{\mu_{22}+\mu_{12}}{\mu_{11}+\mu_{22}+2\mu_{12}}<1\]
such that
\[\min_{s\in\left[0,1\right]}g\left(s\right) = g\left(s_{\min}\right)=g\left(\frac{\mu_{22}+\mu_{12}}{\mu_{11}+ \mu_{22}+2\mu_{12}}\right)=\frac{\left(\mu_{22}+\mu_{12}\right)^{2}}{\mu_{11 }+\mu_{22}+2\mu_{12}}-2\frac{\left(\mu_{22}+\mu_{12}\right)^{2}}{\mu_{11}+\mu _{22}+2\mu_{12}}+\mu_{22}\] \[= \mu_{22}-\frac{\left(\mu_{22}+\mu_{12}\right)^{2}}{\mu_{11}+\mu _{22}+2\mu_{12}}\] \[= \frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}+2\mu_{12} }<\mu_{ii}\text{ for }i=1,2.\]
This completes the proof. \(\square\)
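The last inequality in Lemma 2.2 can also be checked directly: for \(i=1\) (and symmetrically for \(i=2\)),
\[\mu_{11}\left(\mu_{11}+\mu_{22}+2\mu_{12}\right)-\left(\mu_{11}\mu_{22}-\mu_{12}^{2}\right)=\left(\mu_{11}+\mu_{12}\right)^{2}>0,\]
so that \(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}+2\mu_{12}}<\mu_{11}\) whenever \(\mu_{11},\mu_{12}>0\).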
The function \(\phi_{u}\) defined in (1.1) possesses certain properties [2, 18]; combining these with the Hardy-Littlewood-Sobolev and Gagliardo-Nirenberg inequalities, we have the following results.
**Lemma 2.3**: _For each \(u\in H^{1}\left(\mathbb{R}^{3}\right)\), the following two inequalities are true._
* \(\phi_{u}\geq 0;\)__
* \(\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx\leq\frac{16}{3\sqrt{3}\pi\lambda^{\frac{ 3}{2}}}\left(\int_{\mathbb{R}^{3}}\lambda u^{2}dx\right)^{\frac{3}{2}}\left( \int_{\mathbb{R}^{3}}|\nabla u|^{2}dx\right)^{\frac{1}{2}}\) for \(\lambda>0.\)
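In particular, writing \(\left\|u\right\|_{H^{1}}^{2}=\int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+\lambda u^{2}dx\), and using \(\int_{\mathbb{R}^{3}}\lambda u^{2}dx\leq\left\|u\right\|_{H^{1}}^{2}\) and \(\int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}dx\leq\left\|u\right\|_{H^{1}}^{2}\), item \((ii)\) yields the estimate used repeatedly below:
\[\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx\leq\frac{16}{3\sqrt{3}\pi\lambda^{\frac{3}{2}}}\left\|u\right\|_{H^{1}}^{4}.\]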
Next, we consider the following Schrodinger-Poisson equation:
\[-\Delta u+\lambda u+\mu\phi_{{}_{u}}u=\left|u\right|^{p-1}u\quad\text{in }\mathbb{R}^{3}.\] ( \[SP_{\mu}\] )
This equation is variational and its solutions are critical points of the corresponding energy functional \(I_{\mu}:H^{1}\left(\mathbb{R}^{3}\right)\rightarrow\mathbb{R}\) defined as
\[I_{\mu}(u)=\frac{1}{2}\int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+\lambda u ^{2}dx+\frac{\mu}{4}\int_{\mathbb{R}^{3}}\phi_{{}_{u}}u^{2}dx-\frac{1}{p+1} \int_{\mathbb{R}^{3}}\left|u\right|^{p+1}dx.\]
Note that \(u\in H^{1}\left(\mathbb{R}^{3}\right)\) is a solution of Equation (\(SP_{\mu}\)) if and only if \(u\) is a critical point of \(I_{\mu}.\) Next, we define the Nehari manifold of functional \(I_{\mu}\) as follows,
\[\mathbf{N}_{\mu}:=\{u\in H^{1}\left(\mathbb{R}^{3}\right)\backslash\{0\}: \left\langle I_{\mu}^{\prime}\left(u\right),u\right\rangle=0\}.\]
The Nehari manifold \(\mathbf{N}_{\mu}\) is closely linked to the behavior of the function of the form \(f_{u}:t\to I_{\mu}\left(tu\right)\) for \(t>0.\) Such maps are known as fibering maps and were introduced by Drabek-Pohozaev [12], and were further discussed by Brown-Zhang [8], Brown-Wu [6, 7] and many others. For \(u\in H^{1}\left(\mathbb{R}^{3}\right),\) we find
\[f_{u}\left(t\right) = \frac{t^{2}}{2}\int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+ \lambda u^{2}dx+\frac{t^{4}\mu}{4}\int_{\mathbb{R}^{3}}\phi_{{}_{u}}u^{2}dx- \frac{t^{p+1}}{p+1}\int_{\mathbb{R}^{3}}\left|u\right|^{p+1}dx,\] \[f_{u}^{\prime}\left(t\right) = t\int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+\lambda u^{2}dx+t ^{3}\mu\int_{\mathbb{R}^{3}}\phi_{{}_{u}}u^{2}dx-t^{p}\int_{\mathbb{R}^{3}} \left|u\right|^{p+1}dx,\] \[f_{u}^{\prime\prime}\left(t\right) = \int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+\lambda u^{2}dx+3t ^{2}\mu\int_{\mathbb{R}^{3}}\phi_{{}_{u}}u^{2}dx-pt^{p-1}\int_{\mathbb{R}^{3}} \left|u\right|^{p+1}dx.\]
As a direct consequence, we have
\[tf_{u}^{\prime}\left(t\right)=\int_{\mathbb{R}^{3}}\left|\nabla tu\right|^{2}+ \lambda\left(tu\right)^{2}dx+\mu\int_{\mathbb{R}^{3}}\phi_{{}_{tu}}\left(tu \right)^{2}dx-\int_{\mathbb{R}^{3}}\left|tu\right|^{p+1}dx,\]
and so, for \(u\in H^{1}\left(\mathbb{R}^{3}\right)\) and \(t>0,\)\(f_{u}^{\prime}\left(t\right)=0\) holds if and only if \(tu\in\mathbf{N}_{\mu}.\) In particular, \(f_{u}^{\prime}\left(1\right)=0\) if and only if \(u\in\mathbf{N}_{\mu}.\) It is then natural to split \(\mathbf{N}_{\mu}\) into three parts corresponding to the local minima, local maxima and points of inflection. Following [23], we define
\[\mathbf{N}_{\mu}^{+} = \{u\in\mathbf{N}_{\mu}:f_{u}^{\prime\prime}\left(1\right)>0\},\] \[\mathbf{N}_{\mu}^{0} = \{u\in\mathbf{N}_{\mu}:f_{u}^{\prime\prime}\left(1\right)=0\},\] \[\mathbf{N}_{\mu}^{-} = \{u\in\mathbf{N}_{\mu}:f_{u}^{\prime\prime}\left(1\right)<0\}.\]
Let
\[\beta_{\mu}:=\inf_{u\in\mathbf{N}_{\mu}}I_{\mu}\left(u\right).\]
Using the argument of Theorem 1.3 and Lemma 2.4 in [20] (or see Lemma 3.3 below), for each \(1<p<3\) and \(0<\mu<\Lambda_{0},\) Equation (\(SP_{\mu}\)) has a positive solution \(w_{\mu}\in\mathbf{N}_{\mu}^{-}\) such that
\[\left\|w_{\mu}\right\|_{H^{1}}<\left(\frac{3\sqrt{3}\left(p-1\right)\pi\lambda ^{\frac{3}{2}}}{16\mu(3-p)}\right)^{1/2}\]
\[d_{0}<\beta_{\mu}=I_{\mu}\left(w_{\mu}\right)<\frac{A\left(p\right)\left(p-1\right)} {2\left(p+1\right)}\left(\frac{2S_{p+1}^{p+1}}{3-p}\right)^{2/\left(p-1\right)} \text{ for some }d_{0}>0.\]
In particular, by [9], if \(2\leq p<3,\) then \(w_{\mu}\) is a positive ground state solution of Equation \(\left(SP_{\mu}\right).\) Moreover, by
\[I_{\mu_{1}}(u)\leq I_{\mu_{2}}(u)\text{ for all }\mu_{1}<\mu_{2}\]
and
\[\beta_{\mu}:=\inf_{u\in\mathbf{N}_{\mu}^{-}}I_{\mu}\left(u\right)=+\infty, \text{ if }\mathbf{N}_{\mu}^{-}=\emptyset,\]
we may assume that \(\beta_{\mu_{1}}<\beta_{\mu_{2}}\) for all \(\mu_{1}<\mu_{2}.\) Thus, by [18, 20], if \(1<p<2,\) we have the following result.
**Theorem 2.4**: _Suppose that \(1<p<2.\) Then for each \(0<\mu<\Lambda_{0},\) Equation \(\left(SP_{\mu}\right)\) has at least two positive radial solutions \(w_{r,\mu}^{\left(1\right)}\) and \(w_{r,\mu}^{\left(2\right)}\) with_
\[I_{\mu}\left(w_{r,\mu}^{\left(1\right)}\right)=\beta_{r,\mu}^{\left(1\right)} :=\inf_{u\in\mathbf{N}_{\mu}^{-}\cap H_{r}^{1}\left(\mathbb{R}^{3}\right)}I_{ \mu}\left(u\right)>0\]
_and_
\[I_{\mu}\left(w_{r,\mu}^{\left(2\right)}\right)=\beta_{r,\mu}^{\left(2\right)} :=\inf_{u\in\mathbf{N}_{\mu}^{+}\cap H_{r}^{1}\left(\mathbb{R}^{3}\right)}I_{ \mu}\left(u\right)=\inf_{u\in H_{r}^{1}\left(\mathbb{R}^{3}\right)}I_{\mu} \left(u\right)<0.\]
## 3 Positive vectorial solutions
First, we define the Nehari manifold of functional \(J\) as follows.
\[\mathbf{M}:=\{\left(u,v\right)\in H\backslash\{\left(0,0\right)\}:F\left(u,v \right):=\left\langle J^{\prime}\left(u,v\right),\left(u,v\right)\right\rangle =0\},\]
where
\[F\left(u,v\right) = \left\|\left(u,v\right)\right\|_{H}^{2}+\int_{\mathbb{R}^{3}} \mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u ^{2}dx\] \[-\frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i \theta}v\right|^{p+1}d\theta dx\] \[= \int_{\mathbb{R}^{3}}|\nabla u|^{2}+\lambda u^{2}dx+\frac{1}{2} \int_{\mathbb{R}^{3}}|\nabla v|^{2}+\lambda v^{2}dx\] \[+\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{} _{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx\] \[-\frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left(u^{2}+2 uv\cos\theta+v^{2}\right)^{\frac{p+1}{2}}d\theta dx.\]
Then \(\left(u,v\right)\in\mathbf{M}\) if and only if \(\left\langle J^{\prime}\left(u,v\right),\left(u,v\right)\right\rangle=0.\) It follows from the Sobolev and Young inequalities that
\[\left\|\left(u,v\right)\right\|_{H}^{2}-2\mu_{12}\widehat{C}_{0} \left\|\left(u,v\right)\right\|_{H}^{4} \leq \left\|\left(u,v\right)\right\|_{H}^{2}+\int_{\mathbb{R}^{3}}\mu_ {11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx\] \[= \frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i \theta}v\right|^{p+1}d\theta dx\] \[\leq C_{0}\left\|\left(u,v\right)\right\|_{H}^{p+1}\text{ for all }u\in\mathbf{M}.\]
Since \(1<p<3\), there exists \(C_{\mu_{12}}>0\) with \(C_{\mu_{12}}\to 0\) as \(\mu_{12}\rightarrow\infty\) such that
\[\left\|\left(u,v\right)\right\|_{H}\geq C_{\mu_{12}}\text{ for all }u\in\mathbf{M}. \tag{3.1}\]
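For completeness, (3.1) can be justified as follows: dividing the preceding inequality by \(\left\|\left(u,v\right)\right\|_{H}^{2}\neq 0\) gives
\[1\leq 2\mu_{12}\widehat{C}_{0}\left\|\left(u,v\right)\right\|_{H}^{2}+C_{0}\left\|\left(u,v\right)\right\|_{H}^{p-1},\]
so at least one of the two terms on the right-hand side is at least \(\frac{1}{2}\); one may therefore take, for instance, \(C_{\mu_{12}}=\min\left\{\left(4\mu_{12}\widehat{C}_{0}\right)^{-1/2},\left(2C_{0}\right)^{-1/(p-1)}\right\}\), which indeed tends to \(0\) as \(\mu_{12}\rightarrow\infty\).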
The Nehari manifold \(\mathbf{M}\) is closely linked to the behavior of the function of the form \(h_{\left(u,v\right)}:t\to J\left(tu,tv\right)\) for \(t>0\). For \(\left(u,v\right)\in H,\) we find
\[h_{\left(u,v\right)}\left(t\right) = \frac{t^{2}}{2}\left\|\left(u,v\right)\right\|_{H}^{2}+\frac{t^{4 }}{4}\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2 }-2\mu_{12}\phi_{{}_{v}}u^{2}dx\] \[-\frac{t^{p+1}}{2\pi\left(p+1\right)}\int_{\mathbb{R}^{3}}\int_{0 }^{2\pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx,\] \[h^{\prime}_{\left(u,v\right)}\left(t\right) = t\left\|\left(u,v\right)\right\|_{H}^{2}+t^{3}\int_{\mathbb{R}^ {3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{ v}}u^{2}dx\] \[-\frac{t^{p}}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^ {i\theta}v\right|^{p+1}d\theta dx,\] \[h^{\prime\prime}_{\left(u,v\right)}\left(t\right) = \left\|\left(u,v\right)\right\|_{H}^{2}+3t^{2}\int_{\mathbb{R}^{3 }}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v} }u^{2}dx\] \[-\frac{pt^{p-1}}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u +e^{i\theta}v\right|^{p+1}d\theta dx.\]
As a direct consequence, we have
\[th^{\prime}_{\left(u,v\right)}\left(t\right) = \left\|\left(tu,tv\right)\right\|_{H}^{2}+\int_{\mathbb{R}^{3}} \mu_{11}\phi_{{}_{tu}}t^{2}u^{2}+\mu_{22}\phi_{{}_{tv}}t^{2}v^{2}-2\mu_{12} \phi_{{}_{tv}}t^{2}u^{2}dx\] \[-\frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|tu+e^{i \theta}tv\right|^{p+1}d\theta dx,\]
and so, for \(\left(u,v\right)\in H\backslash\left\{\left(0,0\right)\right\}\) and \(t>0\), \(h^{\prime}_{\left(u,v\right)}\left(t\right)=0\) holds if and only if \(\left(tu,tv\right)\in\mathbf{M}\). In particular, \(h^{\prime}_{\left(u,v\right)}\left(1\right)=0\) if and only if \(\left(u,v\right)\in\mathbf{M}.\) It is then natural to split \(\mathbf{M}\) into three parts corresponding to the local minima, local maxima and points of inflection. Following [23], we define
\[\mathbf{M}^{+} = \{\left(u,v\right)\in\mathbf{M}:h^{\prime\prime}_{\left(u,v\right)}\left(1\right)>0\},\] \[\mathbf{M}^{0} = \{\left(u,v\right)\in\mathbf{M}:h^{\prime\prime}_{\left(u,v\right)}\left(1\right)=0\},\] \[\mathbf{M}^{-} = \{\left(u,v\right)\in\mathbf{M}:h^{\prime\prime}_{\left(u,v\right)}\left(1\right)<0\}.\]
**Lemma 3.1**: _Suppose that \(\left(u_{0},v_{0}\right)\) is a local minimizer for \(J\) on \(\mathbf{M}\) and \(\left(u_{0},v_{0}\right)\notin\mathbf{M}^{0}.\) Then \(J^{\prime}\left(u_{0},v_{0}\right)=0\) in \(H^{-1}.\)_
**Proof.** The proof of Lemma 3.1 is essentially the same as that in Brown-Zhang [8, Theorem 2.3] (or see Binding-Drabek-Huang [3]) and is subsequently omitted here. \(\square\)
For each \(\left(u,v\right)\in\mathbf{M},\) we find that
\[h^{\prime\prime}_{\left(u,v\right)}\left(1\right) = \left\|\left(u,v\right)\right\|_{H}^{2}+3\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx-\frac{p}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx \tag{3.2}\] \[= -\left(p-1\right)\left\|\left(u,v\right)\right\|_{H}^{2}+(3-p)\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx\] \[= -2\left\|\left(u,v\right)\right\|_{H}^{2}+\frac{3-p}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx. \tag{3.3}\]
For each \(\left(u,v\right)\in\mathbf{M}^{-},\) using (3.1) and (3.3) gives
\[J(u,v) = \frac{1}{4}\left\|\left(u,v\right)\right\|_{H}^{2}-\frac{3-p}{8\pi\left(p+1\right)}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx\] \[> \frac{p-1}{4\left(p+1\right)}\left\|\left(u,v\right)\right\|_{H}^{2}\geq\frac{p-1}{4\left(p+1\right)}C_{\mu_{12}}^{2}>0,\]
and for each \(\left(u,v\right)\in\mathbf{M}^{+},\)
\[J(u,v) = \frac{p-1}{2\left(p+1\right)}\left\|\left(u,v\right)\right\|_{H}^ {2}-\left(\frac{3-p}{4\left(p+1\right)}\right)\int_{\mathbb{R}^{3}}\mu_{11} \phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx\] \[< \frac{p-2}{4p}\left\|\left(u,v\right)\right\|_{H}^{2}.\]
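The first of these two estimates is obtained as follows: for \(\left(u,v\right)\in\mathbf{M}^{-}\), (3.3) gives
\[\frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx<\frac{2}{3-p}\left\|\left(u,v\right)\right\|_{H}^{2},\]
so that \(J(u,v)>\left(\frac{1}{4}-\frac{1}{2\left(p+1\right)}\right)\left\|\left(u,v\right)\right\|_{H}^{2}=\frac{p-1}{4\left(p+1\right)}\left\|\left(u,v\right)\right\|_{H}^{2}\), and the lower bound \(\frac{p-1}{4\left(p+1\right)}C_{\mu_{12}}^{2}\) then follows from (3.1).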
Hence, we obtain the following result.
**Lemma 3.2**: _The energy functional \(J\) is coercive and bounded below on \(\mathbf{M}^{-}.\) Furthermore, for all \(u\in\mathbf{M}^{-}\),_
\[J(u,v)>\frac{p-1}{4\left(p+1\right)}C_{\mu_{12}}^{2}>0.\]
For \(0<\mu_{11}<\Lambda_{0},\) let \(\left(u,v\right)\in\mathbf{M}\) with \(J\left(u,v\right)<\frac{A\left(p\right)\left(p-1\right)}{2\left(p+1\right)} \left(\frac{2S_{p+1}^{p+1}}{3-p}\right)^{2/\left(p-1\right)}.\) Since \(\mu_{11}\leq\mu_{22},\) by Lemma 2.3, we deduce that
\[\frac{A\left(p\right)\left(p-1\right)}{2\left(p+1\right)}\left( \frac{2S_{p+1}^{p+1}}{3-p}\right)^{2/\left(p-1\right)} > J(u,v)\] \[= \frac{p-1}{2\left(p+1\right)}\left\|\left(u,v\right)\right\|_{H}^ {2}-\frac{3-p}{4\left(p+1\right)}\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^ {2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx\] \[\geq \frac{p-1}{2\left(p+1\right)}\left\|\left(u,v\right)\right\|_{H}^ {2}-\frac{3-p}{4\left(p+1\right)}\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^ {2}+\mu_{22}\phi_{{}_{v}}v^{2}dx\] \[\geq \frac{p-1}{2\left(p+1\right)}\left\|\left(u,v\right)\right\|_{H}^ {2}-\frac{\mu_{22}\left(3-p\right)}{4\left(p+1\right)}\frac{16}{3\sqrt{3}\pi \lambda^{\frac{3}{2}}}\left\|\left(u,v\right)\right\|_{H}^{4}\] \[= \frac{p-1}{2\left(p+1\right)}\left\|\left(u,v\right)\right\|_{H}^ {2}-\frac{4\mu_{22}\left(3-p\right)}{3\sqrt{3}\left(p+1\right)\pi\lambda^{ \frac{3}{2}}}\left\|\left(u,v\right)\right\|_{H}^{4}.\]
Since the function
\[q\left(x\right)=\frac{p-1}{2\left(p+1\right)}x^{2}-\frac{4\mu_{22}\left(3-p \right)}{3\sqrt{3}\left(p+1\right)\pi\lambda^{\frac{3}{2}}}x^{4}\]
has the maximum at \(x_{0}=\left(\frac{3\sqrt{3}\left(p-1\right)\pi\lambda^{\frac{3}{2}}}{16\mu_{22 }\left(3-p\right)}\right)^{1/2},\) we have
\[\max_{x\geq 0}q\left(x\right)=q\left(x_{0}\right)=\frac{3\sqrt{3}\left(p-1 \right)^{2}\pi\lambda^{\frac{3}{2}}}{64\mu_{22}\left(3-p\right)\left(p+1 \right)}\geq\frac{A\left(p\right)\left(p-1\right)}{2\left(p+1\right)}\left( \frac{2S_{p+1}^{p+1}}{3-p}\right)^{2/\left(p-1\right)}.\]
Thus,
\[\mathbf{M}\left[\frac{A\left(p\right)\left(p-1\right)}{2\left(p+1\right)} \left(\frac{2S_{p+1}^{p+1}}{3-p}\right)^{2/\left(p-1\right)}\right]=\mathbf{M} ^{\left(1\right)}\cup\mathbf{M}^{\left(2\right)}, \tag{3.4}\]
where
\[\mathbf{M}\left[\frac{A\left(p\right)\left(p-1\right)}{2\left(p+1\right)}\left( \frac{2S_{p+1}^{p+1}}{3-p}\right)^{2/\left(p-1\right)}\right]:=\left\{u\in \mathbf{M}:J\left(u,v\right)<\frac{A\left(p\right)\left(p-1\right)}{2\left(p+1 \right)}\left(\frac{2S_{p+1}^{p+1}}{3-p}\right)^{2/\left(p-1\right)}\right\},\]
\[\mathbf{M}^{\left(1\right)}:=\left\{u\in\mathbf{M}\left[\frac{A\left(p\right) \left(p-1\right)}{2\left(p+1\right)}\left(\frac{2S_{p+1}^{p+1}}{3-p}\right)^{2 /\left(p-1\right)}\right]:\left\|\left(u,v\right)\right\|_{H}<\left(\frac{3 \sqrt{3}\left(p-1\right)\pi\lambda^{\frac{3}{2}}}{16\mu_{22}(3-p)}\right)^{1/ 2}\right\},\]
and
\[\mathbf{M}^{\left(2\right)}:=\left\{u\in\mathbf{M}\left[\frac{A\left(p\right) \left(p-1\right)}{2\left(p+1\right)}\left(\frac{2S_{p+1}^{p+1}}{3-p}\right)^{ 2/\left(p-1\right)}\right]:\left\|\left(u,v\right)\right\|_{H}>\left(\frac{3 \sqrt{3}\left(p-1\right)\pi\lambda^{\frac{3}{2}}}{16\mu_{22}(3-p)}\right)^{1/ 2}\right\}.\]
By (3.2) and Lemma 2.3, it follows from the Sobolev inequality that
\[h_{\left(u,v\right)}^{\prime\prime}\left(1\right) = -\left(p-1\right)\left\|\left(u,v\right)\right\|_{H}^{2}+\left(3- p\right)\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2 \mu_{12}\phi_{{}_{v}}u^{2}dx\] \[\leq \left\|\left(u,v\right)\right\|_{H}^{2}\left[\frac{16\mu_{22}(3- p)}{3\sqrt{3}\pi\lambda^{\frac{3}{2}}}\left\|\left(u,v\right)\right\|_{H}^{2}- \left(p-1\right)\right]\] \[< \left\|\left(u,v\right)\right\|_{H}^{2}\left(\frac{16\mu_{22}(3- p)}{3\sqrt{3}\pi\lambda^{\frac{3}{2}}}\frac{3\sqrt{3}\left(p-1\right)\pi \lambda^{\frac{3}{2}}}{16\mu_{22}(3-p)}-\left(p-1\right)\right)\] \[= 0\text{ for all }u\in\mathbf{M}^{\left(1\right)}.\]
Using (3.3), we deduce that
\[\frac{1}{4}\left\|\left(u,v\right)\right\|_{H}^{2}-\frac{3-p}{8\pi\left(p+1\right)}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx\] \[= J\left(u,v\right)<\frac{3\sqrt{3}\left(p-1\right)^{2}\pi\lambda^{\frac{3}{2}}}{64\mu_{22}(3-p)\left(p+1\right)}\] \[< \frac{p-1}{4\left(p+1\right)}\left\|\left(u,v\right)\right\|_{H}^{2}\text{ for all }u\in\mathbf{M}^{\left(2\right)},\]
which implies that if \(u\in\mathbf{M}^{\left(2\right)},\) then
\[h_{\left(u,v\right)}^{\prime\prime}\left(1\right)=-2\left\|\left(u,v\right)\right\|_{H}^{2}+\frac{3-p}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx>0.\]
Hence, we obtain the following results.
**Lemma 3.3**: _Suppose that \(1<p<3\) and \(\mu_{ij}>0.\) If \(0<\mu_{11}<\Lambda_{0},\) then \(\mathbf{M}^{\left(1\right)}\subset\mathbf{M}^{-}\) and \(\mathbf{M}^{\left(2\right)}\subset\mathbf{M}^{+}\) are \(C^{1}\) sub-manifolds. Furthermore, each local minimizer of the functional \(J\) in the sub-manifolds \(\mathbf{M}^{\left(1\right)}\) and \(\mathbf{M}^{\left(2\right)}\) is a critical point of \(J\) in \(H.\)_
We have the following results.
**Lemma 3.4**: _Suppose that \(1<p<3\) and \(0<\mu_{11}\leq\mu_{22}.\) Let \(\left(u_{0},v_{0}\right)\) be a critical point of \(J\) on \(\mathbf{M}^{-}.\) Then we have \(J\left(u_{0},v_{0}\right)\geq\beta_{\mu_{11}}\) if either \(u_{0}=0\) or \(v_{0}=0.\)_
**Proof.** Since \(\beta_{\mu_{11}}\leq\beta_{\mu_{22}}\) for \(\mu_{11}\leq\mu_{22},\) without loss of generality, we may assume that \(v_{0}=0.\) Then
\[J\left(u_{0},0\right)=I_{\mu_{11}}\left(u_{0}\right)=\frac{1}{2}\left\|u_{0} \right\|_{H^{1}}^{2}+\frac{\mu_{11}}{4}\int_{\mathbb{R}^{3}}\phi_{u_{0}}u_{0}^ {2}dx-\frac{1}{p+1}\int_{\mathbb{R}^{3}}\left|u_{0}\right|^{p+1}dx\]
and
\[h_{\left(u,0\right)}^{\prime\prime}\left(1\right)=f_{u}^{\prime\prime}\left(1 \right)=-2\left\|u_{0}\right\|_{H^{1}}^{2}+\left(3-p\right)\int_{\mathbb{R}^{ 3}}\left|u_{0}\right|^{p+1}dx<0,\]
which implies that \(u_{0}\in\mathbf{N}_{\mu_{11}}^{-}.\) Thus \(J\left(u_{0},0\right)=I_{\mu_{11}}\left(u_{0}\right)\geq\beta_{\mu_{11}}.\) Consequently, we complete the proof. \(\Box\)
**Lemma 3.5**: _Suppose that \(1<p<3\) and \(\mu_{ij}>0.\) Let \(0<\mu_{11}<\Lambda_{0}\) and let \(w_{\mu_{11}}\) be a positive solution of Equation \(\left(SP_{\mu_{11}}\right)\) with \(I_{\mu_{11}}\left(w_{\mu_{11}}\right)=\beta_{\mu_{11}}.\) Then we have the following results. \(\left(i\right)\) If \(\det\left(\mu_{ij}\right)>0,\) then there exist two constants \(t_{0}^{+}\) and \(t_{0}^{-}\) which satisfy_
\[0<t_{0}^{-}<\left(\frac{2\left\|w_{\mu_{11}}\right\|_{H^{1}}^{2}}{\left(3-p \right)\int_{\mathbb{R}^{3}}w_{\mu_{11}}^{p+1}dx}\right)^{1/\left(p-1\right)}< t_{0}^{+},\]
_such that_
\[\left(t_{0}^{\pm}\sqrt{s_{\min}}w_{\mu_{11}},t_{0}^{\pm}\sqrt{1-s_{\min}}w_{\mu_{11}}\right)\in\mathbf{M}^{\pm}\]
_and_
\[J\left(t_{0}^{-}\sqrt{s_{\min}}w_{\mu_{11}},t_{0}^{-}\sqrt{1-s_{ \min}}w_{\mu_{11}}\right) < \beta_{\mu_{11}},\] \[J\left(t_{0}^{+}\sqrt{s_{\min}}w_{\mu_{11}},t_{0}^{+}\sqrt{1-s_ {\min}}w_{\mu_{11}}\right) = \inf_{t\geq 0}J\left(t\sqrt{s_{\min}}w_{\mu_{11}},t\sqrt{1-s_{ \min}}w_{\mu_{11}}\right)<0,\]
_where \(s_{\min}=\frac{\mu_{22}+\mu_{12}}{\mu_{11}+\mu_{22}+2\mu_{12}}\) as in Lemma 2.2. In particular,_
\[\left(t_{0}^{-}\sqrt{s_{\min}}w_{\mu_{11}},t_{0}^{-}\sqrt{1-s_{\min}}w_{\mu_{ 11}}\right)\in\mathbf{M}^{\left(1\right)}\]
_and_
\[\left(t_{0}^{+}\sqrt{s_{\min}}w_{\mu_{11}},t_{0}^{+}\sqrt{1-s_{\min}}w_{\mu_{ 11}}\right)\in\mathbf{M}^{\left(2\right)}.\]
\(\left(ii\right)\) _If \(\det\left(\mu_{ij}\right)\leq 0,\) then there exists a constant \(t_{0}^{-}\) which satisfies_
\[0<t_{0}^{-}<\left(\frac{2\left\|w_{\mu_{11}}\right\|_{H^{1}}^{2}}{\left(3-p \right)\int_{\mathbb{R}^{3}}w_{\mu_{11}}^{p+1}dx}\right)^{1/\left(p-1\right)},\]
_such that_
\[\left(t_{0}^{-}\sqrt{s_{\min}}w_{\mu_{11}},t_{0}^{-}\sqrt{1-s_{\min}}w_{\mu_{ 11}}\right)\in\mathbf{M}^{\left(1\right)}\]
_and_
\[J\left(t_{0}^{-}\sqrt{s_{\min}}w_{\mu_{11}},t_{0}^{-}\sqrt{1-s_{\min}}w_{\mu_{ 11}}\right)<\beta_{\mu_{ii}}.\]
**Proof.** Define \(w_{0}\left(x\right):=w_{\mu_{11}}\left(x\right)\) and
\[\eta\left(t\right) = t^{-2}\left\|\left(\sqrt{s_{\min}}w_{0},\sqrt{1-s_{\min}}w_{0} \right)\right\|_{H}^{2}-\frac{t^{p-3}}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2 \pi}\left|\sqrt{s_{\min}}w_{0}+e^{i\theta}\sqrt{1-s_{\min}}w_{0}\right|^{p+1}d \theta dx\] \[= t^{-2}\left\|w_{0}\right\|_{H^{1}}^{2}-\frac{t^{p-3}}{2\pi}\int _{\mathbb{R}^{3}}\int_{0}^{2\pi}\left(s_{\min}w_{0}^{2}+2\sqrt{s_{\min}\left(1- s_{\min}\right)}w_{0}^{2}\cos\theta+\left(1-s_{\min}\right)w_{0}^{2}\right)^{ \left(p+1\right)/2}d\theta dx\] \[= t^{-2}\left\|w_{0}\right\|_{H^{1}}^{2}-\frac{t^{p-3}}{2\pi}\int _{\mathbb{R}^{3}}\int_{0}^{2\pi}\left(1+2\sqrt{s_{\min}\left(1-s_{\min}\right)} \cos\theta\right)^{\left(p+1\right)/2}w_{0}^{p+1}d\theta dx,\ \ \mbox{for}\ t>0.\]
Clearly, \(\left(t_{0}\sqrt{s_{\min}}w_{0},t_{0}\sqrt{1-s_{\min}}w_{0}\right)\in\mathbf{M}\) if and only if
\[\eta\left(t_{0}\right) = -\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{\sqrt{s_{\min}}w_{0}}}s_{\min}w_{0}^{2}+\mu_{22}\phi_{{}_{\sqrt{1-s_{\min}}w_{0}}}\left(1-s_{\min}\right)w_{0}^{2}-2\mu_{12}\phi_{{}_{\sqrt{1-s_{\min}}w_{0}}}s_{\min}w_{0}^{2}dx\] \[= -\left[\mu_{11}s_{\min}^{2}+\mu_{22}\left(1-s_{\min}\right)^{2}-2\mu_{12}s_{\min}\left(1-s_{\min}\right)\right]\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{2}dx\] \[= -\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}+2\mu_{12}}\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{2}dx,\ \ \mbox{for some $t_{0}>0$}.\]
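Here we have used the quadratic homogeneity of \(u\mapsto\phi_{u}\) (see (1.1)): for \(c\geq 0\),
\[\phi_{cw_{0}}=c^{2}\phi_{w_{0}},\qquad\text{so that, e.g., }\ \int_{\mathbb{R}^{3}}\phi_{{}_{\sqrt{s_{\min}}w_{0}}}\,s_{\min}w_{0}^{2}dx=s_{\min}^{2}\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{2}dx,\]
and similarly for the terms involving \(\sqrt{1-s_{\min}}\).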
Moreover, by Jensen's inequality,
\[\frac{1}{2\pi}\int_{0}^{2\pi}\left(1+2\sqrt{s_{\min}\left(1-s_{ \min}\right)}\cos\theta\right)^{\left(p+1\right)/2}d\theta > \left(\frac{1}{2\pi}\int_{0}^{2\pi}1+2\sqrt{s_{\min}\left(1-s_{ \min}\right)}\cos\theta d\theta\right)^{\left(p+1\right)/2}\] \[= 1.\]
Thus,
\[\eta\left(t\right)<\eta_{0}\left(t\right),\ \mbox{for $t>0$},\]
where
\[\eta_{0}\left(t\right)=t^{-2}\left\|w_{0}\right\|_{H^{1}}^{2}-t^{p-3}\int_{ \mathbb{R}^{3}}w_{0}^{p+1}dx.\]
A straightforward evaluation gives
\[\eta_{0}\left(1\right)=-\mu_{11}\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{ 2}dx,\lim_{t\to 0^{+}}\eta_{0}(t)=\infty\ \mbox{and}\ \lim_{t\rightarrow\infty}\eta_{0}(t)=0.\]
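The value \(\eta_{0}\left(1\right)\) follows from the fact that \(w_{0}=w_{\mu_{11}}\in\mathbf{N}_{\mu_{11}}\), so that
\[\left\|w_{0}\right\|_{H^{1}}^{2}+\mu_{11}\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{2}dx=\int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx\quad\text{and hence}\quad\eta_{0}\left(1\right)=\left\|w_{0}\right\|_{H^{1}}^{2}-\int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx=-\mu_{11}\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{2}dx.\]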
Since \(1<p<3\) and
\[\eta_{0}^{\prime}\left(t\right)=t^{-3}\left[-2\left\|w_{0}\right\|_{H^{1}}^{2 }+\left(3-p\right)t^{p-2}\int_{\mathbb{R}^{3}}w_{0}^{p+1}dx\right],\]
we find that \(\eta_{0}\left(t\right)\) is decreasing when \(0<t<\left(\frac{2\left\|w_{0}\right\|_{H^{1}}^{2}}{\left(3-p\right)\int_{ \mathbb{R}^{3}}w_{0}^{p+1}dx}\right)^{1/\left(p-1\right)}\) and is increasing when \(t>\left(\frac{2\left\|w_{0}\right\|_{H^{1}}^{2}}{\left(3-p\right)\int_{ \mathbb{R}^{3}}w_{0}^{p+1}dx}\right)^{1/\left(p-1\right)}>1.\) This gives
\[\inf_{t>0}\eta_{0}\left(t\right)=\eta_{0}\left(\left(\frac{2\left\|w_{0} \right\|_{H^{1}}^{2}}{\left(3-p\right)\int_{\mathbb{R}^{3}}w_{0}^{p+1}dx} \right)^{1/\left(p-1\right)}\right),\]
and it follows that
\[\inf_{t>0}\eta\left(t\right) \leq \eta\left(\left(\frac{2\left\|w_{0}\right\|_{H^{1}}^{2}}{\left(3- p\right)\int_{\mathbb{R}^{3}}w_{0}^{p+1}dx}\right)^{1/\left(p-1\right)}\right)<\eta_{0} \left(\left(\frac{2\left\|w_{0}\right\|_{H^{1}}^{2}}{\left(3-p\right)\int_{ \mathbb{R}^{3}}w_{0}^{p+1}dx}\right)^{1/\left(p-1\right)}\right)\] \[< \eta_{0}\left(1\right)\] \[= -\mu_{11}\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{2}dx\] \[< -\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}+2\mu_{12 }}\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{2}dx\] \[= -\left[\mu_{11}s_{\min}^{2}+\mu_{22}\left(1-s_{\min}\right)^{2}-2 \mu_{12}s_{\min}\left(1-s_{\min}\right)\right]\int_{\mathbb{R}^{3}}\phi_{{}_{w _{0}}}w_{0}^{2}dx\] \[= -\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{\sqrt{\min}w_{0}}}s_{\min} w_{0}^{2}+\mu_{22}\phi_{{}_{\sqrt{1-s_{\min}w_{0}}}}\left(1-s_{\min}\right)w_{0}^{2}-2 \mu_{12}\phi_{{}_{\sqrt{1-s_{\min}w_{0}}}}\left(1-s_{\min}\right)w_{0}^{2}dx,\]
since
\[\mu_{11}s_{\min}^{2}+\mu_{22}\left(1-s_{\min}\right)^{2}-2\mu_{12}s _{\min}\left(1-s_{\min}\right) = \frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}+2\mu_{12}}\] \[< \mu_{ii},\ \ \text{for}\ i=1,2.\]
\((i)\)\(\det\left(\mu_{ij}\right)=\mu_{11}\mu_{22}-\mu_{12}^{2}>0.\) Since \(\lim_{t\to 0^{+}}\eta(t)=\infty\) and \(\lim_{t\rightarrow\infty}\eta(t)=0,\) there exist two constants \(t_{0}^{+}\) and \(t_{0}^{-}>0\) which satisfy
\[t_{0}^{-}<1<\left(\frac{2\left\|w_{0}\right\|_{H^{1}}^{2}}{(3-p)\int_{\mathbb{ R}^{3}}w_{0}^{p+1}dx}\right)^{1/(p-1)}<t_{0}^{+}, \tag{3.5}\]
such that
\[\eta\left(t_{0}^{\pm}\right)+\int_{\mathbb{R}^{3}}\mu_{11}\phi_{\sqrt{s_{\min}}w_{0}}s_{\min}w_{0}^{2}+\mu_{22}\phi_{\sqrt{1-s_{\min}}w_{0}}\left(1-s_{\min}\right)w_{0}^{2}-2\mu_{12}\phi_{\sqrt{1-s_{\min}}w_{0}}s_{\min}w_{0}^{2}dx=0.\]
That is,
\[\left(t_{0}^{\pm}\sqrt{s_{\min}}w_{0},t_{0}^{\pm}\sqrt{1-s_{\min}}w_{0}\right) \in\mathbf{M}.\]
By a calculation on the second order derivatives, we find
\[h_{\left(t_{0}^{-}\sqrt{s_{\min}}w_{0},t_{0}^{-}\sqrt{1-s_{\min} }w_{0}\right)}^{\prime\prime}\left(1\right)\] \[= -2\left\|t_{0}^{-}w_{0}\right\|_{H^{1}}^{2}+\frac{3-p}{2\pi}\int _{\mathbb{R}^{3}}\int_{0}^{2\pi}\left(1+2\sqrt{s_{\min}\left(1-s_{\min}\right) }\cos\theta\right)^{(p+1)/2}\left|t_{0}^{-}w_{0}\right|^{p+1}d\theta dx\] \[= \left(t_{0}^{-}\right)^{5}\eta^{\prime}\left(t_{0}^{-}\right)\] \[< 0,\]
and
\[h_{\left(t_{0}^{+}\sqrt{s_{\min}}w_{0},t_{0}^{+}\sqrt{1-s_{\min} }w_{0}\right)}^{\prime\prime}\left(1\right)\] \[= -2\left\|t_{0}^{+}w_{0}\right\|_{H^{1}}^{2}+\frac{3-p}{2\pi}\int _{\mathbb{R}^{3}}\int_{0}^{2\pi}\left(1+2\sqrt{s_{\min}\left(1-s_{\min}\right) }\cos\theta\right)^{(p+1)/2}\left|t_{0}^{+}w_{0}\right|^{p+1}d\theta dx\] \[= \left(t_{0}^{+}\right)^{5}\eta^{\prime}\left(t_{0}^{+}\right)\] \[> 0.\]
This implies that
\[\left(t_{0}^{\pm}\sqrt{s_{\min}}w_{0},t_{0}^{\pm}\sqrt{1-s_{\min}}w_{0} \right)\in\mathbf{M}^{\pm}\]
and
\[h_{\left(\sqrt{s_{\min}}w_{0},\sqrt{1-s_{\min}}w_{0}\right)}^{\prime}\left(t\right) = t^{3}\left(\eta(t)+\int_{\mathbb{R}^{3}}\mu_{11}\phi_{\sqrt{s_{\min}}w_{0}}s_{\min}w_{0}^{2}+\mu_{22}\phi_{\sqrt{1-s_{\min}}w_{0}}\left(1-s_{\min}\right)w_{0}^{2}-2\mu_{12}\phi_{\sqrt{1-s_{\min}}w_{0}}s_{\min}w_{0}^{2}dx\right).\]
Clearly,
\[h_{\left(\sqrt{s_{\min}}w_{0},\sqrt{1-s_{\min}}w_{0}\right)}^{\prime}\left(t \right)>0\ \text{for all}\ t\in\left(0,t_{0}^{-}\right)\cup\left(t_{0}^{+},\infty\right)\]
Next, for \(t>0\), define
\[\xi\left(t\right)=\frac{t^{-2}}{2}\left\|w_{0}\right\|_{H^{1}}^{2}-\frac{t^{p-3}}{p+1}\int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx.\]
Clearly, \(J\left(t_{0}\sqrt{s_{\min}}w_{0},t_{0}\sqrt{1-s_{\min}}w_{0}\right)<0\) if
\[\xi\left(t_{0}\right)+\frac{\mu_{11}}{4}\int_{\mathbb{R}^{3}}\phi_{{}_{w_{0}}}w_{0}^{2}dx\leq 0\text{ for some }t_{0}>0.\]
It is not difficult to observe that
\[\xi\left(\hat{t}_{0}\right)=0,\ \ \lim_{t\to 0^{+}}\xi(t)=\infty\ \ \text{ and }\ \lim_{t\rightarrow\infty}\xi(t)=0,\]
where \(\hat{t}_{0}=\left(\frac{\left(p+1\right)\left\|w_{0}\right\|_{H^{1}}^{2}}{2\int_{ \mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx}\right)^{1/\left(p-1\right)}.\) Considering the derivative of \(\xi(t),\) we find
\[\xi^{\prime}\left(t\right) = -t^{-3}\left\|w_{0}\right\|_{H^{1}}^{2}+\frac{3-p}{p+1}t^{p-4} \int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx\] \[= t^{-3}\left[\frac{3-p}{p+1}t^{p-1}\int_{\mathbb{R}^{3}}\left|w_{ 0}\right|^{p+1}dx-\left\|w_{0}\right\|_{H^{1}}^{2}\right],\]
which implies that \(\xi\left(t\right)\) is decreasing when \(0<t<\left(\frac{\left(p+1\right)\left\|w_{0}\right\|_{H^{1}}^{2}}{\left(3-p \right)\int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx}\right)^{1/\left(p-1 \right)}\) and is increasing when \(t>\left(\frac{\left(p+1\right)\left\|w_{0}\right\|_{H^{1}}^{2}}{\left(3-p \right)\int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx}\right)^{1/\left(p-1 \right)}.\) Then
\[t_{0}^{-}<1<\left(\frac{2\left\|w_{0}\right\|_{H^{1}}^{2}}{\left(3-p\right) \int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx}\right)^{1/\left(p-1\right)}< \left(\frac{\left(p+1\right)\left\|w_{0}\right\|_{H^{1}}^{2}}{\left(3-p\right) \int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx}\right)^{1/\left(p-1\right)}\]
and
\[\inf_{t>0}\xi\left(t\right) = \xi\left[\left(\frac{\left(p+1\right)\left\|w_{0}\right\|_{H^{1} }^{2}}{\left(3-p\right)\int_{\mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx}\right) ^{1/\left(p-1\right)}\right]\] \[= -\frac{p-1}{2\left(3-p\right)}\left(\frac{\left(3-p\right)\int_{ \mathbb{R}^{3}}\left|w_{0}\right|^{p+1}dx}{\left(p+1\right)\left\|w_{0}\right\| _{H^{1}}^{2}}\right)^{2/\left(p-1\right)}\left\|w_{0}\right\|_{H^{1}}^{2}\] \[< -\frac{p-1}{2\left(3-p\right)}\left\|w_{0}\right\|_{H^{1}}^{2}\] \[< -\frac{4\mu_{11}}{3\sqrt{3}\pi\lambda^{\frac{3}{2}}}\left\|w_{0} \right\|_{H^{1}}^{4}\] \[< -\frac{\mu_{11}}{4}\int_{\mathbb{R}^{3}}\phi_{w_{0}}^{2}w_{0}dx,\]
which, subsequently, yields
\[J\left(t_{0}^{+}\sqrt{s_{\min}}w_{0},t_{0}^{+}\sqrt{1-s_{\min}}w_{0}\right)= \inf_{t\geq t_{0}^{-}}J\left(t\sqrt{s_{\min}}w_{0},t\sqrt{1-s_{\min}}w_{0} \right)<0,\]
indicating that \(\left(t_{0}^{+}\sqrt{s_{\min}}w_{0},t_{0}^{+}\sqrt{1-s_{\min}}w_{0}\right)\in \mathbf{M}^{\left(2\right)}.\)
\(\left(ii\right)\,\det\left(\mu_{ij}\right)=\mu_{11}\mu_{22}-\mu_{12}^{2}\leq 0.\) The proof is similar to the argument used in part \(\left(i\right)\) and is therefore omitted here. This completes the proof. \(\square\)
Define
\[\alpha^{-}:=\inf_{\left(u,v\right)\in\mathbf{M}^{\left(1\right)}}J\left(u,v \right)\text{ for }1<p<3.\]
Clearly, \(0<\frac{p-1}{4\left(p+1\right)}C_{\mu_{12}}^{2}<\alpha^{-}<\beta_{\mu_{11}}\) for \(0<\mu_{11}<\Lambda_{0}.\)
**We are now ready to prove Theorem 1.2.** By Lemmas 3.2, 3.3, 3.5 and the Ekeland variational principle, there exists a minimizing sequence \(\left\{\left(u_{n},v_{n}\right)\right\}\subset\mathbf{M}^{\left(1\right)}\) such that
\[J\left(u_{n},v_{n}\right)=\alpha^{-}+o\left(1\right)\text{ and }J^{\prime} \left(u_{n},v_{n}\right)=o\left(1\right)\text{ in }H^{-1}. \tag{3.6}\]
Since \(\{(u_{n},v_{n})\}\) is bounded, there exists a convergent subsequence of \(\{(u_{n},v_{n})\}\) (which will also be denoted by \(\{(u_{n},v_{n})\}\) for convenience) such that as \(n\to\infty,\)
\[\left(u_{n},v_{n}\right)\rightharpoonup\left(u_{0},v_{0}\right)\text{ weakly in }H, \tag{3.7}\]
where \(\left(u_{0},v_{0}\right)\in H\). By (3.7) and Sobolev compact embedding, we obtain
\[\left(u_{n},v_{n}\right) \to \left(u_{0},v_{0}\right)\text{ strongly in }L_{loc}^{p}\left(\mathbb{R}^{3}\right)\times L_{loc}^{p}\left( \mathbb{R}^{3}\right), \tag{3.8}\] \[\left(u_{n},v_{n}\right) \to \left(u_{0},v_{0}\right)\text{ a.e. in }\mathbb{R}^{3}. \tag{3.9}\]
Now we claim that there exist a subsequence \(\{(u_{n},v_{n})\}_{n=1}^{\infty}\) and a sequence \(\{x_{n}\}_{n=1}^{\infty}\subset\mathbb{R}^{3}\) such that
\[\int_{B^{N}(x_{n},R)}\left|\left(u_{n},v_{n}\right)\right|^{2}dx\geq d_{0}>0 \text{ for all }n\in\mathbb{N}, \tag{3.10}\]
where \(d_{0}\) and \(R\) are positive constants that are independent of \(n.\) Suppose the contrary is true. Then, for all \(R>0,\)
\[\sup_{x\in\mathbb{R}^{N}}\int_{B^{N}(x,R)}\left|\left(u_{n},v_{n}\right)\right|^{2}dx\to 0\text{ as }n\to\infty.\]
Thus, applying the argument of Lemma I.1 in [15] (see also [24]) gives
\[\int_{\mathbb{R}^{N}}\left|u_{n}\right|^{r}+\left|v_{n}\right|^{r}dx\to 0 \text{ as }n\to\infty, \tag{3.11}\]
for all \(2<r<2^{*}.\) Then we have
\[\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u_{n}+e^{i\theta}v_{n}\right|^{p+1} d\theta dx\to 0\text{ as }n\to\infty\]
and
\[\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u_{n}}}u_{n}^{2}+\mu_{22}\phi_{{}_{v_{ n}}}v_{n}^{2}-2\mu_{12}\phi_{{}_{v_{n}}}u_{n}^{2}dx\to 0\text{ as }n\to\infty,\]
implying
\[\alpha^{-}+o\left(1\right) = J\left(u_{n},v_{n}\right)\] \[= -\frac{1}{4}\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u_{n}}}u_{n}^{ 2}+\mu_{22}\phi_{{}_{v_{n}}}v_{n}^{2}-2\mu_{12}\phi_{{}_{v_{n}}}u_{n}^{2}dx\] \[+\frac{p-1}{4\left(p+1\right)\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2 \pi}\left|u_{n}+e^{i\theta}v_{n}\right|^{p+1}d\theta dx\] \[= o\left(1\right);\]
this contradicts \(\alpha^{-}>0.\) Let \(\left(\overline{u}_{n}\left(x\right),\overline{v}_{n}\left(x\right)\right)= \left(u_{n}\left(x-x_{n}\right),v_{n}\left(x-x_{n}\right)\right).\) Clearly, \(\{(\overline{u}_{n},\overline{v}_{n})\}\subset\mathbf{M}^{(1)}\) such that
\[J\left(\overline{u}_{n},\overline{v}_{n}\right)=\alpha^{-}+o\left(1\right) \text{ and }J^{\prime}\left(\overline{u}_{n},\overline{v}_{n}\right)=o\left(1\right) \text{ in }H^{-1}. \tag{3.12}\]
Since \(\{(\overline{u}_{n},\overline{v}_{n})\}\) is also bounded, there exists a convergent subsequence of \(\{(\overline{u}_{n},\overline{v}_{n})\}\) and \(\left(u_{0}^{(1)},v_{0}^{(1)}\right)\in H\) such that as \(n\to\infty,\)
\[\left(\overline{u}_{n},\overline{v}_{n}\right)\rightharpoonup\left(u_{0}^{(1) },v_{0}^{(1)}\right)\text{ weakly in }H. \tag{3.13}\]
By (3.13) and Sobolev compact embedding, we obtain
\[\left(\overline{u}_{n},\overline{v}_{n}\right) \rightarrow \left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\text{ strongly in }L_{loc}^{p}\left(\mathbb{R}^{3}\right)\times L_{loc}^{p}\left(\mathbb{R}^{3} \right), \tag{3.14}\] \[\left(\overline{u}_{n},\overline{v}_{n}\right) \rightarrow \left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\text{ a.e. in }\mathbb{R}^{3}. \tag{3.15}\]
Moreover, by (3.10) and (3.12)-(3.15),
\[\int_{B^{N}\left(R\right)}\left|\left(u_{0}^{\left(1\right)},v_{0}^{\left(1 \right)}\right)\right|^{2}dx\geq d_{0}>0,\]
and the function \(\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\) is a nontrivial solution of System \(\left(E\right).\) On the other hand, we have \(\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\in\mathbf{M}.\) By Fatou's Lemma,
\[\left\|\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\right\|_{H }^{2}\leq\liminf_{n\rightarrow\infty}\left\|\left(\overline{u}_{n},\overline {v}_{n}\right)\right\|_{H}^{2}.\]
Suppose that
\[\left\|\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\right\|_{H }^{2}<\liminf_{n\rightarrow\infty}\left\|\left(\overline{u}_{n},\overline{v} _{n}\right)\right\|_{H}^{2}. \tag{3.16}\]
Then \(\left\|\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\right\|_{H}<\left(\frac{3\sqrt{3}(p-1)\pi\lambda^{\frac{3}{2}}}{16\mu_{22}(3-p)}\right)^{1/2}\). By (3.2) and Lemma 2.3, it follows from the Sobolev inequality that
\[h_{\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)}^{\prime\prime}\left(1\right) = -\left(p-1\right)\left\|\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\right\|_{H}^{2}+\left(3-p\right)\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u_{0}^{(1)}}}\left[u_{0}^{(1)}\right]^{2}+\mu_{22}\phi_{{}_{v_{0}^{(1)}}}\left[v_{0}^{(1)}\right]^{2}-2\mu_{12}\phi_{{}_{v_{0}^{(1)}}}\left[u_{0}^{(1)}\right]^{2}dx\] \[\leq \left\|\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\right\|_{H}^{2}\left[\frac{16\mu_{22}(3-p)}{3\sqrt{3}\pi\lambda^{\frac{3}{2}}}\left\|\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\right\|_{H}^{2}-\left(p-1\right)\right]\] \[< \left\|\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\right\|_{H}^{2}\left(\frac{16\mu_{22}(3-p)}{3\sqrt{3}\pi\lambda^{\frac{3}{2}}}\frac{3\sqrt{3}\left(p-1\right)\pi\lambda^{\frac{3}{2}}}{16\mu_{22}(3-p)}-\left(p-1\right)\right)=0,\]
this indicates that \(\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\in\mathbf{M}^{-}\) and \(J\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\geq\alpha^{-}.\) Let \(\left(w_{n},z_{n}\right)=\left(\overline{u}_{n}-u_{0}^{\left(1\right)},\overline{v}_{n}-v_{0}^{\left(1\right)}\right).\) Then, by (3.13) and (3.16), there exists \(c_{0}>0\) such that
\[c_{0}\leq\left\|\left(w_{n},z_{n}\right)\right\|_{H}^{2}=\left\|\left(\overline{u }_{n},\overline{v}_{n}\right)\right\|_{H}^{2}-\left\|\left(u_{0}^{\left(1 \right)},v_{0}^{\left(1\right)}\right)\right\|_{H}^{2}+o\left(1\right),\]
which implies that
\[\left\|\left(w_{n},z_{n}\right)\right\|_{H}<\left(\frac{3\sqrt{3}\left(p-1\right)\pi\lambda^{\frac{3}{2}}}{16\mu_{22}(3-p)}\right)^{1/2},\text{ for }n\text{ sufficiently large.} \tag{3.17}\]
On the other hand, the Brezis-Lieb Lemma (cf. [5]) gives
\[\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|\overline{u}_{n}+e^{i\theta} \overline{v}_{n}\right|^{p+1}d\theta dx=\int_{\mathbb{R}^{3}}\int_{0}^{2\pi} \left|w_{n}+e^{i\theta}z_{n}\right|^{p+1}d\theta dx+\int_{\mathbb{R}^{3}}\int_ {0}^{2\pi}\left|u_{0}^{\left(1\right)}+e^{i\theta}v_{0}^{\left(1\right)} \right|^{p+1}d\theta dx\]
and
\[\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{\overline{u}_{n}}}\overline{u}_{n}^{2}+\mu_{22}\phi_{{}_{\overline{v}_{n}}}\overline{v}_{n}^{2}-2\mu_{12}\phi_{{}_{\overline{v}_{n}}}\overline{u}_{n}^{2}dx\] \[= \int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{w_{n}}}w_{n}^{2}+\mu_{22}\phi_{{}_{z_{n}}}z_{n}^{2}-2\mu_{12}\phi_{{}_{z_{n}}}w_{n}^{2}dx\] \[+\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u_{0}^{(1)}}}\left[u_{0}^{(1)}\right]^{2}+\mu_{22}\phi_{{}_{v_{0}^{(1)}}}\left[v_{0}^{(1)}\right]^{2}-2\mu_{12}\phi_{{}_{v_{0}^{(1)}}}\left[u_{0}^{(1)}\right]^{2}dx.\]
This implies that
\[\left\|(w_{n},z_{n})\right\|_{H}^{2}+\int_{\mathbb{R}^{3}}\phi_{w_{n},z_{n}} \left(w_{n}^{2}+z_{n}^{2}\right)dx-\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left| w_{n}+e^{i\theta}z_{n}\right|^{p+1}d\theta dx=o\left(1\right) \tag{3.18}\]
and
\[J\left(\overline{u}_{n},\overline{v}_{n}\right)=J\left(w_{n},z_{n}\right)+J \left(u_{0}^{(1)},v_{0}^{(1)}\right)+o\left(1\right). \tag{3.19}\]
Moreover, by (3.17) and (3.18), there exists \(s_{n}=1+o\left(1\right)\) such that
\[\left\|(s_{n}w_{n},s_{n}z_{n})\right\|_{H}^{2}+\int_{\mathbb{R}^{3}}\phi_{s_{ n}w_{n},s_{n}z_{n}}\left(s_{n}^{2}w_{n}^{2}+s_{n}^{2}z_{n}^{2}\right)dx-\int_{ \mathbb{R}^{3}}\int_{0}^{2\pi}\left|s_{n}w_{n}+e^{i\theta}s_{n}z_{n}\right|^{ p+1}d\theta dx=0\]
and
\[\left\|(s_{n}w_{n},s_{n}z_{n})\right\|_{H}<\left(\frac{3\sqrt{3}\left(p-1\right)\pi\lambda^{\frac{3}{2}}}{16\mu_{22}(3-p)}\right)^{1/2}\text{ for }n\text{ sufficiently large}.\]
Hence,
\[h_{\left(s_{n}w_{n},s_{n}z_{n}\right)}^{\prime\prime}\left(1\right)=-\left(p-1\right)\left\|(s_{n}w_{n},s_{n}z_{n})\right\|_{H}^{2}+\left(3-p\right)\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{s_{n}w_{n}}}s_{n}^{2}w_{n}^{2}+\mu_{22}\phi_{{}_{s_{n}z_{n}}}s_{n}^{2}z_{n}^{2}-2\mu_{12}\phi_{{}_{s_{n}z_{n}}}s_{n}^{2}w_{n}^{2}dx<0,\]
implying that \(J\left(s_{n}w_{n},s_{n}z_{n}\right)\geq\frac{1}{2}\alpha^{-}\) when \(n\) is sufficiently large. Therefore,
\[\alpha^{-}+o\left(1\right)=J\left(\overline{u}_{n},\overline{v}_{n}\right) \geq\frac{3}{2}\alpha^{-},\text{ for }n\text{ sufficiently large},\]
which is a contradiction. Thus, we can conclude that \(\left\|\left(u_{0}^{(1)},v_{0}^{(1)}\right)\right\|_{H}^{2}=\liminf_{n\to \infty}\left\|(\overline{u}_{n},\overline{v}_{n})\right\|_{H}^{2},\) then
\[\left(\overline{u}_{n},\overline{v}_{n}\right)\to\left(u_{0}^{(1)},v_{0}^{(1) }\right)\text{ strongly in }H,\]
and \(J\left(u_{0}^{(1)},v_{0}^{(1)}\right)=\alpha^{-}\); the same is then true of \(\left(\left|u_{0}^{(1)}\right|,\left|v_{0}^{(1)}\right|\right).\) Then, by Lemma 3.3, we may assume that \(\left(u_{0}^{(1)},v_{0}^{(1)}\right)\) is a positive nontrivial critical point of \(J\). Moreover, by Lemma 3.4 and \(\alpha^{-}<\beta_{\mu_{11}}\), we have \(u_{0}^{(1)}\neq 0\) and \(v_{0}^{(1)}\neq 0.\) Furthermore,
\[\left\|\left(u_{0}^{(1)},v_{0}^{(1)}\right)\right\|_{H}\leq\left(\frac{3\sqrt {3}\left(p-1\right)\pi\lambda^{\frac{3}{2}}}{16\mu_{22}(3-p)}\right)^{1/2} \to 0\text{ as }\mu_{22}\to\infty.\]
This completes the proof of Theorem 1.2. \(\square\)
## 4 Positive ground state solutions
Define
\[\mathbb{A}:=\left\{\left(u,v\right)\in H\setminus\left\{\left(0,0\right)\right\}: \left(u,v\right)\text{ is a solution of System }\left(E\right)\text{ with }J\left(u,v\right)<D_{0}\right\},\]
where \(D_{0}=\frac{A\left(p\right)\left(p-1\right)}{2\left(p+1\right)}\left(\frac{2S_{p+1}^{p+1}}{3-p}\right)^{2/\left(p-1\right)}.\) Clearly, \(\mathbb{A}\subset\mathbf{M}\left[D_{0}\right].\) Let
\[\overline{\Lambda}_{0}:=\left\{\begin{array}{ll}\frac{3\sqrt{3}\left(p+1 \right)^{2}\left(p-1\right)^{1/2}\pi\lambda^{3/2}}{8\left(5-p\right)^{2}\left( 3-p\right)^{1/2}}\left(\frac{3-p}{2S_{p+1}^{p+1}}\right)^{2/\left(p-1\right)},&\text{ if }2\leq p<\frac{\sqrt{73}-2}{3},\\ \infty,&\text{ if }\frac{\sqrt{73}-2}{3}\leq p<3.\end{array}\right.\]
Then we have the following results.
**Proposition 4.1**: _Suppose that \(2\leq p<3\) and \(\mu_{ij}>0.\) Then we have \(\left(i\right)\) if \(2<p<3\) and \(\mu_{11}\mu_{22}-\mu_{12}^{2}\geq 0,\) then for each \(\mu_{ii}\in\left(0,\overline{\Lambda}_{0}\right),\) we have \(\mathbb{A}\subset\mathbf{M}^{-};\)\(\left(ii\right)\) if \(p=2,\) then for every \(\mu_{ii}\in\left(0,\overline{\Lambda}_{0}\right),\) we have \(\mathbb{A}\subset\mathbf{M}^{-}.\)_
**Proof.** Let \(\left(u_{0},v_{0}\right)\in\mathbb{A}\) be a nontrivial solution of System \(\left(E\right).\) Then \(\left(u_{0},v_{0}\right)\) satisfies the Nehari identity:
\[\left\|\left(u_{0},v_{0}\right)\right\|_{H}^{2}+\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u_{0}}}u_{0}^{2}+\mu_{22}\phi_{{}_{v_{0}}}v_{0}^{2}-2\mu_{12}\phi_{{}_{v_{0}}}u_{0}^{2}dx-\frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u_{0}+e^{i\theta}v_{0}\right|^{p+1}d\theta dx=0. \tag{4.1}\]
Following the argument of [11, Lemma 3.1], it is not difficult to verify that solution \(\left(u_{0},v_{0}\right)\) also satisfies the following Pohozaev type identity:
\[\frac{1}{2}\left(\int_{\mathbb{R}^{3}}\left|\nabla u_{0}\right|^{2}dx+\int_{\mathbb{R}^{3}}\left|\nabla v_{0}\right|^{2}dx\right)+\frac{3}{2}\left(\int_{\mathbb{R}^{3}}\lambda u_{0}^{2}dx+\int_{\mathbb{R}^{3}}\lambda v_{0}^{2}dx\right) \tag{4.2}\] \[+\frac{5}{4}\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u_{0}}}u_{0}^{2}+\mu_{22}\phi_{{}_{v_{0}}}v_{0}^{2}-2\mu_{12}\phi_{{}_{v_{0}}}u_{0}^{2}dx\] \[= \frac{3}{2\pi\left(p+1\right)}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u_{0}+e^{i\theta}v_{0}\right|^{p+1}d\theta dx.\]
Assume that
\[J\left(u_{0},v_{0}\right) = \frac{1}{2}\left\|\left(u_{0},v_{0}\right)\right\|_{H}^{2}+\frac{1}{4}\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u_{0}}}u_{0}^{2}+\mu_{22}\phi_{{}_{v_{0}}}v_{0}^{2}-2\mu_{12}\phi_{{}_{v_{0}}}u_{0}^{2}dx \tag{4.3}\] \[-\frac{1}{2\pi\left(p+1\right)}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u_{0}+e^{i\theta}v_{0}\right|^{p+1}d\theta dx\] \[= \theta.\]
Using (4.1)-(4.3) (subtracting \(\frac{1}{p+1}\) times (4.1) from (4.3)), we have
\[\theta = \frac{p-1}{2\left(p+1\right)}\left(\int_{\mathbb{R}^{3}}\left| \nabla u_{0}\right|^{2}dx+\int_{\mathbb{R}^{3}}\left|\nabla v_{0}\right|^{2}dx \right)+\frac{p-1}{2\left(p+1\right)}\left(\int_{\mathbb{R}^{3}}\lambda u_{0} ^{2}dx+\int_{\mathbb{R}^{3}}\lambda v_{0}^{2}dx\right) \tag{4.4}\] \[-\frac{3-p}{4\left(p+1\right)}\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{ }_{u_{0}}}u_{0}^{2}+\mu_{22}\phi_{{}_{v_{0}}}v_{0}^{2}-2\mu_{12}\phi_{{}_{v_{0} }}u_{0}^{2}dx\]
\[\int_{\mathbb{R}^{3}}\phi_{v_{0}}v_{0}^{2}dx\leq\frac{16}{3\sqrt{3} \pi\lambda^{\frac{3}{2}}}\left(\int_{\mathbb{R}^{3}}\lambda v_{0}^{2}dx\right)^ {\frac{3}{2}}\left(\int_{\mathbb{R}^{3}}|\nabla v_{0}|^{2}dx\right)^{\frac{1}{2}}. \tag{4.7}\]
We now rewrite the above identities and estimates using the following notation,
\[z_{1}=\int_{\mathbb{R}^{3}}|\nabla u_{0}|^{2}dx+\int_{\mathbb{R}^{3}}|\nabla v_{0}|^{2}dx,\ \ \ z_{2}=\int_{\mathbb{R}^{3}}\lambda u_{0}^{2}dx+\int_{\mathbb{R}^{3}}\lambda v_{0}^{2}dx,\]
\[z_{3}=\int_{\mathbb{R}^{3}}\phi_{{}_{u_{0}}}u_{0}^{2}dx,\ \ \ z_{4}=\int_{\mathbb{R}^{3}}\phi_{{}_{v_{0}}}v_{0}^{2}dx,\ \ \ z_{5}=\int_{\mathbb{R}^{3}}\phi_{{}_{v_{0}}}u_{0}^{2}dx,\ \ \ z_{6}=\frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left|u_{0}+e^{i\theta}v_{0}\right|^{p+1}d\theta dx.\]
With this notation, the identities (4.1)-(4.3) read as the linear system
\[\left\{\begin{array}{l}z_{1}+z_{2}+\mu_{11}z_{3}+\mu_{22}z_{4}-2\mu_{12}z_{5}-z_{6}=0,\\ \frac{1}{2}z_{1}+\frac{3}{2}z_{2}+\frac{5}{4}\left(\mu_{11}z_{3}+\mu_{22}z_{4}-2\mu_{12}z_{5}\right)-\frac{3}{p+1}z_{6}=0,\\ \frac{1}{2}z_{1}+\frac{1}{2}z_{2}+\frac{1}{4}\left(\mu_{11}z_{3}+\mu_{22}z_{4}-2\mu_{12}z_{5}\right)-\frac{1}{p+1}z_{6}=\theta,\end{array}\right. \tag{4.8}\]
while Lemma 2.3\(\left(ii\right)\) applied to \(u_{0}\) and \(v_{0}\) yields the constraint
\[z_{3}^{2}+z_{4}^{2}\leq\frac{256}{27\pi^{2}\lambda^{3}}z_{2}^{3}z_{1}. \tag{4.9}\]
Next, we want to show that
\[-\left(p-1\right)\left(z_{1}+z_{2}\right)+\left(3-p\right)\mu_{11}z_{3}+\left(3-p \right)\mu_{22}z_{4}-2\left(3-p\right)\mu_{12}z_{5}<0,\text{ for all }\mu_{ii}\in\left(0,\overline{\Lambda}_{0}\right). \tag{4.10}\]
The general solution of the linear system (4.8) is given by
\[\left[\begin{array}{c}z_{1}\\ z_{2}\\ z_{3}\\ z_{4}\\ z_{5}\\ z_{6}\end{array}\right]=\theta\left[\begin{array}{c}1\\ 3\\ 0\\ 0\\ \frac{2}{\mu_{12}}\\ 0\end{array}\right]+s\left[\begin{array}{c}0\\ 0\\ 0\\ \frac{1}{\mu_{22}}\\ \frac{1}{2\mu_{12}}\\ 0\end{array}\right]+t\left[\begin{array}{c}0\\ 0\\ \frac{1}{\mu_{11}}\\ 0\\ \frac{1}{2\mu_{12}}\\ 0\end{array}\right]+w\left[\begin{array}{c}p-1\\ -2(p-2)\\ 0\\ 0\\ \frac{-(p-1)}{\mu_{12}}\\ p+1\end{array}\right], \tag{4.11}\]
where \(s,\ t,\ w\in\mathbb{R}\). Since \(2<p<3\) and \(\mu_{ij}>0\) for \(i,j=1,2\), the positivity \(z_{i}>0\) for \(i=1,\cdots,6\) is, in view of (4.11), equivalent to
\[\left\{\begin{array}{l}3\theta-2w(p-2)>0,\\ 4\theta+s+t-2w(p-1)>0,\\ s>0,\ \ t>0,\ \ w>0.\end{array}\right. \tag{4.12}\]
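As a consistency check of (4.11), one may verify that each of the directions multiplying \(s\), \(t\) and \(w\) lies in the kernel of the homogeneous system associated with (4.8); for the direction multiplying \(w\), for instance, the first two equations of (4.8) evaluate to
\[\left(p-1\right)-2\left(p-2\right)+2\left(p-1\right)-\left(p+1\right)=0\quad\text{and}\quad\frac{p-1}{2}-3\left(p-2\right)+\frac{5}{2}\left(p-1\right)-3=0,\]
and the third equation is checked in the same way.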
To verify (4.10) is true, we substitute (4.11) into (4.10) to give
\[-(p-1)(4\theta+(3-p)w)+(3-p)\left[t+2(p-1)w+s-2(2\theta+t/2+s/2)\right]<0. \tag{4.13}\]
This leads to
\[w<\frac{8}{(p-1)(3-p)}\theta. \tag{4.14}\]
From (4.12), we have
\[w<\frac{3}{2p-4}\theta. \tag{4.15}\]
This is a necessary condition for guaranteeing \(z_{2}>0\). Note that, comparing (4.14) and (4.15), we obtain that if \(\frac{\sqrt{73}-2}{3}\leq p<3\) and (4.15) holds, then the condition (4.13) is satisfied (indeed, \(\frac{3}{2(p-2)}\leq\frac{8}{(p-1)(3-p)}\) precisely when \(3p^{2}+4p-23\geq 0\), that is, when \(p\geq\frac{\sqrt{73}-2}{3}\)), meaning that (4.10) is true for all \(\mu_{ij}>0.\) Let \(c_{\lambda}=\frac{256}{27\pi^{2}\lambda^{3}}.\) Applying the constraint (4.9), we substitute (4.5) and (4.11) into (4.9) to obtain
\[\frac{1}{\mu_{11}^{2}}t^{2}+\frac{1}{\mu_{22}^{2}}s^{2}\leq c_{\lambda}\left( \frac{5-p}{p-1}\theta\right)^{3}\left(\theta+(p-1)w\right). \tag{4.16}\]
From the second inequality of (4.12), we have
\[w<\frac{2}{(p-1)}\theta+\frac{1}{2(p-1)}(s+t). \tag{4.17}\]
This is then used in (4.16) to give
\[\frac{1}{\mu_{11}^{2}}t^{2}+\frac{1}{\mu_{22}^{2}}s^{2}<c_{\lambda}\left( \frac{5-p}{p-1}\theta\right)^{3}\left(3\theta+\frac{1}{2}(s+t)\right).\]
Moreover, if \(\mu_{ii}\in(0,\overline{\Lambda}_{0})\), then
\[t^{2}+s^{2}-\frac{c_{\lambda}}{2}\left(\frac{5-p}{p-1}\right)^{3}\overline{ \Lambda}_{0}^{2}\theta^{3}(s+t)-3c_{\lambda}\left(\frac{5-p}{p-1}\right)^{3} \overline{\Lambda}_{0}^{2}\theta^{4}<0. \tag{4.18}\]
Since \(t^{2}+s^{2}\geq\frac{1}{2}\left(s+t\right)^{2}\), we have
\[\left(s+t\right)^{2}-\overline{\Lambda}_{0}^{2}c_{\lambda}\left(\frac{5-p}{p-1} \right)^{3}\theta^{3}(s+t)-6\overline{\Lambda}_{0}^{2}c_{\lambda}\left(\frac{5- p}{p-1}\right)^{3}\theta^{4}<0. \tag{4.19}\]
Substituting the value of \(\overline{\Lambda}_{0}\) and \(c_{\lambda}=\frac{256}{27\pi^{2}\lambda^{3}}\) into (4.19), and using \(\theta<D_{0}\), we can conclude that
\[l^{2}-\frac{(p+1)^{2}}{(3-p)(5-p)}\theta l-\frac{6(p+1)^{2}}{(3-p)(5-p)}\theta^ {2}<0,\text{ where }l=s+t.\]
Since \(l\) is positive, we must have
\[0<l<l_{0}, \tag{4.20}\]
where
\[l_{0} =\frac{(p+1)^{2}\theta}{(3-p)(5-p)}+\theta\sqrt{\frac{(p+1)^{4}}{ (3-p)^{2}(5-p)^{2}}+\frac{24(p+1)^{2}}{(3-p)(5-p)}}\] \[=\frac{(p+1)^{2}\theta}{(3-p)(5-p)}+\frac{(p+1)\left(19-5p\right) \theta}{(3-p)(5-p)}\] \[=\frac{4(p+1)\theta}{3-p}.\]
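The middle equality uses the elementary identity
\[\left(p+1\right)^{2}+24\left(3-p\right)\left(5-p\right)=25p^{2}-190p+361=\left(19-5p\right)^{2},\]
so that, for \(2\leq p<3\), \(\sqrt{\frac{\left(p+1\right)^{4}}{\left(3-p\right)^{2}\left(5-p\right)^{2}}+\frac{24\left(p+1\right)^{2}}{\left(3-p\right)\left(5-p\right)}}=\frac{\left(p+1\right)\left(19-5p\right)}{\left(3-p\right)\left(5-p\right)}.\)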
Thus, for \(\mu_{ii}\in(0,\overline{\Lambda}_{0})\) and \(2<p<\frac{\sqrt{73}-2}{3}\), it follows from (4.20) that
\[0<s+t<\frac{4(p+1)\theta}{3-p},\]
implying
\[w<\frac{2\theta}{(p-1)}+\frac{2(p+1)\theta}{(p-1)(3-p)}=\frac{8\theta}{(p-1)( 3-p)}.\]
This shows that the inequality (4.10) holds. Therefore, \(\mathbb{A}\subset\mathbf{M}^{-}\).
\((ii)\) Since \(p=2\), by (4.8),
\[\theta=\frac{p-1}{5-p}\left(\int_{\mathbb{R}^{3}}\lambda u_{0}^{2}dx+\int_{ \mathbb{R}^{3}}\lambda v_{0}^{2}dx\right)>0. \tag{4.21}\]
Then, by (4.21) and using the argument similar to that in part \(\left(i\right),\) we obtain that
\[-\left(p-1\right)\left(z_{1}+z_{2}\right)+\left(3-p\right)\mu_{11}z_{3}+\left( 3-p\right)\mu_{22}z_{4}-2\left(3-p\right)\mu_{12}z_{5}<0,\text{ for all }\mu_{ii}\in\left(0,\overline{\Lambda}_{0}\right).\]
This completes the proof. \(\square\)
**Remark 4.2**: _Clearly, \(\overline{\Lambda}_{0}>\Lambda_{0},\text{ for all }2\leq p\leq\frac{\sqrt{73}-2}{3}.\)_
**We are now ready to prove Theorem 1.3:** \((i)\) Let \(\mu_{ii}\in\left(0,\Lambda_{0}\right).\) Since \(\overline{\Lambda}_{0}>\Lambda_{0},\) by Theorem 1.2, System \((E)\) has a vectorial solution \(\left(u_{0}^{(1)},v_{0}^{(1)}\right)\in\mathbf{M}^{(1)}\) with
\[J\left(u_{0}^{(1)},v_{0}^{(1)}\right)=\alpha^{-}=\inf_{u\in\mathbf{M}^{-}}J \left(u,v\right)<\beta_{\mu_{ii}}^{\infty}.\]
Since \(\overline{\Lambda}_{0}>\Lambda_{0},\)\(\alpha^{-}<D_{0}\) for \(\mu_{ii}\in\left(0,\Lambda_{0}\right)\) and \(\mu_{11}\mu_{22}-\mu_{12}^{2}\geq 0,\) by Proposition 4.1, we can conclude that
\[J\left(u_{0}^{(1)},v_{0}^{(1)}\right)=\alpha^{-}=\inf_{u\in\mathbb{A}}J\left(u,v\right),\]
which implies that \(\left(u_{0}^{(1)},v_{0}^{(1)}\right)\) is a positive ground state solution of System \((E)\).
\((ii)\) The proof is similar to that of part \((i)\) and is therefore omitted here.
## 5 Nonexistence of nontrivial solutions
**We are now ready to prove Theorem 1.4.** Suppose that \((u,v)\in H\) is a nontrivial solution of System \((E).\) Then
\[\left\|(u,v)\right\|_{H}^{2}+\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_ {22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx-\frac{1}{2\pi}\int_{ \mathbb{R}^{3}}\int_{0}^{2\pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx=0 \tag{5.1}\]
or
\[0 = \int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+\left|\nabla v\right|^{2}+\lambda u^{2}+\lambda v^{2}dx+\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx \tag{5.2}\] \[-\frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left(u^{2}+2uv\cos\theta+v^{2}\right)^{\frac{p+1}{2}}d\theta dx\]
We note that
\[\frac{1}{2\pi}\int_{0}^{2\pi}\left(u^{2}+2uv\cos\theta+v^{2}\right)^{\frac{p+1 }{2}}d\theta\leq\left(\left|u\right|+\left|v\right|\right)^{p+1}\leq 2^{p} \left(\left|u\right|^{p+1}+\left|v\right|^{p+1}\right).\]
By the definition of \(\phi_{w}\) (testing \(-\Delta\phi_{w}=w^{2}\) against \(\phi_{w}\)), we have that
\[\int_{\mathbb{R}^{3}}\phi_{w}w^{2}dx=\int_{\mathbb{R}^{3}}\left|\nabla\phi_{w}\right|^{2}dx\text{ for }w=u,v.\]
Moreover, by inequality (13) in [14],
\[\mu_{11}\left(\mu_{11}\left|\nabla\phi_{u}\right|^{2}+\mu_{22}\left|\nabla \phi_{v}\right|^{2}-2\mu_{12}\nabla\phi_{u}\cdot\nabla\phi_{v}\right)\geq \left(\mu_{11}\mu_{22}-\mu_{12}^{2}\right)\left|\nabla\phi_{v}\right|^{2}\]
and
\[\mu_{22}\left(\mu_{11}\left|\nabla\phi_{u}\right|^{2}+\mu_{22}\left|\nabla\phi_{v}\right|^{2}-2\mu_{12}\nabla\phi_{u}\cdot\nabla\phi_{v}\right)\geq\left(\mu_{11}\mu_{22}-\mu_{12}^{2}\right)\left|\nabla\phi_{u}\right|^{2},\]
this implies that
\[\mu_{11}\left|\nabla\phi_{u}\right|^{2}+\mu_{22}\left|\nabla\phi_{v}\right|^{2 }-2\mu_{12}\nabla\phi_{u}\cdot\nabla\phi_{v}\geq\frac{\mu_{11}\mu_{22}-\mu_{1 2}^{2}}{\mu_{11}+\mu_{22}}\left(\left|\nabla\phi_{u}\right|^{2}+\left|\nabla \phi_{v}\right|^{2}\right). \tag{5.3}\]
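For the reader's convenience, the first of the two quadratic-form inequalities above follows from expanding a square:
\[\mu_{11}\left(\mu_{11}\left|\nabla\phi_{u}\right|^{2}+\mu_{22}\left|\nabla\phi_{v}\right|^{2}-2\mu_{12}\nabla\phi_{u}\cdot\nabla\phi_{v}\right)-\left(\mu_{11}\mu_{22}-\mu_{12}^{2}\right)\left|\nabla\phi_{v}\right|^{2}=\left|\mu_{11}\nabla\phi_{u}-\mu_{12}\nabla\phi_{v}\right|^{2}\geq 0;\]
the second is obtained in the same way with the roles of \(u,\mu_{11}\) and \(v,\mu_{22}\) interchanged, and adding the two and dividing by \(\mu_{11}+\mu_{22}\) gives (5.3).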
On the other hand, we deduce that
\[2\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\right)^{1/2}\int_{\mathbb{R}^{3}}\left|w\right|^{3}dx = 2\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\right)^{1/2}\int_{\mathbb{R}^{3}}\left(-\Delta\phi_{{}_{w}}\right)\left|w\right|dx \tag{5.4}\] \[= 2\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\right)^{1/2}\int_{\mathbb{R}^{3}}\nabla\phi_{{}_{w}}\cdot\nabla\left|w\right|dx\] \[\leq \int_{\mathbb{R}^{3}}\left|\nabla w\right|^{2}dx+\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\int_{\mathbb{R}^{3}}\left|\nabla\phi_{w}\right|^{2}dx\] \[= \int_{\mathbb{R}^{3}}\left|\nabla w\right|^{2}dx+\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\int_{\mathbb{R}^{3}}\phi_{{}_{w}}w^{2}dx.\]
Thus, by \((5.2)-(5.4)\), we can conclude that
\[0 = \int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+\left|\nabla v\right| ^{2}+\lambda u^{2}+\lambda v^{2}dx+\int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u ^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2\mu_{12}\phi_{{}_{v}}u^{2}dx \tag{5.5}\] \[-\frac{1}{2\pi}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left(u^{2}+2uv \cos\theta+v^{2}\right)^{\frac{p+1}{2}}d\theta dx\] \[\geq \int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+\left|\nabla v \right|^{2}+\lambda u^{2}+\lambda v^{2}dx+\frac{\mu_{11}\mu_{22}-\mu_{12}^{2} }{\mu_{11}+\mu_{22}}\left(\int_{\mathbb{R}^{3}}\phi_{{}_{u}}u^{2}dx+\int_{ \mathbb{R}^{3}}\phi_{{}_{v}}v^{2}dx\right)\] \[-2^{p}\int_{\mathbb{R}^{3}}\left|u\right|^{p+1}dx-2^{p}\int_{ \mathbb{R}^{3}}\left|v\right|^{p+1}dx\] \[\geq \int_{\mathbb{R}^{3}}u^{2}\left(\lambda-2^{p}\left|u\right|^{p-1 }+2\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\right)^{1/2} \left|u\right|\right)dx\] \[+\int_{\mathbb{R}^{3}}v^{2}\left(\lambda-2^{p}\left|v\right|^{p-1 }+2\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\right)^{1/2} \left|v\right|\right)dx.\]
If \(1<p<2,\) then by Lemma 2.1 and (5.5), we can conclude that \(u=v\equiv 0\) for all \(\lambda,\mu_{ij}>0\) with
\[\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}>\frac{\left(p-1\right) ^{2}}{4}\left[\frac{2^{p}\left(2-p\right)^{2-p}}{\lambda^{2-p}}\right]^{2/(p- 1)}.\]
If \(p=2,\) then by (5.5), we can conclude that \(u=v\equiv 0\) for all \(\lambda,\mu_{ij}>0\) with
\[\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}>4.\]
This completes the proof.
## 6 Existence of two positive solutions
Following the idea in [18] and inequality (5.3), we study the existence of two positive solutions of System \((E)\) for \(1<p<2\) and \(\mu_{11}\mu_{22}-\mu_{12}^{2}>0.\) Then we have the following result.
**Proposition 6.1**: _Suppose that \(1<p<2\) and \(\mu_{ij}>0.\) If \(\mu_{11}\mu_{22}-\mu_{12}^{2}>0,\) then we have \((i)\)\(J\) is bounded from below and coercive in \(H_{r};\)\((ii)\)\(J\) satisfies \((PS)\) condition in \(H_{r}.\)_
**Proof.**\((i)\) Since \(\mu_{ii}>0\) and \(\mu_{11}\mu_{22}-\mu_{12}^{2}>0,\)
\[\frac{1}{2\sqrt{2}}\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu _{11}+\mu_{22}}\right)^{1/2}\int_{\mathbb{R}^{3}}\left|w\right|^{3}dx = \frac{1}{2\sqrt{2}}\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{ \mu_{11}+\mu_{22}}\right)^{1/2}\int_{\mathbb{R}^{3}}\left(-\Delta\phi_{{}_{w}} \right)\left|w\right|dx \tag{6.1}\] \[= \frac{1}{2\sqrt{2}}\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{ \mu_{11}+\mu_{22}}\right)^{1/2}\int_{\mathbb{R}^{3}}\nabla\phi_{{}_{w}}\cdot \nabla\left|w\right|dx\] \[\leq \frac{1}{4}\int_{\mathbb{R}^{3}}\left|\nabla w\right|^{2}dx+\frac {\mu_{11}\mu_{22}-\mu_{12}^{2}}{8\left(\mu_{11}+\mu_{22}\right)}\int_{ \mathbb{R}^{3}}\left|\nabla\phi_{{}_{w}}\right|^{2}dx\] \[= \frac{1}{4}\int_{\mathbb{R}^{3}}\left|\nabla w\right|^{2}dx+\frac {\mu_{11}\mu_{22}-\mu_{12}^{2}}{8\left(\mu_{11}+\mu_{22}\right)}\int_{ \mathbb{R}^{3}}\phi_{{}_{w}}w^{2}dx\]
for \(w=u,v.\) We note that
\[\frac{1}{2\pi}\int_{0}^{2\pi}\left(u^{2}+2uv\cos\theta+v^{2}\right)^{\frac{p+1}{2 }}d\theta\leq\left(\left|u\right|+\left|v\right|\right)^{p+1}\leq 2^{p}\left( \left|u\right|^{p+1}+\left|v\right|^{p+1}\right). \tag{6.2}\]
Then by inequalities \((5.3)\), \((6.1)\) and \((6.2)\),
\[J\left(u,v\right) = \frac{1}{2}\left\|\left(u,v\right)\right\|_{H}^{2}+\frac{1}{4} \int_{\mathbb{R}^{3}}\mu_{11}\phi_{{}_{u}}u^{2}+\mu_{22}\phi_{{}_{v}}v^{2}-2 \mu_{12}\phi_{{}_{v}}u^{2}dx \tag{6.3}\] \[-\frac{1}{2\pi\left(p+1\right)}\int_{\mathbb{R}^{3}}\int_{0}^{2 \pi}\left|u+e^{i\theta}v\right|^{p+1}d\theta dx\] \[\geq \frac{1}{4}\int_{\mathbb{R}^{3}}\left|\nabla u\right|^{2}+\lambda u ^{2}dx+\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{8\left(\mu_{11}+\mu_{22}\right)} \int_{\mathbb{R}^{3}}\phi_{{}_{u}}u^{2}dx\] \[+\frac{1}{4}\int_{\mathbb{R}^{3}}\lambda u^{2}+\frac{1}{2\sqrt{2 }}\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\right)^{1/2} \left|u\right|^{3}-\frac{2^{p}}{p+1}\left|u\right|^{p+1}dx\] \[+\frac{1}{4}\int_{\mathbb{R}^{3}}\left|\nabla v\right|^{2}+ \lambda v^{2}dx+\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{8\left(\mu_{11}+\mu_{22} \right)}\int_{\mathbb{R}^{3}}\phi_{{}_{v}}v^{2}dx\] \[+\frac{1}{4}\int_{\mathbb{R}^{3}}\lambda v^{2}+\frac{1}{2\sqrt{2 }}\left(\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{\mu_{11}+\mu_{22}}\right)^{1/2} \left|v\right|^{3}-\frac{2^{p}}{p+1}\left|v\right|^{p+1}dx.\]
Thus, by \((6.3)\) and applying the argument in Ruiz [18, Theorem 4.3], \(J\) is coercive on \(H_{r}\) and there exists \(M>0\) such that
\[\inf_{\left(u,v\right)\in H_{r}}J(u,v)\geq-M.\]
\((ii)\) By [14, Proposition 6.1\((ii)\)]. \(\square\)
Assume that \(w_{r,\mu}^{(1)}\) and \(w_{r,\mu}^{(2)}\) are positive radial solutions of Equation \(\left(SP_{\mu}\right)\) as in Theorem 2.4, that is
\[I_{\mu}\left(w_{r,\mu}^{(1)}\right)=\beta_{r,\mu}^{(1)}:=\inf_{u\in\mathbf{N}_ {\mu}^{-}\cap H_{r}^{1}(\mathbb{R}^{3})}I_{\mu}\left(u\right)>0\]
and
\[I_{\mu}\left(w_{r,\mu}^{(2)}\right)=\beta_{r,\mu}^{(2)}:=\inf_{u\in\mathbf{N}_ {\mu}^{+}\cap H_{r}^{1}(\mathbb{R}^{3})}I_{\mu}\left(u\right)=\inf_{u\in H_{r }^{1}(\mathbb{R}^{3})}I_{\mu}\left(u\right)<0.\]
Then we have the following results.
**Lemma 6.2**: _Suppose that \(1<p<2\) and \(\mu_{ij}>0.\) If \(0<\mu_{11}<\Lambda_{0}\) and \(\mu_{11}\mu_{22}-\mu_{12}^{2}>0,\) then we have \(\left(i\right)\)\(J\left(\sqrt{s_{\min}}w_{r,\mu_{11}}^{(2)},\sqrt{1-s_{\min}}w_{r,\mu_{11}}^{(2)} \right)<I_{\mu}\left(w_{r,\mu_{11}}^{(2)}\right)=\beta_{r,\mu_{11}}^{(2)}<0;\)\(\left(ii\right)\) Let \(\left(u_{0},v_{0}\right)\) be a critical point of \(J\) on \(\mathbf{M}^{+}\cap H_{r}.\) Then we have \(J\left(u_{0},v_{0}\right)\geq\beta_{r,\mu}^{(2)}\) if either \(u_{0}=0\) or \(v_{0}=0.\)_
**Proof.**\((i)\) Since
\[J\left(\sqrt{s_{\min}}w_{r,\mu_{11}}^{(2)},\sqrt{1-s_{\min}}w_{r,\mu_{11}}^{(2)}\right)\] \[= \frac{1}{2}\left\|w_{r,\mu_{11}}^{(2)}\right\|_{H^{1}}^{2}+\frac{\mu_{11}\mu_{22}-\mu_{12}^{2}}{4\left(\mu_{11}+\mu_{22}+2\mu_{12}\right)}\int_{\mathbb{R}^{3}}\phi_{w_{r,\mu_{11}}^{(2)}}\left|w_{r,\mu_{11}}^{(2)}\right|^{2}dx\] \[-\frac{1}{2\pi\left(p+1\right)}\int_{\mathbb{R}^{3}}\int_{0}^{2\pi}\left(1+2\sqrt{s_{\min}\left(1-s_{\min}\right)}\cos\theta\right)^{\left(p+1\right)/2}\left|w_{r,\mu_{11}}^{(2)}\right|^{p+1}d\theta dx,\]
and
\[\frac{1}{2\pi}\int_{0}^{2\pi}\left(1+2\sqrt{s_{\min}\left(1-s_{\min} \right)}\cos\theta\right)^{\left(p+1\right)/2}d\theta > \left(\frac{1}{2\pi}\int_{0}^{2\pi}1+2\sqrt{s_{\min}\left(1-s_{ \min}\right)}\cos\theta d\theta\right)^{\left(p+1\right)/2}\] \[= 1,\]
we have
\[J\left(\sqrt{s_{\min}}w_{r,\mu_{11}}^{\left(2\right)},\sqrt{1-s_{\min}}w_{r,\mu_{11}}^{\left(2\right)}\right)\] \[< \frac{1}{2}\left\|w_{r,\mu_{11}}^{\left(2\right)}\right\|_{H^{1}}^{2}+\frac{\mu_{11}}{4}\int_{\mathbb{R}^{3}}\phi_{w_{r,\mu_{11}}^{\left(2\right)}}\left|w_{r,\mu_{11}}^{\left(2\right)}\right|^{2}dx-\frac{1}{p+1}\int_{\mathbb{R}^{3}}\left|w_{r,\mu_{11}}^{\left(2\right)}\right|^{p+1}dx\] \[= I_{\mu}\left(w_{r,\mu_{11}}^{\left(2\right)}\right)=\beta_{r,\mu_{11}}^{\left(2\right)}.\]
\(\left(ii\right)\) Without loss of generality, we may assume that \(v_{0}=0.\) Then
\[J\left(u_{0},0\right)=I_{\mu_{11}}\left(u_{0}\right)=\frac{1}{2}\left\|u_{0} \right\|_{H^{1}}^{2}+\frac{\mu_{11}}{4}\int_{\mathbb{R}^{3}}\phi_{u_{0}}u_{0} ^{2}dx-\frac{1}{p+1}\int_{\mathbb{R}^{3}}\left|u_{0}\right|^{p+1}dx\]
and
\[h_{\left(u,0\right)}^{\prime\prime}\left(1\right)=f_{u}^{\prime\prime}\left( 1\right)=-2\left\|u_{0}\right\|_{H^{1}}^{2}+\left(3-p\right)\int_{\mathbb{R} ^{3}}\left|u_{0}\right|^{p+1}dx>0,\]
implying that \(u_{0}\in\mathbf{N}_{\mu_{11}}^{+}\cap H_{r}^{1}\left(\mathbb{R}^{3}\right).\) Thus \(J\left(u_{0},0\right)=I_{\mu_{11}}\left(u_{0}\right)\geq\beta_{r,\mu}^{\left( 2\right)}.\) This completes the proof. \(\square\)
**We are now ready to prove Theorem 1.5.** By Proposition 6.1 \(\left(i\right)\) and Lemma 6.2, we can apply the Ekeland variational principle [13] and the Palais criticality principle [17] to obtain a sequence \(\left\{\left(u_{n}^{\left(2\right)},v_{n}^{\left(2\right)}\right)\right\}\subset H_{r}\setminus\left\{\left(0,0\right)\right\}\) such that
\[J(u_{n}^{\left(2\right)},v_{n}^{\left(2\right)})=\inf_{\left(u,v\right)\in H_{r}}J\left(u,v\right)+o(1)\text{ and }J^{\prime}(u_{n}^{\left(2\right)},v_{n}^{\left(2\right)})=o(1)\text{ in }H^{-1}\]
and
\[\inf_{\left(u,v\right)\in H_{r}}J\left(u,v\right)<\beta_{r,\mu_{11}}^{\left(2 \right)}<0,\]
Then by Proposition 6.1 \(\left(ii\right)\), there exists a vectorial solution \(\left(u_{0}^{\left(2\right)},v_{0}^{\left(2\right)}\right)\in H_{r}\setminus\left\{\left(0,0\right)\right\}\) such that
\[\left(u_{n}^{\left(2\right)},v_{n}^{\left(2\right)}\right) \rightarrow \left(u_{0}^{\left(2\right)},v_{0}^{\left(2\right)}\right)\text{ strongly in }H_{r};\] \[J(u_{0}^{\left(2\right)},v_{0}^{\left(2\right)}) = \inf_{\left(u,v\right)\in H_{r}}J\left(u,v\right).\]
Since
\[J(\left|u_{0}^{\left(2\right)}\right|,\left|v_{0}^{\left(2\right)}\right|)=J(u_{0}^{\left(2\right)},v_{0}^{\left(2\right)})=\inf_{\left(u,v\right)\in H_{r}}J\left(u,v\right),\]
we may assume that \(\left(u_{0}^{\left(2\right)},v_{0}^{\left(2\right)}\right)\) is a positive critical point of \(J\) on \(H_{r}\). Moreover, by Lemma 6.2 and \(\inf_{\left(u,v\right)\in H_{r}}J\left(u,v\right)<\beta_{r,\mu_{11}}^{\left(2\right)}<0\), we have \(u_{0}^{\left(2\right)}\neq 0\) and \(v_{0}^{\left(2\right)}\neq 0.\) Combining this result with Theorem 1.2, we conclude that System \(\left(E\right)\) has two positive solutions \(\left(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}\right)\) and \(\left(u_{0}^{\left(2\right)},v_{0}^{\left(2\right)}\right)\) such that
\[J(u_{0}^{\left(2\right)},v_{0}^{\left(2\right)})<0<\frac{p-1}{4\left(p+1\right) }C_{\mu_{12}}^{2}<\alpha^{-}=J(u_{0}^{\left(1\right)},v_{0}^{\left(1\right)}).\]
This completes the proof.
## Acknowledgments
T.F. Wu was supported by the National Science and Technology Council, Taiwan (Grant No. 112-2115-M-390-001-MY3).
|
2304.00046 | Accelerating exploration and representation learning with offline
pre-training | Sequential decision-making agents struggle with long horizon tasks, since
solving them requires multi-step reasoning. Most reinforcement learning (RL)
algorithms address this challenge by improved credit assignment, introducing
memory capability, altering the agent's intrinsic motivation (i.e. exploration)
or its worldview (i.e. knowledge representation). Many of these components
could be learned from offline data. In this work, we follow the hypothesis that
exploration and representation learning can be improved by separately learning
two different models from a single offline dataset. We show that learning a
state representation using noise-contrastive estimation and a model of
auxiliary reward separately from a single collection of human demonstrations
can significantly improve the sample efficiency on the challenging NetHack
benchmark. We also ablate various components of our experimental setting and
highlight crucial insights. | Bogdan Mazoure, Jake Bruce, Doina Precup, Rob Fergus, Ankit Anand | 2023-03-31T18:03:30Z | http://arxiv.org/abs/2304.00046v1 | # Accelerating exploration and representation learning with offline pre-training
###### Abstract
Sequential decision-making agents struggle with long horizon tasks, since solving them requires multi-step reasoning. Most reinforcement learning (RL) algorithms address this challenge by improved credit assignment, introducing memory capability, altering the agent's intrinsic motivation (i.e. exploration) or its worldview (i.e. knowledge representation). Many of these components could be learned from offline data. In this work, we follow the hypothesis that exploration and representation learning can be improved by separately learning two different models from a single offline dataset. We show that learning a state representation using noise-contrastive estimation and a model of auxiliary reward separately from a single collection of human demonstrations can significantly improve the sample efficiency on the challenging NetHack benchmark. We also ablate various components of our experimental setting and highlight crucial insights.
subtasks such as the NeurIPS'21 challenge, completing the game is only possible for symbolic agents or human experts. Therefore, NLE provides an excellent opportunity for bridging the performance gap between RL agents and humans, unlike other common benchmarks such as the Arcade Learning Environment (Bellemare et al., 2013).
Previous work, Explore-Like-Experts (ELE, Anonymous, 2023), showed that many of the sparse reward tasks in NetHack can be solved by learning a simple scalar function which predicts the expert progress, or temporal distance, between two observations in any trajectory from the expert data. While sparser tasks are solved by introducing this additional reward, the performance on dense reward tasks like the NeurIPS'21 challenge and scout suffers in comparison to the baseline. Secondly, we also hypothesize that ELE discards useful information contained in expert trajectories by compressing the dataset into a single scalar-valued function. We address these issues by using the same expert data to learn representations by contrastive pre-training and, in conjunction with ELE, develop an agent that not only has significantly better sample efficiency but also better performance than ELE and standard imitation learning baselines on a spectrum of tasks ranging from sparse to dense in a challenging domain like NetHack. This work illustrates how one can use the same dataset to learn representations and an auxiliary reward, thereby achieving better sample efficiency and performance.
Specifically, we show that a simple offline pre-training scheme based on contrastive learning (Eysenbach et al., 2022; Mazoure et al., 2022) can be used in conjunction with ELE, i.e., learning a progress reward (Anonymous, 2023) on the same expert data. This not only improves the sample efficiency and base performance of Muesli, a strong RL baseline (Hessel et al., 2021), but also outperforms using representation learning or the progress reward alone on NetHack in a wide variety of tasks.
## 2 Related works
### Auxiliary tasks in RL
While one of the main goals in RL problems is to find a policy which maximizes expected reward, it can often be challenging due to a multitude of factors, e.g. sparse reward signal, intractably large policy space, long task horizon, etc. Since the problem is extremely hard in its current formulation, it is possible to augment it with external learning signals, which are notably specified via auxiliary downstream tasks. Auxiliary learning objectives have been widely studied in the literature, in both online (Jaderberg et al., 2016; Stooke et al., 2021) and offline settings (Schwarzer et al., 2021; Yang and Nachum, 2021). They can be used to equip RL agents with desirable inductive biases, e.g. disentanglement (Higgins et al., 2017), alignment and uniformity (Wang and Isola, 2020) or predictivity of future observations (Jaderberg et al., 2016; Mazoure et al., 2020).
World models provide one natural pre-training objective for RL agents, allowing it to capture crucial parameters of the environment such as transition dynamics, reward function and initial state distribution. Single-step world models such as DreamerV3 (Hafner et al., 2023) and Director (Hafner et al., 2022) equip RL agents with single-step transition and reward models that can then be used for planning. However, training such models from offline data is non-trivial and costly; using them in online settings is computationally inefficient as it requires unrolling the sequence of latent states and actions in an autoregressive manner. On the other hand, infinite-horizon models such as \(\gamma\)-models (Janner et al., 2020) or contrastive value functions (Eysenbach et al., 2022; Mazoure et al., 2022) are harder to learn, but directly capture the probability of observing a future state when rolling out from the current state.
### Exploration
Some of the inductive biases for challenging tasks can be learned from offline demonstrations, e.g. human interactions with the environment (Reid et al., 2022; Fan et al., 2022; Baker et al., 2022). In hard tasks with sparse rewards and long horizons, agents need to rely on other forms of supervision, i.e. intrinsic motivation. Intrinsic motivation for guided exploration has been an active area of research in the past years, encompassing count-based exploration (Bellemare et al., 2016; Tang et al., 2017), knowledge gathering (Kim et al., 2018; Zhang et al., 2021) and curiosity (Burda et al., 2018; Raileanu and Rocktaschel, 2020). However, curiosity-based exploration from tabula rasa is still a hard problem in some tasks (e.g. NetHack), and hence warrants the use of learned auxiliary rewards from data.
### Learning from demonstrations
In domains where RL agents have not yet achieved human-level performance, learning can be accelerated by training on demonstrations of experts (symbolic agents, humans, etc.). Classical imitation learning methods like Behavior Cloning (Pomerleau, 1988) are among the most effective and widely used approaches when large quantities of data are available in complex domains like Minecraft (Baker et al., 2022), computer control (Humphreys et al., 2022), etc. Other approaches like GAIL (Ho and Ermon, 2016) learn a discriminator to distinguish expert trajectories from agent trajectories, which can be used as a reward. These methods have been further extended to work on expert trajectories without actions, as in BCO (Torabi et al., 2018) and GAIfO (Torabi et al., 2018). Another generative approach, FORM (Jaegle et al., 2021), augments the environment reward with an additional reward by learning a forward generative model of transition dynamics from offline data and rewarding transitions under the learned model. In scenarios of unlabeled (that contain
no actions) datasets like NetHack, experts can be used to annotate existing datasets without action information, e.g., adding action information based on external sources such as in MineDojo or VPT (Fan et al., 2022; Baker et al., 2022). However, such labeling schemes can involve collecting data from human experts or training complex RL agents from scratch, both of which are prohibitively expensive in many scenarios. Alternatively, demonstrations can be used to guide RL agents through intrinsic motivation using learned heuristic functions. For example, ELE (Anonymous, 2023) learns a heuristic function quantifying temporal progress in expert trajectories. It outperforms prior state-of-the-art on 7 NetHack tasks with sparse rewards, but still does not solve the game itself. We hypothesize that the main drawback of ELE is that it reduces the pre-training dataset to a single scalar-valued function, and does not extract the most information out of the data. Specifically, the degree to which a dataset heuristic is beneficial for a given online task depends on its alignment with the optimal value function in that MDP (Cheng et al., 2021). In this work, we focus on using offline data which does not contain any actions and hence limit our comparison to ELE, BCO, GAIfO and FORM as standard imitation learning baselines.
## 3 Preliminaries
### Reinforcement learning
The classical reinforcement learning setting assumes that the environment follows a Markov decision process \(M\) defined by the tuple \(M\!=\!\langle\mathcal{S}\!,\!S_{0}\!,\!\mathcal{A}\!,\!\mathcal{T}\!,\!r\!,\! \gamma\rangle\), where \(\mathcal{S}\) is the state space, \(\mathbb{P}[S_{0}],S_{0}\!\in\!\mathcal{S}\) is the distribution of starting states, \(\mathcal{A}\) is the action space, \(\mathcal{T}\!=\!\mathbb{P}[\cdot|s_{t},a_{t}]\!:\!\mathcal{S}\!\times\! \mathcal{A}\!\rightarrow\!\Delta(\mathcal{S})\) is the transition kernel1, \(r:\mathcal{S}\times\mathcal{A}\!\rightarrow\![r_{\text{min}},\!r_{\text{max}}]\) is the reward function and \(\gamma\!\in\![0,\!1)\) is a discount factor. The environment is initialized in \(s_{0}\sim\mathbb{P}[S_{0}]\). At every timestep \(t=1,2,3,..\), the policy \(\pi:\mathcal{S}\!\rightarrow\!\Delta(\mathcal{A})\), samples an action \(a_{t}\!\sim\!\pi(\cdot|s_{t})\). The environment then transitions into the next state \(s_{t+1}\sim\mathcal{T}(\cdot|s_{t},a_{t})\) and emits a reward \(r_{t}\!=\!r(s_{t},a_{t})\). The state value function is defined as the cumulative per-timestep discounted rewards collected by policy \(\pi\) over an episode of length \(H\):
Footnote 1: \(\Delta(\mathcal{X})\) denotes the entire set of distributions over the space \(\mathcal{X}\).
\[V^{\pi}(s_{t})\!=\!\mathbb{E}_{\mathbb{P}^{\pi}_{t:H}}[\sum_{k=0}^{H-t}\!\gamma ^{k}r(s_{t+k},\!a_{t+k})|s_{t}], \tag{1}\]
where \(\mathbb{P}^{\pi}_{t:t+K}\) denotes the joint distribution of \(\{s_{t+k},\,a_{t+k}\}_{k=1}^{K}\) obtained by deploying \(\pi\) in the environment \(M\) from timestep \(t\) to timestep \(t+K\). The state-action value function is defined analogously as
\[Q^{\pi}(s_{t},\!a_{t})\!=\!\mathbb{E}_{\mathbb{P}^{\pi}_{t:H}}[\sum_{k=0}^{H-t }\!\gamma^{k}r(s_{t+k},\!a_{t+k})|s_{t},\!a_{t}], \tag{2}\]
such that \(Q^{\pi}(s_{t},\!a_{t})\!=\!r(s_{t},\!a_{t})+\gamma\mathbb{E}_{\mathcal{T}(s_{ t},a_{t})}[V^{\pi}(s_{t+1})]\).
The reinforcement learning problem consists in finding a Markovian policy \(\pi^{*}\) that maximizes the state value function over the set of initial states:
\[\pi^{*}\!=\!\max_{\pi\in\Pi}\!\mathbb{E}_{\mathbb{P}[S_{0}]}[V^{\pi}(s_{0})], \tag{3}\]
for \(s_{0}\sim\mathbb{P}[S_{0}]\) and set of policies \(\Pi\). Alternatively, the value function can also be re-written as the expectation of the reward over the geometric mixture of \(k\)-step forward transition probabilities:
\[V^{\pi}(s_{t})=\frac{1}{1-\gamma}\,\mathbb{E}_{s^{+}\sim d^{\pi}(\cdot|s_{t}),\,a\sim\pi(\cdot|s^{+})}\left[r(s^{+},a)\right], \tag{4}\]
where \(d^{\pi}(\cdot|s_{t})\) denotes the discounted occupancy measure over future states,
\[d^{\pi}(s^{+}|s_{t})=(1-\gamma)\sum_{\Delta t=0}^{H-t}\gamma^{\Delta t}\,\mathbb{P}[s_{t+\Delta t}=s^{+}|s_{t};\pi]. \tag{5}\]
### Explore-Like-Experts
Exploration in long-horizon problems with large action spaces and sparse rewards is hard: an uninformed agent would have to try \(|\mathcal{A}|^{H}\) actions for a horizon length \(H\), which is infeasible in NetHack, where \(|\mathcal{A}|=121\) and \(H\) can be close to \(10^{6}\) (Kuttler et al., 2020). Augmenting the uninformative extrinsic reward with an intrinsic signal which drives the agent to visit rare state-action pairs can directly translate into higher overall returns, as the learner uncovers more of the extrinsic reward structure. More formally, it is achieved by constructing an auxiliary MDP \(M^{\prime}\), where the reward function at timestep \(t\) is a combination of the extrinsic reward from \(M\) as well as some heuristic function \(h:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\):
\[r^{\prime}(s_{t},a_{t}):=r(s_{t},a_{t})+\lambda h(s_{0:t},a_{0:t}) \tag{6}\]
When \(h(s_{0:t},a_{0:t})=\gamma\mathbb{E}_{\mathcal{T}(s_{t},a_{t})}[v(s_{t+1})]\) for some function \(v:\mathcal{S}\rightarrow\mathbb{R}\), then solving Equation (3) in \(M^{\prime}\) is equivalent to finding \(\pi\) in \(M\) if \(v\) approximates \(V^{*}\) (original value function), but with a lower discount factor \(\gamma^{\prime}=\gamma(1-\lambda)\) when \(\lambda<1\)(Cheng et al., 2021)2. While setting \(v=V^{*}\) leads to maximal sample efficiency, \(V^{*}\) is not accessible in practice, and \(h\) has to be selected based on the structure of \(M\). In particular, good heuristics can be constructed from data using existing pessimistic offline RL algorithms, e.g. improvable heuristics lead to small estimation bias of \(V^{*}\), since they are smaller than the maximum of the Bellman backup.
Footnote 2: If \(v\) is an improvable heuristic aligned with the optimal value function, then the discount factor in \(M^{\prime}\) is lowered.
Intuitively, given two states, how should one be prioritized over the other during the exploration process? If we had a systematic way to evaluate which state is closer to the goal under the optimal policy, then we could force the agent to expand that state during the exploration phase. Maximizing progress in the task can be captured through a monotonically increasing function of states learned from optimal data (where, by definition, progress is maximal). Specifically, the Explore Like Experts algorithm (ELE) (Anonymous, 2023) first trains a function \(g:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}\) by solving
\[g^{*}=\min_{g\in\mathcal{F}}\mathbb{E}_{\mathcal{D}}[\ell_{\text{ELE}}(g,s_{t },s_{t+\Delta t})] \tag{7}\]
where
\[\ell_{\text{ELE}}(g,s_{t},s_{t+\Delta t})=\{g(s_{t},s_{t+\Delta t})-\text{ sgn}(\Delta t)\text{log}(1+|\Delta t|)\}^{2}, \tag{8}\]
\(\mathcal{D}\) is a set of expert human demonstrations and \(\Delta t\sim\text{LogUniform}(0,\,10^{4})\). Specifically, Equation (8) does mean-squared error regression in the signed log-space to predict the time offset \(\Delta t\) from states \(s_{t}\) and \(s_{t+\Delta t}\).
In the second step, ELE uses the pre-trained progress model in place of the \(h\) heuristic in Equation (6)
\[r^{\text{ELE}}(s_{t},a_{t}):=r(s_{t},a_{t})+\lambda g(s_{t-\Delta t},s_{t}), \tag{9}\]
an approximation of the local progress from \(s_{t-\Delta t}\) to \(s_{t}\). While \(\Delta t\) was sampled from a LogUniform distribution when training \(g\), it is kept fixed during the online phase in ELE. In other words, the auxiliary reward always computes progress with respect to a state \(\Delta t\) steps behind the current state.
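To make the two-step recipe concrete, below is a minimal PyTorch-style sketch of the progress-model loss in Equation (8) and the shaped reward in Equation (9); the network \(g\), the value of \(\lambda\), and the batching conventions are illustrative assumptions rather than the exact ELE implementation.

```python
import torch

def ele_loss(g, s_t, s_t_offset, delta_t):
    """Progress-model regression loss of Equation (8).

    g:          network mapping a pair of states to a scalar progress prediction.
    s_t:        batch of states at time t.
    s_t_offset: batch of states at time t + delta_t (delta_t may be negative).
    delta_t:    signed time offsets, float tensor of shape (B,).
    """
    target = torch.sign(delta_t) * torch.log1p(delta_t.abs())
    pred = g(s_t, s_t_offset).squeeze(-1)
    return ((pred - target) ** 2).mean()

def ele_reward(g, s_prev, s_t, extrinsic_r, lam=0.1):
    """Shaped reward of Equation (9): extrinsic reward plus estimated local progress.

    s_prev is the state a fixed delta_t steps behind s_t (delta_t is kept fixed
    online); lam is an assumed weighting coefficient, not the value used by ELE.
    """
    with torch.no_grad():
        progress = g(s_prev, s_t).squeeze(-1)
    return extrinsic_r + lam * progress
```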
### Contrastive representation learning
The conditional probability distribution of \(s_{t+\Delta t}\) given \(s_{t}\) can be efficiently estimated using an implicit model \(f:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}\) trained via contrastive learning (Oord et al., 2018) on offline demonstrations \(\mathcal{D}\) by solving:
\[f^{*}=\min_{f\in\mathcal{F}}\mathbb{E}_{\mathcal{D}}[\ell_{\text{Contrastive}}(f,s_{t},s_{t}^{+},s_{t}^{-})] \tag{10}\]
where
\[\ell_{\text{Contrastive}}(f,s_{t},s_{t}^{+},s_{t}^{-})=-\log\frac{e^{f(s_{t},s_{t}^{+})}}{\sum\limits_{s_{t}^{\prime}\in s_{t}^{+}\cup s_{t}^{-}}e^{f(s_{t},s_{t}^{\prime})}}\,\,. \tag{11}\]
To approximate the occupancy measure defined in Equation (5), positive samples are sampled from \(s_{t}^{+}\in\{s_{t+\Delta t};\Delta t\sim\text{Geo}_{t}^{H}(1-\gamma)\}\) for timestep \(t\). Specifically, they are constructed by first sampling the interval \(\Delta t\) from \(\text{Geo}_{t}^{H}(1-\gamma)\) and subsequently querying \(s_{t+\Delta t}\) in the same episode. The negative samples \(s_{t}^{-}\) are uniformly sampled from any timestep within the current or any other episode.
Minimizing Equation (11) over \(\mathcal{D}\) yields a function \(f^{*}\) which, at optimality, approximates the future state visitation probability under \(\pi\) up to a multiplicative term (Ma and Collins, 2018; Poole et al., 2019).
\[f^{*}(s_{t},s_{t+\Delta t})\propto\log\frac{\mathbb{P}[s_{t+\Delta t}|s_{t}; \pi]}{\mathbb{P}[s_{t+\Delta t};\pi]}\,\,. \tag{12}\]
It should be noted that the time offsets in both ELE's progress model and in the contrastive pre-training phase are sampled from similar distributions (see Appendix A.1). In the following section, we show how \(f\) can be used for accelerating exploration in the online setting.
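For illustration, a minimal sketch of the contrastive pre-training objective in Equations (10)-(11), with positives drawn at geometrically distributed offsets as described above, is given below; representing \(f\) as an inner product of encoded states and the exact construction of the negative batch are simplifying assumptions, not necessarily the implementation used here.

```python
import torch
import torch.nn.functional as F

def sample_positive_index(t, episode_len, gamma):
    """Sample t + delta_t with delta_t ~ Geometric(1 - gamma), truncated to the episode."""
    delta_t = int(torch.distributions.Geometric(probs=1.0 - gamma).sample().item()) + 1
    return min(t + delta_t, episode_len - 1)

def contrastive_loss(encoder, s_t, s_pos, s_neg):
    """InfoNCE loss of Equation (11) with f(s, s') = <phi(s), phi(s')>.

    s_t:   anchor states,          shape (B, ...)
    s_pos: positive future states, shape (B, ...)
    s_neg: negative states,        shape (B, K, ...)
    """
    z_t = encoder(s_t)                                      # (B, D)
    z_pos = encoder(s_pos)                                  # (B, D)
    B, K = s_neg.shape[0], s_neg.shape[1]
    z_neg = encoder(s_neg.flatten(0, 1)).view(B, K, -1)     # (B, K, D)

    pos_logits = (z_t * z_pos).sum(dim=-1, keepdim=True)    # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", z_t, z_neg)     # (B, K)
    logits = torch.cat([pos_logits, neg_logits], dim=-1)    # (B, 1+K)
    labels = torch.zeros(B, dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)                  # positive is class 0
```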
## 4 Methodology
In this section, we provide details of both the pre-training phase on offline human demonstrations and using these state representations in Muesli (Hessel et al., 2021), a strong RL baseline. We also describe how to use same offline data for training progress model as well as training ELE's progress model.
**Pre-training state representations with contrastive training.** The idea behind pre-training offline representations is fairly straightforward: learn fundamental inductive biases required for exploration from existing demonstrations (e.g. state connectivity structure, action effects, perceptual local invariance, sequential dependence of states), thereby improving the sample-efficiency of the agent during the online phase.
Figure 1 and Algorithm 1 (see Appendix) outline the general paradigm of offline pre-training with online learning used in all of our experiments, which relies on finding \(\phi\) that minimizes Equation (11) over the set of possible encoders. The pre-trained encoder is kept frozen throughout training so that, even if the agent explores a new part of the state space, it does not drift away from the pre-trained representation. It should be noted, however, that we use a standard LSTM and MLP on top of the frozen encoder, both of which are trained throughout training. In our experiments, we observe that these pre-trained representations themselves are very useful for improving the sample efficiency of dense tasks but fail by themselves to solve the sparse versions of tasks in NetHack, where a single reward is provided in the whole episode (more details on the sparsity of tasks are given in the experimental section). To address this, we add the ELE progress reward (Anonymous, 2023), learnt from the same data, to the environment reward. The main hypothesis is that the signal from the progress model solves the problem of hard exploration in sparse reward tasks, while pre-training helps with faster learning and hence improves sample efficiency. Hence, one can use the same dataset to learn both representations and an additional reward to assist exploration, providing orthogonal benefits.
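A sketch of this wiring (frozen pre-trained encoder, with an LSTM and policy/value heads trained online) might look as follows; layer sizes and module names are illustrative assumptions rather than the exact agent configuration.

```python
import torch.nn as nn

class FrozenEncoderAgent(nn.Module):
    """Frozen pre-trained encoder with an LSTM and policy/value heads trained online."""

    def __init__(self, pretrained_encoder, feat_dim=512, hidden_dim=256, num_actions=121):
        super().__init__()
        self.encoder = pretrained_encoder
        for p in self.encoder.parameters():      # keep pre-trained representations fixed
            p.requires_grad = False
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, num_actions)  # NetHack exposes 121 actions
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, obs_seq, lstm_state=None):
        # obs_seq: (B, T, ...) sequence of observations
        B, T = obs_seq.shape[0], obs_seq.shape[1]
        feats = self.encoder(obs_seq.flatten(0, 1)).view(B, T, -1)
        out, lstm_state = self.lstm(feats, lstm_state)
        return self.policy_head(out), self.value_head(out), lstm_state
```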
**Why contrastive pre-training?** Why should one pick the contrastive pre-training scheme over any other objective? First, as mentioned in Section 5, we test our hypothesis on NetHack data containing only sequences of states (i.e. no actions nor rewards), which prevents the use of inverse RL and reward prediction objectives such as SGI (Schwarzer et al., 2021). Second, strong empirical evidence from prior works suggests that contrastive learning is optimal for value function representation and outperforms latent and reconstruction based forward losses (Eysenbach et al., 2022). Finally, latent-space prediction losses are known to be prone to representation collapse, i.e. when bootstrapped prediction targets are being exactly matched by the online network and become independent of the state (Gelada et al., 2019). Contrastive learning avoids representation collapse due to the presence of negative samples which ensure uniformity of coverage on the unit sphere (Wang and Isola, 2020).
## 5 Experiments
In this section, we conduct a series of experiments to validate our central hypothesis: combining representation learning with exploration improves the agent's sample complexity more than representation learning or exploration by themselves.
### Experimental Details
**Baselines:** Our main results are obtained by comparing the performance of _tabula rasa_ Muesli and ELE (Anonymous, 2023) with their counterparts using pre-trained state representations. In addition, we also compare with standard baselines that use action-free expert demonstrations: GAIfO (Torabi et al., 2018), BCO (Torabi et al., 2018) and FORM (Jaegle et al., 2021), the same baselines as ELE. All these baselines learn from the same offline data and are implemented on top of the Muesli agent (Hessel et al., 2021) for a fair comparison, and we use the same hyperparameters as provided in ELE (Anonymous, 2023). Since previous work (Eysenbach et al., 2022) demonstrated that contrastive pre-training performs much better than other representation learning techniques, we limit our comparison in this work to contrastive pre-training. We performed preliminary investigations of other types of pre-training using forward prediction and latent models, but they performed worse than contrastive pre-training. The motivation for using contrastive pre-training of state representations is two-fold: 1) it allows Muesli to predict value functions using a linear layer, making the task simpler, and 2) it was shown to perform significantly better than latent-space or reconstruction-based objectives (see Eysenbach et al., 2022).
**Tasks.** We use 7 different tasks from NetHack ranging from dense to sparse rewards which were proposed in (Kuttler et al., 2020) and ELE (Anonymous, 2023). On the dense side of the spectrum, we use the **Score** and **Scout** tasks, which reward the agent for increasing the game score and revealing tiles in the dungeon, respectively. At the sparse end, the **Depth N**, **Level N**, and **Oracle** tasks deliver a reward of zero at all timesteps until a trigger condition is met: reaching a particular dungeon depth, achieving a particular experience level, or finding and standing next to the Oracle character (found between dungeon depths 5 and 9 inclusive); these sparse tasks terminate when the condition is met. We believe that the wide range of sparsity levels exhibited by this task collection represents a good selection of conditions under which to evaluate the sample complexity of the algorithms we compare.
**Dataset.** We use the NAO Top 10 dataset proposed in previous work (Anonymous, 2023), which consists of human games from the top 10 players on nethack.alt.org. These trajectories are useful for pre-training our contrastive representations, as this dataset provides a good balance of trajectory quality and diversity (Anonymous, 2023) for learning representations. This dataset consists of approximately 16K trajectories of expert play, with a total of 184M transitions.
**Frame Budget.** As we want to compare algorithms on sample efficiency, we use 200M actor steps, inspired by the Atari benchmark (Bellemare et al., 2013), on all these tasks, with the exception of the **Oracle** task. As this task poses a significantly harder exploration challenge, we allow a larger budget of 500M actor steps.
**Architecture.** Inspired by previous work (Anonymous, 2023), we use a Residual Network (ResNet, He et al., 2016) architecture which encodes \(80\times 24\) TTY arrays (shown in Figure 2) with a series of 2d convolutional blocks. This model acts as the encoder used by contrastive pre-training, ELE's progress model, as well as the Muesli agent. During the online phase, we pass the generated representation through a recurrent network (LSTM) and MLP to predict the policy and value heads in Muesli. In the case of contrastive pre-training, we simply pass the ResNet encoder output through an MLP in order to project the state features into a latent space. ELE's progress model fuses the two states given as inputs, which are then passed through a similar ResNet followed by an MLP to predict a scalar value in logarithmic space that corresponds to the temporal distance between both input states.
### Results
We state the main results, followed by ablations of different components. All our experimental results are run with 5 random seeds and are plotted with \(\pm\) one standard deviation.
**Comparison of progress model and baseline with and without pre-training.** Figure 3 shows that equipping strong RL algorithms such as Muesli and ELE with human demonstrations via offline pre-training significantly improves the sample complexity of the underlying method. While ELE significantly outperformed Muesli on the sparser tasks, the performance did not improve on the denser **Score** and **Scout** tasks and in fact was inferior to Muesli. Using contrastive pre-training with both Muesli and ELE, however, significantly improves their performance in the sample regime under investigation in this work. On the sparse tasks **Depth**
Figure 3: Episode returns of Muesli and ELE, with and without pre-trained state representations. Dense reward tasks like Score and Scout benefit immensely from contrastive pre-training in both performance and sample efficiency. While ELE’s exploration reward is needed to solve sparse reward tasks, contrastive pre-training augments ELE by improving sample efficiency for all the sparse tasks. All curves are reported over 5 random seeds \(\pm\) one standard deviation.
Figure 2: Examples of observations generated by the NetHack Learning environment.
**2**, **Depth 4**, **Level 2**, **Level 4** and **Oracle**, pre-training with ELE significantly improves performance in the low sample regime. It should be noted that the contrastive pre-training without an exploration bonus struggles to solve the sparser tasks, and the progress reward is clearly beneficial in this case. This illustrates that the same dataset can be used for both pre-training representations as well as learning an exploration reward, and that these two different applications of human data target orthogonal problems: exploration bonuses help the agent discover the reward, and representation learning improves its ability to exploit it.
Figure 4: Episode returns of ELE with pre-trained representation in contrast to standard imitation learning baselines (without access to actions) like BCO, GAIfO and FORM. Contrastive Pre-training + ELE outperforms all baselines on sparse as well as dense tasks. All curves are reported over 5 random seeds \(\pm\) one standard deviation.
Figure 5: Episode returns using the encoder extracted from ELE versus an encoder separately trained by contrastive pre-training. All tasks demonstrate that training a separate encoder by contrastive pre-training is much more useful, illustrating that ELE and contrastive pre-training capture the two different dimensions of exploration and representation learning, respectively.
**Comparison with Standard Imitation Learning Baselines:** Figure 4 shows the comparison of ELE + Pre-training with other standard imitation learning baselines like GAIL from Observations (GAIfO) (Torabi et al., 2018), Behavior Cloning from Observations (BCO) (Torabi et al., 2018) and FORM (Jaegle et al., 2021). On the sparser tasks, only ELE and ELE + Pre-training are able to solve them at all, and contrastive pre-training improves convergence speed significantly. Dense tasks like **Score** and **Scout** are learned by many of the baselines, but contrastive pre-training significantly improves sample efficiency.
### Ablations
**Is using the progress model's representation encoder as good as contrastive pre-training?** An interesting question which stems from this work is whether, since ELE's progress model is useful for exploration, we could also use the trained progress model's torso as an encoder for initializing representations. We experiment with extracting the torso of the trained progress model and using it to initialize the representation encoder (instead of using the encoder from contrastive pre-training). Figure 5 shows the comparison of using progress model representations versus training them separately by contrastive pre-training on the same data. We observe that using ELE's torso as the encoder together with the additional reward from ELE takes off faster, but it eventually achieves worse performance than ELE itself and is significantly worse than using ELE with the contrastive pre-training encoder.
**Do pre-trained state representations need to be fine-tuned during the online phase?** Next, we study the effect of freezing3 the pre-trained representations of the observation encoder from the pre-training phase. We observe no significant difference on most tasks with or without freezing representations. Figure 6 shows 1 sparse task and 1 dense task for this ablation (more details in the appendix). We stick with freezing representations for the online phase as our default setting throughout the paper.
Footnote 3: Fixing a set of weights during the online phase.
**How does encoder architecture impact its performance?** A natural question which arises when pre-training the state representations offline is how well does the model capture the future states. In all of our experiments, we use a simple ResNet (He et al., 2016) which takes as inputs an array of TTY characters. However, recent works have shown that the local invariance biases from the 2d convolutions can be learned through a vision transformer model (ViT, Dosovitskiy et al., 2020; Raghu et al., 2021), which positioned ViT as a competitive alternative to standard convolution-based architectures. The main drawback of ViTs is their need to be trained on vast amounts of data, which is abundant in NetHack. We have conducted pre-training experiments comparing the contrastive prediction accuracy of convolution-based models with that of ViTs. Results shown in Figure 7 hint that ResNet-like models are better suited for NLE, as they obtain better training set and test set categorical accuracy as compared to ViTs.
## 6 Discussion
In this work, we posited that the same offline data can be used both for learning representations and for learning an auxiliary reward to aid exploration, and that training these models separately provides orthogonal benefits. We show that pre-training state representations using contrastive learning and then using this network to initialize the representations provides a large sample-efficiency improvement. However, using pre-training alone fails to solve sparse tasks. We address the problem by adding a learned auxiliary reward and observe that pre-training helps with representation learning while the auxiliary reward aids exploration. We validate our hypothesis on NetHack, a challenging rogue-like terminal-based game with large state and action spaces, a long task horizon and a strong notion of forward progress.
Figure 6: Episode returns of with/without freezing representations of contrastive pre-training during online phase. There is no significant difference between freezing or not freezing the state representations. We show 1 dense task (Score) and 1 sparse task (Oracle) as representatives. All other tasks are shown in the Appendix
Figure 7: Ablation on different model architectures for contrastive pre-training. We observe that the Vision Transformer performs much worse than the ResNet architecture.
2310.20209 | Network Contention-Aware Cluster Scheduling with Reinforcement Learning | With continuous advances in deep learning, distributed training is becoming
common in GPU clusters. Specifically, for emerging workloads with diverse
amounts, ratios, and patterns of communication, we observe that network
contention can significantly degrade training throughput. However, widely used
scheduling policies often face limitations as they are agnostic to network
contention between jobs. In this paper, we present a new approach to mitigate
network contention in GPU clusters using reinforcement learning. We formulate
GPU cluster scheduling as a reinforcement learning problem and opt to learn a
network contention-aware scheduling policy that efficiently captures contention
sensitivities and dynamically adapts scheduling decisions through continuous
evaluation and improvement. We show that compared to widely used scheduling
policies, our approach reduces average job completion time by up to 18.2\% and
effectively cuts the tail job completion time by up to 20.7\% while allowing a
preferable trade-off between average job completion time and resource
utilization. | Junyeol Ryu, Jeongyoon Eo | 2023-10-31T06:17:23Z | http://arxiv.org/abs/2310.20209v1 | # Network Contention-Aware Cluster Scheduling with Reinforcement Learning
###### Abstract
With continuous advances in deep learning, distributed training is becoming common in GPU clusters. Specifically, for emerging workloads with diverse amounts, ratios, and patterns of communication, we observe that network contention can significantly degrade training throughput. However, widely used scheduling policies often face limitations as they are agnostic to network contention between jobs. In this paper, we present a new approach to mitigate network contention in GPU clusters using reinforcement learning. We formulate GPU cluster scheduling as a reinforcement learning problem and opt to learn a network contention-aware scheduling policy that efficiently captures contention sensitivities and dynamically adapts scheduling decisions through continuous evaluation and improvement. We show that compared to widely used scheduling policies, our approach reduces average job completion time by up to 18.2% and effectively cuts the tail job completion time by up to 20.7% while allowing a preferable trade-off between average job completion time and resource utilization.
Scheduling, Machine learning, Reinforcement learning, Heterogeneous (hybrid) systems
## I Introduction
Distributed deep learning (DL) training is becoming increasingly prevalent in GPU clusters. A recent analysis published by Alibaba [1] has reported that over 80% of the total submitted DL training jobs1 run on multiple GPUs spanning over multiple nodes. This rate corresponds to approximately 5\(\times\) increase in five years from the previous report from Microsoft [2]. Also, emerging DL training workloads present diverse amounts, ratios, and patterns of communication. For instance, Fully Sharded Data-Parallel training (FSDP) [3, 4] features heavy communication, with at least 50% increased communication cost compared to conventional data-parallel training [5]. Mixture of Experts (MoE) [6] training implements a gating network-based routing mechanism between distributed experts using AllToAll pattern that has higher communication cost than traditional AllReduce pattern. Eventually, previous studies have shown that GPU clusters can suffer significant performance degradation due to conflicts in network communication when distributed training and emerging workloads with diverse communication are common [1, 7, 8]. Ultimately, this trend raises a new challenge to GPU cluster scheduling: mitigating performance slowdown due to _network contention_.
Footnote 1: Combination of DL model to train, GPU demand, and scheduled nodes.
Notwithstanding the new challenge, we notice that widely used scheduling policies (e.g., LAS [9, 10] and SRTF [11]) are often agnostic to network contention between jobs, hence are prone to significant degradation in jobs' training throughput. For example, when FSDP and MoE training share networks, they suffer up to 49.1% and 66.7% throughput degradation compared to isolated cluster, respectively. Unfortunately, such unfavorable scheduling decisions cannot be avoided under contention-agnostic policies. Yet, we observe that a job experiences varying degrees of network contention based on the model and the placement of the co-located2 jobs. Accordingly, the throughput degradation of the aforementioned example can be improved up to 21.6% and 19.5%, respectively, depending on how they are co-located. Building upon this insight, we define _contention sensitivity_ (\(CS\)) of a job as the ratio of its ideal throughput to the degraded throughput when co-located with another job (1).
Footnote 2: Scheduling jobs such that they become each other’s node-sharing neighbors, sharing all or part of the allocated nodes, and thus bandwidth of intra and inter-node networks.
\[CS=\frac{Throughput_{\text{ideal}}}{Throughput_{\text{contention}}} \tag{1}\]
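As a sketch, a pairwise contention-sensitivity lookup could be assembled from profiled throughputs along the following lines; the profiling interface (`profile_isolated`, `profile_colocated`) and the job keys are hypothetical placeholders introduced only for illustration.

```python
def contention_sensitivity(ideal_throughput, contended_throughput):
    """Equation (1): CS = ideal throughput / throughput under co-location."""
    return ideal_throughput / contended_throughput

def build_cs_table(jobs, profile_isolated, profile_colocated):
    """Build a pairwise CS lookup table from profiled throughputs.

    jobs: hashable job configurations (model, GPU demand, node assignment).
    profile_isolated(job): throughput of `job` on an isolated cluster.
    profile_colocated(job, other): throughput of `job` when co-located with `other`.
    Both profiling callables stand in for a measurement step and are assumptions.
    """
    table = {}
    for job in jobs:
        ideal = profile_isolated(job)
        for other in jobs:
            table[(job, other)] = contention_sensitivity(
                ideal, profile_colocated(job, other))
    return table
```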
In this paper, we propose a method for scheduling distributed DL jobs in GPU clusters that minimizes network contention. Two notable challenges are as follows: 1. efficiently capturing contention sensitivities, and 2. dynamically adapting to diverse distributions of jobs and their contention sensitivities. To address these challenges, we devise a reinforcement learning (RL)-based approach to swiftly learn an effective scheduling policy by continuous evaluation and improvement of scheduling decisions across diverse distributions of jobs. Specifically, we make the following contributions:
* We propose a novel design that translates the network contention problem in cluster scheduling into an RL problem. We show that our design can efficiently capture contention sensitivities of jobs and dynamically adapt scheduling decisions across diverse distributions of jobs.
* We present an end-to-end system for training scheduling policies with RL to its deployment on GPU clusters. We provide two initial scheduling policies, RL-base and RL-Hybrid, and implement mechanisms that execute the decisions of the scheduling policy.
* We evaluate our scheduling policies with a variety of DL training job traces on a GPU cluster. RL-base outperforms LAS and SRTF by reducing average JCT by up to 18.2% and effectively cutting tail JCT by up to 20.7%, while RL-Hybrid achieves a preferable trade-off between average JCT and resource utilization.
* We open-source our work at ([https://github.com/gajagajago/deepshare](https://github.com/gajagajago/deepshare)) as a community asset for future research in RL-based GPU cluster scheduling.
## II Background
**DL training in GPU clusters.**_Deep learning_ is a process to train a deep neural network (DNN) with the purpose of analyzing and deriving valuable knowledge from data [12]. The DNN is constructed with numerous layers and parameters. During training, the DNN is tasked with making predictions and updating its parameters based on the computation of errors in relation to actual outcomes. Given its inherent computational intensity, training is usually performed on high-capacity accelerators like GPUs. Due to extremely high price (e.g., NVIDIA A100 costs about $10k), organizations often build GPU clusters to be shared by users and production groups [13]. Consequently, GPU clusters usually employ a scheduler for the efficient allocation of cluster resources. These schedulers predominantly operate under two key objectives: reducing Job Completion Time (JCT) [14, 15, 16, 17, 9, 7] and enhancing resource utilization [18, 19, 9, 13]. Thus, these two performance metrics form the foundation of our RL formulation's reward (Section IV).
**RL for cluster scheduling.**_Reinforcement learning_ involves an agent that learns to make better decisions directly from experience interacting with the environment [20]. The agent learns by _reinforcement_, wherein it receives rewards contingent on the quality of its decisions [21]. RL has recently become an active area of research in machine learning [22, 23, 24, 25, 26], and RL-based approaches have demonstrated great potential in various domains including congestion control [27], video streaming [28, 29, 30], real-time communication [31, 32, 33, 34, 35], and resource management [36, 21, 37, 38]. RL approaches are known to be especially well-suited to resource management systems due to the followings:
* Decisions made by the systems are highly repetitive, leaving abound of training data (e.g., scheduling decisions and corresponding effects) to the RL algorithm.
* Reward can reflect complex objectives (e.g., JCT and utilization) which are difficult to model analytically.
* Agent are highly adaptive to constantly shifting or even previously unseen circumstances.
DeepRM [21] and DeepPlace [37, 38] are two notable works that apply RL in resource management. DeepRM represents the system state as a combination of cluster resources and resource demands of jobs. In this scheme, cluster resources are equipped with available time slots for job allocation, while jobs demand distinct quantities of time slots for different resource types. The action step revolves around matching jobs' time slots with those of resources, and the reward is designed to mirror the objective of minimizing average slowdown. DeepPlace builds upon DeepRM and introduces a more intricate reward that incorporates a blend of purpose-oriented cues, including factors like _resource contention penalty_ and _under-utilization penalty_.
Nevertheless, there exists a limitation when attempting to employ these methodologies in scheduling distributed DL jobs in GPU clusters. First, since these approaches primarily focus on host resources (such as CPU and DRAM) and address small containerized microservices as their target jobs, their state representation is inadequate for capturing distributed GPU jobs. Second, their action scope remains limited to within a single node, making it impractical to schedule distributed jobs spanning multiple nodes. Finally, due to the absence of consideration for network contention between jobs, their reward structure fails to encompass the performance impact of network contention. Therefore, our main objective is to introduce an efficient RL-based solution for GPU cluster scheduling that efficiently handles network contention problem (Section III).
## III Motivation
In this section, we introduce emerging DL training workloads encompassing diverse communication characteristics. Then, we analyze their contention sensitivities to draw insights for effective network contention-aware cluster scheduling. We use these workloads (and their variations) to constitute the jobs traces for evaluation (Section VI).
**Communication characteristics of emerging DL training workloads (Table I).** FSDP [3, 4] and MoE [6] exhibit high network bandwidth consumption and communication-to-computation ratio. Specifically, FSDP and MoE demonstrate respectively 3.01\(\times\) and 5.67\(\times\) higher communication-to-computation ratio and 12.65\(\times\) and 4.40\(\times\) larger average bandwidth consumption compared to a traditional image model training such as MobileNetV3 [40]. These characteristics stem from their communication pattern, designed to achieve GPU memory efficiency at the cost of increased communication. Concretely, FSDP adds per-layer AllGather at forward pass, and per-layer AllGather and ReduceScatter at backward pass compared to conventional data-parallel training [3, 4]. MoE adds gated network-based AllToAll communication between distributed experts [6].
DLRM [41] and Transformer-XL [42] display either a high communication-to-computation ratio or high network bandwidth consumption. Especially, DLRM exhibits a 5.49\(\times\) higher communication-to-computation ratio but 19.3% lower bandwidth consumption compared to MobileNetV3. Collective communications account for a significant fraction of time
in training DLRM at scale [44]. This is because recommendation tasks such as DLRM spend around 80% of total training time on host resources [1] as sparse computation of element-wise operators dominates. On the other hand, Transformer-XL exhibits a 23.04% lower communication-to-computation ratio but 4.04\(\times\) higher bandwidth consumption compared to MobileNetV3. This is because the Transformer [45] blocks employ the attention mechanism that requires larger amounts of computation compared to the convolution-based mechanism of image models.
GraphSage [39] exhibits the lowest network bandwidth consumption and communication-to-computation ratio, with 76.85% less communication-to-computation ratio and 88.34% lower average bandwidth consumption compared to MobileNetV3. This is because GraphSage training partitions the input graph among nodes, where per-node graph preprocessing takes 30-90% of the training time with involving only a little communication [1].
**Contention sensitivity of a job varies by its co-located job.** In Figure 1, each pixel value depicts the contention sensitivity of the target job according to the network contention from the co-located job. A pair of a target model and a co-located model is represented as a \(6\times 6\) grid (e.g. the black boxes in Figure 1 show the varying contention sensitivities of FSDP and MoE, with respect to diverse GPU demands and node assignments). We observe that some jobs (GNN and IMG) show consistent contention sensitivities regardless of co-located jobs, whereas the others (DLRM, LM, FSDP, and MoE) show high variability with regard to the model, GPU demands, and node assignment of the co-located job. For example, when FSDP and MoE training are co-located, they experience contention sensitivities of up to 1.96 and 3.00, respectively, with varying degrees according to their co-location (the black boxes in Figure 1). In contrast, when FSDP and MobileNetV3 training are co-located, they exhibit moderate degrees of contention sensitivities with at most 1.35 and 1.43, respectively (the white boxes in Figure 1).
We summarize our findings from this section as follows:
* Jobs exhibit a variety of communication characteristics, which contribute to their varying contention sensitivities when co-located with other jobs.
* In this regard, scheduling based on efficiently captured contention sensitivities and aimed at reducing expected contention will effectively alleviate network contention.
## IV RL Formulation
**Requirements.** Summarizing the key insights from previous sections, the main requirements of an ideal network contention-aware GPU cluster scheduler include:
* **R1.** Fast adaptation of its decisions to reflect the constantly changing distribution of contention sensitivities as scheduled jobs change.
* **R2.** Minimizing cluster-wide performance degradation due to contention, achieving low average and tail JCT while maintaining high resource utilization.
To satisfy the requirements, we propose our design for network contention-aware scheduling with RL. We explain our formulation of state, action, reward, and training algorithm. We also present two initial scheduling policies
Table I: Models used in this work. \(\frac{Comm}{Comp}\) denotes the communication-to-computation ratio. Average bandwidth consumption and \(\frac{Comm}{Comp}\) are profiled in our evaluation environment (Section VI).

| Task | Model (Abbreviation) | Dataset | Trainable Parameters | Average Bandwidth Consumption (MB/s) | \(\frac{Comm}{Comp}\) | Communication Patterns |
| --- | --- | --- | --- | --- | --- | --- |
| Graph | GraphSage (GNN) [39] | Reddit | 0.33M | 24.63 | 0.57 | AllReduce |
| Image | MobileNetV3 (IMG) [40] | ImageNet | 2.04M | 211.25 | 2.43 | AllReduce |
| Recommendation | DLRM (DLRM) [41] | Criteo | 333.32M | 170.28 | 13.36 | AllReduce |
| Language | Transformer-XL (LM) [42] | Wikitext-103 | 202.44M | 854.82 | 1.87 | AllReduce |
| Language | GPT-2 (FSDP) [43] | Wikitext-2 | 184.89M | 2672.40 | 7.32 | ReduceScatter, AllGather |
| Language | GPT-2 (MoE) [43] | Wikitext-2 | 268.92M | 929.48 | 13.79 | AllToAll |
Figure 1: Contention sensitivity heat map. Darker value indicates higher contention sensitivity, which signifies larger throughput degradation of the target job according to the network contention from the co-located job. Numbers behind the model name denote nodes and GPUs per node, respectively.
(_agents_ in RL term) trained with RL, namely RL-base and RL-Hybrid. These policies serve as benchmarks for evaluation (Section VI).
**State.** To adapt the agent's decisions so that they reflect the constantly changing contention sensitivities (**R1**), we develop a state representation that captures the cluster-wide co-location of jobs. Figure 2 illustrates an example of our state design. We represent the cluster state as a two-dimensional tensor of shape <Nodes, \(2\times\)GPUs per node>. This fixed-shape state design has the desirable property of being directly usable as input to neural network-based RL training algorithms. The left half encodes the physical placement of scheduled jobs. Each slot contains the profile of a job, if any job is scheduled on the GPU corresponding to the slot. Conversely, the right half captures the resource demands of candidate jobs in the waiting queue. For example, if a candidate job is encoded in \(slot_{i,j}\) of the right half, it signifies that one feasible placement option for the job is distributing its total \(j*2^{i}\) demands across \(2^{i}\) nodes with \(j\) GPUs each. Since there are various ways to partition a job's demand, jobs with multiple GPU demands can be encoded in multiple slots of the right half.
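The sketch below illustrates this encoding for the cluster of Figure 2; reducing each slot's job profile to a single scalar identifier is a simplifying assumption (the real state stores richer profiled statistics per slot).

```python
# Minimal sketch of the state tensor (assumptions: 4 nodes, 8 GPUs per node as in Figure 2;
# a scalar job ID stands in for the full job profile).
import numpy as np

NODES, GPUS = 4, 8
state = np.zeros((NODES, 2 * GPUS), dtype=np.float32)

# Left half: physical placement of scheduled jobs (e.g., job 7 occupies GPUs 0-3 of node 0).
state[0, 0:4] = 7.0

# Right half: candidate jobs from the waiting queue. Encoding a job in slot (i, j) of the right
# half means one feasible placement is 2**i nodes with j GPUs each (j * 2**i GPUs in total).
job_id, total_demand = 11.0, 8
i = 0
while 2 ** i <= NODES:
    j = total_demand // (2 ** i)
    if 1 <= j <= GPUS and j * (2 ** i) == total_demand:
        state[i, GPUS + j - 1] = job_id   # e.g. slot (2, 2): 4 nodes x 2 GPUs each
    i += 1

print(state.shape)  # (4, 16)
```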
**Action.** Choosing an arbitrary set of candidate jobs from the waiting queue can lead to a large action space of up to \(2^{Q}\) when the queue length is \(Q\). To reduce the size of the action space for faster convergence of the training algorithm, \(K\) candidate jobs with different GPU demands are picked starting from the head of the waiting queue (forming the right half of the state representation). Besides, to keep the action space at a constant size without invalid actions, the action space for the candidate jobs is a set of lists of nodes on which to schedule them. \(\emptyset\) denotes that the agent decided not to schedule any jobs in the current round. For example, if the agent returns [Node1, Node2] as an action for job type 1 (Figure 2), it is assigning the job to Node1 and Node2 with two GPUs each, as job type 1 is encoded in \(slot_{2,2}\) on the right half of the state representation. One possibly beneficial extension of the action space would be to migrate a job if there is a better placement option, or to preempt it if the current co-location is detrimental. For the current prototype, however, we only support preemption based on a predefined threshold on contention sensitivity and plan to incorporate this feature into the action space soon.
```
Input : Trace: job trace, Alg: RL algorithm
Parameter : W1, W2: reward weights, T: scheduling interval, F: policy checkpoint path

1   C ← Init empty cluster environment
2   Q ← Waiting jobs in Trace
3   Policy ← Init policy with Alg
4   while !C.empty or !Q.empty do
5       Round(C, Q, Policy)
6   end while
7   Save Policy to F
8
9   Function Round(C, Q, Policy):
10      State ← Get state from C and Q
11      Actions ← Policy computes action from State
12      for job, nodes ∈ Actions do
13          Schedule job to nodes
14      end for
15      Sleep for T
16      Reward ← ComputeReward(C)
17      Update Policy with Reward
18
19  Function ComputeReward(C):
20      CS, Util ← 0
21      for job ∈ C.jobs do
22          CS_job ← Profile contention sensitivity of job
23          Update CS with CS_job
24      end for
25      for node ∈ C.nodes do
26          for GPU ∈ node do
27              Util_GPU ← Profile utilization of GPU
28              Update Util with Util_GPU
29          end for
30      end for
31      return −W1 * CS + W2 * Util
```
**Algorithm 1** RL Training Algorithm
**Reward.** To minimize cluster-wide performance degradation due to contention (**R2**), our reward penalizes an increase in the cluster-wide average contention sensitivity of scheduled jobs and provides incentives for higher cluster-wide average GPU utilization (Eq. (2)). The two terms constitute the reward because reduced average job contention sensitivity is positively related to reduced JCT, and high GPU utilization corresponds to enhanced resource utilization; these are the two pivotal performance metrics of GPU cluster schedulers (Section II). Weights \(W_{1}\) and \(W_{2}=1-W_{1}\) are imposed on the contention sensitivity (CS) and utilization (Util) terms. Their values can be tailored to reflect the relative preference between average contention sensitivity and utilization, as the two terms typically have a trade-off relationship. For instance, co-locating as many jobs as possible to maximize resource utilization may exacerbate average contention sensitivity, leading to more
Figure 2: State representation. (Example for 4 nodes each with 8 GPUs, 3 scheduled jobs, and 3 candidate jobs with varying GPU demands)
network contention experienced by the co-located jobs.
\[Reward=-W_{1}*CS+W_{2}*Util \tag{2}\]
**Training algorithm.** Algorithm 1 illustrates the training algorithm. The agent (Policy) is trained with a neural network-based RL algorithm (Alg) using job traces (Trace) on a simulated GPU cluster environment. The agent makes scheduling decisions until all jobs have finished (Lines 4-6). For every scheduling round, the agent is given the state representation (Line 10) and chooses an action (Line 11). Jobs are scheduled as depicted in the action (Lines 12-14). After waiting for the scheduling interval (Line 15), reward is computed with respect to contention sensitivities of the scheduled jobs and resource utilization of the cluster (Line 16) and is returned to the agent to adapt its policy using the neural network-based RL algorithm (Alg) (Line 17). By iterating through numerous scheduling rounds in the training process, the agent continuously explores diverse job co-location options and adapts its scheduling decisions to effectively mitigate cluster-wide network contention. When the training finishes, the agent is saved into a file as a trained scheduling policy (Line 7).
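As a concrete illustration of one scheduling round (Algorithm 1, Lines 10-17) together with the reward of Eq. (2), consider the sketch below. The cluster, queue, and policy objects and their method names are hypothetical placeholders for exposition, not our system's actual interfaces.

```python
# Minimal sketch of one scheduling round with the reward of Eq. (2).
import time

W1, W2 = 0.4, 0.6        # reward weights, with W2 = 1 - W1 (branch B in Section VI)
T = 30                   # scheduling interval in seconds (assumed value)

def compute_reward(cluster) -> float:
    cs_vals = [cluster.profile_contention_sensitivity(j) for j in cluster.jobs]
    util_vals = [cluster.profile_gpu_utilization(g) for n in cluster.nodes for g in n.gpus]
    cs = sum(cs_vals) / max(len(cs_vals), 1)        # cluster-wide average contention sensitivity
    util = sum(util_vals) / max(len(util_vals), 1)  # cluster-wide average GPU utilization
    return -W1 * cs + W2 * util                     # Eq. (2)

def round_step(cluster, queue, policy):
    state = cluster.get_state(queue)                # Line 10
    actions = policy.compute_action(state)          # Line 11: {job: [nodes]} or empty (no-op)
    for job, nodes in actions.items():              # Lines 12-14
        cluster.schedule(job, nodes)
    time.sleep(T)                                   # Line 15
    reward = compute_reward(cluster)                # Line 16
    policy.update(state, actions, reward)           # Line 17
```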
**Benchmark policies.** We present two initial scheduling policies trained with RL, namely RL-base and RL-Hybrid. These policies serve as benchmarks for evaluation (Section VI).
* RL-base makes scheduling decisions only based on the action of the trained policy. It may decide not to schedule jobs even when the cluster has enough resources to avoid an increase in performance degradation due to contention.
* RL-Hybrid performs decision-level multiplexing of the trained policy and a simple rule-based policy. 3 It usually follows the decisions of the trained policy, but when it faces an \(\emptyset\), i.e. decision not to schedule, it follows a safety rule of trying greedy scheduling to prevent low utilization.
Footnote 3: Such hybrid design is in line with the recent trend of incorporating the decisions of rule-based policies to guide or aid the RL-based policy to achieve both the RL-based one’s high adaptivity and the rule-based one’s stability and interpretability [33, 34, 35, 46].
## V System
### _Framework Overview_
We build an end-to-end system from training a scheduling policy with RL on a simulated cluster environment to its deployment on GPU clusters. Figure 3 illustrates the overall system architecture. The system is composed of two parts: Trainer and Scheduler. Trainer implements a simulated cluster environment and the training algorithm to train a scheduling policy with RL (Section IV). Scheduler deploys the trained policy on GPU clusters and provides mechanisms to execute the scheduling decisions.
### _Trainer_
Trainer provides OpenAI gym [47]-based simulated cluster environment of nodes and GPUs. It also provides a common interface for a set of off-the-shelf RL algorithms in widely used stable-baselines3 [48]. Hence, researchers can easily customize the RL formulation and train their own policies. Trainer requires only job trace (e.g., Microsoft's Philly trace [2] and Alibaba's PAI trace [1]) to start training on simulated cluster environment. Training an agent on an episode of 256 jobs completes in less than a minute.
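To make the Trainer interface concrete, the sketch below shows how a policy could be trained with stable-baselines3 on a gym-style simulated cluster environment. The environment class, its action encoding, and the placeholder dynamics are our illustrative assumptions (the real simulator replays job traces and computes the reward of Eq. (2)); the snippet assumes the classic Gym API used by stable-baselines3 1.x.

```python
# Minimal sketch: training a scheduling policy with PPO on a hypothetical gym environment.
import gym
import numpy as np
from gym import spaces
from stable_baselines3 import PPO

class ClusterSchedulingEnv(gym.Env):
    def __init__(self, nodes: int = 4, gpus: int = 8):
        super().__init__()
        self.nodes, self.gpus = nodes, gpus
        self.observation_space = spaces.Box(0.0, np.inf, shape=(nodes, 2 * gpus), dtype=np.float32)
        # One discrete choice per candidate slot; 0 encodes "do not schedule" for that slot.
        self.action_space = spaces.MultiDiscrete([nodes + 1] * gpus)

    def reset(self):
        return np.zeros((self.nodes, 2 * self.gpus), dtype=np.float32)

    def step(self, action):
        obs = np.zeros((self.nodes, 2 * self.gpus), dtype=np.float32)  # placeholder dynamics
        reward = 0.0                                                   # Eq. (2) in the real simulator
        return obs, reward, False, {}

model = PPO("MlpPolicy", ClusterSchedulingEnv(), verbose=0)
model.learn(total_timesteps=2_048)      # one PPO rollout; real training iterates over full traces
model.save("scheduling_policy")         # checkpoint later loaded by Scheduler
```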
### _Scheduler_
Scheduler is implemented on top of Slurm [49], an open-sourced Linux cluster manager, and uses HDFS [50] as the checkpoint store. Controller is implemented on top of Slurm control daemon (slurmctld) by adding global data structures for cluster state and waiting queues, an interface for communicating with the scheduler and the node agents (ComputeObs() and Schedule()), and PreemptionManager to support checkpointing through asynchronous preemption protocol. Scheduler provides an interface to load the checkpoint of the trained policy. It also supports a set of widely used scheduling policies, where users can configure one via script. NodeAgent is implemented on top of Slurm daemon (slurmd) with CheckpointManager, Profiler built on top of PyTorch profiler [51], and ProcessManager for initializing a node process group used for synchronization in distributed DL training.
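As an illustration of the online profiling performed by NodeAgent's Profiler, the snippet below runs the PyTorch profiler on a toy model; aggregating the recorded times into a per-iteration statistic, as done here, is an assumption for exposition rather than the exact implementation.

```python
# Minimal sketch of per-iteration profiling with torch.profiler on a toy workload.
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

model = nn.Linear(1024, 1024)
data = torch.randn(64, 1024)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    for _ in range(5):                     # a few profiled iterations
        loss = model(data).sum()
        loss.backward()

total_cpu_us = sum(evt.self_cpu_time_total for evt in prof.key_averages())
print(f"avg per-iteration CPU time: {total_cpu_us / 5 / 1e3:.2f} ms")
```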
Figure 3: System overview. The policy is trained on a trace-driven simulated GPU cluster environment using RL and deployed on the actual GPU cluster.
Figure 4 illustrates the in-depth workflow of Scheduler. The trained policy is plugged into Core and deployed on the GPU cluster. Core executes round-based scheduling with a predefined round duration knob. For every round, ComputeObs() encodes the cluster and the waiting queue state as a state representation and sends it to Core. Following the scheduling decision produced by the policy, Controller dequeues the target jobs to schedule from the waiting queue. Then Schedule() spawns a NodeAgent for the job (one per node allocated to the job) and submits the job training script. NodeAgent builds the process group for the job and delivers the profiled job stats to Controller after executing online profiling for a predefined number of iterations. Monitoring the contention sensitivity of jobs from the profiles, Controller sends a preemption signal to the node agents hosting jobs whose contention sensitivity exceeds a predefined threshold. NodeAgent waits for CheckpointManager to save the checkpoint of the job to the distributed file system (DFS)-based job checkpoint store and returns its status to PreemptionManager asynchronously. To support re-scheduling, PreemptionManager maintains a preemption lookup table. Thus, whether a job has been preempted can be identified by querying the job identifier on the lookup table, and the scheduler loads from the checkpoint store to resume from the latest checkpoint.
## VI Evaluation
We evaluate the performance of our approach using the two benchmark policies, RL-base and RL-Hybrid (Section IV).
**Environment.** Experiments are conducted on a homogeneous cluster of four nodes, each equipped with eight NVIDIA TITAN XP GPUs connected over 16GBps PCIe Gen3. Each node has one Mellanox MT27700 family ConnectX-4 NIC and is interconnected with 10Gbps Ethernet and 40Gbps InfiniBand.
We believe our evaluation environment of 32 GPUs covers a large section of important use cases: Microsoft reports that 93.7% of all jobs submitted to their internal DGX-2 clusters required at most 32 GPUs in the second half of 2021 [52]. Also, despite the moderate scale of the evaluation environment, we note that our design can be applied to large-scale clusters without any limitation.
**Traces.** To evaluate different scenarios with varying degrees of communication intensities, we generate four traces with different communication intensities according to the categorization in Section III. 4 Each trace contains ten job sets each with 256 randomly shuffled jobs. Each job is randomly assigned a GPU demand of up to 32, and given a total number of samples to train, which is configured to set its expected training time in an isolated cluster as one hour. The jobs' configurations (e.g., batch size, parameter size, number of layers, and gradient accumulation steps) are initialized randomly to evaluate extensibility to unseen cases.
**Branches of RL-base and RL-Hybrid.** We use five branches of RL-base and RL-Hybrid each by sweeping the range of the weights in the reward (\(W_{1}\), \(W_{2}\)) (Section IV) to demonstrate trade-off relationship between average contention sensitivity and utilization. 5 In Figure 5 to Figure 8, A, B, C, D, and E correspond the branch with (0.3, 0.7), (0.4, 0.6), (0.5, 0.5), (0.6, 0.4), and (0.7, 0.3) of the weights, respectively.
Footnote 4: Normal, heavy, medium, and low communication intensive traces follow the model distribution of GNN:IMG:DLRM:LM:FSDP:MoE=1:1:1:1:1:1, 1:1:1:1:4, 1:1:4:4:1:1, and 4:4:1:1:1:1, respectively.
Figure 4: Detailed workflow of Scheduler.
Figure 5: CDF of JCT. (Trace: Normal, Branch: B)
Figure 6: Proportion of average contention sensitivity.
**Results.** Figure 5 shows the cumulative distribution function (CDF) of JCT. Comparing average JCT to SRTF and LAS, RL-base (RL-Hybrid) achieves 15.4% (12.1%) and 18.2% (15.1%) reduction, respectively. The large improvement in average JCT is the result of reduced cluster-wide contention sensitivity. Figure 6 shows the proportion of average contention sensitivity (i.e., the average of the contention sensitivities that jobs experience during training). RL-base and RL-Hybrid show a concentration of average contention sensitivity at lower values than SRTF and LAS. Clearly, this demonstrates that the scheduling decisions of RL-base and RL-Hybrid successfully reduce cluster-wide average contention sensitivity, which leads to an eventual large reduction in cluster-wide average JCT. For p90 JCT, RL-base and RL-Hybrid equally achieve 16.4% and 20.7% reduction compared to SRTF and LAS, respectively. Even though RL-Hybrid shows a smaller average JCT reduction (\(\approx 3\%\) less) than RL-base, it trades this cost for a large improvement in cluster-wide utilization. Figure 7 shows the kernel density estimation (KDE) of utilization. RL-Hybrid displays the highest utilization, emphasizing the preferable trade-off between average JCT and utilization. The difference mainly stems from their policy discrepancies; when the trained policy decides \(\emptyset\), i.e., not to schedule, RL-Hybrid tries to improve utilization by multiplexing to greedy scheduling.
The reduced tail JCT demonstrates that scheduling with RL effectively curbs scheduling decisions that trigger the worst contention situations.
Figure 8 shows the results for traces with varying degrees of communication intensity. In normal, high, and medium communication traces, we observe a trade-off relationship between average JCT and utilization among RL-base, SRTF, and LAS (blue dashed lines). RL-base's A and B, trained with average contention sensitivity term weights (\(W_{1}\)) of 0.3 and 0.4, achieve relatively balanced average JCT and utilization. On the other hand, RL-base's C and D fail to reach the preferable trade-off. RL-base's E, trained with the highest penalization of the average contention sensitivity, achieves the best average JCT at the cost of much lower utilization compared with SRTF and LAS, which opt for maximizing utilization while not taking contention into account. RL-Hybrid moves the original dots of RL-base to the upper-right end of the graph. This means that when the cluster job distribution follows normal, high, or medium communication intensity conditions, adhering to the action of the trained RL policy in common cases while conforming to the decisions produced by the rule-based safety condition to prevent low utilization in rare cases leads to a large improvement in utilization at the expense of a relatively small increase in average JCT. Conversely, in low communication traces, we observe that SRTF and LAS perform similarly to RL-Hybrid because the relatively low level of contention sensitivities of jobs makes the impact of network contention negligible.
## VII Related Work
**GPU cluster schedulers for DL.** GPU clusters employ a scheduler for efficient allocation of cluster resources. The common goals of the schedulers include reducing average JCT [14, 15, 16, 17, 7, 9], improving utilization [18, 9, 13, 19], or providing fairness [53, 54]. Optimus [14] estimates training speed and assigns more resources to jobs with higher marginal gain. Gandiva [18] employs job time-slicing to provide early feedback and executes a greedy approach for job migration and packing. Tiresias [9] introduces _least attained service_ metric to prevent starvation of late arriving jobs. Themis [53] introduces _finish time fairness_ and promotes fairness between jobs. Gandiva\({}_{fair}\) guarantees
Figure 8: Performance comparison for varying traces. The light red and red colored dots are different branches of RL-base and RL-Hybrid, respectively. Utilization denotes the ratio of used GPUs to total GPUs in the cluster.
Figure 7: KDE of utilization. (Trace: Normal, Branch: B)
fairness between users through allocating proportional resource shares. Antman [19] segments jobs into prioritized and opportunistic jobs to apply different scheduling policies. Gavel [15] proposes a policy-agnostic scheduling mechanism and schedules based on throughput metric that is comparable across heterogeneous accelerator types. AFS [16] opts for balance between the _shortest job first_ policy and resource efficiency when allocating idle resources to jobs. Pollux [17] co-optimizes system throughput and job-level performance metrics. Synergy [13] allocates host resources (e.g., CPU and DRAM) disproportionately according to jobs' sensitivity to resources. Muri [7] enables multiple jobs to be packed on the same set of resources by interleaving different stages of different jobs. However, our work presents the first approach that identifies the network contention sensitivity of jobs and optimizes the cluster objectives by efficiently mitigating network contention.
**RL for systems.** Using RL for cluster scheduling allows rapid adaptation by learning to schedule DL training jobs that experience different levels of network contention sensitivities, where the distribution of jobs with different model, GPU demand, and placement combinations changes constantly. The potential of the RL-based approach in high adaptivity to constantly changing or even unseen conditions has already been shown in various domains. In TCP congestion control (CC), PCC-RL [27] shows that an RL-based TCP CC algorithm can successfully learn to distinguish different types of network losses that hand-optimized TCP CUBIC [55] cannot capture. In adaptive video streaming, Pensieve [28] demonstrates that RL-based adaptive bitrate selection for video streaming can improve quality-of-experience (QoE) such as reduced frame stalling while improving bandwidth utilization. Merina [30] shows that using meta-RL allows fast adaptation in unseen throughput dynamics, further improving QoE across a wide range of network throughput patterns. Our work applies RL to resource management systems, and presents the first approach to network contention-aware GPU cluster scheduling with RL.
## VIII Conclusion
We present a novel design that translates the network contention problem in GPU cluster scheduling into an RL problem. Our RL formulation trains the scheduling policies to efficiently tackle the challenge of continuously changing contention sensitivities of jobs in GPU clusters. We build an end-to-end system that can train scheduling policies with RL and deploy them on GPU clusters. Our evaluation shows that RL-based scheduling policies reduce average and tail JCT by up to 18.2% and 20.7% compared to the widely used LAS and SRTF scheduling policies, and allow a preferable trade-off of a large improvement in utilization at a small cost in average JCT. Our work is open-sourced at [https://github.com/gajagajago/deepshare](https://github.com/gajagajago/deepshare) for future research in RL-based GPU cluster scheduling.
|
2309.14971 | Minimizing Energy Consumption for 5G NR Beam Management for RedCap
Devices | In 5G New Radio (NR), beam management entails periodic and continuous
transmission and reception of control signals in the form of synchronization
signal blocks (SSBs), used to perform initial access and/or channel estimation.
However, this procedure demands continuous energy consumption, which is
particularly challenging to handle for low-cost, low-complexity, and
battery-constrained devices, such as RedCap devices to support mid-market
Internet of Things (IoT) use cases. In this context, this work aims at reducing
the energy consumption during beam management for RedCap devices, while
ensuring that the desired Quality of Service (QoS) requirements are met. To do
so, we formalize an optimization problem in an Indoor Factory (InF) scenario to
select the best beam management parameters, including the beam update
periodicity and the beamwidth, to minimize energy consumption based on users'
distribution and their speed. The analysis yields the regions of feasibility,
i.e., the upper limit(s) on the beam management parameters for RedCap devices,
that we use to provide design guidelines accordingly. | Manishika Rawat, Matteo Pagin, Marco Giordani, Louis-Adrien Dufrene, Quentin Lampin, Michele Zorzi | 2023-09-26T14:44:08Z | http://arxiv.org/abs/2309.14971v1 | # Minimizing Energy Consumption for 5G NR Beam Management for RedCap Devices
###### Abstract
In 5G New Radio (NR), beam management entails periodic and continuous transmission and reception of control signals in the form of synchronization signal blocks (SSBs), used to perform initial access and/or channel estimation. However, this procedure demands continuous energy consumption, which is particularly challenging to handle for low-cost, low-complexity, and battery-constrained devices, such as RedCap devices to support mid-market Internet of Things (IoT) use cases. In this context, this work aims at reducing the energy consumption during beam management for RedCap devices, while ensuring that the desired Quality of Service (QoS) requirements are met. To do so, we formalize an optimization problem in an Indoor Factory (InF) scenario to select the best beam management parameters, including the beam update periodicity and the beamwidth, to minimize energy consumption based on users' distribution and their speed. The analysis yields the regions of feasibility, i.e., the upper limit(s) on the beam management parameters for RedCap devices, that we use to provide design guidelines accordingly.
5G NR, 3GPP, beam management, RedCap devices, energy consumption, Indoor Factory.
## I Introduction
In the last few years, standardization bodies and industry players have developed several Low-Power Wide Area Network (LPWAN) technologies, such as Long Range (LoRa), Narrowband-IoT (NB-IoT), and SigFox to support IoT applications in many fields, ranging from agriculture, transportation, logistics, and healthcare, as well as for smart cities [1, 2]. Along these lines, the 3rd Generation Partnership Project (3GPP) is also promoting new specifications [3] to simplify 5G NR standard operations to support high-end IoT devices, referred to as RedCap devices [4].
Among other features, RedCap devices may be operating in the lower part of the millimeter wave (mmWave) spectrum to improve the network performance in more demanding scenarios, such as in an indoor factory scenario [5]. Communication at mmWaves, however, requires directionality between the transmitter and the receiver to compensate for the additional path loss experienced at those frequencies, typically realized via Multiple Input Multiple Output (MIMO) antenna arrays. In 5G NR, beam management was designed to allow the endpoints to identify and continuously maintain the optimal direction of transmission, e.g., during initial access and/or channel estimation [6]. Specifically, beam management implies exhaustive search based on Synchronization Signal Blocks (SSBs), collected into bursts and transmitted by a Next Generation Node Base (gNB) according to pre-specified intervals and directions. However, beam management involves severe energy consumption for sending and receiving control signals, which is a function of the beamwidth and periodicity of SSBs [7]. Even though this is generally not an issue for 5G NR systems, it may be challenging to handle for low-complexity, battery-powered RedCap devices, and may degrade the network performance.
Recently, the scientific community has explored possible simplifications of the 5G NR standard to optimize power consumption for RedCap devices [5], for example via simplified air interface procedures [8], protocol stack, antenna configurations [9], and enhanced power-saving functionalities such as Discontinuous Reception (eDRX) or wake-up signals [10]. The 3GPP has also launched some Study and Work Items in this domain, for example in TR 38.869 [11] to study low-power wake-up signal and receiver for RedCap devices. However, to the best of our knowledge, there is no prior work focusing on beam management for RedCap devices, which stimulates further research in this domain.
To fill these gaps, in this work we formalize an optimization problem to determine the optimal beam management design for RedCap devices to minimize the energy consumption. Notably, we focus on an Indoor Factory (Inf) scenario, and derive the so-called regions of feasibility, i.e., the upper limit(s) on the beam management parameters, including the number of SSBs per burst and the burst periodicity, to guarantee that the Quality of Service (QoS) constraints are met, for example that User Equipments (UEs) never go undetected and/or maintain alignment as they move. Simulation results demonstrate that there exists an optimal configuration for beam management to promote energy efficiency, which depends on the speed of the UEs, the beamwidth, and other network parameters.
The rest of the paper is organized as follows. In Sec. II we present our system model (deployment, energy, mobility, and beam management). In Sec. III we describe our optimization problem. Also, we describe the impact of the number of antennas at the gNB on the QoS constraints. In Sec. IV we present the simulation results and provide design guidelines towards the optimal set of parameters for beam management. Finally, conclusions are given in Sec. V.
## II System Model
In this section we present our deployment model (Sec. II-A), beam management model (Sec. II-B), energy consumption model (Sec. II-C), and mobility model (Sec. II-D).
### _Deployment Model_
We consider a 3GPP InF-Sparse High (InF-SH) scenario [12] with an area of size \(L\times W\times H\), a single gNB placed at the center of the ceiling at height \(h_{\text{gNB}}\), and obstacles in the form of clutters of size \(d_{c}\), height \(h_{c}\), and density \(r\). Then, \(K\) UEs are uniformly deployed at height \(h_{\text{UE}}\) around the clutters. The location of UE\({}_{k}\), for \(k\in\{1,2,\ldots,K\}\), is given by \((d_{k},\phi_{k})\), where \(d_{k}\) is the distance between UE\({}_{k}\) and the gNB, and \(\phi_{k}\) is the phase of UE\({}_{k}\) measured counterclockwise. The UEs are assumed to be moving on a circle at constant velocity \(v\) in a counterclockwise direction.
The Signal-to-Noise Ratio (SNR) \(\gamma_{k}\) at UE\({}_{k}\) is given by [13]
\[\gamma_{k}(d_{\text{3D}})=\frac{\mathcal{H}_{\text{L}}P_{r}(d_{\text{3D}})+ \mathcal{H}_{\text{N}}(1-P_{r}(d_{\text{3D}}))}{N_{0}\cdot B\cdot\text{NF}/G_{ \text{gNB},k}G_{\text{UE}}}, \tag{1}\]
where \(d_{\text{3D}}=\sqrt{(h_{\text{gNB}}-h_{\text{UE}})^{2}+d_{k}^{2}}\) is the distance between the gNB and UE\({}_{k}\), \(N_{0}\) is the noise Power Spectral Density, \(B\) is the channel bandwidth, and NF is the noise figure. \(\mathcal{H}_{\text{L}}\) and \(\mathcal{H}_{\text{N}}\) include the effect of path loss, shadowing and fading parameter for the Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) channels, respectively and \(P_{r}(d_{\text{3D}})\) is the LoS probability, as described in [12]. Specifically, we have
\[\mathcal{H}_{j}=|\mathrm{h}_{j}^{k}|^{2}\mathrm{PL}_{j}^{k},\;j\in\{\text{L, N}\}, \tag{2}\]
where \(\mathrm{h}_{j}^{k}\) and \(\mathrm{PL}_{j}^{k}\) are the channel fading gain and path loss for the LoS (L) and NLoS (N) links, respectively.
In Eq. (1), \(G_{\text{gNB},k}\) (\(G_{\text{UE}}\)) is the beamforming gain at the gNB (UE). We assume analog beamforming (a realistic assumption for RedCap devices to minimize the energy consumption [14]), such that the gNB (UEs) can probe only one direction at a time. Specifically, the gNB is equipped with \(N_{\text{gNB}}\) antennas, and the beamforming gain is expressed as [15]
\[G_{\text{gNB},k}=\sin\left(\frac{N_{\text{gNB}}\pi}{2}\sin\theta_{k}\right)/ \sin\left(\frac{\pi}{2}\sin\theta_{k}\right), \tag{3}\]
where \(\theta_{k}\) is the angular offset with respect to UE\({}_{k}\), as described in Sec. II-D.
### _Beam Management Model_
According to the 5G NR specifications [16], beam management operations rely on a directional version of the 3GPP LTE synchronization signal called SSB. Specifically, each SSB consists of 4 OFDM symbols in time and \(240\) subcarriers in frequency, where the subcarrier spacing depends on the 5G NR numerology [6]. Each SS block is mapped into a certain angular direction so that directional measurements can be made based on the quality of the received signal, e.g., in terms of the SNR. To reduce the overhead, SSBs can be gathered together into SS bursts. An SS burst consists of \(N_{\text{SS}}\in\{8,16,32,64\}\) SSBs, and the periodicity between consecutive SS bursts is \(T_{\text{SS}}\in\{5,10,20,40,80,160\}\) ms.
### _Energy Consumption Model_
In 5G NR beam management, the gNB transmits the SSBs by sequentially sweeping different angular directions to cover the whole beam space (or cell sector). At the UE, the energy consumption (EC) required to receive those SSBs is equal to
\[\text{EC}=S_{D}P_{\text{UE}}T_{\text{SSB}}, \tag{4}\]
where \(S_{D}\) is the number of SSBs required to completely sweep the beam space (which is a function of the beamwidth at the gNB), \(P_{\text{UE}}\) is the power consumed for receiving each SSB at the UE, and \(T_{\text{SSB}}\) is the time required to send each SSB.
From [6, Eq. (3)], the number of SSBs required to completely sweep the beam space on the horizontal plane, with azimuth ranging from \(0\) to \(2\pi\), can be expressed as
\[S_{D}=\lceil 2\pi/\Delta_{\text{3dB}}\rceil\approx\lceil\pi N_{\text{gNB}} \rceil, \tag{5}\]
where \(\Delta_{\text{3dB}}\) is the 3-dB beamwidth, which can be approximated as \(\Delta_{\text{3dB}}\approx 2/N_{\text{gNB}}\) according to [15]. Since each SSB consists of 4 OFDM symbols, the time required to send one SSB can be expressed as [6, Eq. (2)]
\[T_{\text{SSB}}=4T_{\text{symb}}=4\left(71.45/2^{n}\right), \tag{6}\]
where \(n\) represents the 5G NR numerology.
Finally, the power consumption at the UE, equipped with \(N_{\text{UE}}\) antennas, can be expressed as
\[P_{\text{UE}}=N_{\text{UE}}(P_{\text{LNA}}+P_{\text{PS}})+P_{\text{RF}}+P_{C} +2P_{\text{ADC}}, \tag{7}\]
where \(P_{\text{RF}}=P_{M}+P_{\text{LO}}+P_{\text{LPF}}+P_{\text{BB}}\) is the power consumption of the RF chain [17]. A description of the power components appearing in Eq. (7), and the relative numerical values, is provided in Table I.
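To make the energy model concrete, the sketch below evaluates Eqs. (4)-(7) numerically. The power figures passed in are placeholders rather than the values of Table I, and the choice of numerology is only an example.

```python
# Minimal sketch of the per-sweep energy model of Eqs. (4)-(7); placeholder power values in mW.
import math

def ssb_duration_us(n: int) -> float:
    """T_SSB in microseconds for 5G NR numerology n (Eq. (6))."""
    return 4 * (71.45 / 2 ** n)

def num_ssbs(n_gnb: int) -> int:
    """S_D: number of SSBs needed to sweep the whole azimuth plane (Eq. (5))."""
    return math.ceil(math.pi * n_gnb)

def ue_power_mw(n_ue: int, p_lna: float, p_ps: float, p_rf: float, p_c: float, p_adc: float) -> float:
    """P_UE of Eq. (7): per-antenna front end, RF chain, combiner, and two ADCs."""
    return n_ue * (p_lna + p_ps) + p_rf + p_c + 2 * p_adc

p_ue = ue_power_mw(n_ue=2, p_lna=5.0, p_ps=9.0, p_rf=30.0, p_c=20.0, p_adc=15.0)   # placeholders
ec_mj = num_ssbs(16) * p_ue * ssb_duration_us(3) * 1e-6    # Eq. (4): mW x s = mJ per full sweep
print(f"S_D = {num_ssbs(16)}, EC per sweep = {ec_mj:.3f} mJ")
```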
### _Mobility Model_
At the beginning of the beam management process, UE\({}_{k}\), \(k\in\{1,2,\ldots,K\}\), establishes a physical link connection with the gNB using a certain beam. Due to the finite pre-defined codebook of directions available at the gNB, UE\({}_{k}\) comes with a non-zero initial angular offset \(\theta_{i,k}=\min(\phi_{k}-\boldsymbol{\bar{\phi}_{br}})\) with respect to the gNB antenna boresight directions \(\boldsymbol{\bar{\phi}_{br}}=\left[0,1,2,\ldots,S_{D}-1\right]\Delta_{\text{3dB}}\), as represented in Fig. 1.
Fig. 1: UE mobility model. During beam management, UE\({}_{k}\) accumulates an angular offset \(\theta_{k}\) due to both initial misalignment (\(\theta_{i,k}\)) and mobility (\(\theta_{o,k}\)). The latter depends on the UE speed \(v\), and the beam management time \(T_{\text{BM}}\).
At the same time we assume that, during beam management, UE\({}_{k}\) can move in a counterclockwise direction at constant velocity \(v\). During this time, UE\({}_{k}\) may lose beam alignment and the corresponding beamforming gain, and get disconnected if the resulting SNR is lower than a pre-defined threshold [18]. Thus, we define \(\theta_{v,k}\) as the angular offset due to mobility during beam management, i.e.,
\[\theta_{v,k}=vT_{\mathrm{BM}}/d_{k}. \tag{8}\]
In Eq. (8), \(T_{\mathrm{BM}}\) is the time for beam management, and is measured as the delay from the first SSB transmission to the completion of the sweep in all possible angular directions, which can be expressed as in [6, Eq. (4)], i.e.,
\[T_{\mathrm{BM}}=T_{\mathrm{SS}}\left(\left\lceil S_{D}/N_{\mathrm{SS}}\right \rceil-1\right)+T_{\ell}, \tag{9}\]
where \(T_{\ell}\) is the time required to send the remaining SSBs in the last burst and is given in [6, Eq. (6)]. Therefore, the overall angular offset for UE\({}_{k}\) during beam management, due to both the initial offset (\(\theta_{i,k}\)) and the offset accumulated due to mobility (\(\theta_{v,k}\)), can be expressed as
\[\theta_{k}=|\theta_{v,k}+\theta_{i,k}|. \tag{10}\]
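The sketch below evaluates Eqs. (8)-(10). Since the closed form of \(T_{\ell}\) ([6, Eq. (6)]) is not reproduced in this paper, it is passed in as an input, and all numerical values are illustrative.

```python
# Minimal sketch of the angular-offset model of Eqs. (8)-(10); all numbers are placeholders.
import math

def beam_management_time(t_ss: float, s_d: int, n_ss: int, t_last: float) -> float:
    """T_BM of Eq. (9); all times in seconds."""
    return t_ss * (math.ceil(s_d / n_ss) - 1) + t_last

def total_angular_offset(v: float, t_bm: float, d_k: float, theta_init: float) -> float:
    """Eq. (10): initial misalignment plus the drift accumulated during beam management (Eq. (8))."""
    theta_v = v * t_bm / d_k
    return abs(theta_v + theta_init)

s_d = math.ceil(math.pi * 16)                                            # N_gNB = 16 -> S_D = 51
t_bm = beam_management_time(t_ss=20e-3, s_d=s_d, n_ss=8, t_last=1e-3)    # t_last: placeholder
theta = total_angular_offset(v=2.0, t_bm=t_bm, d_k=10.0, theta_init=0.02)
print(f"T_BM = {t_bm * 1e3:.1f} ms, total offset = {theta:.4f} rad")
```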
## III Optimization Problem
In this section we define an optimization problem to minimize the energy consumption for RedCap devices for transmitting/receiving SSBs during beam management. The optimization problem can be formalized as follows:
\[\min_{N_{\mathrm{gNB}}}\quad\text{EC}= S_{D}P_{\mathrm{UE}}T_{\mathrm{SSB}}, \tag{11a}\] \[P_{T}\gamma_{k}\geq\tau,\;\forall k;\] (11b) \[N_{\mathrm{gNB}}\in\{2,3,\ldots,64\}, \tag{11c}\]
where \(P_{T}\) is the transmission power at the gNB, and \(\gamma_{k}\) is the SNR at UE\({}_{k}\) as given in Eq. (1). In (11), (11b) ensures that the SNR at UE\({}_{k}\) is greater than or equal to a minimum threshold \(\tau\), which is large enough to ensure that UE\({}_{k}\) can be properly detected, and (11c) restricts the number of antenna elements at the gNB to \(64\), as expected for RedCap devices.
**Modeling of the constraints.** The optimization problem determines the optimal value of \(N_{\mathrm{gNB}}\), referred to as \(N^{*}\), based on the SNR \(\gamma_{k}\), \(\forall k\), which depends on \(G_{\mathrm{gNB}}\), so on the angular offset \(\theta_{k}\) introduced by the moving UEs. Indeed, as the UE moves at constant velocity \(v\) during the beam management process, it may lose alignment with respect to the associated beam, potentially deteriorating the beamforming gain. This may cause the SNR of UE\({}_{k}\) to drop below the sensitivity threshold \(\tau\), preventing it from being detected. The factors that may lead to misalignment include: (i) the UE velocity \(v\) (the faster the UE, the sooner it may lose alignment); (ii) the beam management time \(T_{\mathrm{BM}}\) and, consequently \(T_{\mathrm{SS}}\) and \(N_{\mathrm{SS}}\) (the slower the beam management procedure, the higher the probability that the UE would lose alignment); and (iii) the number of antennas \(N_{\mathrm{gNB}}\), which defines the beamwidth (the narrower the beam, the higher the probability that the UE would lose alignment). In the following, we investigate the impact of those terms on the optimization problem.
In Fig. 2 we plot \(\bar{\theta}\) (the angular offset averaged over all \(K\) UEs in the scenario) vs. \(N_{\mathrm{gNB}}\) for different values of \(v\) and \(T_{\mathrm{SS}}\), and for \(N_{\mathrm{SS}}=\{8,16\}\). We observe that \(\bar{\theta}\) initially decreases with \(N_{\mathrm{gNB}}\). In fact, when the number of antennas is small, the beam is large enough to ensure continuous alignment despite mobility. In this region, \(\bar{\theta}\) is thus dominated by the initial offset \(\theta_{i,k}\) with respect to the antenna boresight direction. Then, as \(N_{\mathrm{gNB}}\) increases, the beams become progressively narrower, and the number of SSBs that are required to be sent to completely sweep the beam space also increases, which increases the beam management time. In these conditions, the angular offset due to mobility \(\theta_{v,k}\) increases accordingly as per Eq. (8). In addition, we observe that in both Fig. 2(a) and 2(b) the angular offset for \(v=2\) m/s and \(T_{\mathrm{SS}}=20\) ms overlaps with the offset for
Fig. 3: Average gain at the gNB \(G_{\mathrm{gNB}}\) as a function of \(N_{\mathrm{gNB}}\), the UE speed \(v\), and the SS burst periodicity \(T_{\mathrm{SS}}\).
Fig. 2: Average angular offset \(\bar{\theta}\), as a function of \(N_{\mathrm{gNB}}\), the UE speed \(v\), and the SS burst periodicity \(T_{\mathrm{SS}}\).
\(v=1\) m/s and \(T_{\rm SS}=40\) ms. Similarly, the offset for \(v=2\) m/s and \(T_{\rm SS}=40\) ms overlaps with the offset for \(v=4\) m/s and \(T_{\rm SS}=20\) ms. Therefore, we conclude that the angular offset depends on \(v\) and \(T_{\rm SS}\) only through their product. This observation becomes significant when analyzing the feasibility regions in Sec. IV.
Notice that the zigzag effect in Fig. 2 is due to the fact that \(\theta_{v,k}\), and hence \(\bar{\theta}\), is a function of \(T_{\rm BM}\) which, as reported in Eq. (9), involves a ceiling function. This effect increases as \(N_{\rm gNB}\) increases, i.e., as \(\theta_{v,k}\) dominates the average angular offset \(\bar{\theta}\).
Additionally, in Fig. 3 we plot the average antenna gain at the gNB (\(G_{\rm gNB}\), averaged over all \(K\) UEs in the scenario) vs. \(N_{\rm gNB}\) for different values of \(v\) and \(T_{\rm SS}\), and for \(N_{\rm SS}=\{8,16\}\). We notice that \(G_{\rm gNB}\) initially increases with \(N_{\rm gNB}\), and then drops after a threshold due to mobility. If \(N_{\rm M}\) is the number of antennas corresponding to the point where \(G_{\rm gNB}\) is maximum, we conclude that the optimization problem in (11) is restricted to the values of \(N_{\rm gNB}\leq N_{\rm M}\) because the energy consumption increases with \(N_{\rm gNB}\). We further observe that \(N_{\rm M}\) decreases with \(v\) and \(T_{\rm SS}\), and increases with \(N_{\rm SS}\). In other words, the product \(vT_{\rm SS}\) for a given value of \(N_{\rm SS}\) establishes an upper limit to determine the regions of feasibility, as further discussed in Sec. IV: if the SNR constraints are not satisfied for \(N_{\rm gNB}\leq N_{\rm M}\), the optimization problem will be infeasible.
**Optimization algorithm.** Based on the optimization problem in (11), and the considerations above, we conclude that the energy consumption at the UE increases monotonically with the number of antennas at the gNB. This suggests that the minimum value of \(N_{\rm gNB}\) at which the SNR constraints are satisfied should be the optimal \(N_{\rm gNB}\), or \(N^{*}\). For a given transmit power (\(P_{T}\)) and SNR threshold (\(\tau\)), if the constraints are not met, the problem is infeasible.
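The procedure just described amounts to a sweep over \(N_{\rm gNB}\); a minimal sketch is given below. The SNR routine is left abstract because it depends on the full InF-SH channel and mobility model of Sec. II (Eqs. (1), (3), and (10)), and all quantities are assumed to be in linear scale.

```python
# Minimal sketch: return the smallest N_gNB satisfying constraint C1 of (11) for every UE.
from typing import Callable, Optional, Sequence

def find_optimal_antennas(snr_fn: Callable[[int, int], float],
                          ue_ids: Sequence[int],
                          p_t_lin: float,
                          tau_lin: float,
                          n_max: int = 64) -> Optional[int]:
    for n_gnb in range(2, n_max + 1):
        if all(p_t_lin * snr_fn(n_gnb, k) >= tau_lin for k in ue_ids):
            return n_gnb          # EC grows monotonically with N_gNB, so the first feasible N is N*
    return None                   # infeasible for this (v, T_SS, N_SS, P_T, tau) configuration
```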
## IV Numerical Results
In this section, we evaluate the energy consumption for beam management as a function of \(N_{\rm SS}\), \(T_{\rm SS}\), \(N_{\rm gNB}\), \(v\), \(P_{T}\), and \(\tau\). Specifically, we perform \(10^{5}\) Monte Carlo simulations in MATLAB for each combination of parameters, and in each simulation we find \(N^{*}\) using the algorithm presented in Sec. III. The simulation parameters are reported in Tab. I, taken from [12, Table 7.2-4] for the InF-SH scenario, [5] for the RedCap devices, and [17] for the power consumption.
The goal of our analysis is to determine the regions of feasibility, and the corresponding set of 5G NR beam management parameters which minimize the energy consumption while satisfying SNR constraints. Notice that we assume zero misdetection probability in the analysis, i.e., no user goes misdetected during the beam management process.
### _Impact of \(T_{\rm SS}\) and \(N_{\rm SS}\)_
Figs. 4(a) and 4(b) depict the UE misdetection probability and \(N^{*}\), respectively, as a function of \(T_{\rm SS}\) and \(v\), for \(N_{\rm SS}=8\). While \(N^{*}\) depends on \(P_{T}\) and \(\tau\), it does not change with \(v\), \(T_{\rm SS}\), and \(N_{\rm SS}\). This is because the objective function always drives the optimization problem towards the minimum value of \(N_{\rm gNB}\) that meets the SNR constraints for each UE, so as to minimize the energy consumption. In turn, this sets \(N^{*}\) to the minimum value corresponding to the largest angular offset beyond which the problem becomes infeasible (which determines the values of \(v\) and \(T_{\rm SS}\) for a given \(N_{\rm SS}\)), as described in Sec. III.
Nevertheless, given \(P_{T}\) and \(\tau\), there exists only a limited set of values of \(v\), \(T_{\rm SS}\), and \(N_{\rm SS}\) for which the SNR constraints are met. As a consequence, some bars are missing in Fig. 4(b), which indicates that the corresponding problem is infeasible. For example, for \(T_{\rm SS}=160\) ms and \(v\geq 1\) m/s, there are no values of \(N_{\rm gNB}\) for which \({\rm SNR}_{k}\geq\tau,\ \forall k\). This is also observed in Fig. 4(a), where the misdetection probability at \(T_{\rm SS}=160\) ms is greater than zero for \(v\geq 1\) m/s. Similarly, \(v\geq 2\) m/s is infeasible for \(T_{\rm SS}\geq 80\) ms, whereas for \(v\leq 4\) m/s and \(T_{\rm SS}\leq 20\) ms the problem is feasible, which yields \(N^{*}=5.4\) on average.1 This is because increasing \(v\) or \(T_{\rm SS}\) increases the average angular offset as per Eq. (10), and may cause the UEs to lose beam alignment sooner, thus making the problem infeasible.
Footnote 1: Notice that, while we constrain \(N^{*}\) to be an integer in each Monte Carlo simulation, here \(N^{*}\) represents the average of different realizations.
### _Impact of \(P_{T}\) and \(\tau\)_
Fig. 5 illustrates the average optimal number of antennas \(N^{*}\) as a function of the transmission power \(P_{T}\) and \(T_{\rm SS}\), for \(\tau\in\{3,7\}\) dB, \(v=1\) m/s and \(N_{\rm SS}=8\). We observe that as \(P_{T}\) decreases and \(\tau\) increases, \(N^{*}\) increases. Indeed, increasing the number of antennas leads to a higher (best case) beamforming gain, thus possibly improving the minimum SNR at the UEs. At the same time, decreasing \(P_{T}\) and/or increasing \(\tau\) also reduces the set of values for which the problem is feasible, as demonstrated by the missing bars in Fig. 5. In fact, although the angular offset does not directly depend on \(P_{T}\) and \(\tau\), a
smaller \(P_{T}\) or a higher \(\tau\) effectively imposes progressively stricter constraints on the problem, as per C1 in (11). For instance, {\(P_{T}=18\) dBm, \(\tau=3\) dB, \(T_{\rm SS}\leq 160\) ms} is a feasible configuration, whereas {\(P_{T}=12\) dBm, \(\tau=3\) dB, \(T_{\rm SS}>40\) ms} is not.
### _Feasibility Regions_
For given values of \(P_{T}\) and \(\tau\), there exists only a limited set of values of \(v\), \(T_{\rm SS}\), and \(N_{\rm SS}\) for which the problem in (11) is feasible, i.e., the SNR constraints are guaranteed. Table II reports these feasibility regions for \(P_{T}=18\) dBm and \(\tau\in\{3,7,10\}\) dB, in terms of the highest product of \(v\) and \(T_{\rm SS}\) supported by the system. We recall that, as observed in Sec. III, both \(v\) and \(T_{\rm SS}\) have the same impact on the angular offset, and the feasibility regions are perfectly defined by the product \(vT_{SS}\). For example, for \(N_{\rm SS}=8\) and \(\tau=7\) dB, the feasibility region is upper bounded by \(vT_{SS}=0.08\) m. The results in Table II have been obtained for different values of \(N_{\rm SS}\) and \(\tau\), and \(v\leq 25\) m/s using a similar analysis as in Sec. IV-A.
Fig. 6 depicts the feasibility regions in terms of the upper bounds of parameters \(N_{\rm SS}\) and \(T_{\rm SS}\) for which the optimization problem in (11) is feasible, for \(P_{T}=18\) dBm and \(\tau\in\{3,7,10\}\) dB. These plots were generated using the values in Table II, and are intended to provide guidelines towards the optimal 5G NR beam management configurations to minimize the energy consumption for RedCap devices. In general, we observe that as \(\tau\) increases, the feasibility regions become smaller. This is because increasing the threshold \(\tau\) translates into a tighter constraint on the SNR (C1 in (11)), thus the optimization problem yields larger values of \(N^{*}\) to increase the beamforming gain. However, this also implies narrower beams, which in turn reduce the angular offset which can be tolerated by the system.
Furthermore, the size of the feasibility regions is inversely proportional to \(v\) and \(T_{\rm SS}\), as expected from the analysis in Sec. III. Indeed, if the UEs move faster, or if the beam management process takes longer, the angular offset in Eq. (10) also increases, and so does the probability that the UEs would lose beam alignment. However, we can see from Eq. (9) that \(T_{\rm SS}\) does not influence the beam management time \(T_{\rm BM}\) if \(S_{D}\leq N_{\rm SS}\), i.e., if sending the SSBs requires exactly one burst [19]. Based on the expression of \(S_{D}\) in Eq. (5), this condition is true if \(N_{\rm gNB}\leq 3,5,11,21\), for \(N_{\rm SS}=8,16,32,64\) respectively. In general, it is convenient to choose \(N_{\rm gNB}\) accordingly, to limit the impact of \(T_{\rm SS}\) on the shape of the feasibility regions.
### _Energy Consumption_
The feasibility regions in Fig. 6 show that the smallest (highest) feasible \(T_{\rm SS}\) (\(N_{\rm SS}\)) (i.e., the bottom-right part of the feasibility region) would be the optimal configuration for the beam management. Indeed, this choice implies faster beam alignment and better SNR on average, but also entails the highest overhead as more time resources are used for sending control signals at the expense of data transmissions [6, Fig.
Fig. 4: Misdetection probability (top) and \(N^{*}\) (bottom) vs. the SS burst periodicity \(T_{\rm SS}\) and the UE speed \(v\), considering \(P_{T}=18\) dBm, sensitivity threshold \(\tau=7\) dB, and \(N_{\rm SS}=8\).
17]. Furthermore, let \(\overline{\text{EC}}_{t}\) be the average energy consumption for sending SSBs over time, which can be expressed as [7]:
\[\overline{\text{EC}}_{t}=(P_{\text{UE}}T_{\text{SSB}})N_{\text{SS}}/T_{\text{SS}}, \tag{12}\]
where \(P_{\text{UE}}T_{\text{SSB}}\) represents the average energy consumption for sending one SSB, as per Eq. (4).
Overall, there exists a trade-off between the beam management periodicity and the resulting overhead and \(\overline{\text{EC}}_{t}\), which leads to the optimal values of \(T_{\text{SS}}\) and \(N_{\text{SS}}\). We thus propose to operate in the top-left portion of the feasibility regions, i.e., choosing the highest possible \(T_{\text{SS}}\) at \(N_{\text{SS}}=8\). In this way, we minimize the average energy consumption per unit time, while still satisfying the SNR constraints as we are in the feasibility regions. For instance, for \(P_{T}=18\) dBm and \(\tau\in\{3,7,10\}\) dB, the optimal configuration for \((N_{\text{SS}},T_{\text{SS}})\) at \(v=1\) m/s is \((8,160\) ms), \((8,80\) ms) and \((8,20\) ms), respectively.
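A quick numerical check of Eq. (12) illustrates why the top-left corner of the feasibility region is preferred; the per-SSB energy is left as a unit placeholder since it equals \(P_{\text{UE}}T_{\text{SSB}}\) from Eq. (4).

```python
# Minimal sketch of Eq. (12): average energy consumed per unit time for SSB reception.
def avg_energy_per_second(per_ssb_energy: float, n_ss: int, t_ss: float) -> float:
    return per_ssb_energy * n_ss / t_ss

for n_ss, t_ss in [(8, 0.160), (8, 0.020), (64, 0.160)]:
    print(n_ss, int(t_ss * 1e3), avg_energy_per_second(1.0, n_ss, t_ss))
# (8, 160 ms) consumes 8x less per second than either (8, 20 ms) or (64, 160 ms), which is why
# the largest feasible T_SS at N_SS = 8 minimizes the average energy consumption over time.
```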
## V Conclusions and Future Work
In this work, we explored the 5G NR beam management design for RedCap devices in an InF-SH scenario. In this scenario, and during beam management, a moving device may lose alignment with the associated beam, potentially resulting in the UE going misdetected. Therefore, we formalized an optimization problem to minimize the energy consumption during beam management, while ensuring that some desired QoS requirements, measured in terms of the misdetection probability, are met. Through simulations, we identified the feasibility regions where the problem can be solved, and proposed the optimal values of the beam management parameters for RedCap devices, such as the optimal SSB size and periodicity, to maintain a minimum energy consumption while optimizing latency and overhead. As part of our future work, we will generalize our optimization problem to other scenarios like smart agriculture, introduce additional mobility models, as well as consider more sophisticated optimization methods, e.g., based on machine learning.
## Acknowledgment
This work was partially supported by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, partnership on "Telecommunications of the Future" (PE0000001 - program "RESTART").
|
2309.14949 | Towards Real-World Test-Time Adaptation: Tri-Net Self-Training with
Balanced Normalization | Test-Time Adaptation aims to adapt source domain model to testing data at
inference stage with success demonstrated in adapting to unseen corruptions.
However, these attempts may fail under more challenging real-world scenarios.
Existing works mainly consider real-world test-time adaptation under non-i.i.d.
data stream and continual domain shift. In this work, we first complement the
existing real-world TTA protocol with a globally class imbalanced testing set.
We demonstrate that combining all settings together poses new challenges to
existing methods. We argue the failure of state-of-the-art methods is first
caused by indiscriminately adapting normalization layers to imbalanced testing
data. To remedy this shortcoming, we propose a balanced batchnorm layer to swap
out the regular batchnorm at inference stage. The new batchnorm layer is
capable of adapting without biasing towards majority classes. We are further
inspired by the success of self-training~(ST) in learning from unlabeled data
and adapt ST for test-time adaptation. However, ST alone is prone to over
adaption which is responsible for the poor performance under continual domain
shift. Hence, we propose to improve self-training under continual domain shift
by regularizing model updates with an anchored loss. The final TTA model,
termed as TRIBE, is built upon a tri-net architecture with balanced batchnorm
layers. We evaluate TRIBE on four datasets representing real-world TTA
settings. TRIBE consistently achieves the state-of-the-art performance across
multiple evaluation protocols. The code is available at
\url{https://github.com/Gorilla-Lab-SCUT/TRIBE}. | Yongyi Su, Xun Xu, Kui Jia | 2023-09-26T14:06:26Z | http://arxiv.org/abs/2309.14949v1 | # Towards Real-World Test-Time Adaptation: Tri-Net Self-Training with Balanced Normalization
###### Abstract
Test-Time Adaptation aims to adapt source domain model to testing data at inference stage with success demonstrated in adapting to unseen corruptions. However, these attempts may fail under more challenging real-world scenarios. Existing works mainly consider real-world test-time adaptation under non-i.i.d. data stream and continual domain shift. In this work, we first complement the existing real-world TTA protocol with a globally class imbalanced testing set. We demonstrate that combining all settings together poses new challenges to existing methods. We argue the failure of state-of-the-art methods is first caused by indiscriminately adapting normalization layers to imbalanced testing data. To remedy this shortcoming, we propose a balanced batchnorm layer to swap out the regular batchnorm at inference stage. The new batchnorm layer is capable of adapting without biasing towards majority classes. We are further inspired by the success of self-training (ST) in learning from unlabeled data and adapt ST for test-time adaptation. However, ST alone is prone to over adaption which is responsible for the poor performance under continual domain shift. Hence, we propose to improve self-training under continual domain shift by regularizing model updates with an anchored loss. The final TTA model, termed as TRIBE, is built upon a tri-net architecture with balanced batchnorm layers. We evaluate TRIBE on four datasets representing real-world TTA settings. TRIBE consistently achieves the state-of-the-art performance across multiple evaluation protocols. The code is available at [https://github.com/Gorilla-Lab-SCUT/TRIBE](https://github.com/Gorilla-Lab-SCUT/TRIBE).
## 1 Introduction
The recent success of deep neural networks relies on the assumption that a pre-trained model generalizes to an i.i.d. testing domain [41]. When deep learning models are to be deployed in real-world applications, robustness to out-of-distribution testing data, e.g. visual corruptions caused by lighting conditions, adverse weather, etc., becomes a major concern. Recent studies revealed that such corruptions could severely deteriorate the generalization of models pre-trained on clean training samples [36; 14; 31]. Importantly, the corruption of testing data is often unknown and sometimes unpredictable before deployment. Therefore, a new line of works emerges that adapts pre-trained models to the testing data distribution at inference stage, a.k.a. test-time adaptation (TTA) [37; 40; 33]. The success of test-time adaptation is often achieved by distribution alignment [33; 24], self-supervised training [4] and self-training [11], all demonstrating remarkable improvement of robustness on multiple types of visual corruptions in the testing data. Despite the unprecedented performance, existing TTA approaches are often developed under restrictive assumptions of testing data, e.g. stationary class distribution
and static domain shift, and this gives rise to many attempts to explore TTA methods for real-world testing data [43; 46; 10; 30].
The recently explored real-world TTA settings, a.k.a. wild TTA [30] or Practical TTA [46], mainly consider the challenges brought by local class-imbalance [30; 46; 10] and continual domain shift [43], which are expected to be encountered in real-world applications. Local class-imbalance is often observed when testing data are drawn in a non-i.i.d. manner [10]. Adapting indiscriminately results in biased distribution estimation, and recent works proposed exponential batchnorm updates [46] or instance batchnorm updates [10] to tackle this challenge. In this work, our aim is to go beyond the local class-imbalance challenge by taking into account the fact that the global distribution of testing data could be severely imbalanced and the class distribution may shift over time. We provide an illustration of this more challenging scenario in Fig. 1. This additional challenge renders existing TTA methods ineffective, as the class prevalence of testing data is unknown before the inference stage and the model could be biased towards majority classes through blind test-time adaptation. Our empirical observations show that this issue becomes particularly acute for methods relying on estimating global statistics for updating normalization layers [28; 21; 40]. This is mainly because a single global distribution is estimated from the whole testing data on which samples are normalized. As such, the global distribution can easily be biased towards majority classes, resulting in internal covariate shift [16]. To avoid biased batch normalization (BN), we propose a balanced batch normalization layer that models the distribution of each individual category and extracts the global distribution from the category-wise distributions. The balanced BN allows invariant estimation of the distribution under both locally and globally class-imbalanced testing data.
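A minimal sketch of this idea is given below: per-class running statistics are maintained with the model's pseudo-labels, and their class-wise average is used for normalization. This is an illustration of the concept rather than our exact layer implementation; the momentum update rule and the two-argument interface are simplifying assumptions.

```python
# Minimal sketch of a class-balanced batchnorm layer for test-time adaptation.
import torch
import torch.nn as nn

class BalancedBN2d(nn.Module):
    """Per-class running statistics whose class-wise average replaces the usual global BN stats."""
    def __init__(self, num_features: int, num_classes: int, momentum: float = 0.1, eps: float = 1e-5):
        super().__init__()
        self.momentum, self.eps = momentum, eps
        self.register_buffer("class_mean", torch.zeros(num_classes, num_features))
        self.register_buffer("class_var", torch.ones(num_classes, num_features))
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor, pseudo_labels: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) features; pseudo_labels: (N,) hard predictions for the current test batch.
        with torch.no_grad():
            for c in pseudo_labels.unique():
                xs = x[pseudo_labels == c]
                mu = xs.mean(dim=(0, 2, 3))
                var = xs.var(dim=(0, 2, 3), unbiased=False)
                self.class_mean[c] = (1 - self.momentum) * self.class_mean[c] + self.momentum * mu
                self.class_var[c] = (1 - self.momentum) * self.class_var[c] + self.momentum * var
        mean = self.class_mean.mean(dim=0)   # class-balanced global mean
        var = self.class_var.mean(dim=0)     # class-balanced global variance
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.eps)
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]

# Usage sketch: swap in for nn.BatchNorm2d at inference and feed the model's pseudo-labels.
layer = BalancedBN2d(num_features=16, num_classes=10)
out = layer(torch.randn(4, 16, 8, 8), torch.randint(0, 10, (4,)))
```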
Shift of domain over time occurs frequently in real-world testing data, e.g. a gradual change of lighting/weather conditions. It poses another challenge to existing TTA methods, as the model could overly adapt to domain A and struggle with domain B when A shifts to B. To alleviate over-adaptation to a certain domain, CoTTA [43] randomly reverts model weights to the pre-trained weights, and EATA [29] regularizes the adapted model weights against the source pre-trained weights to avoid overly shifting them. Nevertheless, these approaches still do not explicitly address the challenge of constantly shifting domains in testing data. As self-training has been demonstrated to be effective for learning from unlabeled data [32], we adopt a teacher-student framework for TTA. Nonetheless, direct self-training without regularization is prone to confirmation bias [1] and could easily overly adapt the pre-trained model to a certain domain, causing degraded performance upon seeing new domains. To avoid this over-adaptation, we further introduce an anchor network, whose weights are copied from the pre-trained model and whose batchnorm layers are dynamically updated by testing samples. The anchored loss, realised as a mean square error (MSE) between the teacher and anchor networks, is jointly optimised with the self-training loss to strike a balance between adapting
Figure 1: Illustration of two challenging real-world TTA scenarios. Different colors indicate the proportions of semantic classes, horizontal axis indicates testing data domain (e.g. different corruptions) may shift over time and different imbalance factor (\(I.F.\)) controls the degree of global imbalance. We expect the testing data stream to exhibit both local and global class imbalance, termed as “class distribution is fixed (**GLI-TTA-F**)” and this distribution may also evolve over time, termed as “class distribution is varying (**GLI-TTA-V**)”.
to a specific domain and remaining versatile on ever-changing domains. We brand this design as a tri-net architecture. We demonstrate that, with the help of the tri-net, TTA maintains a good performance within a wider range of learning rates. We refer to the final model as **TRI**-net self-training with **B**alanc**E**d normalization (**TRIBE**), in recognition of the tri-net architecture with balanced normalization layers.
We summarize the contributions of this work as follows.
* We are motivated by the challenges in real-world test-time adaptation and propose to tackle a challenging TTA setting where testing data is both locally and globally class-imbalanced and testing domain may shift over time.
* A novel balanced batch normalization layer is introduced to fit to testing data distribution with both local and global class imbalance.
* We further introduce a tri-net framework to facilitate adaptation under continually shifting testing domain. We demonstrate this tri-net design improves robustness to the choice of learning rate.
* We evaluate the proposed method, TRIBE, on four test-time adaptation datasets under different real-world scenarios, demonstrating superior performance to all state-of-the-art methods.
## 2 Related Work
**Unsupervised Domain Adaptation**: Machine learning models often assume that both training and testing data are drawn i.i.d. from the same distribution. When this assumption is violated, generalizing a source model to the testing distribution is hampered by the domain shift, leading to degraded performance [42]. Unsupervised domain adaptation (UDA) improves model generalization by exploiting both labeled source domain data and unlabeled target domain data [9; 39; 25]. Common approaches towards UDA include distribution alignment [12; 35; 48], adversarial learning [15], target clustering [38] and self-training [23]. Nevertheless, UDA is only effective when source and target domain data are simultaneously accessible. More importantly, in real-world applications the distribution in the target domain is often not known until the inference stage, which has motivated research into test-time adaptation.
**Test-Time Adaptation**: Adapting a pre-trained model to the target domain distribution at test time improves model generalization to unseen distribution shift. The widely adopted test-time adaptation (TTA) protocol simultaneously evaluates on a stream of testing data and updates model weights [37; 40; 17; 33; 8; 11; 4]. The state-of-the-art approaches towards TTA adopt self-training [40; 34; 8], distribution alignment [37; 33] and self-supervised learning [24; 4]. With the above techniques, generalization performance on corrupted testing data has been substantially improved. Nonetheless, most of these methods are optimized towards the vanilla TTA protocol and may not maintain their superior performance under more realistic TTA scenarios.
**Real-World Test-Time Adaptation**: Deploying TTA methods in real-world applications requires tackling commonly encountered challenges. Recent works summarized multiple challenges that could appear in real-world test-time adaptation, including updating with a small batch size [30], non-i.i.d. or temporally correlated testing data [10; 43; 46; 2] and continually adapting to shifting domains [43; 46; 3]. Empirical observations demonstrate that these real-world conditions could pose great challenges to existing TTA methods. Despite the recent efforts in developing TTA robust to non-i.i.d. testing data, we argue that a systematic investigation into more diverse real-world challenges, including global class imbalance, is missing. This work proposes a principled way to simulate these challenges and develops a self-training based method with balanced batchnorm to achieve the state-of-the-art performance.
## 3 Methodology
### Real-World TTA Protocol
We denote a stream of testing data as \(\mathbf{x}_{0},\mathbf{x}_{1},\cdots,\mathbf{x}_{T}\) where each \(\mathbf{x}_{t}\) is assumed to be drawn from a distribution \(\mathcal{P}(\mathbf{x}|d_{t},y_{t})\) conditioned on two time-varying variables, namely the testing domain indicator \(d_{t}\in\{1,\cdots K_{d}\}\) and the class label \(y_{t}\in\{1,\cdots K_{c}\}\), where \(K_{d}\) and \(K_{c}\) refer to the number
of domains (e.g. types of corruptions) and the number of semantic classes. In the real-world TTA protocol, both the testing domain indicator and the class label distribution could be subject to constant shift; in particular, we assume the domain indicator to exhibit a gradual and slow shift over time. This is manifested in many real-world applications, e.g. the lighting and weather conditions often change slowly. We further point out that testing samples are often class imbalanced both locally, within a short period of time, and globally, over the whole testing data stream. Therefore, we model the testing data stream as sampling from a hierarchical probabilistic model. Specifically, we denote a prior \(\alpha\in\mathbb{R}^{K_{c}}\) parameterizing a Dirichlet distribution \(\mathbf{q}_{c}\sim Dir(K_{c},\alpha)\). Within a stationary local time window, e.g. a minibatch of testing samples, the labels of testing samples are drawn from a categorical distribution \(y\sim Cat(K_{c},\mathbf{q}_{c})\), where \(\mathbf{q}_{c}\) is drawn from the conjugate prior distribution \(Dir(K_{c},\alpha)\). The corrupted testing sample is then assumed to be finally sampled from a complex distribution conditioned on the domain indicator \(d_{t}\) and class label \(y_{t}\), written as \(\mathbf{x}\sim\mathcal{P}(\mathbf{x}|d_{t},y_{t})\). The domain indicator can be modeled as another categorical distribution parameterized by a fixed probability, \(d_{t}\sim Cat(K_{d},\mathbf{q}_{d})\). A hierarchical probabilistic model simulating the real-world TTA protocol is presented in Fig. 2. We notice that the probabilistic model can instantiate multiple existing TTA protocols. For instance, when testing data are locally class imbalanced, as specified by [46; 10], a uniform proportion parameter \(\alpha=\sigma\mathbf{1}\) is chosen, with a scale parameter \(\sigma\) controlling the degree of local imbalance, and \(\mathbf{q}_{c}\) is re-sampled every mini-batch. We can easily simulate global class imbalance by specifying a non-uniform \(\alpha\). We defer a more detailed discussion of simulating real-world TTA protocols with the hierarchical probabilistic model to the supplementary.
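For concreteness, the sampling procedure described above can be simulated in a few lines. The sketch below is illustrative only (the function name and the specific non-uniform \(\alpha\) are our own choices, not part of the released protocol): a Dirichlet prior generates the local class proportions of every mini-batch, so a uniform \(\alpha=\sigma\mathbf{1}\) yields only local imbalance while a non-uniform \(\alpha\) additionally induces global imbalance.

```python
import numpy as np

def simulate_gli_tta_labels(num_batches, batch_size, num_classes, alpha, seed=0):
    """Draw mini-batch labels from the hierarchical model:
    q_c ~ Dir(K_c, alpha), y ~ Cat(K_c, q_c), with q_c re-sampled every batch."""
    rng = np.random.default_rng(seed)
    stream = []
    for _ in range(num_batches):
        q_c = rng.dirichlet(alpha)                        # local class proportions
        stream.append(rng.choice(num_classes, size=batch_size, p=q_c))
    return stream

sigma, num_classes = 0.1, 10
alpha_local = sigma * np.ones(num_classes)                # local imbalance only
alpha_global = sigma * np.linspace(1.0, 0.1, num_classes) # adds global imbalance
labels = simulate_gli_tta_labels(num_batches=100, batch_size=64,
                                 num_classes=num_classes, alpha=alpha_global)
```

The domain indicator \(d_{t}\) can be drawn analogously from \(Cat(K_{d},\mathbf{q}_{d})\) and held fixed over long stretches of the stream to mimic the slow domain shift.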
### Balanced Batch Normalization
Batch normalization (BN) [16] plays a critical role in enabling more stable model training, reducing sensitivity to hyper-parameters and initialization. When testing data features a distribution shift from the source training data, the regular practice of freezing BN statistics for inference fails to maintain the generalization [46; 28; 10; 22; 30]. Hence, adapting BN statistics to testing data distribution becomes a viable solution to TTA [28]. To adapt model subject to locally imbalanced testing data, the robust batch normalization [46] updates BN mean and variance in a moving average manner on testing data. The robust BN smooths out the bias estimated within each minibatch and achieves competitive performance under non-i.i.d. testing data stream.
Despite being successful on non-i.i.d. testing data streams, a naive moving-average update policy struggles to adapt to a globally imbalanced testing domain. For example, as evidenced by the empirical evaluation in Tab. 1, the performance of RoTTA [46] degenerates substantially under more severe global imbalance of the testing data. We ascribe the poor performance to the fact that a single BN will bias towards majority classes, and normalizing samples from the minority classes with biased statistics will result in severe covariate shift within internal representations. This will eventually cause the minority classes to be misclassified and lower the macro average accuracy. To remedy the bias in adapting BN statistics, we propose a Balanced Batchnorm layer which maintains \(K_{c}\) pairs of statistics, one for each semantic class, denoted as \(\{\mu_{k}\}_{k=1\cdots K_{c}}\), \(\{\sigma_{k}\}_{k=1\cdots K_{c}}\). To update the category-wise statistics, we apply an efficient iterative updating approach with the help of pseudo-label predictions as follows,
Figure 2: An illustration of the proposed real–world TTA simulation protocol with a hierarchical probabilistic model. A non-uniform \(\alpha\) results in globally imbalanced testing data distribution.
\[\mu_{k}^{t}=\mu_{k}^{t-1}+\delta_{k},\qquad\sigma_{k}^{2t}=\sigma_{k}^{2t-1}-\delta_{k}^{2}+\eta\sum_{b=1}^{B}1(\hat{y}_{b}=k)\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\left[(F_{bhw}-\mu_{k}^{t-1})^{2}-\sigma_{k}^{2t-1}\right] \tag{1}\] \[s.t.\quad\delta_{k}=\eta\sum_{b=1}^{B}1(\hat{y}_{b}=k)\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}(F_{bhw}-\mu_{k}^{t-1}),\]
where \(\mathbf{F}\in\mathbb{R}^{B\times C\times H\times W}\) denotes the input of the Balanced BN layer and \(\hat{y}_{b}\) is the pseudo label predicted by the adapted model in the inference step. With the above design, the BN statistics for each individual class are separately updated, and the global BN statistics are derived from all category-wise statistics as in Eq. 2.
\[\mu_{g}=\tfrac{1}{K_{c}}\sum_{k=1}^{K_{c}}\mu_{k}^{t},\quad\sigma_{g}^{2}= \tfrac{1}{K_{c}}\sum_{k=1}^{K_{c}}\left[\sigma_{k}^{2t}+(\mu_{g}-\mu_{k}^{t}) ^{2}\right]. \tag{2}\]
Nevertheless, we found that when the number of categories is large or the pseudo labels are highly untrustworthy, e.g. when the baseline accuracy on ImageNet-C is very low, the above updating strategy might be less effective due to its reliance on the pseudo labels. Therefore, we combine the class-agnostic updating strategy (Robust BN) and the category-wise updating strategy with a balancing parameter \(\gamma\) as below.
\[\mu_{k}^{t}=\mu_{k}^{t-1}+(1-\gamma)\delta_{k}+\gamma\frac{1}{K_{c}}\sum_{k^{\prime}=1}^{K_{c}}\delta_{k^{\prime}}, \tag{3}\] \[\sigma_{k}^{2t}=\sigma_{k}^{2t-1}+(1-\gamma)\left\{-\delta_{k}^{2}+\eta\sum_{b=1}^{B}1(\hat{y}_{b}=k)\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\left[(F_{bhw}-\mu_{k}^{t-1})^{2}-\sigma_{k}^{2t-1}\right]\right\}+\] \[\gamma\cdot\frac{1}{K_{c}}\sum_{k^{\prime}=1}^{K_{c}}\left\{-\delta_{k^{\prime}}^{2}+\eta\sum_{b=1}^{B}1(\hat{y}_{b}=k^{\prime})\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\left[(F_{bhw}-\mu_{k^{\prime}}^{t-1})^{2}-\sigma_{k}^{2t-1}\right]\right\}.\]
Specifically, when \(\gamma=0\) the updating strategy is the purely class-balanced one, and when \(\gamma=1\) it degrades to the rule in Robust BN. In all experiments of this paper, we use \(\gamma=0.0\) in CIFAR10-C, \(\gamma=0.1\) in CIFAR100-C due to the large number of classes, and \(\gamma=0.5\) in ImageNet-C due to the highly untrustworthy pseudo labels. The instance-level momentum coefficient \(\eta\) in Balanced BN is set to \(0.0005\times K_{c}\).
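The category-wise update of Eqs. (1)-(2) can be summarized by the following sketch. It is a minimal illustration written for this text rather than the released implementation; only the purely class-balanced branch (\(\gamma=0\)) of Eq. (3) is shown, and the class interface and variable names are our own assumptions.

```python
import torch

class BalancedBNStats:
    """Minimal sketch of the category-wise statistics update of Eqs. (1)-(2).
    Only the purely class-balanced branch (gamma = 0) of Eq. (3) is shown."""

    def __init__(self, num_classes, num_channels, eta):
        self.mu = torch.zeros(num_classes, num_channels)   # per-class means mu_k
        self.var = torch.ones(num_classes, num_channels)   # per-class variances sigma_k^2
        self.eta = eta                                      # instance-level momentum

    @torch.no_grad()
    def update(self, feat, pseudo_labels):
        # feat: (B, C, H, W) input of the Balanced BN layer
        # pseudo_labels: (B,) hard labels predicted by the adapted model
        spatial_mean = feat.mean(dim=(2, 3))                # (1/HW) sum_hw F_bhw, shape (B, C)
        spatial_sqmean = (feat ** 2).mean(dim=(2, 3))
        for k in pseudo_labels.unique():
            mask = pseudo_labels == k
            mu_prev, var_prev = self.mu[k].clone(), self.var[k].clone()
            # delta_k of Eq. (1), accumulated over samples carrying pseudo label k
            delta = self.eta * (spatial_mean[mask] - mu_prev).sum(dim=0)
            # per-sample spatial mean of (F - mu_prev)^2
            sq_dev = spatial_sqmean[mask] - 2 * mu_prev * spatial_mean[mask] + mu_prev ** 2
            var_corr = self.eta * (sq_dev - var_prev).sum(dim=0)
            self.mu[k] = mu_prev + delta
            self.var[k] = var_prev - delta ** 2 + var_corr
        # Global statistics of Eq. (2), aggregated from the category-wise statistics.
        mu_g = self.mu.mean(dim=0)
        var_g = (self.var + (mu_g - self.mu) ** 2).mean(dim=0)
        return mu_g, var_g
```

The resulting \((\mu_{g},\sigma_{g}^{2})\) then replace the frozen running statistics when normalizing the features, while the affine parameters remain the only weights updated by gradient descent.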
### Tri-Net Self-Training
Self-Training (ST) has demonstrated tremendous effectiveness in multiple tasks [32; 19]. ST updates the model by constraining the prediction consistency between original samples and their augmented counterparts. In this work, we adopt an approach similar to semi-supervised learning [32]
Figure 3: Illustration of the proposed method. We replace the Batchnorm layer of the source model with our proposed Balanced Batchnorm for imbalanced testing set. During test time adaptation, we optimize the combination of self-training loss \(\mathcal{L}_{st}\) and anchor loss \(\mathcal{L}_{anc}\).
to fine-tune the model to adapt to the testing data. Specifically, as illustrated in Fig. 3, we introduce teacher \(f_{t}(\mathbf{x};\Theta)\) and student \(f_{s}(\mathbf{x};\Theta)\) networks whose BN layers are independently updated while all other weights are shared. The pseudo labels for testing samples are predicted by the teacher network, and only the confident pseudo labels are employed for training the student network. We denote the probabilistic posterior as \(\mathbf{p}=h(f(\mathbf{x}))\) and define the self-training loss in Eq. 4, where \(\mathbf{p}^{s}=h(f_{s}(\mathcal{A}(\mathbf{x});\Theta)),\mathbf{p}^{t}=h(f_{t}(\mathbf{x};\Theta))\), \(\hat{\mathbf{p}}^{t}\) refers to the one-hot pseudo labels of \(\mathbf{p}^{t}\), \(\mathcal{A}\) refers to a strong data augmentation operation, \(\mathcal{H}\) refers to the entropy and cross-entropy losses, and \(H_{0}\) defines a thresholding hyper-parameter.
\[\mathcal{L}_{st}=\frac{\sum_{b=1}^{B}\mathbb{1}\left(\mathcal{H}(\mathbf{p}^{ t}_{b})<H_{0}\cdot\log K_{c}\right)\cdot\mathcal{H}(\hat{\mathbf{p}}^{t}_{b}, \mathbf{p}^{s}_{b})}{\sum_{b=1}^{B}\mathbb{1}\left(\mathcal{H}(\mathbf{p}^{t }_{b})<H_{0}\cdot\log K_{c}\right)}, \tag{4}\]
A recent study revealed that self-training is effective for TTA [34]; however, without additional regularization, self-training is easily subject to confirmation bias [1]. This issue is only exacerbated when the test data distribution is highly imbalanced, leading to over-adaptation or collapsed predictions. To avoid over-adapting the model to a certain test domain, we further propose to incorporate an additional network branch as an anchor for regularization.
**Anchor Network**: We use a frozen source domain network as the anchor network to regularize self-training. In particular, we copy the source model weights, freeze all weights and swap regular BN layers with the proposed Balanced BN layers. To regularize self-training, we design an anchored loss as the mean square error between the posterior predictions of teacher and anchor networks as in Eq. 5. As three network branches are jointly utilized, it gives rise to the term of tri-net self-training.
\[\mathcal{L}_{anc}=\frac{\sum_{b=1}^{B}\mathbb{1}(\mathcal{H}(\mathbf{p}^{t}_{b })<H_{0}\cdot\log K_{c})||\mathbf{p}^{t}_{b}-\mathbf{p}^{a}_{b}||_{2}^{2}}{K_ {c}\sum_{b=1}^{B}\mathbb{1}\left(\mathcal{H}(\mathbf{p}^{t}_{b})<H_{0}\cdot \log K_{c}\right)} \tag{5}\]
Finally, we simultaneously optimize the self-training and anchored losses, \(\mathcal{L}=\mathcal{L}_{st}+\lambda_{anc}\mathcal{L}_{anc}\), w.r.t. the affine parameters of the Balanced BN layers for efficient test-time adaptation.
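A compact sketch of the two loss terms, Eqs. (4)-(5), is given below; it is our own illustration (function name and tensor shapes are assumptions) rather than the official implementation.

```python
import torch
import torch.nn.functional as F

def tribe_loss(p_student, p_teacher, p_anchor, h0, num_classes, lambda_anc=0.5):
    """Sketch of L = L_st (Eq. 4) + lambda_anc * L_anc (Eq. 5).
    All posteriors have shape (B, K_c); p_student comes from the strongly
    augmented view, p_teacher and p_anchor from the original samples."""
    eps = 1e-8
    entropy = -(p_teacher * (p_teacher + eps).log()).sum(dim=1)           # H(p^t_b)
    confident = entropy < h0 * torch.log(torch.tensor(float(num_classes)))
    if confident.sum() == 0:
        return p_student.sum() * 0.0                                       # no confident sample this batch
    pseudo = p_teacher.argmax(dim=1)                                       # one-hot pseudo labels
    ce = F.nll_loss((p_student + eps).log(), pseudo, reduction="none")     # H(p_hat^t, p^s)
    loss_st = ce[confident].mean()                                         # Eq. (4)
    mse = ((p_teacher - p_anchor) ** 2).sum(dim=1) / num_classes
    loss_anc = mse[confident].mean()                                       # Eq. (5)
    return loss_st + lambda_anc * loss_anc
```

Note that the hard pseudo labels in Eq. (4) carry no gradient, whereas the anchored term regularizes the teacher posteriors directly, mirroring the description above.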
## 4 Experiment
### Experiment Settings
**Datasets**: We evaluate on four test-time adaptation datasets, including **CIFAR10-C**[14], **CIFAR100-C**[14], **ImageNet-C**[14] and **MNIST-C**[27]. Each of these benchmarks includes 15 types of corruptions with 5 different levels of severity. CIFAR10/100-C both have 10,000 testing samples evenly divided into 10/100 classes for each type of corruptions. ImageNet-C has 5,000 testing samples for each corruption unevenly divided into 1,000 classes 1. We evaluate all methods under the largest corruption severity level 5 and report the classification error rate (\(\%\)) throughout the experiment section. We include the detailed results of **MNIST-C**[27] in the supplementary.
Footnote 1: ImageNet-C is only evaluated in a subset with 5,000 testing samples on RobustBench: [https://github.com/RobustBench/robustbench](https://github.com/RobustBench/robustbench)
**Hyper-parameters**: For the CIFAR10-C and CIFAR100-C experiments, we follow the official implementations from previous TTA works [40; 43; 46] and respectively adopt the standard pre-trained WideResNet-28 [47] and ResNeXt-29 [44] models from the RobustBench [5] benchmark for a fair comparison. For the ImageNet-C experiments, the standard pre-trained ResNet-50 [13] model from torchvision is adopted. For most competing methods and our TRIBE, we use the Adam [18] optimizer with a learning rate of 1e-3 in the CIFAR10/100-C and ImageNet-C experiments. As an exception, for NOTE [10] and TTAC [33] we use the learning rates released in their official implementations. We use a batch size of 64 for CIFAR10/100-C and 48 for ImageNet-C. The other hyper-parameters of our proposed model are as follows: \(\lambda_{anc}=0.5,\eta=0.0005\times K_{c}\) in all datasets, \(H_{0}=0.05,\gamma=0\) in CIFAR10-C, \(H_{0}=0.2,\gamma=0.1\) in CIFAR100-C, and \(H_{0}=0.4,\gamma=0.5\) in ImageNet-C. A hyper-parameter analysis, provided in the supplementary, demonstrates that TRIBE is not sensitive to these choices. The data augmentations used in TRIBE are described in the supplementary. All of our experiments can be performed on a single NVIDIA GeForce RTX 3090 card.
**TTA Evaluation Protocol**: We evaluate under two real-world TTA protocols, namely **GLI-TTA-F** and **GLI-TTA-V**. For both protocols, we create a globally class-imbalanced testing set following the long-tail dataset creation protocol [6], and we choose imbalance factors \(I.F.\) of 1, 10, 100 and 200 for evaluation, where GLI-TTA degrades into the PTTA setting [46] with \(I.F.=1\). A default scale parameter \(\sigma=0.1\) is chosen to control local class imbalance. To simulate continually shifting domains, we sample the domain indicator without replacement after all testing samples are predicted. For better reproducibility, we provide the sequence of domains in the supplementary. Under the GLI-TTA-F setting, we fix the proportion parameter \(\alpha\) throughout the experiment. Under the GLI-TTA-V setting, we randomly permute the class indices after adaptation to each domain (type of corruption) to simulate a time-varying class distribution.
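For reference, a common way to realize the long-tail construction is to let the per-class sample counts decay exponentially from the most to the least frequent class, with the ratio between the two equal to \(I.F.\) The snippet below is a sketch under that assumption; the exact recipe of [6] may differ in detail.

```python
import numpy as np

def long_tail_counts(samples_per_class, num_classes, imbalance_factor):
    """Per-class sample counts for a globally imbalanced test set: the counts
    decay exponentially so that the rarest class keeps samples_per_class / I.F."""
    ratios = (1.0 / imbalance_factor) ** (np.arange(num_classes) / (num_classes - 1))
    return np.floor(samples_per_class * ratios).astype(int)

# CIFAR10-C: 1,000 testing samples per class and per corruption, I.F. = 100.
print(long_tail_counts(1000, 10, 100))   # [1000  599  359 ...   10]
```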
**Competing Methods**: We benchmark against the following TTA methods [28; 21; 40; 2; 10; 33; 46; 29]. Direct testing (**TEST**) performs inference on the streaming test data without adaptation. Prediction-time batch normalization (**BN**) [28] replaces the running statistics with the batch statistics of each testing minibatch for normalization. Pseudo Label (**PL**) [21] updates the parameters of all normalization layers by minimizing the cross-entropy loss with predicted pseudo labels. Test-time entropy minimization (**TENT**) [40] updates the affine parameters of all batchnorm layers by minimizing the entropy of predictions. Laplacian adjusted maximum-likelihood estimation (**LAME**) [2] adjusts the predictions of the model by maximizing the likelihood without updating any parameters. Continual test-time adaptation (**CoTTA**) [43] adopts a mean-teacher architecture and randomly selects model parameters to restore to the source model. **PETAL** [3] leverages Fisher information to guide the parameter restoration. Non-i.i.d. test-time adaptation (**NOTE**) [10] optionally updates the batchnorm statistics when the distance between the instance statistics of the test sample and the source model's statistics is less than a threshold. Test-time anchored clustering (**TTAC**) [33] minimizes the KL-divergence between the source and target domain distributions. Robust test-time adaptation (**RoTTA**) [46] replaces the batchnorm layers with Robust Batch Normalization for a better estimation of the target-domain batchnorm statistics. Finally, we evaluate our **TRIBE** with tri-net self-training and Balanced Batchnorm layers.
### Real-World Test Time Adaptation Results
Under the proposed real-world test-time adaptation protocol, the classification errors averaged over continuously adapting to all 15 types of corruptions under different degrees of global imbalance are calculated. We report the results in Tab. 1 for CIFAR10-C and Tab. 2 for CIFAR100-C. We make the following observations from the results. i) Direct testing without any adaptation is even stronger than many TTA methods. For example, only LAME, TTAC, RoTTA and our TRIBE
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{**Fixed Global Class Distribution (GLI-TTA-F)**} \\ \cline{2-5} & \(I.F.=1\) & \(I.F.=10\) & \(I.F.=100\) & \(I.F.=200\) \\ \hline TEST & 43.50 / 43.50 & 42.64 / 43.79 & 41.71 / 43.63 & 41.69 / 43.47 \\ BN [28] & 75.20 / 75.20 & 70.77 / 66.77 & 70.00 / 50.72 & 70.13 / 47.34 \\ PL [21] & 82.90 / 82.90 & 72.43 / 70.59 & 70.09 / 55.29 & 70.38 / 49.86 \\ TENT [40] & 86.00 / 86.60 & 78.15 / 74.90 & 71.10 / 85.89 & 69.15 / 53.37 \\ LAME [2] & 39.50 / 39.50 & 38.45 / 40.07 & 37.48 / 41.80 & 37.52 / 42.59 \\ CoTTA [43] & 83.20 / 83.20 & 73.64 / 71.48 & 71.32 / 56.44 & 70.78 / 49.98 \\ NOTE [10] & 31.10 / 31.10 & 36.79 / 30.22 & 42.59 / 30.75 & 45.45 / 31.17 \\ TTAC [33] & 23.01 / 23.01 & 31.20 / 29.11 & 43.40 / 37.37 & 46.27 / 38.75 \\ PETAL [3] & 81.05 / 81.05 & 73.97 / 71.64 & 71.14 / 56.11 & 71.06 / 50.57 \\ RoTTA [46] & 25.20 / 25.20 & 27.41 / 26.31 & 30.50 / 29.08 & 32.45 / 30.04 \\ \hline \hline
**TRIBE** & **16.14\(\_\)-\(\_\)-\(\_\)-\(\_\)-\(\_\)-\(\_\)** & **16.14\(\_\)-\(\_\_\)-\(\_\)** & **20.98\(\_\)-\(\_\)-\(\_\)-\(\_\)-\(\_\)** & **19.53\(\_\)-\(\_ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification error rate (%) on CIFAR10-C under the GLI-TTA protocols with different imbalance factors.
could consistently outperform direct testing (TEST) on both the CIFAR10-C and CIFAR100-C datasets, suggesting the necessity of developing robust TTA approaches. ii) Global class imbalance poses a great challenge to existing robust TTA methods. For example, the previous state-of-the-art, RoTTA, achieves \(25.2\%\) and \(35.0\%\) error on CIFAR10-C and CIFAR100-C respectively, while the error rises to \(30.04\%\) and \(37.93\%\) under the severely globally imbalanced testing set (\(I.F.=200\)). The same observation applies to the other competing methods. In comparison, TRIBE is able to maintain a relatively better performance under the more severely globally imbalanced testing sets. iii) We further notice that TRIBE consistently outperforms all competing methods in absolute accuracy. Importantly, under the balanced global distribution (\(I.F.=1\)), TRIBE outperforms the best performing model, TTAC, by \(7\%\) on CIFAR10-C. The margin is maintained under the more imbalanced testing set (\(I.F.=200\)). iv) TRIBE maintains a more consistent performance from \(I.F.=10\) to \(I.F.=200\) on both CIFAR10-C and CIFAR100-C, while the other competing methods degenerate substantially. This is attributed to the Balanced BN layer, which better accounts for severe class imbalance, and the anchored loss, which avoids over-adaptation across the different domains.
We further evaluate TTA performance on the ImageNet-C dataset, whose testing set is naturally class imbalanced. Therefore, we only simulate local class imbalance for the testing data stream and set \(\alpha\) equal to the marginalized class distribution. We present both the averaged and the domain-specific classification errors in Tab. 3. We make observations similar to the results on CIFAR10/100-C. Some competitive TTA methods perform substantially worse than direct testing, while TRIBE again outperforms all competing methods, both in terms of the averaged error rate and by winning on 11/15 corruption types.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{**Fixed Global Class Distribution (GLI-TTA-F)**} \\ \cline{2-5} & \(I.F.=1\) & \(I.F.=10\) & \(I.F.=100\) & \(I.F.=200\) \\ \hline TEST & 46.40 / 46.40 & 46.96 / 46.52 & 47.53 / 45.91 & 47.59 / 39.94 \\ BN [28] & 52.90 / 52.90 & 46.05 / 42.29 & 47.01 / 40.01 & 47.38 / 35.26 \\ PL [21] & 88.90 / 88.90 & 68.51 / 69.71 & 53.46 / 57.26 & 49.41 / 49.26 \\ TENT [40] & 92.00 / 82.90 & 76.88 / 79.08 & 76.82 / 65.96 & 50.45 / 58.45 \\ LAME [2] & 40.30 / 40.50 & 43.66 / 44.88 & 44.15 / 46.64 & 43.81 / 40.33 \\ CoTTA [43] & 52.00 / 52.20 & 44.48 / 49.03 & 45.46 / 38.77 & 45.87 / 33.72 \\ NOTE [10] & 73.80 / 73.80 & 57.71 / 58.86 & 54.44 / 57.10 & 53.54 / 52.48 \\ TTAC [33] & 34.10 / 34.10 & 40.48 / 38.28 & 47.84 / 41.47 & 49.78 / 38.00 \\ PETAL [3] & 55.03 / 55.03 & 45.14 / 41.91 & 44.63 / 38.52 & 44.75 / 33.81 \\ RoTTA [46] & 35.00 / 35.00 & 40.00 / 39.03 & 45.08 / 42.04 & 46.78 / 37.93 \\ \hline
**TRIBE** & **33.26\(\rightarrow\)-\(\times\)-33.32.6\(\rightarrow\)-4.33.32.6\(\rightarrow\)-4.33.31.0\(\rightarrow\)-4.34.31.5\(\rightarrow\)-4.33.32.34.98.34.98.34** & **32.29\(\rightarrow\)-\(\times\)-31.54.10** \\ \hline \hline \end{tabular}
\begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{**Time-Varying Global Class Distribution (GLI-TTA-V)**} \\ \cline{2-5} & \(I.F.=1\) & \(I.F.=10\) & \(I.F.=100\) & \(I.F.=200\) \\ \hline TEST & 46.40 / 46.40 & 45.85 / 46.65 & 45.34 / 46.94 & 45.16 / 40.61 \\ BN [28] & 52.90 / 52.90 & 45.10 / 42.47 & 45.15 / 38.30 & 45.37 / 33.45 \\ PL [21] & 89.80 / 88.90 & 68.16 / 66.62 & 52.83 / 48.39 & 53.68 / 48.28 \\ TENT [40] & 92.80 / 92.80 & 7.11 / 76.51 & 65.42 / 63.48 & 62.45 / 53.57 \\ LAME [2] & 40.50 / 40.50 & 42.82 / 45.45 & 42.47 / 47.82 & 42.23 / 41.45 \\ CoTTA [43] & 52.00 / 52.20 & 43.74 / 41.03 & 43.83 / 37.93 & 43.96 / 32.69 \\ NOTE [10] & 73.00 / 73.80 & 58.07 / 58.46 & 51.61 / 55.95 & 54.31 / 43.65 \\ TTAC [33] & 34.10 / 34.10 & 38.56 / 38.68 & 42.07 / 41.05 & 42.87 / 35.80 \\ PETAL [3] & 55.03 / 55.03 & 44.36 / 41.54 & 44.11 / 38.33 & 44.43 / 32.84 \\ RoTTA [46] & 35.00 / 35.00 & 39.56 / 39.77 & 42.20 / 39.93 & **33.76** / **33.76** / **33.76** / **33.76** / **33.76** / **33.82** \\ \hline
**TRIBE** & **33.26\(\rightarrow\)-\(\times\)-3.32.6\(\rightarrow\)-4. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification error rate (%) on CIFAR100-C under the GLI-TTA-F (top) and GLI-TTA-V (bottom) protocols with different imbalance factors.
**Results on Individual Corruption**: We adapt the model continually to constantly shifting domains (corruption types). We report the average classification error for each individual type of corruption in Fig. 4. We conclude from the plots that i) BN, PL and TENT normalize the features using the statistics calculated within the current mini-batch; thus they all perform much worse than methods considering a robust batchnorm, e.g. NOTE, RoTTA and TRIBE. ii) There is a strong correlation of performance across different methods, suggesting certain corruptions, e.g. "shot noise", "gaussian noise" and "impulse noise", are inherently more difficult. Nevertheless, TRIBE always outperforms the competing methods on these challenging corruptions. iii) Some competing methods achieve an accuracy close to TRIBE on easier corruptions, but they often perform much worse on the upcoming corruptions. Overall, TRIBE exhibits a much lower variance across all domains when continually adapted. This suggests the anchored loss potentially helps TRIBE avoid over-adapting to easier domains.
### Ablation & Additional Study
**Effect of Individual Components**: We investigate the effectiveness of the proposed components in Tab. 4. Specifically, we first compare adaptation by updating batchnorm statistics only. It is apparent that Balanced BN is substantially better than Robust BN [46] when applied separately. When a two-branch self-training (teacher & student networks) is applied, we witness a clear improvement over the direct testing baseline. However, the improvement is less significant when combining self-training with Balanced BN. This is probably caused by over-adaptation to the testing domains, causing poor generalization to continually changing domains. This negative impact is finally remedied by introducing the tri-net architecture (Anchored Loss), which helps regularize self-training to avoid over-adaptation.
**Comparing Batchnorm Layers**: To evaluate the effectiveness of our proposed Balanced BN, we run a forward pass on globally and locally class-imbalanced testing samples for multiple batch normalization modules proposed for real-world TTA, with the results presented in Tab. 5. We observe that our proposed Balanced BN outperforms the others by a large margin (\(2.58\sim 9.79\%\)), especially under severe global class imbalance (\(I.F.=200\)). This further confirms that Balanced BN is more suitable for handling both globally and locally class-imbalanced testing data.
\begin{table}
\begin{tabular}{c|c c c c|c c|c c} \hline \hline Method & EMA Model & BatchNorm & Self-Training & Anchored Loss & CIFAR10-C & CIFAR100-C & Avg. \\ \hline TEST & – & BN & – & – & 41.71 / 43.63 & 47.53 / 45.91 & 44.62 / 44.77 \\ ROTTA [46] & ✓ & Robust BN & ✓ & – & 30.50 / 29.08 & 45.68 / 42.04 & 38.09 / 35.56 \\ – & – & Robust BN & – & – & 43.48 / 32.29 & 40.45 / 36.94 & 41.97 / 34.62 \\ – & – & Balanced BN & – & – & 29.00 / 26.38 & 39.55 / 36.59 & 34.28 / 31.49 \\ – & – & BN & ✓ & – & 37.67 / 38.94 & 37.12 / 44.77 & 37.40 / 41.86 \\ – & – & Balanced BN & ✓ & – & 36.58 / 65.88 & 37.21 / 44.83 & 36.50 / 55.36 \\ – & – & BN & ✓ & ✓ & 36.76 / 29.19 & 36.16 / 36.26 & 36.46 / 32.73 \\ MT* & ✓ & Balanced BN & ✓ & – & 23.76 / 25.18 & 36.01 / 35.72 & 29.89 / 30.45 \\ \hline
**TRIBE** & – & Balanced BN & ✓ & ✓ & **19.53 / 24.66** & **32.31 / 34.98** & **25.92 / 29.82** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on CIFAR10/100-C under GLI-TTA-F (\(I.F.=100\)) protocol. We report classification error as evaluation metric. MT* indicates Mean Teacher is adapted to TTA task by removing the labeled loss term.
Figure 4: Performances on each individual domain (corruption) under GLI-TTA protocols on CIFAR10-C dataset.
**Hyper-parameter Robustness**: Selecting appropriate hyper-parameters plays an important role in TTA [49]. As TTA assumes no labeled data in the testing set, selecting appropriate hyper-parameters becomes non-trivial. We argue that the tri-net design is naturally more robust to the choice of learning rate. As illustrated in Fig. 5, TRIBE is very stable w.r.t. the choice of learning rate, while other methods, e.g. TTAC and NOTE, prefer a much narrower range of learning rates. More hyper-parameter analysis details can be found in the supplementary.
**Computation Cost Measured in Wall-Clock Time**: We evaluate the computation cost of our proposed TRIBE in this section. It is worth noting that we only update the weight and bias parameters of the batchnorm layers, so that most parameters across the three networks are shared and only a small fraction remain independent. In addition, we implemented Balanced BN in C++ to make it as efficient as possible. We provide the computation cost analysis of TRIBE and several state-of-the-art TTA methods in Tab. 6. We make the following observations. TRIBE obtains the best error rate by a large margin over the other methods, while incurring only a modest amount of extra time (about 1 ms more per sample than RoTTA), which we consider acceptable for a TTA method.
## 5 Conclusion
In this work, we explore improving the robustness of test-time adaptation algorithms to real-world challenges, including non-i.i.d. testing data streams, global class imbalance and continual domain shift. To adapt to imbalanced testing data, we propose a Balanced Batchnorm layer consisting of multiple category-wise BN statistics to achieve an unbiased estimation of the statistics. We further propose a tri-net architecture with student, teacher and anchor networks to regularize self-training-based TTA. We demonstrate the effectiveness of the overall method, TRIBE, on simulated real-world test-time adaptation data streams. We achieve the state-of-the-art performance on all benchmarks created from four TTA datasets.
**Limitations**: TRIBE replaces the regular Batchnorm layers with customized Balanced Batchnorm layers, thus introducing additional storage overhead. Moreover, some recent Transformer-based backbone networks prefer Layernorm to Batchnorm [7], potentially limiting the application of TRIBE. However, recent studies revealed opportunities to integrate batchnorm into vision Transformer networks [45].
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Method & Error Rate & Inference and Adaptation Time per Sample (second) \\ \hline TEST & 41.71 & 0.0005 \\ BN & 70.00 & 0.0005 \\ TENT & 71.10 & 0.0009 \\ NOTE & 42.59 & 0.0020 \\ COTTA & 71.32 & 0.0190 \\ TTAC (w/o queue) & 43.40 & 0.0021 \\ PETAL & 71.14 & 0.0163 \\ ROTTA & 30.50 & 0.0033 \\ \hline
**TRIBE** & **19.53** & 0.0043 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The per-sample wall time (measured in seconds) on CIFAR10-C under GLI-TTA-F (IF=100) protocol. |
2310.20557 | Multiconfigurational time-dependent density functional theory for atomic
nuclei: Technical and numerical aspects | The nuclear time-dependent density functional theory (TDDFT) is a tool of
choice for describing various dynamical phenomena in atomic nuclei. In a recent
study, we reported an extension of the framework - the multiconfigurational
TDDFT (MC-TDDFT) model - that takes into account quantum fluctuations in the
collective space by mixing several TDDFT trajectories. In this article, we
focus on technical and numerical aspects of the model. We outline the
properties of the time-dependent variational principle that is employed to
obtain the equation of motion for the mixing function. Furthermore, we discuss
evaluation of various ingredients of the equation of motion, including the
Hamiltonian kernel, norm kernel, and kernels with explicit time derivatives. We
detail the numerical methods for resolving the equation of motion and outline
the major assumptions underpinning the model. A technical discussion is
supplemented with numerical examples that consider collective quadrupole
vibrations in $^{40}$Ca, particularly focusing on the issues of convergence,
treatment of linearly dependent bases, energy conservation, and prescriptions
for the density-dependent part of an interaction. | Petar Marević, David Regnier, Denis Lacroix | 2023-10-31T15:40:43Z | http://arxiv.org/abs/2310.20557v2 | Multiconfigurational time-dependent density functional theory for atomic nuclei: Technical and numerical aspects
###### Abstract
The nuclear time-dependent density functional theory (TDDFT) is a tool of choice for describing various dynamical phenomena in atomic nuclei. In a recent study, we reported an extension of the framework - the multiconfigurational TDDFT (MC-TDDFT) model - that takes into account quantum fluctuations in the collective space by mixing several TDDFT trajectories. In this article, we focus on technical and numerical aspects of the model. We outline the properties of the time-dependent variational principle that is employed to obtain the equation of motion for the mixing function. Furthermore, we discuss evaluation of various ingredients of the equation of motion, including the Hamiltonian kernel, norm kernel, and kernels with explicit time derivatives. We detail the numerical methods for resolving the equation of motion and outline the major assumptions underpinning the model. A technical discussion is supplemented with numerical examples that consider collective quadrupole vibrations in \({}^{40}\)Ca, particularly focusing on the issues of convergence, treatment of linearly dependent bases, energy conservation, and prescriptions for the density-dependent part of an interaction.
Keywords: Nuclear Dynamics · Time-Dependent Density Functional Theory · Multi-Configurational Time-Dependent Density Functional Theory · Time-Dependent Generator Coordinate Method · Nuclear Energy Density Functionals · Configuration Mixing
## 1 Introduction
The nuclear time-dependent density functional theory (TDDFT) [1; 2; 3; 4; 5; 6; 7] is a tool of choice for describing the dynamical phenomena in atomic nuclei such as collective vibrations, low-energy heavy-ion reactions, or fission. Similarly to TDDFT approaches used in various branches of physics and chemistry [8; 9], it models the dynamics of a complex many-body system in terms of a product-type wave function whose diabatic time evolution is determined by a set of Schrodinger-like equations for the corresponding single-(quasi)particle states. While such an approach includes the one-body dissipation mechanism and is well-suited for calculating mean values of observables, it yields quasi-classical equations of motion in the collective space [10; 11]. Consequently, it drastically underestimates fluctuations of observables and is unable to account for quantum many-body effects such as tunneling in collective potential energy landscapes. The numerous attempts to include quantum fluctuations beyond the basic TDDFT framework can be broadly classified into two categories. On the one hand, the deterministic approaches include methods based on the truncation of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy [12; 13; 14], leading to various TD-nRDM models [15; 16; 17], the statistical treatment of complex internal degrees of freedom, leading to the Fokker-Planck framework [18], and the Balian-Veneroni variational principle [19; 20; 21]. On the other hand, the stochastic approaches aim to replace a complex initial problem with a set of simpler problems, as seen in the stochastic mean-field theory [22; 23; 24; 25; 26] or the exact quantum jump technique [27; 28]. However, most of these methods face challenges when applied in conjunction with TDDFT due to the lack of a clear prescription for treating effects beyond the independent particle approximation. The stochastic mean-field theory, combined with TDDFT, has been applied with some success in describing fluctuations in fission [29]. Nevertheless, as demonstrated in Refs. [30; 31], a
proper description of quantum effects in the collective space requires a genuine quantization that accounts for the interference between different trajectories. Despite this, a fully quantum multiconfigurational extension, which is nowadays routinely employed in static calculations [32; 33; 34; 35], has until very recently not been implemented in the TDDFT case.
Today, the multiconfigurational models of nuclear dynamics are largely restricted to the adiabatic time-dependent generator coordinate method (TDGCM) [36; 37; 38; 39], typically supplemented with the Gaussian overlap approximation (GOA) [40; 41]. Despite their significant success in describing numerous aspects of fission [42; 43; 44], the existing TDGCM implementations consider only static states on the adiabatic collective potential energy landscapes and do not account for dissipation of the collective motion into disordered internal single-particle motion. Recently, the TDGCM framework was extended with a statistical dissipative term in the case of fission [45; 46], while an earlier attempt to explicitly include two-quasiparticle excitations [47] is yet to be implemented in a computationally feasible framework. However, the adiabatic approximation still imparts significant practical and formal difficulties to all these models [43], including the need to consider an extremely large number of static configurations, discontinuous potential energy surfaces, and the ill-defined scission line for fission. Consequently, a fully quantum framework is called for that would leverage the dissipative and fluctuation aspects of nuclear dynamics by removing the adiabatic assumption and mixing time-dependent configurations.
Theoretical foundations of such a framework were laid out already in the 1980s by Reinhard, Cusson, and Goeke [48; 49]. However, the rather limited computational capabilities of the time prevented any practical implementations beyond simplified models applied to schematic problems. A step towards more realistic applications was recently made in [31], where a multiconfigurational model was used to study the pair transfer between two simple superfluid systems interacting with a pairing Hamiltonian. Soon after, a collision between two \(\alpha\)-particles was studied within a fully variational model based on the Gaussian single-particle wave functions and a schematic Hamiltonian interaction [50]. While it was argued that the model described quantum tunneling, a discussion ensued on whether the observed phenomenon can indeed be considered tunneling [51; 52; 53]. Recently, we reported the first calculations in atomic nuclei where TDDFT configurations were mixed based on the energy density functionals (EDFs) framework [54]. In there, we have shown that the collective multiphonon states emerge at high excitation energies when quantum fluctuations in the collective space are included beyond the independent particle approximation. A similar model, based on relativistic EDFs, was employed in a study of nuclear multipole vibrations [55] and subsequently extended with pairing correlations to make it applicable to the fission phenomenon [56].
The ongoing developments and the increase of computational capabilities should, in the near future, render these models applicable to a wide range of nuclear phenomena. The goal of this manuscript is to provide more details on technical and numerical aspects of the multi-configurational time-dependent density functional theory (MC-TDDFT) framework reported in [54]. In Sec. 2, we outline properties of the MC-TDDFT state and show how the time-dependent variational principle leads to the equation of motion for the mixing function. In Sec. 3, we discuss evaluation of various ingredients of the equation of motion, including the Hamiltonian kernel, the norm kernel, and kernels with explicit time derivatives. Sec. 4 contains details on resolving the equation of motion and calculating various observables. The technical discussion is supplemented with numerical examples in Sec. 5. Finally, Sec. 6 brings summary of the present work.
## 2 The MC-TDDFT state and its time evolution
### Preliminaries
Motivated by the well-known static generator coordinate method (GCM) [57; 58], the dynamic MC-TDDFT state can be written as
\[\ket{\Psi(t)}=\int d\mathbf{q}\;f_{\mathbf{q}}(t)\ket{\Phi_{\mathbf{q}}(t)}, \tag{1}\]
where \(\mathbf{q}\) denotes a set of continuous generating coordinates, \(\ket{\Phi_{\mathbf{q}}(t)}\) are the time-dependent, many-body generating states, and \(f_{\mathbf{q}}(t)\) is the mixing function which is to be determined through a variational principle (see Sec. 2.2 and 2.3). Depending on the application, there exists a large freedom in choosing various ingredients of Eq. (1):
* The generating coordinates represent collective degrees of freedom associated to modes whose quantum fluctuations are being considered. In static DFT, they are often related to the magnitude or the phase of a complex order parameter corresponding to one or several broken symmetries [59]. Within adiabatic TDGCM studies of nuclear fission, one typically considers multipole moments [42; 43], pairing strength [60; 61; 62] and occasionally also the nuclear temperature [45; 46]. In recent dynamical MC-TDDFT studies, a gauge angle was considered as a generating
coordinate in the case of pair transfer between superfluid systems [31], the relative position and momentum for collisions [50], and multipole boost magnitudes or multipolarities for vibrations [54; 55].
* The generating states are typically chosen as Slater determinants [50; 54; 55] or the \(U(1)\)-symmetry-breaking quasiparticle vacua [31]. The corresponding single-particle wave functions may be built upon a simple _ansatz_, such as Gaussians [50], or can be obtained from microscopic calculations, based on schematic interactions [31] or actual EDFs [54; 55; 56]. In the limiting case where the generating states are time-independent and are obtained through energy minimization under constraints, we recover the conventional adiabatic TDGCM framework1. Irrespective of the nature of generating states, an optimal choice of the basis set will take into account minimization of overlaps within the set, with the goal of reducing linear dependencies and ensuring that each state carries a sufficient distinct physical information. Any remaining linear dependencies are later explicitly removed, as described in Sec. 3.3 and 5.3.
Footnote 1: In [48], the model based on (1) was branded TDGCM since it represented a time-dependent extension of the Hill-Wheeler-Griffin’s GCM framework [57; 58]. The same naming convention was adopted in Refs. [50] and [55; 56]. However, over the past decade the term TDGCM became largely associated to adiabatic fission models employing time-independent generating states [38; 39]. Therefore, to avoid any confusion and underline the distinction, we use MC-TDDFT to refer to models such as the present one that mixes states which are not necessarily adiabatic.
* In principle, one could apply the variational principle with respect to both the mixing function \(f_{\mathbf{q}}(t)\) and the generating states \(|\Phi_{\mathbf{q}}(t)\rangle\). Such a strategy generally yields a rather complicated set of coupled time-dependent differential equations. It is employed, for example, in quantum chemistry within the multi-configurational time-dependent Hartree-Fock (MC-TDHF) framework [63]. Note, however, that this framework encompasses only the special case of Hamiltonian theories with orthogonal generating states. Applications in nuclear physics, where the generating states are typically non-orthogonal, have so far remained restricted to the toy-model calculation of Ref. [50]. A simplification that has been adopted in recent applications [54; 31; 55] is to treat variationally only the mixing function, while assuming that the generating states follow independent trajectories. In Ref. [48] it was shown that the lowest order of GOA yields independent trajectories even when the generating states are treated variationally.
### Equation of motion for the mixing function
The backbone idea of the MC-TDDFT framework is to look for an approximate solution of the time-dependent Schrodinger equation that takes the form of Eq. (1) and is parametrized by the complex mixing function \(f_{\mathbf{q}}(t)\). Different variants of the time-dependent variational principle can be used to obtain the dynamical equation for the mixing function [64]. In this work, we consider the following action:
\[\begin{split} S(f,f^{*},\xi_{1})&=\int_{t_{0}}^{t_{ 1}}\,dt\,\langle\Psi(t)|\hat{H}-i\hbar\partial_{t}|\Psi(t)\rangle\\ &+\int_{t_{0}}^{t_{1}}\,dt\,\xi_{1}(t)\Big{(}\,\langle\Psi(t)| \Psi(t)\rangle-1\Big{)},\end{split} \tag{2}\]
where \(|\Psi(t)\rangle\) is a function of both \(f(t)\) and \(f^{*}(t)\). Here, the first term integrates the Lagrangian of the system over time and the second term imposes normalization of the solution by introducing a real Lagrange multiplier \(\xi_{1}(t)\). We look for a mixing function \(f(t)\) that makes this action stationary,
\[\delta S=0, \tag{3}\]
where the variation is taken with respect to any complex function \(f(t)\) and any value of the Lagrange multiplier \(\xi_{1}(t)\). This equation is formally equivalent to the system of equations
\[\frac{\partial S}{\partial f}=0,\quad\frac{\partial S}{\partial f^{*}}=0,\quad \frac{\partial S}{\partial\xi_{1}}=0. \tag{4}\]
Writing explicitly the derivatives of the action yields a set of integro-differential equations for the mixing function,
\[i\hbar\dot{f}^{\dagger}\mathcal{N}(t) =f^{\dagger}(t)\Big{[}-\mathcal{H}(t)+\mathcal{D}(t)-i\hbar\dot {\mathcal{N}}(t)\] \[\qquad-\xi_{1}(t)\mathcal{N}(t)\Big{]}, \tag{5a}\] \[i\hbar\mathcal{N}(t)\dot{f}(t) =\Big{[}\mathcal{H}(t)-\mathcal{D}(t)+\xi_{1}(t)\mathcal{N}(t) \Big{]}f(t),\] (5b) \[f^{\dagger}(t)\mathcal{N}(t)f(t) =1. \tag{5c}\]
We use here a compact matrix notation with respect to the collective coordinate \(\mathbf{q}\). The norm (overlap) kernel reads
\[\mathcal{N}_{\mathbf{q}\mathbf{q}^{\prime}}(t)=\langle\Phi_{\mathbf{q}}(t)|\Phi_{\mathbf{q}^{ \prime}}(t)\rangle\,. \tag{6}\]
Equations (5a) and (5b) also involve the Hamiltonian kernel
\[\mathcal{H}_{\mathbf{q}\mathbf{q}^{\prime}}(t)=\langle\Phi_{\mathbf{q}}(t)|\hat{H}|\Phi_{ \mathbf{q}^{\prime}}(t)\rangle\,, \tag{7}\]
and the time derivative kernel defined as
\[\mathcal{D}_{\mathbf{q}\mathbf{q}^{\prime}}(t)=\langle\Phi_{\mathbf{q}}(t)|i\hbar\partial _{t}|\Phi_{\mathbf{q}^{\prime}}(t)\rangle\,. \tag{8}\]
The identity
\[i\hbar\dot{\mathcal{N}}(t)=\mathcal{D}(t)-\mathcal{D}^{\dagger}(t) \tag{9}\]
implies that Eq. (5a) is just the conjugate transpose of Eq. (5b). Finally, one can insert these equations into the time derivative of Eq. (5c) to show that any value of the Lagrange parameter \(\xi_{1}(t)\) gives a proper solution of the system of equations as long as the initial state is normalized. In fact, the \(\xi_{1}(t)\) term in the equation of motion only multiplies the function \(f(t)\) by a time-dependent phase during the dynamics. Setting this Lagrange parameter to zero yields the compact equation of motion
\[i\hbar\dot{f}(t)=\mathcal{N}^{-1}(t)\Big{[}\mathcal{H}(t)-\mathcal{D}(t)\Big{]} f(t). \tag{10}\]
This equation differs from the adiabatic TDGCM equation to the extent that (i) all the kernels are time-dependent and (ii) there is an additional term involving the time derivative kernel \(\mathcal{D}(t)\).
### Dealing with time-dependent, non-orthogonal generating states
In the present case, particular caution is necessary because we employ a family of generating states \(|\Phi_{\mathbf{q}}(t)\rangle\) which is generally linearly dependent. Consequently, the mapping between the mixing functions \(f_{\mathbf{q}}(t)\) and the many-body state \(|\Psi(t)\rangle\) is not bijective [11]. More specifically, at each time \(t\), one may diagonalize the norm kernel
\[\mathcal{N}_{\mathbf{qq^{\prime}}}(t)=\sum_{k}\mathcal{U}_{\mathbf{q}k}(t)\lambda_{k} (t)\mathcal{U}^{\dagger}_{\mathbf{q^{\prime}}k}(t). \tag{11}\]
The columns of \(\mathcal{U}(t)\) form an orthonormal eigenbasis of the space of mixing functions \(\mathcal{F}\) so that we can always expand them as
\[f_{\mathbf{q}}(t)=\sum_{k}f_{k}(t)\mathcal{U}_{\mathbf{q}k}(t). \tag{12}\]
Furthermore, the norm eigenvalues \(\lambda_{k}(t)\) can be used to split the full space \(\mathcal{F}\) into the direct sum
\[\mathcal{F}=\mathcal{I}(t)\oplus\mathcal{K}(t). \tag{13}\]
Here, \(\mathcal{I}(t)\) is the _image_ of \(\mathcal{N}(t)\), sometimes also referred to as the _range_ of \(\mathcal{N}(t)\)[65]. It corresponds to the vector space of functions spanned by the columns of \(\mathcal{U}(t)\) associated to \(\lambda_{k}(t)>0\). Equivalently, the _kernel_ vector space \(\mathcal{K}(t)\) is spanned by the columns of \(\mathcal{U}(t)\) with \(\lambda_{k}(t)=0\). In the following, we sort by convention the eigenvalues of \(\mathcal{N}(t)\) in the ascending order so that the first \(d\!-\!r\) eigenvalues will be zero, where \(d\) and \(r\) are the dimension and the rank of \(\mathcal{N}(t)\), respectively. We can then introduce the projectors \(\mathcal{P}^{\mathcal{I}}(t)\) and \(\mathcal{P}^{\mathcal{K}}(t)\) on the subspaces \(\mathcal{I}(t)\) and \(\mathcal{K}(t)\), respectively, with the corresponding matrices
\[\mathcal{P}^{\mathcal{I}}_{\mathbf{qq^{\prime}}}(t)=\sum_{k>d-r}\mathcal{U}_{\mathbf{ q}k}(t)\mathcal{U}^{\dagger}_{\mathbf{q^{\prime}}k}(t), \tag{14}\]
\[\mathcal{P}^{\mathcal{K}}_{\mathbf{qq^{\prime}}}(t)=\sum_{k\leq d-r}\mathcal{U}_{ \mathbf{q}k}(t)\mathcal{U}^{\dagger}_{\mathbf{q^{\prime}}k}(t). \tag{15}\]
The sum of the two projectors satisfies
\[\mathcal{P}^{\mathcal{I}}(t)+\mathcal{P}^{\mathcal{K}}(t)=\mathbbm{1}_{ \mathcal{F}}. \tag{16}\]
Moreover, per definition, the norm matrix is entirely contained in the image subspace and the same applies to the kernel of any observable \(\hat{O}\),
\[\mathcal{P}^{\mathcal{K}}(t)\mathcal{N}(t)=0, \tag{17a}\] \[\mathcal{P}^{\mathcal{I}}(t)\mathcal{N}(t)=\mathcal{N}(t), \tag{17b}\] \[\mathcal{P}^{\mathcal{I}}(t)\mathcal{O}(t)=\mathcal{O}(t), \tag{17c}\]
where \(\mathcal{O}_{\mathbf{qq^{\prime}}}(t)=\langle\Phi_{\mathbf{q}}(t)|\hat{O}|\Phi_{\mathbf{q^ {\prime}}}(t)\rangle\).
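In a discretized collective space, the decomposition (11) and the projectors (14)-(16) amount to a single Hermitian eigenvalue problem. The snippet below is a schematic illustration of our own; in particular, the tolerance used to decide which eigenvalues are treated as zero is a numerical choice, whereas the formalism above assumes exact zeros.

```python
import numpy as np

def image_kernel_projectors(norm_kernel, tol=1e-10):
    """Build the projectors of Eqs. (14)-(16) from the norm kernel N(t).
    The tolerance deciding which eigenvalues count as zero is a numerical choice."""
    eigval, eigvec = np.linalg.eigh(norm_kernel)       # ascending eigenvalues, Eq. (11)
    image_cols = eigval > tol                           # columns spanning I(t)
    p_image = eigvec[:, image_cols] @ eigvec[:, image_cols].conj().T   # Eq. (14)
    p_kernel = np.eye(norm_kernel.shape[0]) - p_image                  # Eqs. (15)-(16)
    return p_image, p_kernel
```

In practice, the same threshold also controls which nearly linearly dependent configurations are discarded, cf. the discussion in Secs. 3.3 and 5.3.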
With these definitions at hand, we can now make explicit a property of the MC-TDDFT _ansatz_ which is well known already from the static GCM framework [11]: at any time, the component of the mixing function \(f(t)\) belonging to the kernel space \(\mathcal{K}(t)\) does not contribute to the many-body state \(|\psi(f(t))\rangle\). In other words,
\[|\psi[f(t)]\rangle=|\psi\left[\mathcal{P}^{I}(t)f(t)\right]\rangle. \tag{18}\]
Going one step further, one can show that the _ansatz_ (1) provides a one-to-one mapping between the complex mixing functions living in the space \(\mathcal{I}(t)\) and the MC-TDDFT many-body states. Although this property brings no formal difficulties, it has to be taken into account when numerically simulating the time evolution of a system. Indeed, the naive equation of motion (10) could easily lead to the accumulation of large or even diverging components of \(f(t)\) in \(\mathcal{K}(t)\), or to fast time oscillations of \(f(t)\) in this subspace. This type of unphysical behavior may prevent a reliable computation of the mixing function in practice.
To circumvent this problem, it is possible to look for a solution of the time-dependent variational principle that has a vanishing component in \(\mathcal{K}(t)\) for all \(t\),
\[\mathcal{P}^{\mathcal{K}}(t)f(t)=0. \tag{19}\]
Such a solution is obtained by minimizing the augmented action
\[\begin{split}\tilde{S}(f,f^{*},\xi_{1},\xi_{2})&=S(f,f^{*},\xi_{1})\\ &+\int_{t_{0}}^{t_{1}}\,dt\,\xi_{2}(t)||\mathcal{P}^{\mathcal{K} }f||^{2}.\end{split} \tag{20}\]
Compared to Eq. (2), this relation introduces a new term with the Lagrange parameter \(\xi_{2}(t)\) that ensures the constraint (19). The same reasoning as in Sec. 2.2 leads to the modified equation of motion
\[i\hbar\dot{f}(t)=\mathcal{N}^{-1}(t)\Big{[}\mathcal{H}(t)-\mathcal{D}(t)\Big{]} f(t)+i\hbar\dot{\mathcal{P}}^{\mathcal{I}}(t)f(t). \tag{21}\]
The last term on the right hand side ensures that the mixing function stays in the subspace \(\mathcal{I}(t)\) at all time. In the same way as for \(\xi_{1}(t)\) in (4), any value of \(\xi_{2}(t)\) leads to a proper solution of the variational principle as long as \(f\) is solution of (21). We therefore set it to zero. Solving Eq. (21) instead of Eq. (10) provides a better numerical stability at the price of estimating \(\dot{\mathcal{P}}^{\mathcal{I}}(t)\) at each time step.
### Equation of motion for the collective wave function
In principle, the mixing function could be determined by numerically integrating Eq. (21). However, like in the static GCM, it is useful to introduce the collective wave function \(g(t)\) as
\[g(t)=\mathcal{N}^{1/2}(t)f(t). \tag{22}\]
The square root of the norm kernel is defined by the relation
\[\mathcal{N}_{\mathbf{q}\mathbf{q}^{\prime}}(t)=\int_{\mathbf{q}^{\prime\prime}}d\mathbf{q}^{\prime\prime}\,\mathcal{N}^{1/2}_{\mathbf{q}\mathbf{q}^{\prime\prime}}(t)\,\mathcal{N}^{1/2}_{\mathbf{q}^{\prime\prime}\mathbf{q}^{\prime}}(t). \tag{23}\]
At any time, the collective wave function \(g(t)\) belongs to the subspace \(\mathcal{I}(t)\) and uniquely defines the MC-TDDFT state. Following a standard procedure, we also transform kernels \(\mathcal{O}(t)\) to their collective operators \(\mathcal{O}^{c}(t)\),
\[\mathcal{O}^{c}(t)=\mathcal{N}^{-1/2}(t)\mathcal{O}(t)\mathcal{N}^{-1/2}(t). \tag{24}\]
This provides a useful mapping
\[\langle\hat{O}\rangle(t)=g^{\dagger}(t)\mathcal{O}^{c}(t)g(t) \tag{25}\]
for any observable \(\hat{O}\). Inserting the definition of the collective wave function into Eq. (10) or (21) yields the equivalent equation of motion for the collective wave function,
\[i\hbar\dot{g}(t)=\Big{(}\mathcal{H}^{c}(t)-\mathcal{D}^{c}(t)+i\hbar\dot{ \mathcal{N}}^{1/2}(t)\mathcal{N}^{-1/2}(t)\Big{)}g(t). \tag{26}\]
For numerical purposes, the total kernel on the right hand side of Eq. (26) can be recast in an explicitly Hermitian form,
\[i\hbar\dot{g}(t)=\Big{[}\mathcal{H}^{c}(t)+\mathcal{T}^{c}_{1}(t)+\mathcal{T }^{c}_{2}(t)\Big{]}g(t), \tag{27}\]
with two Hermitian kernels
\[\mathcal{T}^{c}_{1}(t)=-\frac{1}{2}\big{(}\mathcal{D}^{c}(t)+ \mathcal{D}^{c\dagger}(t)\big{)}, \tag{28a}\] \[\mathcal{T}^{c}_{2}(t)=\frac{i\hbar}{2}\big{(}\dot{\mathcal{N}}^{ 1/2}(t)\mathcal{N}^{-1/2}(t)-\mathcal{N}^{-1/2}(t)\dot{\mathcal{N}}^{1/2}(t) \big{)}. \tag{28b}\]
The equation of motion (27) is the one that is numerically solved in the illustrative examples of Sec. 5 and in our previous work [54].
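For orientation, the assembly of the total collective kernel entering Eq. (27) from precomputed matrices can be sketched as follows. This is a minimal Python illustration (not part of our production code); all matrix names are placeholders for the kernels defined above.

```python
import numpy as np

def total_collective_kernel(H, D, N_inv_sqrt, Ndot_sqrt, hbar=1.0):
    """Hermitian kernel of Eq. (27) from the q-representation matrices:
    H, D       : Hamiltonian and time-derivative kernels,
    N_inv_sqrt : N^{-1/2}(t),  Ndot_sqrt : d/dt N^{1/2}(t)."""
    Hc = N_inv_sqrt @ H @ N_inv_sqrt                      # Eq. (24)
    Dc = N_inv_sqrt @ D @ N_inv_sqrt
    T1 = -0.5 * (Dc + Dc.conj().T)                        # Eq. (28a)
    T2 = 0.5j * hbar * (Ndot_sqrt @ N_inv_sqrt
                        - N_inv_sqrt @ Ndot_sqrt)         # Eq. (28b)
    return Hc + T1 + T2                                    # Hermitian by construction
```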
### Definition of the generating states
In this work, the generating states are built as Slater determinants of independent single-particle states
\[\ket{\Phi_{\mathbf{q}}(t)}=\prod_{k=1}^{A}a_{k}^{\mathbf{q}\dagger}(t)\ket{0}, \tag{29}\]
where \(A\) is the number of particles, \(\ket{0}\) is the particle vacuum, and \(\{a_{k}^{\mathbf{q}\dagger}(t),a_{k}^{\mathbf{q}}(t)\}\) is a set of creation and annihilation operators associated with the single-particle states. The single-particle states can be expanded in the spatial representation,
\[a_{k}^{\mathbf{q}\dagger}(t)= \sum_{\sigma}\int_{\mathbf{r}}d^{3}\mathbf{r}\varphi_{k}^{\mathbf{q}}(\mathbf{r} \sigma;t)c_{\mathbf{r}\sigma}^{\dagger}, \tag{30}\] \[a_{k}^{\mathbf{q}}(t)= \sum_{\sigma}\int_{\mathbf{r}}d^{3}\mathbf{r}\varphi_{k}^{\mathbf{q}*}(\mathbf{r} \sigma;t)c_{\mathbf{r}\sigma}, \tag{31}\]
where \(c_{\mathbf{r}\sigma}^{\dagger}\) (resp. \(c_{\mathbf{r}\sigma}\)) creates (resp. annihilates) a nucleon of spin \(\sigma\) at position \(\mathbf{r}\). The \(k\)-th single-particle wave function \(\varphi_{k}^{\mathbf{q}}\) of the generating state labeled by \(\mathbf{q}\) reads
\[\varphi_{k}^{\mathbf{q}}(\mathbf{r}\sigma;t)=\langle\mathbf{r}\sigma|a_{k}^{\mathbf{q}\dagger} (t)|0\rangle. \tag{32}\]
The single-particle wave functions (32) for neutrons or protons can be decomposed as
\[\begin{split}\varphi_{k}^{\mathbf{q}}(\mathbf{r}\sigma;t)&= \Big{(}\varphi_{k,0}^{\mathbf{q}}(\mathbf{r};t)+i\varphi_{k,1}^{\mathbf{q}}(\mathbf{r};t)\Big{)} \chi_{\uparrow}(\sigma)\\ &+\Big{(}\varphi_{k,2}^{\mathbf{q}}(\mathbf{r};t)+i\varphi_{k,3}^{\mathbf{q} }(\mathbf{r};t)\Big{)}\chi_{\downarrow}(\sigma).\end{split} \tag{33}\]
The four real spatial functions \(\varphi_{k,\alpha}^{\mathbf{q}}(\mathbf{r};t)\) with \(\alpha=0,\ldots,3\) correspond, respectively, to the real spin-up, imaginary spin-up, real spin-down, and imaginary spin-down component, and \(\chi_{\uparrow/\downarrow}(\sigma)\) are the eigenstates of the \(z\) component of the spin operator.
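As an illustration, the storage of such a spinor as four real fields and the reconstruction of its complex components may be sketched as follows (a minimal Python example; array names are illustrative).

```python
import numpy as np

def spinor_from_real_fields(phi0, phi1, phi2, phi3):
    """Complex spin-up and spin-down components built from the four
    real spatial fields of Eq. (33)."""
    return phi0 + 1j * phi1, phi2 + 1j * phi3

def single_particle_density(phi0, phi1, phi2, phi3):
    """|phi_k^q(r)|^2 summed over spin, on the spatial grid."""
    return phi0**2 + phi1**2 + phi2**2 + phi3**2
```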
Starting from some initial conditions, the generating states \(\ket{\Phi_{\mathbf{q}}(t)}\) are then assumed to evolve independently from each other, according to the nuclear TDHF equations [1; 2; 3],
\[i\hbar\dot{\rho}_{\mathbf{q}}(t)=\Big{[}h[\rho_{\mathbf{q}}(t)],\rho_{\mathbf{q}}(t)\Big{]}, \tag{34}\]
where \(\rho_{\mathbf{q}}(t)\) is the one-body density matrix corresponding to \(|\Phi_{\mathbf{q}}(t)\rangle\) and \(h[\rho_{\mathbf{q}}(t)]\) is the single-particle Hamiltonian derived from a Skyrme EDF [7; 32].
## 3 Calculation of kernels
### The norm kernel
The overlap of two Slater determinants is given by the determinant of the matrix containing overlaps between the corresponding single-particle states [66; 67],
\[\mathcal{N}_{\mathbf{qq^{\prime}}}(t)=\det M_{\mathbf{qq^{\prime}}}(t). \tag{35}\]
In the absence of isospin mixing, the total overlap corresponds to the product of overlaps for neutrons (\(\tau=n\)) and protons (\(\tau=p\)),
\[\mathcal{N}_{\mathbf{qq^{\prime}}}(t)=\prod_{\tau=n,p}\mathcal{N}_{\mathbf{qq^{\prime }}}^{(\tau)}(t)=\prod_{\tau=n,p}\det M_{\mathbf{qq^{\prime}}}^{(\tau)}(t), \tag{36}\]
where the elements of \(M_{\mathbf{qq^{\prime}}}^{(\tau)}(t)\) read
\[\left[M_{\mathbf{qq^{\prime}}}^{(\tau)}(t)\right]_{kl}=\langle\varphi_{k}^{\mathbf{q} (\tau)}(t)|\varphi_{l}^{\mathbf{q^{\prime}}(\tau)}(t)\rangle\,, \tag{37}\]
or explicitly
\[\left[M_{\mathbf{qq^{\prime}}}^{(\tau)}(t)\right]_{kl}=\sum_{\sigma}\int\,d^{3} \mathbf{r}\varphi_{k}^{\mathbf{q}(\tau)*}(\mathbf{r}\sigma;t)\varphi_{l}^{\mathbf{q^{\prime}} (\tau)}(\mathbf{r}\sigma;t). \tag{38}\]
In addition to the norm kernel matrix \(\mathcal{N}(t)\), Eq. (27) requires evaluation of the square root of its inverse, \(\mathcal{N}^{-1/2}(t)\). This matrix is straightforward to calculate when \(\mathcal{N}(t)\) is non-singular. The case of a singular norm kernel matrix is discussed in Sec. 3.3.
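As a minimal sketch (in Python, with illustrative array shapes), the overlap matrix (38) and the norm kernel (36) can be evaluated on a spatial grid as follows.

```python
import numpy as np

def overlap_matrix(phi_q, phi_qp, dv):
    """Eq. (38): phi_q, phi_qp are complex arrays of shape (A, 2, Nx, Ny, Nz)
    (occupied states, spin components, grid points); dv is the volume element."""
    return np.einsum('ksxyz,lsxyz->kl', phi_q.conj(), phi_qp) * dv

def norm_kernel(phi_n_q, phi_n_qp, phi_p_q, phi_p_qp, dv):
    """Eq. (36): product of the neutron and proton determinants."""
    return (np.linalg.det(overlap_matrix(phi_n_q, phi_n_qp, dv))
            * np.linalg.det(overlap_matrix(phi_p_q, phi_p_qp, dv)))
```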
### The Hamiltonian kernel
#### 3.2.1 General expressions
Motivated by the generalized Wick theorem [68], the Hamiltonian kernel2 can be expressed as
Footnote 2: Since we are not dealing with a genuine Hamiltonian operator but with a density-dependent energy density functional, the ”Hamiltonian kernel” is somewhat of a misnomer. Consequences of this distinction were thoroughly discussed in the literature [69; 70; 71; 72]. The main practical consequence for our calculations is that it is necessary to introduce a prescription for the density-dependent component of the functional, as explained in Sec. 3.2.3.
\[\mathcal{H}_{\mathbf{qq^{\prime}}}(t)=E_{\mathbf{qq^{\prime}}}(t)\mathcal{N}_{\mathbf{qq^{ \prime}}}(t), \tag{39}\]
where the energy kernel \(E_{\mathbf{qq^{\prime}}}(t)\) is obtained as a spatial integral of the energy density kernel
\[E_{\mathbf{qq^{\prime}}}(t)=\int\,d^{3}\mathbf{r}\mathcal{E}_{\mathbf{qq^{\prime}}}(\mathbf{r };t). \tag{40}\]
The energy density kernel itself corresponds to the sum of kinetic, nuclear (Skyrme), and Coulomb components,
\[\mathcal{E}_{\mathbf{qq^{\prime}}}(\mathbf{r};t)=\mathcal{E}_{\mathbf{qq^{\prime}}}^{\text {Kin}}(\mathbf{r};t)+\mathcal{E}_{\mathbf{qq^{\prime}}}^{\text{Sky}}(\mathbf{r};t)+ \mathcal{E}_{\mathbf{qq^{\prime}}}^{\text{Cou}}(\mathbf{r};t), \tag{41}\]
and it is a functional of the one-body, non-local transition density
\[\rho_{\mathbf{qq^{\prime}}}(\mathbf{r}\sigma,\mathbf{r^{\prime}}\sigma^{\prime};t)=\frac{ \langle\Phi_{\mathbf{q}}(t)|c_{\mathbf{r^{\prime}}\sigma^{\prime}}^{\dagger}c_{\mathbf{r} \sigma}|\Phi_{\mathbf{q^{\prime}}}(t)\rangle}{\langle\Phi_{\mathbf{q}}(t)|\Phi_{\mathbf{q ^{\prime}}}(t)\rangle}. \tag{42}\]
This density is used to derive various local transition density components that will appear in (41). The explicit expressions for all the components are given in Appendix A.
#### 3.2.2 Energy density components
To start with, the kinetic energy density can be simply calculated as
\[\mathcal{E}_{\mathbf{qq^{\prime}}}^{\text{Kin}}(\mathbf{r};t)=\frac{\hbar^{2}}{2m}\sum _{\tau=n,p}\tau_{\mathbf{qq^{\prime}}}^{(\tau)}(\mathbf{r};t), \tag{43}\]
where \(m\) is the nucleon mass and \(\tau_{\mathbf{qq^{\prime}}}^{(\tau)}(\mathbf{r};t)\) is the local transition kinetic density defined in Appendix A.
Furthermore, the nuclear potential component of the energy density is derived from the Skyrme pseudopotential [7]. The proton-neutron representation of the energy density is equivalent to the one used in Ref. [73], except that the diagonal local densities are substituted by transition local densities defined in Appendix A. The full expression reads
\[\begin{split}\mathcal{E}^{\text{Sky}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)&=B_{1}\rho^{2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)+B_{2}\sum_{\tau=n,p}\rho^{(\tau)2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\\ &+B_{3}\big{(}\rho_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\tau_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)-\mathbf{j}^{2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\big{)}\\ &+B_{4}\sum_{\tau=n,p}\big{(}\rho^{(\tau)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\tau^{(\tau)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)-\mathbf{j}^{(\tau)2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\big{)}\\ &+B_{5}\rho_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\Delta\rho_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)+B_{6}\sum_{\tau=n,p}\rho^{(\tau)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\Delta\rho^{(\tau)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\\ &+B_{7}\rho^{\alpha}_{D}(\mathbf{r};t)\rho^{2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)+B_{8}\rho^{\alpha}_{D}(\mathbf{r};t)\sum_{\tau=n,p}\rho^{(\tau)2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\\ &+B_{9}\Big{[}\rho_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\big{(}\nabla\cdot J_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\big{)}+\mathbf{j}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\cdot\big{(}\nabla\times\mathbf{s}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\big{)}\\ &\qquad+\sum_{\tau=n,p}\Big{(}\rho^{(\tau)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\big{(}\nabla\cdot J^{(\tau)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\big{)}+\mathbf{j}^{(\tau)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\cdot\big{(}\nabla\times\mathbf{s}^{(\tau)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\big{)}\Big{)}\Big{]}\\ &+B_{10}\mathbf{s}^{2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)+B_{11}\sum_{\tau=n,p}\mathbf{s}^{(\tau)2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\\ &+B_{12}\rho^{\alpha}_{D}(\mathbf{r};t)\mathbf{s}^{2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)+B_{13}\rho^{\alpha}_{D}(\mathbf{r};t)\sum_{\tau=n,p}\mathbf{s}^{(\tau)2}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t).\end{split} \tag{44}\]
The coupling constants \(B_{i}\) and parameter \(\alpha\) are defined in Appendix B. The \(\rho_{D}(\mathbf{r};t)\) density is defined in Sec. 3.2.3.
Finally, the Coulomb component is composed of the direct and the exchange contribution,
\[\mathcal{E}^{\text{Cou}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)=\mathcal{E}^{\text{ Cou,Dir}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)+\mathcal{E}^{\text{Cou,Exc}}_{\mathbf{q}\mathbf{q}^{ \prime}}(\mathbf{r};t). \tag{45}\]
The direct contribution is calculated as
\[\mathcal{E}^{\text{Cou,Dir}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)=\frac{1}{2}\ U^{\text{Cou,Dir}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\rho^{(p)}_{ \mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t), \tag{46}\]
where \(\rho^{(p)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\) is the local proton density [Eq. (A.6)] and \(U^{\text{Cou,Dir}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\) is the Coulomb potential obtained as the solution of the Poisson equation
\[\Delta U^{\text{Cou,Dir}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)=-4\pi\frac{e^{2}}{ 4\pi\epsilon_{0}}\rho^{(p)}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t). \tag{47}\]
The real and the imaginary component of the potential are obtained by solving the corresponding differential equations separately, subject to the Dirichlet condition at the boundary \(\mathbf{r}_{B}\),
\[U^{\text{Cou,Dir}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r}_{B};t)=\frac{e^{2}}{4\pi \epsilon_{0}}\frac{Z_{\mathbf{q}\mathbf{q}^{\prime}}(t)}{|\mathbf{r}_{B}|}. \tag{48}\]
Here, \(Z_{\mathbf{q}\mathbf{q}^{\prime}}(t)\) is a complex number,
\[Z_{\mathbf{q}\mathbf{q}^{\prime}}(t)=\int\,d^{3}\mathbf{r}\rho^{(p)}_{\mathbf{q}\mathbf{q}^{\prime}} (\mathbf{r};t), \tag{49}\]
naturally giving a boundary condition for both the real and the imaginary component of the potential. Note that the condition (48) is based on the multipole expansion of a generalized charge truncated at zeroth order. In principle, higher orders could be included as well. Finally, the exchange contribution is calculated in the Slater approximation
\[\mathcal{E}^{\text{Cou,Exc}}_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)=-\frac{3}{4} \frac{e^{2}}{4\pi\epsilon_{0}}\Big{(}\frac{3}{\pi}\Big{)}^{1/3}\Big{[}\rho^{(p)} _{D}(\mathbf{r};t)\Big{]}^{4/3}, \tag{50}\]
where \(\rho^{(p)}_{D}(\mathbf{r};t)\) is the local proton density calculated according to a prescription, as described in Sec. 3.2.3.
#### 3.2.3 Density-dependent prescription
The local transition density is generally a complex quantity and its non-integer powers are not uniquely defined. Consequently, a prescription is needed to evaluate \(\rho^{\alpha}_{D}(\mathbf{r})\) in (44) and (50). This is a well-known feature of multi-reference EDF models which has been thoroughly discussed in the literature [59; 72]. In the present implementation, we opt for the average density prescription,
\[\rho^{\alpha}_{D}(\mathbf{r};t)=\Big{[}\frac{1}{2}\Big{(}\rho_{\mathbf{q}\mathbf{q}}(\mathbf{r} ;t)+\rho_{\mathbf{q}^{\prime}\mathbf{q}^{\prime}}(\mathbf{r};t)\Big{)}\Big{]}^{\alpha}, \tag{51}\]
which is always real and reduces to the diagonal local density when \(\mathbf{q}=\mathbf{q}^{\prime}\). An alternative form of the average density prescription [74],
\[\rho^{\alpha}_{D}(\mathbf{r};t)=\frac{1}{2}\Big{(}\rho^{\alpha}_{\mathbf{q}\mathbf{q}}(\mathbf{r} ;t)+\rho^{\alpha}_{\mathbf{q}^{\prime}\mathbf{q}^{\prime}}(\mathbf{r};t)\Big{)}, \tag{52}\]
satisfies the same properties but is obviously not equivalent to (51). Other choices, such as the mixed density prescription and the projected density prescription, have also been considered in the literature, primarily in the context of symmetry restoration [59; 72]. Sensitivity of calculations to the choice of prescription is discussed in Sec. 5.5.
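For reference, the two average density prescriptions can be written compactly as follows (a short Python sketch with illustrative names; rho_qq and rho_qpqp denote the diagonal local densities of the two configurations).

```python
def rho_D_alpha_AD1(rho_qq, rho_qpqp, alpha):
    """Average density prescription of Eq. (51)."""
    return (0.5 * (rho_qq + rho_qpqp)) ** alpha

def rho_D_alpha_AD2(rho_qq, rho_qpqp, alpha):
    """Alternative average density prescription of Eq. (52)."""
    return 0.5 * (rho_qq ** alpha + rho_qpqp ** alpha)
```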
### Inverse of the norm kernel
Solving Eq. (27) requires inverting the norm kernel matrix \(\mathcal{N}(t)\). The matrix \(\mathcal{N}^{-1/2}(t)\) is then plugged into the last term of (27), and is also used to evaluate the collective kernels \(\mathcal{H}^{c}(t)\) and \(\mathcal{D}^{c}(t)\), according to (24). Formally, the square root inverse \(\mathcal{N}^{-1/2}(t)\) is well defined only in the image subspace \(\mathcal{I}(t)\). Consequently, this linear operator always acts on functions belonging
to the image subspace, both in the equation of motion [Eq. (27)] and in the definition of collective kernels [Eq. (24)]. We compute its matrix elements in the \(\mathbf{q}\) representation as
\[\mathcal{N}^{-1/2}_{\mathbf{qq^{\prime}}}(t)=\sum_{k>d-r}\mathcal{U}_{\mathbf{q}k}(t) \lambda_{k}^{-1/2}(t)\mathcal{U}^{\dagger}_{\mathbf{q^{\prime}}k}(t), \tag{53}\]
where the sum runs only over strictly positive eigenvalues \(\lambda_{k}\).
In practical applications, diagonalizing the norm kernel typically yields several eigenvalues that are numerically close to zero but not exactly vanishing. It is well known from static GCM [32; 34] and TDGCM [39] that taking into account the inverse of these eigenvalues and the associated eigenstates in the sum (53) gives rise to numerical instabilities. A standard procedure consists of introducing a cutoff parameter \(\lambda_{\rm cut}\) and considering all norm eigenvalues \(\lambda_{k}<\lambda_{\rm cut}\) as numerical zeros. In all the following applications, the square root of the inverse norm kernel is therefore approximated as the sum (53) running only over the eigenvalues \(\lambda_{k}>\lambda_{\rm cut}\). The particular value of \(\lambda_{\rm cut}\) depends on the application and should be carefully checked on a case-by-case basis (see Sec. 5.3). Too large cutoff values may lead to significant errors in estimation of the inverse, while too low values magnify numerical instabilities. Note that the described approach is equivalent to solving the problem in the collective space spanned by the so-called natural states,
\[|k(t)\rangle=\sum_{\mathbf{q}}\frac{U_{\mathbf{q}k}(t)}{\sqrt{\lambda_{k}(t)}}\left| \Phi_{\mathbf{q}}(t)\right\rangle, \tag{54}\]
with \(\dim_{k}\leq\dim_{\mathbf{q}}\), where \(\dim_{\mathbf{q}}\) is the dimension of the \(\mathbf{q}\)-basis space.
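In practice, the regularized square root inverse (53) amounts to an eigendecomposition with a cutoff; a minimal sketch, assuming the Hermitian norm matrix is stored as a NumPy array, reads:

```python
import numpy as np

def norm_inv_sqrt(N, lambda_cut):
    """Eq. (53): keep only eigenvalues above the cutoff lambda_cut."""
    lam, U = np.linalg.eigh(N)     # N(t) is Hermitian and positive semi-definite
    keep = lam > lambda_cut        # eigenvalues below the cutoff are numerical zeros
    Uk = U[:, keep]
    return (Uk * lam[keep] ** -0.5) @ Uk.conj().T
```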
### Kernels with explicit time derivatives
To ensure hermiticity of the total collective kernel on the right hand side of Eq. (27), it is crucial to use a consistent numerical prescription when evaluating its various ingredients. This particularly applies to kernels that include an explicit differentiation with respect to time, such as the \(\mathcal{D}_{\mathbf{qq^{\prime}}}(t)\), \(\dot{\mathcal{N}}_{\mathbf{qq^{\prime}}}(t)\), and \(\dot{\mathcal{N}}^{1/2}_{\mathbf{qq^{\prime}}}(t)\) kernels.
#### 3.4.1 The \(\mathcal{D}_{\mathbf{qq^{\prime}}}(t)\) kernel
We assume that the time derivative of a generating state \(|\Phi_{\mathbf{q}}(t)\rangle\) is well represented by finite differences,
\[\partial_{t}\left|\Phi_{\mathbf{q}}(t)\right\rangle\approx\frac{1}{\Delta t}\big{(}\left|\Phi_{\mathbf{q}}(t)\right\rangle-\left|\Phi_{\mathbf{q}}(t_{-})\right\rangle\big{)}, \tag{55}\]
where \(t_{-}=t-\Delta t\) and \(\Delta t\) is the time step. The time-derivative kernel (8) can then be simply evaluated as
\[\mathcal{D}_{\mathbf{qq^{\prime}}}(t)=\frac{i\hbar}{\Delta t}\big{(}\left\langle \Phi_{\mathbf{q}}(t)|\Phi_{\mathbf{q^{\prime}}}(t)\right\rangle-\left\langle\Phi_{\bm {q}}(t)|\Phi_{\mathbf{q^{\prime}}}(t_{-})\right\rangle\big{)}. \tag{56}\]
The calculation of the time-derivative kernel is thus reduced to the evaluation of two overlap kernels equivalent to those in Eq. (6).
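A minimal sketch of this step (Python, taking the two overlap matrices as inputs; names are illustrative) reads:

```python
def D_kernel(N_tt, N_t_tminus, dt, hbar=1.0):
    """Eq. (56): N_tt[q, qp] = <Phi_q(t)|Phi_qp(t)>,
    N_t_tminus[q, qp] = <Phi_q(t)|Phi_qp(t - dt)>."""
    return 1j * hbar / dt * (N_tt - N_t_tminus)
```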
#### 3.4.2 The \(\dot{\mathcal{N}}_{\mathbf{qq^{\prime}}}(t)\) and \(\dot{\mathcal{N}}^{1/2}_{\mathbf{qq^{\prime}}}(t)\) kernels
We start by evaluating the \(\dot{\mathcal{N}}_{\mathbf{qq^{\prime}}}(t)\) kernel,
\[\dot{\mathcal{N}}_{\mathbf{qq^{\prime}}}(t)=\left\langle\Phi_{\mathbf{q}}(t)|\dot{ \Phi}_{\mathbf{q^{\prime}}}(t)\right\rangle+\left\langle\dot{\Phi}_{\mathbf{q}}(t)| \Phi_{\mathbf{q^{\prime}}}(t)\right\rangle. \tag{57}\]
Using the finite differences scheme of (55), we obtain
\[\begin{split}\dot{\mathcal{N}}_{\mathbf{qq^{\prime}}}(t)& =\frac{1}{\Delta t}\big{(}2\left\langle\Phi_{\mathbf{q}}(t)|\Phi_{ \mathbf{q^{\prime}}}(t)\right\rangle-\left\langle\Phi_{\mathbf{q}}(t)|\Phi_{\mathbf{q^{ \prime}}}(t_{-})\right\rangle\\ &\qquad-\left\langle\Phi_{\mathbf{q}}(t_{-})|\Phi_{\mathbf{q^{\prime}}}(t )\right\rangle\big{)}.\end{split} \tag{58}\]
As before, we only need to evaluate three overlap kernels. In the next step, we can determine the \(\dot{\mathcal{N}}^{1/2}(t)\) kernel by recognizing that
\[\dot{\mathcal{N}}(t)=\dot{\mathcal{N}}^{1/2}(t)\mathcal{N}^{1/2}(t)+\mathcal{N }^{1/2}(t)\dot{\mathcal{N}}^{1/2}(t) \tag{59}\]
represents a special case of the Sylvester equation [75]. If \(\mathcal{N}^{1/2}(t)\) has all positive, non-zero eigenvalues, there exists a unique solution which can be written as
\[\text{vec}\big{(}\dot{\mathcal{N}}^{1/2}(t)\big{)}=\mathcal{S}^{-1}(t)\ \text{vec}\big{(}\dot{\mathcal{N}}(t)\big{)}, \tag{60}\]
where the vectorization operator "vec" corresponds to stacking the columns of a \(n\times n\) matrix into a vector of length \(n^{2}\) and
\[\mathcal{S}(t)=\mathbb{1}\otimes\mathcal{N}^{1/2}(t)+\big{(}\mathcal{N}^{1/2} (t)\big{)}^{T}\otimes\mathbb{1} \tag{61}\]
is a complex matrix belonging to \(\mathbb{C}^{n^{2}\times n^{2}}\). We recover the desired kernel in its matrix form with the inverse of the vectorization operator,
\[\dot{\mathcal{N}}^{1/2}(t)=\text{vec}^{-1}\big{[}\mathcal{S}^{-1}(t)\ \text{vec} \big{(}\dot{\mathcal{N}}(t)\big{)}\big{]}. \tag{62}\]
Note that the described procedure requires inverting the \(\mathcal{S}(t)\) matrix, whose dimension grows as a square of the number of basis states. However, for the basis sizes envisioned in applications of MC-TDDFT (from several states to several tens of states), such inversions are feasible. Should the need for even larger bases occur, hermiticity of the norm matrix may be used to further reduce the dimensionality of the problem (for example, by using the half-vectorization instead of the vectorization operation). Finally, this procedure to solve the Sylvester equation involves inversion of the matrix
\(\mathcal{S}(t)\), which can be positive semi-definite, similarly to the case of \(\mathcal{N}^{-1/2}\). Following the same procedure, we diagonalize the matrix to be inverted, keep only the non-zero eigenvalues, and invert it within this subspace. This yields a regularized \(\mathcal{S}^{-1}(t)\) matrix which can be safely used in Eq. (62)3.
Footnote 3: Several numerical tests can be performed to verify the procedure. To start with, the \(\dot{\mathcal{N}}^{1/2}(t)\) matrix thus obtained should satisfy Eq. (59). Moreover, it should reduce to the usual expression when all the eigenvalues are non-zero. As a third test, when plugged into Eq. (27), it should lead to a unitary time evolution. Finally, when two identical TDDFT states are mixed, the evolution of the MC-TDDFT state should reduce to the evolution of the basis state.
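The vectorization route of Eqs. (60)-(62) may be sketched as follows; here the SVD-based pseudo-inverse of NumPy is used as a simple stand-in for the eigenvalue-truncated inversion of \(\mathcal{S}(t)\) described above (a minimal illustration, not our production implementation).

```python
import numpy as np

def ndot_sqrt(N_sqrt, Ndot):
    """Solve Ndot = X N^{1/2} + N^{1/2} X for X = d/dt N^{1/2}, Eqs. (59)-(62)."""
    n = N_sqrt.shape[0]
    S = np.kron(np.eye(n), N_sqrt) + np.kron(N_sqrt.T, np.eye(n))   # Eq. (61)
    vec_Ndot = Ndot.reshape(-1, order='F')        # vec(): stack the columns
    vec_X = np.linalg.pinv(S) @ vec_Ndot          # regularized inversion of S
    return vec_X.reshape((n, n), order='F')       # inverse vectorization, Eq. (62)
```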
## 4 Resolution of the equation of motion and calculation of observables
Once all the expressions for collective kernels have been established as described in Sec. 3, the MC-TDDFT calculations proceed in three major steps: (i) choosing a set of initial conditions relevant for the physical case under study, (ii) integrating in time the equation of motion for the basis states [Eq. (34)] and the collective wave function [Eq. (27)], and (iii) computing observables of interest.
### The initial conditions
To start with, the initial mixing functions \(f_{\mathbf{q}}(0)\) need to be chosen. This choice is not unique; it is guided by the physical scenario one aims to simulate. For example, in [54] we mixed three TDDFT states (\(\mathbf{q}=1,2,3\)) and considered two sets of initial conditions. In the first case, we set \(f_{1}(0)=1,f_{2}(0)=f_{3}(0)=0\), rendering the initial MC-TDDFT state equal to the first TDDFT state. In the second case, the mixing functions were determined by diagonalizing the initial collective Hamiltonian kernel, thus starting calculations from the actual multiconfigurational ground state. Of course, alternative choices are also possible.
Furthermore, the initial total collective kernel on the right hand side of (27) needs to be determined. While \(\mathcal{H}^{c}(t)\) and \(\mathcal{N}^{-1/2}(t)\) are uniquely defined at \(t=0\), this is not the case for \(\mathcal{D}^{c}(t)\) and \(\dot{\mathcal{N}}^{1/2}(t)\), which include explicit time derivatives and are calculated with the finite difference scheme. The value of these kernels at \(t=0\) is therefore estimated by propagating the set of basis states by \(\Delta t\) and using the finite differences scheme. Since for sufficiently small time steps the collective kernels evolve very smoothly, the overall dynamics is not significantly impacted by this choice.
### Numerical schemes for time propagation
The nuclear TDHF equation [Eq. (34)] can be efficiently solved using any of the standard numerical schemes. In the present implementation, we use the fourth order Runge-Kutta method (RK4), which was shown in [31] to provide better norm conservation properties than the Crank-Nicolson scheme. On the other hand, the equation of motion for collective wave functions [Eq. (27)] is solved by the direct method, that is,
\[g_{\mathbf{q}}(t_{0}+\Delta t)=\exp\Big{(}-\frac{i}{\hbar}\mathcal{T}(t_{0}) \Delta t\Big{)}g_{\mathbf{q}}(t_{0}), \tag{63}\]
where \(\mathcal{T}(t)\) is the total collective kernel on the right hand side of (27). The direct method appears feasible for smaller sets of basis states. For larger sets, an alternative method such as the RK4 may be better suited.
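One step of the direct method can be sketched as follows (Python, using the matrix exponential from SciPy; T denotes the total collective kernel at \(t_{0}\)).

```python
import numpy as np
from scipy.linalg import expm

def propagate_direct(g, T, dt, hbar=1.0):
    """Eq. (63): one time step of the collective wave function."""
    return expm(-1j / hbar * T * dt) @ g
```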
### Calculation of observables
Following Eq. (25), the collective wave function can be used to calculate the expectation value of any observable in the MC-TDDFT state at any time \(t\). Generally, the collective kernel \(\mathcal{O}^{c}(t)\) can be calculated from the usual kernel according to (24). In the specific case of a one-body, spin-independent, local observable, the generalized Wick theorem yields directly
\[\mathcal{O}_{\mathbf{q}\mathbf{q}^{\prime}}(t)=\mathcal{N}_{\mathbf{q}\mathbf{q}^{\prime}}(t) \int\,d^{3}\mathbf{r}O(\mathbf{r})\rho_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t), \tag{64}\]
where \(\rho_{\mathbf{q}\mathbf{q}^{\prime}}(\mathbf{r};t)\) is the local transition particle density defined in Appendix A and \(O(\mathbf{r})\) is the coordinate space representation of the corresponding operator. An example is the multipole moment operator \(O_{lm}(\mathbf{r})=r^{l}\mathrm{Y}_{lm}(\theta,\phi)\), where \(\mathrm{Y}_{lm}(\theta,\phi)\) are the spherical harmonics. Furthermore, the variance of such a one-body observable in a normalized MC-TDDFT state can be calculated as
\[\sigma_{O}^{2}(t)=\langle\Psi(t)|\hat{O}^{2}|\Psi(t)\rangle-\langle\Psi(t)| \hat{O}|\Psi(t)\rangle^{2}\,. \tag{65}\]
An explicit expression of the variance as a function of the one-body density is given in Appendix C.
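As an illustration, the expectation value of a local one-body observable can be assembled from Eqs. (64), (24), and (25) as follows (a Python sketch with illustrative inputs: O_r is \(O(\mathbf{r})\) on the grid, rho[q, qp] the local transition densities, N the norm kernel, and g the collective wave function).

```python
import numpy as np

def one_body_expectation(O_r, rho, N, N_inv_sqrt, g, dv):
    nq = N.shape[0]
    O_kernel = np.array([[N[q, qp] * np.sum(O_r * rho[q, qp]) * dv
                          for qp in range(nq)] for q in range(nq)])   # Eq. (64)
    Oc = N_inv_sqrt @ O_kernel @ N_inv_sqrt                           # Eq. (24)
    return g.conj() @ Oc @ g                           # Eq. (25); real up to numerics
```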
## 5 Illustrative calculations
As an illustrative example, we consider the doubly-magic nucleus \({}^{40}\)Ca. As in Ref. [54], the calculations are performed using a newly developed code based on the finite element method [76; 77]. The nuclear dynamics is simulated in a three-dimensional box of length \(L\), with a regular mesh of \(N\) cells in each spatial direction and a finite element basis of \(n\)-th order polynomials. We employ the SLy4d EDF [78], whose parameters were
adjusted without the center of mass correction, making it particularly well suited for dynamical studies. Unless stated otherwise, the average density prescription of the form (51) is used for the density-dependent part of an EDF.
In Sec. 5.1, we briefly demonstrate the convergence of TDDFT calculations with the new code. In Sec. 5.2, we discuss the convergence of collective dynamics when two TDDFT trajectories are mixed. In Sec. 5.3, we discuss the treatment of linear dependencies in the TDDFT basis, using an example of mixing of three trajectories. Furthermore, in Sec. 5.4 we address the issue of energy conservation within the MC-TDDFT framework. Finally, the influence of the density prescription on results is discussed in Sec. 5.5.
### Convergence of TDDFT calculations
In Fig. 1(a), we demonstrate the convergence of the calculated ground-state binding energy \(E_{B}\) by plotting \(\Delta E_{B}=|(E_{B}-E_{0})/E_{0}|\cdot 100\) as a function of the mesh step size \(\Delta x=L/N\), where \(E_{0}\) is the fully converged value (up to the sixth decimal point). An equivalent quantity for the ground-state root mean square radius, \(\Delta R\), is shown in Fig. 1(b). The box length is fixed to \(L=24\) fm in all calculations and polynomials of the order \(n=3,4,5\) are considered for the finite element basis. As expected, the bases of higher order polynomials systematically require smaller numbers of cells (larger \(\Delta x\)) to obtain a comparable convergence. For example, the binding energy in the (\(n=3\), \(\Delta x_{1}=24/14\) fm \(\approx 1.71\) fm) calculation converges within \(0.05\%\), while the (\(n=5,\Delta x_{2}=2.4\) fm) calculation already converges within \(0.001\%\). The same holds for radii, even though they converge at a somewhat faster rate.
In the next step, the \({}^{40}\)Ca ground state is boosted in the isoscalar quadrupole direction by applying the instantaneous boost operator [3; 80], \(\exp(i\eta\hat{Q}_{20})\), where \(\eta=5.7\cdot 10^{-3}\) fm\({}^{-2}\) is the boost magnitude and \(\hat{Q}_{20}\) is the axially-symmetric quadrupole moment operator. To verify the convergence of the resulting TDDFT dynamics, in Fig. 2 we compare the time evolutions for \((n=3,\Delta x_{1})\) and \((n=5,\Delta x_{2})\) space discretizations. The time propagation is performed using the fourth order Runge-Kutta method and is traced up to \(t=3\) zs.
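The instantaneous boost itself amounts to multiplying every single-particle wave function by a local phase; a minimal sketch (Python, with the axially-symmetric quadrupole field written up to a normalization convention absorbed in \(\eta\)) is:

```python
import numpy as np

def q20_field(x, y, z):
    """Axially-symmetric quadrupole field ~ 2z^2 - x^2 - y^2 on the grid
    (the overall normalization is a convention and is absorbed in eta)."""
    X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
    return 2.0 * Z**2 - X**2 - Y**2

def apply_boost(phi, eta, q20):
    """phi: complex single-particle wave functions of shape (A, 2, Nx, Ny, Nz)."""
    return phi * np.exp(1j * eta * q20)
```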
Figure 1: Convergence \(\Delta X=|(X-X_{0})/X_{0}|\cdot 100\) of the ground-state binding energy (\(X\equiv E_{B}\), panel (a)) and the ground-state root mean square radius (\(X\equiv R=\sqrt{\langle r^{2}\rangle}\), panel (b)) as a function of the mesh step size \(\Delta x=L/N\), where \(L=24\) fm is the box length and \(N\) is the number of finite element cells per spatial dimension. Convergence patterns are compared for finite element bases of polynomials of the order \(n=3,4,5\). The fully converged values \(E_{0}=-339.118594\) MeV and \(R_{0}=3.413466\) fm were obtained with \((n=5,\Delta x=24/26\) fm \(\approx 0.92\) fm). These values agree within at least \(1\) keV and \(0.003\) fm, respectively, with those obtained using the HFBTHO computational framework [79]. Note that the radius is fully converged for \((n=5,\Delta x\leq 1.5\) fm); the corresponding values are plotted as \(10^{-5}\).
Figure 2: (a): Isoscalar quadrupole moment of the \(q_{20}\)-boosted TDDFT state for different spatial and temporal discretization schemes: \((n=3,\Delta x_{1},\Delta t_{1})\) in red, \((n=3,\Delta x_{1},\Delta t_{2})\) in blue, and \((n=5,\Delta x_{2},\Delta t_{2})\) in green, with \(\Delta x_{1}\approx 1.71\) fm, \(\Delta x_{2}=2.4\) fm, \(\Delta t_{1}=5\cdot 10^{-4}\) zs, and \(\Delta t_{2}=10^{-4}\) zs. (b): Numerical error of the TDDFT energy, \(\Delta E_{\rm TDDFT}(t)=\left(E(t)-E(0)\right)/E(0)\cdot 100\), for the three cases above.
The \((n=3,\Delta x_{1})\) dynamics is well converged for a wide range of time steps \(\Delta t\); as can be seen in Fig. 2(a), the quadrupole moments \(q_{20}(t)\) obtained with \(\Delta t_{1}=5\cdot 10^{-4}\) zs and \(\Delta t_{2}=10^{-4}\) zs are essentially indistinguishable. The same holds for the \((n=5,\Delta x_{2})\) case, even though achieving convergence with higher order basis functions and/or finer spatial meshes will generally necessitate using smaller time steps \(\Delta t\). For example, the \((n=5,\Delta x_{2})\) calculations do not converge for \(\Delta t_{1}\). However, the \(q_{20}(t)\) obtained with \(\Delta t_{2}\) is indistinguishable from the two curves obtained with \((n=3,\Delta x_{1})\). This indicates that the minor difference in ground-state convergence of the two sets of spatial parameters bears no significant consequence for the subsequent TDDFT dynamics.
This is further corroborated by Fig. 2(b), showing the variation of the TDDFT energy \(E(t)\) as a function of time, \(\Delta E_{\rm TDDFT}(t)=\Big{(}E(t)-E(0)\Big{)}/E(0)\cdot 100\). The energy should be exactly conserved within the TDDFT framework - therefore, the small variations observed in Fig. 2(b) stem from numerical effects. Standard causes for such variations include the discretization errors in the estimation of the spatial derivatives, leading to a non-Hermitian mean-field Hamiltonian, as well as approximations of the time propagator acting on the single-particle wave functions that break unitarity [81; 82]. Even though calculations with \((n=5,\Delta x_{2})\) yield comparatively smaller variations, these remain rather low in the \((n=3,\Delta x_{1})\) case; under \(0.0004\%\) or less than \(1\) keV. In addition, note that variations are independent of the time step.
### Convergence of the collective dynamics
As a first example of configuration mixing, we consider a mixed state composed of two TDDFT configurations,
\[\left|\Psi_{A}(t)\right\rangle=f_{1}(t)\left|\Phi_{1}(t)\right\rangle+f_{2}(t) \left|\Phi_{2}(t)\right\rangle. \tag{66}\]
Here, for \(\left|\Phi_{1}(t)\right\rangle\) we take the TDDFT state described in the previous section, corresponding to the \({}^{40}\)Ca ground state boosted in the isoscalar quadrupole direction by \(\eta_{1}=5.7\cdot 10^{-3}\) fm\({}^{-2}\). Equivalently, \(\left|\Phi_{2}(t)\right\rangle\) corresponds to the ground state boosted by \(\eta_{2}=1.376\cdot 10^{-2}\) fm\({}^{-2}\). This choice of quadrupole boosts yields states with excitation energies of about \(E_{1}(t=0)=0.25\) and \(E_{2}(t=0)=1.46\) MeV above the Hartree-Fock ground state. At \(t=0\), we set \(f_{1}(0)=1\) and \(f_{2}(0)=0\) so that the initial mixed state corresponds to the first basis state. The equation of motion for the collective wave function [Eq. (27)] is resolved by the direct method [Eq. (63)]. As before, the box length is fixed to \(L=24\) fm. We consider the three sets of spatial and temporal parameters described in Sec. 5.1.
To start with, the initial eigenvalues of the norm kernel matrix read \(\lambda_{1}(0)=0.011502\) and \(\lambda_{2}(0)=1.988498\) for the \((n=3,\Delta x_{1})\) case, and \(\lambda_{1}(0)=0.011491\) and \(\lambda_{2}(0)=1.988509\) for the \((n=5,\Delta x_{2})\) case. In Fig. 3(a), we show the time evolution of norm eigenvalues with respect to their initial values, \(\lambda(t)-\lambda(0)\), for all three sets of parameters. The two eigenvalues oscillate in counterphase, such that their sum (the horizontal line in the middle of Fig. 3(a)) remains constant up to numerical accuracy. In this case, the dimension of the collective space is the same as the dimension of the basis space, \(\dim_{k}=\dim_{\mathbf{q}}=2\). However, in many practical implementations the norm eigenstates corresponding to very small eigenvalues will need to be removed to ensure a stable numerical solution. This issue is addressed in Sec. 5.3.
In Fig. 3(b), we show the squared modulus of the collective wave function for the three parameter sets from Sec. 5.1. Once again, the \((n=3,\Delta x_{1})\) calculations are well-converged with respect to the time step \(\Delta t\). Overall, the components of the collective wave function exhibit an oscillatory behavior. Their sum remains equal to one at all times, reflecting the unitarity of collective dynamics. Furthermore, the \((n=5,\Delta x_{2})\) curves are initially indistinguishable from the \(n=3\) curves, but start to deviate for \(t>1\) zs.
The source of these minor deviations can be traced back to different convergence profiles of the off-diagonal kernel elements in the equation of motion. As demonstrated earlier in Fig. 2, the diagonal components of the Hamiltonian kernel - that is, the TDDFT energies - show excellent convergence with respect to the choice of spatial discretization parameters. However, the off-diagonal components involve transition densities and are therefore expected to exhibit weaker convergence for the same choice of parameters. To shed more light on this issue, in Fig. 3(c) we show the difference, in percentage, between the off-diagonal component of the Hamiltonian kernel [Eq. (7)] calculated with the two sets of spatial parameters, \(\Delta\mathcal{H}_{12}(t)=\Big{|}\Big{(}\mathcal{H}_{12}^{n=3}(t)-\mathcal{H}_{ 12}^{n=5}(t)\Big{)}/\mathcal{H}_{12}^{n=5}(t)\Big{|}\cdot 100\), where \(\mathcal{H}_{12}^{n=3}(t)\) is calculated with \((n=3,\Delta x_{1})\) and \(\mathcal{H}_{12}^{n=5}(t)\) with \((n=5,\Delta x_{2})\). Since \(\mathcal{H}_{12}(t)\) is a complex quantity, the corresponding real and imaginary components are plotted separately. Furthermore, the right \(y\)-axis of Fig. 3(c) shows the real and the imaginary component of \(\mathcal{H}_{12}^{n=5}(t)\). Please note that this quantity is _not_ a constant; the question of energy conservation is addressed in more detail in Sec. 5.4.
Initially, both the real and the imaginary component of \(\Delta\mathcal{H}_{12}(t)\) are relatively small. In addition, they stay well under 1% for the largest part of the time evolution. The sole exceptions are the regions around \(t\) values where either the real or the imaginary component of \(\mathcal{H}_{12}(t)\) changes sign (around 0.85 zs and 2.6 zs for the former and 1.7 zs for the latter). In those cases, the denominator in \(\Delta\mathcal{H}_{12}(t)\) becomes very small and the entire quantity tends to diverge. Consequently, for plotting purposes, all points with \(\Delta\mathcal{H}_{12}(t)>1\%\) are shown as 1%. Nevertheless, note that the absolute value of the deviation remains under 1 MeV throughout the entire time evolution. While not drastic, such a deviation is sufficient to cause the minor discrepancies seen in Fig. 3(b).
To examine the impact of this effect on an observable, in Fig. 3(d) we show time evolution of the isoscalar quadrupole moment of the MC-TDDFT state, \(q_{20}(t)=\langle\hat{Q}_{20}\rangle\left(t\right)\) [Eq. (25) with \(\hat{O}=\hat{Q}_{20}\)]. The three sets of parameters again yield essentially indistinguishable results, except for some minor deviations of the \((n=5,\Delta x_{2})\) curve for larger values of \(t\).
Overall, a particular choice of spatial parameters reflects a compromise between computational cost and the required accuracy. In the current case, the \((n=5,\Delta x_{2})\) parameters appear to yield somewhat more accurate results, but at the price of about three times longer computational time per iteration. Of course, this difference will become even larger as the basis size increases. Therefore, in the following, we will be using the \((n=3,\Delta x_{1})\) spatial parametrization - a choice which was also made in Ref. [54].
### Treatment of linear dependencies in the basis
In the next example, we consider mixing of three TDDFT configurations,
\[\left|\Psi_{B}(t)\right\rangle=\sum_{i=1}^{3}f_{i}(t)\left|\Phi_{i}(t)\right\rangle. \tag{67}\]
Here, \(\left|\Phi_{1}(t)\right\rangle\) and \(\left|\Phi_{3}(t)\right\rangle\) correspond to the two configurations from the previous section. Furthermore, \(\left|\Phi_{2}(t)\right\rangle\)
Figure 3: Mixing of two TDDFT configurations, discussed in Sec. 5.2, for three sets of spatial and temporal parameters, described in caption to Fig. 2. (a): Time evolution of the norm kernel matrix eigenvalues with respect to their initial values. The two eigenvalues oscillate in counterphase, such that their sum always remains constant. (b): Squared modulus of the collective wave function. Note that the \(|g_{1}(t)|^{2}\) component is distinguished from the \(|g_{2}(t)|^{2}\) component by the crossed markers while keeping the same color convention. Their sum remains equal to one at all times, reflecting the unitarity of collective dynamics. (c): The difference, in percentage, between the off-diagonal component of the Hamiltonian kernel calculated with the two sets of spatial parameters, \(\Delta\mathcal{H}_{12}(t)=\left|\left(\mathcal{H}_{12}^{n=3}(t)-\mathcal{H}_{ 12}^{n=5}(t)\right)\middle/\mathcal{H}_{12}^{n=5}(t)\right|\cdot 100\), where \(\mathcal{H}_{12}^{n=3}(t)\) is calculated with \((n=3,\Delta x_{1})\) and \(\mathcal{H}_{12}^{n=5}(t)\) with \((n=5,\Delta x_{2})\). The right \(y\)-axis shows the time evolution of \(\mathcal{H}_{12}^{n=5}(t)\). In both cases, the real and the imaginary component are plotted separately. The difference \(\Delta\mathcal{H}_{12}(t)\) diverges when \(\mathcal{H}_{12}^{n=5}(t)\) changes sign - consequently, all points with \(\Delta\mathcal{H}_{12}(t)>1\%\) are plotted as 1%. (d): The isoscalar quadrupole moment of the MC-TDDFT state obtained with the three sets of parameters.
is generated in the same manner, by applying the \(q_{20}\)-boost of the magnitude \(\eta_{2}=1.14\cdot 10^{-2}\) fm\({}^{-2}\) to the ground state. This yields an excited state with \(E_{2}(t=0)=1.00\) MeV. This choice of basis states is equivalent to the one made in Ref. [54]. We also set \(f_{1}(0)=1\) and \(f_{2}(0)=f_{3}(0)=0\) so that the initial mixed state corresponds to the first basis state and coupling between the trajectories develops with time. We use \((n=3,\Delta x_{1})\), \(\Delta t_{1}=5\cdot 10^{-4}\) zs, and set the box size to \(L=24\) fm.
In Fig. 4(a), the time evolution of the eigenvalues of the corresponding norm matrix is shown. Starting from \(\lambda_{1}(0)=8\cdot 10^{-6}\), \(\lambda_{2}(0)=0.012162\), and \(\lambda_{3}(0)=2.987830\), the three components oscillate in time, such that their sum remains constant (up to numerical accuracy). The amplitude of these oscillations is rather small, in agreement with the fact that each basis state itself describes a small-amplitude nuclear oscillation.
The collective wave function should, at all times, be contained in the image of \(\mathcal{N}(t)\). In Fig. 4(b), we show the projection of the collective wave function onto the image subspace, \(|\mathcal{P}^{\mathcal{I}}(t)g(t)|^{2}\), for three different choices of the collective space [Eq. (54)]. For \(\dim_{k}=\dim_{\mathbf{q}}=3\), per definition, we have \(|\mathcal{P}^{\mathcal{I}}(t)g(t)|^{2}=1\) for all \(t\). However, the inclusion of a very small eigenvalue \(\lambda_{1}(t)\) causes numerical instabilities. This leads to, for example, spurious small oscillations of the center of mass or to the total collective kernel on the right hand side of Eq. (26) not being exactly Hermitian. On the other hand, the \(\dim_{k}=1\) choice yields \(|\mathcal{P}^{\mathcal{I}}(t)g(t)|^{2}\approx 0.992\), reflecting the fact that removing the relatively large norm eigenstate with \(\lambda_{2}(t)\approx 0.01\) removes a portion of physical information as well. The collective space should correspond to the smallest subspace of the full Hilbert space containing all the basis states; for too large cutoffs, however, the collective space no longer contains all the basis states. Consequently, the \(\dim_{k}=2\) choice is optimal in this case - while being numerically stable, it also ensures \(|\mathcal{P}^{\mathcal{I}}(t)g(t)|^{2}=1\) up to \(\approx 10^{-7}\). An equivalent analysis could be carried out looking at the Frobenius norm of \(|\mathcal{P}^{I}(t)\mathcal{N}(t)-\mathcal{N}(t)|\) [Eq. (17b)] or the partial sum of eigenvalues \(\sum_{k\in\mathrm{Im}(\mathcal{N})}\lambda_{k}\).
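For completeness, the quantity \(|\mathcal{P}^{\mathcal{I}}(t)g(t)|^{2}\) shown in Fig. 4(b) can be evaluated from the retained norm eigenvectors; a minimal sketch (Python, assuming a normalized collective wave function g) is:

```python
import numpy as np

def image_projection_norm(N, g, lambda_cut):
    """Squared norm of the projection of g onto the image subspace of N."""
    lam, U = np.linalg.eigh(N)
    Uk = U[:, lam > lambda_cut]          # eigenvectors spanning the image subspace
    Pg = Uk @ (Uk.conj().T @ g)          # P^I g
    return float(np.real(Pg.conj() @ Pg))
```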
In Fig. 4(c), we show the squared modulus of the resulting collective wave function, which again exhibits an oscillatory behavior. In particular, one can notice a close resemblance of \(|g_{1}(t)|^{2}\) and \(|g_{3}(t)|^{2}\) to the collective wave function from Fig. 3(b). This is entirely expected, since the corresponding collective spaces are both of dimension 2 and spanned by very similar states.
Finally, Fig. 4(d) shows the isoscalar quadrupole moment of the \(|\Phi_{1}(t)\rangle\) TDDFT trajectory and of the MC-TDDFT state. As already noted in Ref. [54], the TDDFT curve exhibits nearly harmonic oscillations of
Figure 4: Mixing of three TDDFT configurations, discussed in Sec. 5.3, for \((n=3,\Delta x_{1})\) and \(\Delta t_{1}=5\cdot 10^{-4}\) zs. (a): Time evolution of the norm kernel matrix eigenvalues. The three components oscillate in time, such that their sum remains constant. (b): Projection of the collective wave function onto the image subspace, \(|\mathcal{P}^{\mathcal{I}}(t)g(t)|^{2}\), for three different choices of the norm eigenvalue cutoff (or, equivalently, three different dimensions of the collective space \(\dim_{k}\)). The \(\dim_{k}=2\) provides numerical stability while ensuring \(|\mathcal{P}^{\mathcal{I}}(t)g(t)|^{2}=1\) up to \(\approx 10^{-7}\). (c): Squared modulus of the collective wave function for \(\dim_{k}=2\). (d): The isoscalar quadrupole moment of a single TDDFT trajectory (red) and the MC-TDDFT state (green) for \(\dim_{k}=2\).
a single frequency, consistent with what is usually observed within TDDFT when the Landau damping effect is absent. The other two TDDFT trajectories (not shown) oscillate at the same frequency, but with slightly larger amplitudes. On the other hand, the MC-TDDFT curve is markedly more complex, exhibiting multiple frequencies, which can be related to the emergence of collective multiphonon excitations [83; 84] in a requantized collective model. Consistent with the remarks above, the MC-TDDFT curve closely resembles the corresponding curves from Fig. 3(d). Calculating the Fourier transform of the quadrupole response yields the multiphonon spectrum, as discussed in Ref. [54] and Sec. 5.5. The MC-TDDFT model is marked by a significantly larger fluctuation in quadrupole moment - see Ref. [54] for a physical discussion and Appendix C for technical details of calculating it.
### Conservation of energy
To address the question of energy conservation within the MC-TDDFT framework, we consider the same example of mixing of three TDDFT configurations from the previous section. In Fig. 5(a), we show the time evolution of energies of the three TDDFT trajectories, as well as the energy of the MC-TDDFT state. As expected, the TDDFT energies are constant (up to numerical accuracy, see discussion in Sec. 5.1). On the other hand, the MC-TDDFT energy is markedly _not_ a constant.
Due to the choice of initial conditions, at \(t=0\) this energy corresponds to the energy of the first basis state. However, for \(t>0\), the MC-TDDFT energy oscillates with the amplitude of about \(1.5\) MeV. What may seem surprising at first glance is that these oscillations are not bounded by the TDDFT energies. In fact, it is straightforward to show that the amplitude of oscillations is limited by the eigenvalues of the collective Hamiltonian and not by the energies of the basis states.
Following Eq. (25), the MC-TDDFT energy can be calculated as
\[E_{\rm MC-TDDFT}(t)=g^{\dagger}(t)\mathcal{H}^{c}(t)g(t). \tag{68}\]
The collective Hamiltonian \(\mathcal{H}^{c}(t)\) can be recast into the diagonal form,
\[\mathcal{H}^{c}(t)=U_{H}(t)\Lambda_{H}(t)U_{H}^{\dagger}(t). \tag{69}\]
Since the collective Hamiltonian matrix is Hermitian, \(\Lambda_{H}(t)\) is a real and diagonal matrix of eigenvalues and \(U_{H}(t)\) is the unitary matrix of eigenstates. Consequently, we have
\[E_{\rm MC-TDDFT}(t)=\tilde{g}^{\dagger}(t)\Lambda_{H}(t)\tilde{g}(t), \tag{70}\]
where \(\tilde{g}(t)=U_{H}^{\dagger}(t)g(t)\). It then follows that the MC-TDDFT energy is bounded by the lowest and the highest eigenvalue of the collective Hamiltonian.
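This bound is straightforward to verify numerically; a minimal sketch (Python, assuming a normalized collective wave function) reads:

```python
import numpy as np

def mc_tddft_energy_and_bounds(Hc, g):
    """Eqs. (68)-(70): energy of the mixed state and the bounding eigenvalues
    of the (Hermitian) collective Hamiltonian; assumes g^dagger g = 1."""
    lam, U_H = np.linalg.eigh(Hc)
    g_tilde = U_H.conj().T @ g
    E = float(np.real(g_tilde.conj() @ (lam * g_tilde)))   # Eq. (70)
    return E, lam[0], lam[-1]    # lam[0] <= E <= lam[-1]
```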
In Fig. 5(b), we show the time evolution of the eigenvalues of the collective Hamiltonian (\(\dim_{k}=2\)), alongside the energy of the mixed state. We verify that the MC-TDDFT energy is indeed bounded from below by the lowest eigenvalue of the collective Hamiltonian. Moreover, it never even approaches the upper limit, which is given by the second eigenvalue. This is consistent with the fact that the dynamics of the mixed state is largely driven by the dominant eigenstate of the
Figure 5: Energy conservation in the case of mixing of three TDDFT configurations, discussed in Sec. 5.3, for \((n=3,\Delta x_{1})\) and \(\Delta t_{1}\). (a): Time evolution of energies of the TDDFT configurations and of the MC-TDDFT state. While TDDFT energies are constants of motion, the MC-TDDFT energy is not conserved. (b) Time evolution of the eigenvalues of the collective Hamiltonian and of the MC-TDDFT state. Note the break along the \(y\)-axis. The MC-TDDFT energy is bounded by the two eigenvalues. (c): Deviation, in percentage, of different components of the MC-TDDFT energy, \(\Delta E_{i}=\Big{(}E_{i}(t)-E_{i}(0)\Big{)}/|E_{i}(0)|\cdot 100\). The kinetic, nuclear (Skyrme), and Coulomb component all contribute to the variation of the total energy.
collective Hamiltonian, with only minor admixtures of the other eigenstate.
To gain more insight into the energy of the mixed state, in Fig. 5(c) we show variations, in percentage, of different components of the MC-TDDFT energy. More precisely, we calculate \(\Delta E_{i}=\Big{(}E_{i}(t)-E_{i}(0)\Big{)}/|E_{i}(0)|\cdot 100\), for \(i=\) kinetic, nuclear (Skyrme), Coulomb. It is apparent that no single component alone is responsible for the variation of the total energy. Instead, the variations in all components fluctuate with magnitudes below \(0.5\%\). It is worth noting that this implies that the variations are unlikely to stem from any issue related to the density-dependent prescription. Indeed, the kinetic energy, which is free from any such spuriosities, nevertheless exhibits comparable variations.
Actually, the non-conservation of energy appears to be a feature of multiconfigurational models that are not fully variational [54; 55; 31; 56]. These models are formulated under a simplifying assumption that the time evolution of each TDDFT trajectory can be performed independently, using the existing TDDFT solvers. While significantly alleviating the computational burden, such an approximation disregards the feedback between the evolution of the mixing function and the basis states, thus removing the mechanism that would enforce a strict energy conservation on the MC-TDDFT level. At best, the approximate energy conservation can be imposed by controlling the lower and the upper limit of the collective Hamiltonian eigenvalues. This is a clear shortcoming of such models, which can be understood as an intermediate step that renders the computational implementation feasible and enables pioneering exploration of the MC-TDDFT capabilities in atomic nuclei. Extending the model with a variational principle that treats both the mixing function and basis states as variational parameters, like in the toy model study of Ref. [50], is a natural method of ensuring the full energy conservation, at the price of explicitly including the effect of configuration mixing on individual TDDFT trajectories.
### The density-dependent prescription
As mentioned in Sec. 3.2.3, a prescription is needed to evaluate the \(\rho_{D}^{\alpha}(\mathbf{r})\) density in (44) and (50). To quantify the impact of this choice on the nuclear dynamics, we adopt the same example as in the previous two subsections. This time, however, we consider two different prescriptions for the density-dependent part of an EDF. In addition to the average density prescription of the form (51), which was used in all the calculations up to now, we also consider the prescription of the form (52). Both prescriptions yield real densities and reduce to the diagonal local density in the TDDFT limit. However, they
Figure 6: Influence of the choice of the density-dependent prescription in the case of mixing of three TDDFT configurations, discussed in Sec. 5.3, for \((n=3,\Delta x_{1})\) and \(\Delta t_{1}\). (a): Squared modulus of the collective wave function with the AD1 prescription [Eq. (51)]. (b): Squared modulus of the collective wave function with the AD2 prescription [Eq. (52)]. (c): Comparison of the MC-TDDFT energy obtained with the two prescriptions. (d): Comparison of the isoscalar quadrupole moment of the MC-TDDFT state obtained with the two prescriptions.
are evidently not equivalent and some dependence of nuclear dynamics on this choice is, therefore, expected.
In Figs. 6(a) and 6(b), we compare the collective wave function obtained with the prescription (51) ("Average Density 1" - AD1) and the prescription (52) ("Average Density 2" - AD2). The main difference between the two panels appears to be a moderate shift in phase for larger values of \(t\). Other than that, the overall dynamics seems relatively unaffected.
The impact on observables is examined in Figs. 6(c) and 6(d), where we compare the energy and the isoscalar quadrupole moment of the MC-TDDFT state, respectively, obtained with the two prescriptions. The amplitude of variations in energy remains very similar for the two prescriptions. On the other hand, there appears a moderate shift in phase, similar to the one seen in the collective wave function. Furthermore, the quadrupole moment is essentially unaffected up to \(t\approx 0.75\) zs. Beyond this point, the difference in prescriptions starts to play a role, causing a moderate difference between the two curves.
One way to additionally quantify this difference is to calculate the excitation spectrum by performing a Fourier transform of the quadrupole response. In [54], we calculated this spectrum using the AD1 prescription and obtained the main giant resonance peak at about 18 MeV, in agreement with experiments. Moreover, two additional peaks were observed at approximately twice and thrice the energy of the main peak, which were interpreted as multiphonon excitations of the main giant resonance. By repeating the same procedure for the response obtained with the AD2 prescription, we obtain essentially the same spectrum, with all the peaks shifted by less than 0.1 MeV. This is an encouraging result, indicating that the main conclusions of [54] may not be very sensitive to the choice of density prescription.
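The spectrum itself is obtained by a discrete Fourier transform of the quadrupole response; a minimal sketch (Python, with q20 the sampled time series and dt the sampling step) is:

```python
import numpy as np

def quadrupole_spectrum(q20, dt):
    """Strength (arbitrary units) versus angular frequency; multiply the
    frequencies by hbar to convert to excitation energies."""
    signal = q20 - np.mean(q20)              # remove the static component
    strength = np.abs(np.fft.rfft(signal))
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(signal), d=dt)
    return omega, strength
```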
## 6 Summary
The nuclear TDDFT framework is a tool of choice for describing various dynamical phenomena in atomic nuclei. However, it yields quasi-classical equations of motion in the collective space and, consequently, drastically underestimates fluctuations of observables. On the other hand, the MC-TDDFT model encompasses both the dissipation and the quantum fluctuation aspects of nuclear dynamics within a fully quantum framework. Starting from a general mixing of diabatic many-body configurations, the time-dependent variational principle yields the equation of motion for the mixing function whose resolution provides access to various observables of interest.
In Ref. [54], we reported a study of quadrupole oscillations in \({}^{40}\)Ca where several TDDFT configurations were mixed based on the Skyrme EDF framework. We demonstrated that the collective multiphonon states emerge at high excitation energies when quantum fluctuations in the collective space are included beyond the independent particle approximation. In this work, we provided more technical and numerical details of the underlying MC-TDDFT model.
The central equation of motion [Eq. (27)], obtained through a time-dependent variational principle with the mixing function as a variational parameter, describes a unitary time evolution of the collective wave function. We discussed methods for consistent computation of different ingredients of the equation, including the Hamiltonian kernel, the norm kernel, and kernels with explicit time derivatives, as well as the choice of initial conditions and the direct resolution method. Special attention is needed when inverting the norm kernel matrix, since linear dependencies in the TDDFT basis can lead to numerical instabilities. Within the current implementation of the model, the TDDFT configurations are assumed to evolve independently. This approximation simplifies the problem significantly, but at the price of rendering the total energy a non-conserved quantity on the MC-TDDFT level. Furthermore, the density dependence of existing EDFs requires employing a density prescription, which can be seen as an additional parameter of the model.
The technical discussion was supplemented with numerical examples, focusing on the issues of convergence, treatment of linearly dependent bases, energy conservation, and prescriptions for the density-dependent part of an EDF. To start with, we demonstrated the convergence of static and dynamic aspects of the new TDDFT solver, based on the finite element method. The time evolution of the MC-TDDFT state was shown to be unitary and well converged for a wide range of time steps and spatial meshes. Generally, finer spatial meshes require smaller time steps. Similarly, MC-TDDFT calculations require finer meshes than those used in TDDFT to achieve a comparable level of convergence. Linear dependencies in the basis need to be carefully treated, since including too small norm eigenvalues causes numerical instabilities, while excluding too large eigenvalues removes a part of the physical information. The non-conservation of the MC-TDDFT energy is a combined effect of the kinetic, nuclear, and Coulomb components, and it is of the order of 0.5% in the considered example. Finally, the two versions of the average density prescription discussed in this work yield only minor differences in the collective dynamics.
The very recent implementations of the MC-TDDFT framework in real nuclei [54; 55; 56], based on the pioneering work by Reinhard and collaborators [48; 49] and following the toy-model study of [31], have demonstrated the predictive power of the model. Further developments of the theoretical framework and computational methods are expected to render the model applicable to a wider range of nuclear phenomena in the near future.
## Appendix A: Local transition densities
The spin expansion of the non-local transition density [Eq. (42)] for isospin \(\tau\) reads
\[\begin{split}\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r}\sigma,\mathbf{r^ {\prime}}\sigma^{\prime};t)&=\frac{1}{2}\rho^{(\tau)}_{\mathbf{qq^{ \prime}}}(\mathbf{r},\mathbf{r^{\prime}};t)\delta_{\sigma\sigma^{\prime}}\\ &+\frac{1}{2}\sum_{\mu}\left\langle\sigma|\hat{\sigma}_{\mu}| \sigma^{\prime}\right\rangle s^{(\tau)}_{\mathbf{qq^{\prime}},\mu}(\mathbf{r},\mathbf{r^ {\prime}};t),\end{split} \tag{13}\]
where \(\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r},\mathbf{r^{\prime}};t)\) is the non-local one-body transition particle density,
\[\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r},\mathbf{r^{\prime}};t)=\sum_{\sigma}\rho^ {(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r}\sigma,\mathbf{r^{\prime}}\sigma;t), \tag{14}\]
\(s^{(\tau)}_{\mathbf{qq^{\prime}},\mu}(\mathbf{r},\mathbf{r^{\prime}};t)\) is the \(\mu\)-th component of the non-local one-body transition spin density,
\[s^{(\tau)}_{\mathbf{qq^{\prime}},\mu}(\mathbf{r},\mathbf{r^{\prime}};t)=\sum_{\sigma\sigma ^{\prime}}\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r}\sigma,\mathbf{r^{\prime}}\sigma ^{\prime};t)\left\langle\sigma^{\prime}|\hat{\sigma}_{\mu}|\sigma\right\rangle, \tag{15}\]
and \(\hat{\sigma}_{\mu}\) are the Pauli operators. The local variants of the particle density \(\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r};t)\), spin density \(\mathbf{s}^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r};t)\), kinetic density \(\tau^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r};t)\), current density \(\mathbf{j}^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r};t)\), spin-current pseudotensor density \(J^{(\tau)}_{\mathbf{qq^{\prime}},\mu\nu}(\mathbf{r};t)\), and spin-orbit current vector density \(\mathsf{J}^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r};t)\) read
\[\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r};t) =\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r},\mathbf{r};t), \tag{16a}\] \[s^{(\tau)}_{\mathbf{qq^{\prime}},\mu}(\mathbf{r};t) =s^{(\tau)}_{\mathbf{qq^{\prime}},\mu}(\mathbf{r},\mathbf{r};t),\] (16b) \[\tau^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r};t) =\nabla\cdot\nabla^{\prime}\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r},\mathbf{r^{\prime}};t)|_{\mathbf{r^{\prime}}=\mathbf{r}},\] (16c) \[j^{(\tau)}_{\mathbf{qq^{\prime}},\mu}(\mathbf{r};t) =\frac{1}{2i}(\nabla_{\mu}-\nabla^{\prime}_{\mu})\rho^{(\tau)}_{\mathbf{qq^{\prime}}}(\mathbf{r},\mathbf{r^{\prime}};t)|_{\mathbf{r^{\prime}}=\mathbf{r}},\] (16d) \[J^{(\tau)}_{\mathbf{qq^{\prime}},\mu\nu}(\mathbf{r};t) =\frac{1}{2i}(\nabla_{\mu}-\nabla^{\prime}_{\mu})s^{(\tau)}_{\mathbf{qq^{\prime}},\nu}(\mathbf{r},\mathbf{r^{\prime}};t)|_{\mathbf{r^{\prime}}=\mathbf{r}},\] (16e) \[\mathsf{J}^{(\tau)}_{\mathbf{qq^{\prime}},\lambda}(\mathbf{r};t) =\sum_{\mu\nu}\epsilon_{\lambda\mu\nu}J^{(\tau)}_{\mathbf{qq^{\prime}},\mu\nu}(\mathbf{r};t). \tag{16f}\]
In the following paragraph, the explicit dependence on time and isospin is omitted for compactness.
For Slater generating states, the coordinate space representation of the non-local transition density (for either neutrons or protons) can be written as
\[\rho_{\mathbf{qq^{\prime}}}(\mathbf{r}\sigma,\mathbf{r^{\prime}}\sigma^{\prime})=\sum_{kl}\varphi^{\mathbf{q^{\prime}}}_{k}(\mathbf{r}\sigma)\Big{[}M^{-1}_{\mathbf{qq^{\prime}}}\Big{]}_{kl}\varphi^{\mathbf{q}\,*}_{l}(\mathbf{r^{\prime}}\sigma^{\prime}), \tag{17}\]
where \(\Big{[}M^{-1}_{\mathbf{qq^{\prime}}}(t)\Big{]}_{kl}\) are (generally complex) elements of the inverted matrix of single-particle overlaps [Eq. (37)]. Given the decomposition (33), the local transition particle density reads
\[\rho_{\mathbf{qq^{\prime}}}(\mathbf{r})=\sum_{kl}\Big{[}M^{-1}_{\mathbf{qq^{\prime}}} \Big{]}_{kl}\Big{[}\rho^{R}_{\mathbf{qq^{\prime}}}(\mathbf{r})+i\rho^{I}_{\mathbf{qq^{ \prime}}}(\mathbf{r})\Big{]}_{kl} \tag{18}\]
with
\[\Big{[}\rho^{R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\sum_{\alpha}\varphi^{\mathbf{q^{\prime}}}_{k,\alpha}(\mathbf{r})\varphi ^{\mathbf{q}}_{l,\alpha}(\mathbf{r}), \tag{19a}\] \[\Big{[}\rho^{I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\varphi^{\mathbf{q}}_{l,0}(\bm {r})-\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\] \[\quad+\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\varphi^{\mathbf{q}}_{l,2 }(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\varphi^{\mathbf{q}}_{l,3}(\mathbf{r}), \tag{19b}\]
for \(\alpha=0,1,2,3\). Similarly, the local transition kinetic density reads
\[\tau_{\mathbf{qq^{\prime}}}(\mathbf{r})=\sum_{kl}\Big{[}M^{-1}_{\mathbf{qq^{\prime}}} \Big{]}_{kl}\Big{[}\tau^{R}_{\mathbf{qq^{\prime}}}(\mathbf{r})+i\tau^{I}_{\mathbf{qq^{ \prime}}}(\mathbf{r})\Big{]}_{kl}, \tag{20}\]
with
\[\Big{[}\tau^{R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\sum_{\alpha}\big{(}\nabla\varphi^{\mathbf{q^{\prime}}}_{k,\alpha}( \mathbf{r})\big{)}\big{(}\nabla\varphi^{\mathbf{q}}_{l,\alpha}(\mathbf{r})\big{)}, \tag{21a}\] \[\Big{[}\tau^{I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\big{(}\nabla\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{)}\big{(} \nabla\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\big{)}\] \[\quad-\big{(}\nabla\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{)} \big{(}\nabla\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\big{)}\] \[\quad+\big{(}\nabla\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{)} \big{(}\nabla\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\big{)}\] \[\quad-\big{(}\nabla\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{)} \big{(}\nabla\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\big{)}, \tag{21b}\]
with
\[\big{(}\nabla\varphi^{\mathbf{q^{\prime}}}_{k,\alpha}(\mathbf{r})\big{)} \big{(}\nabla\varphi^{\mathbf{q}}_{l,\beta}(\mathbf{r})\big{)}=\sum_{\mu}\big{(} \partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,\alpha}(\mathbf{r})\big{)}\big{(} \partial_{\mu}\varphi^{\mathbf{q}}_{l,\beta}(\mathbf{r})\big{)} \tag{22}\]
and \(\mu=x,y,z\). Furthermore, the \(\mu\)-th component of the local transition current density reads
\[j^{\mu}_{\mathbf{qq^{\prime}}}(\mathbf{r})=\frac{1}{2}\sum_{kl}\Big{[}M^{-1}_{\mathbf{qq^{\prime}}}\Big{]}_{kl}\Big{[}j^{\mu,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})+ij^{\mu,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl},\]

with

\[\begin{split}\Big{[}j^{\mu,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}&=\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\big{)}\\&\quad-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\big{)}\\&\quad+\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\big{)}\\&\quad-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\big{)},\end{split}\]

\[\Big{[}j^{\mu,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}=\sum_{\alpha}\Big{[}\varphi^{\mathbf{q^{\prime}}}_{k,\alpha}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,\alpha}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,\alpha}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,\alpha}(\mathbf{r})\Big{]}.\]
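As a rough numerical illustration of how these local densities can be assembled, the sketch below (Python/NumPy; the array layout and the names `pair`, `local_particle_density`, and `M_inv` are assumptions made here, not part of the formalism above) contracts the real components \(\varphi^{\mathbf{q}}_{l,\alpha}\) with the inverted overlap matrix following the pattern of Eqs. (18)-(19); the kinetic and current densities follow the same contraction with gradients of the components.

```python
import numpy as np

def pair(phi_qp, phi_q, a, b):
    """Pointwise product phi^{q'}_{k,a}(r) * phi^{q}_{l,b}(r), shape (K, K, Ngrid)."""
    return phi_qp[:, a, :][:, None, :] * phi_q[:, b, :][None, :, :]

def local_particle_density(phi_qp, phi_q, M_inv):
    """Local transition particle density rho_{qq'}(r) on the spatial grid.

    phi_qp, phi_q : real components phi^{q'}_{k,alpha}(r) and phi^{q}_{l,alpha}(r),
                    arrays of shape (K, 4, Ngrid)
    M_inv         : inverted overlap matrix [M^{-1}_{qq'}]_{kl}, complex, shape (K, K)
    """
    # [rho^R_{qq'}(r)]_{kl} = sum_alpha phi'_{k,alpha} phi_{l,alpha}
    rho_R = sum(pair(phi_qp, phi_q, a, a) for a in range(4))
    # [rho^I_{qq'}(r)]_{kl}, cf. Eq. (19b)
    rho_I = (pair(phi_qp, phi_q, 1, 0) - pair(phi_qp, phi_q, 0, 1)
             + pair(phi_qp, phi_q, 3, 2) - pair(phi_qp, phi_q, 2, 3))
    # rho_{qq'}(r) = sum_{kl} [M^{-1}]_{kl} [rho^R + i rho^I]_{kl}
    return np.einsum('kl,kln->n', M_inv, rho_R + 1j * rho_I)
```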
The components of the local transition spin density then read
\[s^{\mu}_{\mathbf{qq^{\prime}}}(\mathbf{r})=\sum_{kl}\Big{[}M^{-1}_{\mathbf{qq^{\prime}}}\Big{]}_{kl}\Big{[}s^{\mu,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})+is^{\mu,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}, \tag{13}\]
with
\[\Big{[}s^{x,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\varphi^{\mathbf{q}}_{l,2}(\bm {r})+\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\] \[+\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\varphi^{\mathbf{q}}_{l,0}( \mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\varphi^{\mathbf{q}}_{l,1}(\mathbf{r}), \tag{14a}\] \[\Big{[}s^{x,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\varphi^{\mathbf{q}}_{l,2}( \mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\] \[+\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\varphi^{\mathbf{q}}_{l,0}( \mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\] (14b) \[\Big{[}s^{y,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\varphi^{\mathbf{q}}_{l,3}( \mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\] \[-\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\varphi^{\mathbf{q}}_{l,2}( \mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\varphi^{\mathbf{q}}_{l,1}(\mathbf{r}),\] (14c) \[\Big{[}s^{y,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\varphi^{\mathbf{q}}_{l,2}( \mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\] \[+\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\varphi^{\mathbf{q}}_{l,3}( \mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\varphi^{\mathbf{q}}_{l,1}(\mathbf{r}),\] (14d) \[\Big{[}s^{z,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\varphi^{\mathbf{q}}_{l,0}( \mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\] \[-\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\varphi^{\mathbf{q}}_{l,2}( \mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\] (14e) \[\Big{[}s^{z,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} =\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\varphi^{\mathbf{q}}_{l,0}( \mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\] \[+\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\varphi^{\mathbf{q}}_{l,3}( \mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\varphi^{\mathbf{q}}_{l,2}(\mathbf{r}). \tag{14f}\]
Finally, the components of the spin-current pseudotensor density read
\[J^{\mu\nu}_{\mathbf{qq^{\prime}}}(\mathbf{r})=\frac{1}{2}\sum_{kl}\Big{[}M^{-1}_{\mathbf{ qq^{\prime}}}\Big{]}_{kl}\Big{[}J^{\mu\nu,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})+iJ^{\mu\nu,I}_{ \mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl} \tag{14g}\]
with
\[\begin{split}\Big{[}J^{\mu x,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}&=\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\big{)}\\&\quad-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\big{)}\\&\quad+\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\big{)}\\&\quad-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\big{)},\end{split}\]

\[\begin{split}\Big{[}J^{\mu x,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}&=\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\\&\quad+\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\\&\quad+\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\\&\quad+\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r}),\end{split}\]

\[\begin{split}\Big{[}J^{\mu y,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}&=\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\big{)}\\&\quad-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\big{)}\\&\quad+\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\big{)}\\&\quad-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\big{)},\end{split}\]

\[\begin{split}\Big{[}J^{\mu y,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}&=\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\\&\quad+\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\\&\quad+\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\big{)}\\&\quad+\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\big{)},\end{split}\]

\[\begin{split}\Big{[}J^{\mu z,R}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}&=\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\big{)}\\&\quad-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\big{)}\\&\quad+\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\big{)}\\&\quad-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})+\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\big{)},\end{split}\]

\[\begin{split}\Big{[}J^{\mu z,I}_{\mathbf{qq^{\prime}}}(\mathbf{r})\Big{]}_{kl}&=\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,0}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,0}(\mathbf{r})\\&\quad+\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\big{)}-\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,1}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,1}(\mathbf{r})\\&\quad+\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,2}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,2}(\mathbf{r})\big{)}\\&\quad+\big{(}\partial_{\mu}\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{)}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})-\varphi^{\mathbf{q^{\prime}}}_{k,3}(\mathbf{r})\big{(}\partial_{\mu}\varphi^{\mathbf{q}}_{l,3}(\mathbf{r})\big{)}.\end{split}\]
\[B_{12} =\frac{1}{24}t_{3}x_{3},\] (B.171) \[B_{13} =-\frac{1}{24}t_{3}.\] (B.172)
Parameters \(t_{i}\), \(x_{i}\) (\(i=0,1,2,3\)), \(W\), and \(\alpha\) are the standard parameters of the Skyrme pseudopotential [7; 73].
## Appendix C Variance of a one-body operator in a normalized MC-TDDFT state
To evaluate the variance of Eq. (65), we need an expectation value of the \(\hat{O}^{2}\) operator in a normalized MC-TDDFT state. We start from
\[\langle\Psi(t)|\hat{O}^{2}|\Psi(t)\rangle=\int_{\mathbf{qq^{\prime}}}d\mathbf{q}\,d\mathbf{ q^{\prime}}g_{\mathbf{q}}^{*}(t)\mathcal{O}^{2^{c}}_{\mathbf{qq^{\prime}}}(t)g_{\mathbf{q^{ \prime}}}(t).\] (C.18)
Again, the collective kernel of the \(\hat{O}^{2}\) operator is calculated from (24) and the corresponding usual kernel follows from the generalized Wick theorem,
\[\mathcal{O}^{2}_{\mathbf{qq^{\prime}}}(t)=\mathcal{N}_{\mathbf{qq^{\prime}}}(t)\Big{[}\text{Tr}^{2}\big{(}O\rho^{\mathbf{qq^{\prime}}}(t)\big{)}+\text{Tr}\big{(}O\rho^{\mathbf{qq^{\prime}}}(t)O\big{(}1-\rho^{\mathbf{qq^{\prime}}}(t)\big{)}\big{)}\Big{]}. \tag{C.19}\]
Here,
\[\text{Tr}\big{(}O\rho^{\mathbf{qq^{\prime}}}(t)\big{)}=\int\,d^{3}\mathbf{r}O(\mathbf{r}) \rho_{\mathbf{qq^{\prime}}}(\mathbf{r};t).\] (C.20)
Furthermore, the second trace corresponds to the sum of two terms,
\[\text{Tr}\big{(}O\rho^{\mathbf{qq^{\prime}}}(t)O\big{(}1-\rho^{\mathbf{qq^{\prime}}}(t)\big{)}\big{)}=C_{1}^{\mathbf{qq^{\prime}}}(t)+C_{2}^{\mathbf{qq^{\prime}}}(t). \tag{C.21}\]
The first term reads
\[C_{1}^{\mathbf{qq^{\prime}}}(t) =\int\,d^{3}\mathbf{r}O^{2}(\mathbf{r})\sum_{kl}\Big{[}M_{\mathbf{qq^{\prime} }}^{-1}(t)\Big{]}_{kl}\] (C.22) \[\times\bigg{\{}A_{kl}^{\mathbf{qq^{\prime}}}(\mathbf{r};t)+iB_{kl}^{\mathbf{ qq^{\prime}}}(\mathbf{r};t)\bigg{\}},\]
with
\[A_{kl}^{\mathbf{qq^{\prime}}}(\mathbf{r};t)=\sum_{\alpha}\varphi_{k,\alpha}^{\mathbf{q^{ \prime}}}(\mathbf{r};t)\varphi_{l,\alpha}^{\mathbf{q}}(\mathbf{r};t)\] (C.23)
and
\[B_{kl}^{\mathbf{qq^{\prime}}}(\mathbf{r};t) =\varphi_{k,1}^{\mathbf{q^{\prime}}}(\mathbf{r};t)\varphi_{l,0}^{\mathbf{q}} (\mathbf{r};t)\] (C.24) \[-\varphi_{k,0}^{\mathbf{q^{\prime}}}(\mathbf{r};t)\varphi_{l,1}^{\mathbf{q}} (\mathbf{r};t)\] \[+\varphi_{k,3}^{\mathbf{q^{\prime}}}(\mathbf{r};t)\varphi_{l,2}^{\mathbf{q}} (\mathbf{r};t)\] \[-\varphi_{k,2}^{\mathbf{q^{\prime}}}(\mathbf{r};t)\varphi_{l,3}^{\mathbf{q}} (\mathbf{r};t).\]
The second term reads
\[C_{2}^{\mathbf{qq^{\prime}}}(t) =\int\,d^{3}\mathbf{r}\int\,d^{3}\mathbf{r^{\prime}}O(\mathbf{r})O(\mathbf{r^{ \prime}})\] (C.25) \[\times\sum_{klmn}\Big{[}M_{\mathbf{qq^{\prime}}}^{-1}(t)\Big{]}_{kl} \Big{[}M_{\mathbf{qq^{\prime}}}^{-1}(t)\Big{]}_{mn}\] \[\times\bigg{\{}A_{kn}^{\mathbf{qq^{\prime}}}(\mathbf{r};t)+iB_{kn}^{\mathbf{ qq^{\prime}}}(\mathbf{r};t)\bigg{\}}\] \[\times\bigg{\{}A_{ml}^{\mathbf{qq^{\prime}}}(\mathbf{r^{\prime}};t)+iB_{ ml}^{\mathbf{qq^{\prime}}}(\mathbf{r^{\prime}};t)\bigg{\}}.\]
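As a minimal numerical sketch of how these traces combine into a variance, the code below (Python/NumPy) evaluates the kernels of \(\hat{O}\) and \(\hat{O}^{2}\) directly from transition density matrices expressed in a common single-particle basis. The array names (`rho`, `norm`, `g`), the use of matrix traces in place of the spatial integrals above, and the omission of the collective-kernel transformation of Eq. (24) are simplifying assumptions for illustration only.

```python
import numpy as np

def o2_kernel(O, rho, norm):
    """Kernel of O^2 between two generating states, Eq. (C.19):
    N_{qq'} [ Tr^2(O rho^{qq'}) + Tr(O rho^{qq'} O (1 - rho^{qq'})) ]."""
    one = np.eye(rho.shape[0])
    return norm * (np.trace(O @ rho) ** 2 + np.trace(O @ rho @ O @ (one - rho)))

def variance_one_body(O, rho, norm, g):
    """Variance of a one-body operator in the normalized mixed state.

    O    : operator matrix in the single-particle basis, shape (n, n)
    rho  : transition density matrices rho^{qq'}, shape (Q, Q, n, n)
    norm : norm kernels N_{qq'}, shape (Q, Q)
    g    : mixing weights g_q, shape (Q,)
    """
    Q = len(g)
    o_ker = np.array([[norm[q, p] * np.trace(O @ rho[q, p]) for p in range(Q)]
                      for q in range(Q)])
    o2_ker = np.array([[o2_kernel(O, rho[q, p], norm[q, p]) for p in range(Q)]
                       for q in range(Q)])
    n_tot = g.conj() @ norm @ g                 # <Psi|Psi>
    mean = (g.conj() @ o_ker @ g) / n_tot       # <O>
    second = (g.conj() @ o2_ker @ g) / n_tot    # <O^2>
    return (second - mean ** 2).real
```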
This work was supported in part by CNRS through the AIQI-IN2P3 funding. P. M. would like to express his gratitude to CEA and IJCLab for their warm hospitality during work on this project.
|
2305.00431 | Techniques to seed the self-modulation instability of a long proton
bunch in plasma | The Advanced Wakefield Experiment (AWAKE) at CERN relies on the seeded
Self-Modulation (SM) of a long relativistic proton bunch in plasma to
accelerate an externally injected MeV witness electron bunch to GeV energies.
During AWAKE Run 1 (2016-2018) and Run 2a (2021-2022), two seeding methods were
investigated experimentally: relativistic ionization front seeding and electron
bunch seeding. In the first one, a short laser pulse copropagates within the
proton bunch and ionizes the rubidium vapor, generating the plasma. In the
second, a short electron bunch propagates in plasma ahead of the proton bunch
and drives the seed wakefields. Both seeding methods will be further employed
during AWAKE Run 2b (2023-2024) to study their effect on the SM evolution in
the presence of a plasma density step. In this contribution, we will show the
main experimental results and discuss their impact for the future design of the
experiment, in particular for Run 2c (starting in 2028), where the plasma will
be split in two sections: one dedicated to SM of the proton bunch, and the
other to the electron acceleration process. | L. Verra, G. Zevi Della Porta, E. Gschwendtner, M. Bergamaschi, P. Muggli | 2023-04-30T09:21:03Z | http://arxiv.org/abs/2305.00431v1 | # Techniques to Seed the Self-Modulation Instability of a Long Proton Bunch in Plasma
###### Abstract
The Advanced Wakefield Experiment (AWAKE) at CERN relies on the seeded Self-Modulation (SM) of a long relativistic proton bunch in plasma to accelerate an externally injected MeV witness electron bunch to GeV energies. During AWAKE Run 1 (2016-2018) and Run 2a (2021-2022), two seeding methods were investigated experimentally: relativistic ionization front seeding and electron bunch seeding. In the first one, a short laser pulse copropagates within the proton bunch and ionizes the rubidium vapor, generating the plasma. In the second, a short electron bunch propagates in plasma ahead of the proton bunch and drives the seed wakefields. Both seeding methods will be further employed during AWAKE Run 2b (2023-2024) to study their effect on the SM evolution in the presence of a plasma density step. In this contribution, we will show the main experimental results and discuss their impact for the future design of the experiment, in particular for Run 2c (starting in 2028), where the plasma will be split in two sections: one dedicated to SM of the proton bunch, and the other to the electron acceleration process.
Proton bunches produced routinely by synchrotrons have large energy per proton (GeVs to TeVs) and high charge per bunch (>10 nC), resulting in a large amount of energy stored per bunch (>20 kJ). They can therefore drive wakefields [1] in plasma over long distances (10-10\({}^{3}\) m), potentially leading to high energy gain (1-100s GeV) by a witness electron (\(e^{-}\)) bunch in a single accelerating section, avoiding the complications of staging. This was demonstrated with numerical simulations [2] using a short \(p^{+}\) bunch (root mean square bunch length \(\sigma_{z}=100\)\(\mu\)m).
However, \(p^{+}\) bunches routinely produced, for example at CERN, are cm-long, much longer than the plasma electron wavelength in a plasma with electron density interesting for high-gradient acceleration. Since conventional bunch compression [3] and shaping [4] techniques are impractical with relativistic protons, the transverse occurrence of the two-stream instability, the self-modulation (SM) instability [5, 6], is used to form a train of short microbunches that can resonantly drive wakefields with amplitudes at the GV/m level.
Large energy gain by a witness bunch requires a long plasma. In the Advanced Wakefield Experiment (AWAKE) [7], we use field-ionization of rubidium vapor by a \(\sim 10^{12}\) W/cm\({}^{2}\) laser pulse to produce the plasma in a 10-m-long source (see Fig. 1(a)). In previous experiments, we demonstrated that the 6-cm-long, 400 GeV/c, 48 nC \(p^{+}\) bunch provided by the CERN Super Proton Synchrotron (SPS) self-modulates [8, 9] and that electrons can be injected and accelerated to energies >2 GeV [10]. However, laser ionization does not scale favourably to very long lengths (20-10\({}^{3}\) m), in particular because of energy depletion of the pulse by the ionization process. We therefore explore other plasma sources (e.g., discharge [11] and helicon [12]) that produce a preformed plasma.
Self-modulation is by nature an instability and must be seeded to reliably exploit it for reproducible particle acceleration. Moreover, for the future design of the experiment (see Fig. 1(b)), we foresee using a first plasma (the "modulator") dedicated to the seeded SM of the \(p^{+}\) bunch, and a second one (the "accelerator") dedicated to the acceleration of the witness bunch. The two plasmas are separated by a \(\sim\) 1-m-long gap. This scheme is chosen to inject the 150 MeV witness \(e^{-}\) bunch on axis [13], after SM has developed and saturated in the first plasma to produce a microbunch train. In addition, the first plasma will be equipped with a density step [14] to avoid the decay of the wakefield amplitude due to its dephasing with respect to the microbunch train [6, 15].
In previous experiments we explored two seeding methods: relativistic ionization front (RIF) [16] and \(e^{-}\) bunch seeding [17]. In the following, we summarize the experimental results and discuss the advantages and disadvantages of each method, and their suitability for the future design of the experiment and for a high-energy accelerator for particle physics [18].
## 2 Relativistic Ionization Front Seeding
Laser ionization makes RIF seeding possible: when the ionizing laser pulse copropagates within the \(p^{+}\) bunch, the fast onset of the bunch-plasma interaction drives the initial seed wakefields, from which SM grows exponentially. Figure 2 shows two sets of consecutive time-resolved images of the \(p^{+}\) bunch after propagation in plasma. In Figure 2(a), SM is not seeded: the timing of the microbunch train (traveling from left to right) is not reproducible from event to event. In Figure 2(b), SM is seeded: each microbunch appears at the same time along the bunch for all events. Averaging the images of each set, one obtains a blurred image in the case of the instability, and a high-contrast image in the case of seeding (e.g., Fig. 3(b)), depending on whether the underlying distribution is reproducible throughout the events or not. The transition from the instability to the seeded regime occurs when the bunch density at the ionization front location is high enough to drive seed wakefields with amplitude \(>4\,\)MV/m [16]. Thus, in the seeded case, there is a part of the bunch ahead of the RIF that keeps propagating as in vacuum.
This method has the advantage of the inherited alignment of the \(p^{+}\) bunch with the plasma column, which makes it simpler in terms of operation. The seed wakefields act symmetrically on the bunch, and therefore the undesired asymmetric counterpart of SM, the hosing instability [19], is suppressed [20].
However, in the future setup of the experiment (Fig. 1(b)), the front of the bunch, left unmodulated after the first plasma, enters the second (preformed) plasma diverging and with large transverse size. If the front of the bunch self-modulates in the second plasma, the wakefields it drives may disrupt the structure of the (seeded) self-modulated back and spoil the acceleration process.
Recent experimental results [21] indicate that, over \(10\,\)m of propagation in plasma, a bunch with transverse size and divergence comparable to that of the bunch front entering the second plasma does not self-modulate ahead of the transition point along the bunch. Therefore, RIF seeding remains a viable option for AWAKE Run 2c and future accelerator schemes.
## 3 Electron bunch seeding
The initial seed wakefields can also be generated by a charged particle bunch propagating ahead of the \(p^{+}\) bunch. In AWAKE, we used the short \(19\,\)MeV \(e^{-}\) bunch previously used for acceleration experiments to demonstrate this experimentally. In this case, the ionizing laser pulse travels ahead of both electron and proton bunches and preforms the plasma. Figure 3 shows averaged time-resolved images of the front of the \(p^{+}\) bunch propagating in vacuum (a), and in plasma with the seed \(e^{-}\) bunch (b,c). The high contrast of averaged time-resolved images with plasma confirms that the timing of the microbunch train (i.e. of the wakefields) is reproducible from event to event with electron bunch seeding. We also demonstrated that the timing of the modulation is tied to the relative timing between the seed \(e^{-}\) and \(p^{+}\) bunch by delaying the seed by \(6.7\,\)ps (close to half a plasma electron period, for the density used). This results in a shift in time of the microbunch train by the same amount (Fig. 3(c)), that is clearly visible from the on-axis profiles of the two images (Fig. 3(d), blue line: profile of (b), red line: profile of (c)).
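Since the plasma density used for this particular data set is not quoted here, it may help to recall how the plasma electron period scales with density; the short script below (Python, with standard physical constants) evaluates \(T_{pe}=2\pi/\omega_{pe}\) with \(\omega_{pe}=\sqrt{n_{e}e^{2}/(\varepsilon_{0}m_{e})}\). For densities of order \(10^{14}\,\)cm\({}^{-3}\), half a period is a few picoseconds, consistent with the 6.7 ps shift quoted above; the specific densities in the loop are illustrative only.

```python
import numpy as np

E = 1.602176634e-19      # elementary charge [C]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
ME = 9.1093837015e-31    # electron mass [kg]

def plasma_period_ps(n_e_cm3):
    """Cold-plasma electron period T_pe = 2*pi/omega_pe for density n_e in cm^-3."""
    omega_pe = np.sqrt(n_e_cm3 * 1e6 * E**2 / (EPS0 * ME))   # rad/s
    return 2 * np.pi / omega_pe * 1e12                        # ps

for n in (0.5e14, 0.94e14, 2e14):
    print(f"n_pe = {n:.2e} cm^-3: T_pe = {plasma_period_ps(n):.1f} ps, "
          f"T_pe/2 = {plasma_period_ps(n) / 2:.1f} ps")
```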
This method has the advantage of applying the seed wakefields on the entire \(p^{+}\) bunch, which would enter the second plasma fully self-modulated with reproducible timing from event to event. In fact, Figs. 3(b,c) show that the bunch self-modulates from the very front of the bunch. Moreover, the amplitude of the seed wakefields and the growth rate of the instability can be varied independently by varying the parameters of the \(e^{-}\) or \(p^{+}\) bunch, respectively, allowing for additional control on the development of the process [17].
The disadvantage of this method is that it requires aligning the \(e^{-}\) bunch trajectory, both in position and angle, with the plasma column and with the \(p^{+}\) bunch trajectory. Recent experimental results [22] show that, when the transverse seed wakefields act asymmetrically on the \(p^{+}\) bunch, the hosing instability, which is detrimental for the acceleration process, arises. Electron bunch seeding is also less practical than RIF seeding because it requires an additional electron source and beamline to provide the seed bunch.

Figure 1: Experimental setup of AWAKE Run 1, 2a-b (a) and Run 2c (b).

Figure 2: Ten consecutive time-resolved images of the self-modulated \(p^{+}\) bunch in case of RIF propagating \(600\,\)ps (a) and \(350\,\)ps (b) ahead of the bunch center. The bunch propagates from left to right. \(n_{pe}=0.94\times 10^{14}\,\)cm\({}^{-3}\). Figure reproduced from [16].
## 4 AWAKE Run 2 Experimental Program
The experimental program of AWAKE Run 2 is divided into phases, each dedicated to particular physics milestones. The final goal is to generate high-quality and high-energy \(e^{-}\) bunches suitable for particle physics experiments. During Run 2a (2021-2022), we successfully demonstrated the \(e^{-}\) bunch seeding [17], studied the development of the hosing instability [22], and further tested the RIF seeding scheme [21]. Run 2b (2023-2024) will investigate the evolution of SM in the presence of a step in the plasma electron density. Numerical simulations [23] show that a sudden increase in plasma electron density limits the dephasing of the wakefields with respect to the microbunch train and hence the decay of the wakefield amplitude. This is important to maintain a high accelerating gradient over long distances in plasma. The evolution of the wakefields along the plasma will be studied by measuring the plasma recombination light [24, 25] and the energy spectrum of accelerated electrons. Both seeding methods will be further explored in the presence of the density step.
Run 2c (to start in 2028) requires major modifications of the existing facility, with the installation of a second plasma source, as well as an electron source and beamline providing the 150 MeV witness \(e^{-}\) bunch. The experiment will focus on acceleration of \(e^{-}\) bunches in the second plasma with control of the bunch quality. Both seeding methods will be tested again, in order to determine the final design of a plasma wakefield accelerator for applications, based on the self-modulation scheme. The following Run 2d will employ scalable plasma sources to reach even higher energy gains, on the path towards particle physics applications.
|
2309.06395 | Human-Centered Autonomy for UAS Target Search | Current methods of deploying robots that operate in dynamic, uncertain
environments, such as Uncrewed Aerial Systems in search \& rescue missions,
require nearly continuous human supervision for vehicle guidance and operation.
These methods do not consider high-level mission context resulting in
cumbersome manual operation or inefficient exhaustive search patterns. We
present a human-centered autonomous framework that infers geospatial mission
context through dynamic feature sets, which then guides a probabilistic target
search planner. Operators provide a set of diverse inputs, including priority
definition, spatial semantic information about ad-hoc geographical areas, and
reference waypoints, which are probabilistically fused with geographical
database information and condensed into a geospatial distribution representing
an operator's preferences over an area. An online, POMDP-based planner,
optimized for target searching, is augmented with this reward map to generate
an operator-constrained policy. Our results, simulated based on input from five
professional rescuers, display effective task mental model alignment, 18\% more
victim finds, and 15 times more efficient guidance plans then current
operational methods. | Hunter M. Ray, Zakariya Laouar, Zachary Sunberg, Nisar Ahmed | 2023-09-12T16:59:08Z | http://arxiv.org/abs/2309.06395v3 | # Human-Centered Autonomy for UAS Target Search
###### Abstract
Current methods of deploying robots that operate in dynamic, uncertain environments, such as Uncrewed Aerial Systems in search & rescue missions, require nearly continuous human supervision for vehicle guidance and operation. These methods do not consider high-level mission context resulting in cumbersome manual operation or inefficient exhaustive search patterns. We present a human-centered autonomous framework that infers geospatial mission context through dynamic feature sets, which then guides a probabilistic target search planner. Operators provide a set of diverse inputs, including priority definition, spatial semantic information about ad-hoc geographical areas, and reference waypoints, which are probabilistically fused with geographical database information and condensed into a geospatial distribution representing an operator's preferences over an area. An online, POMDP-based planner, optimized for target searching, is augmented with this reward map to generate an operator-constrained policy. Our results, simulated based on input from five professional rescuers, display effective task mental model alignment, 18% more victim finds, and 15 times more efficient guidance plans then current operational methods.
## I Introduction
Uncrewed Aerial Systems (UAS) are revolutionizing public safety as they provide new perspectives and rapid access to first responders in a variety of tasks including search & rescue (SAR), firefighting, water rescue, and law enforcement [1]. Operation of UAS often requires up to three team members to respectively pilot the vehicle, act as a technical specialist or observer, and coordinate platform integration across a mission. UAS autonomy can enable single-pilot operations with greater operator mobility and situational awareness by substantially reducing high task loads present in these settings. However, the unpredictable nature of public safety incidents puts challenging requirements on an autonomous system's adaptability to changing circumstances.
Autonomous systems must seamlessly fit into teams to be effective, taking unquestioning direction from an operator and rapidly processing novel information, such as new search requirements, to dynamically (re)program their goals. For a UAS involved in SAR, this direction involves the operator defining their preferences about where and how to search specific areas based on their experience and knowledge, which characterizes a portion of their mental model. However, the unstructured nature of these preferences must be condensed into an accessible form for computationally constrained planning and informed execution.
Building upon our experience in rescue operations, we develop a human-centered autonomous framework, shown in Figure 1, which enables dynamic planning for UAS SAR missions. Current methods of automated flight use an operator's inputs, \(\mathcal{I}\), as a deterministic policy, \(\pi\), to follow a set of waypoints or execute an exhaustive search over a polygonal area [1]. However, these \(\mathcal{I}\) also contain valuable context, which can infer higher level mission goals, and thereby define extensive autonomous behavior. With this new information, the autonomous agent must effectively balance the operator's direction while fulfilling the primary task of searching for the victim.
We build upon current modes of input through an augmented set of \(\mathcal{I}\) - priority definition, spatial semantic observations over ad-hoc geographical areas, and example waypoints - which informs an operator's geospatial preferences over a mission area. These inputs are processed using a probabilistic model and fused via Bayesian inference to estimate a geospatial distribution reflecting operator preferences. Our new, model-driven, approach is necessary as no two incidents are exactly the same and attempting to systematically learn overall preferences based on prior events is challenging due to multiple communication modalities, key nuances across incidents, and limited data availability. Instead, we dynamically fuse and then embed operator preferences into a reward model for a Partially Observable Markov Decision Process (POMDP) to generate an online policy.
In summary, our key technical contributions include: (1) a new interaction interface for operators to provide diverse modes of input, which inform a nuanced and complex set of preferences on autonomous behavior; (2) a probabilistic model that rapidly aligns task mental models for geospatial feature prioritization between autonomy and expert using
Fig. 1: Search & rescue incidents require operators to fuse multiple sources of information to direct an aircraft. Our human centred architecture fuses a variety of unstructured operator inputs to inform an optimal planning process that generates a policy for autonomous execution.
sparse training data and operator-augmented feature vectors; (3) a general POMDP formulation for collaborative target search using adaptive reward; and (4) proof-of-concept user study with five trained rescuers that leverages a realistic search scenario and export informed truth model to evaluate our approach in simulations against an operational baseline.
## II Problem Statement and Background
We motivate our work with a hypothetical incident, derived from real-life scenarios, without loss of generality. In this scenario, a middle-aged man, a frequent fly-fisherman, was reported missing by his wife after he failed to return from a daytime trip to the Walker Ranch Open Space in Boulder County, CO. The victim's vehicle was found at the southern trailhead. The first arriving unit includes a UAS team, which is tasked with performing a preliminary search of the area.
### _Problem Statement_
A search operation takes place over a larger state space, \(S\subseteq\mathbb{R}^{f}\), representing \(f\) mission influencing factors, including static (geography) or dynamic (team locations) variables. The operator acts as a means of fusing the contextual and environmental aspects of an evolving operation. They provide a set of inputs, \(\mathcal{I}\), that correspond to their preferences over locations to search. \(\mathcal{I}\) must be interpreted along with aircraft performance constraints, such as limited battery life, to generate a guidance policy, \(\pi\), that effectively searches an area. The aircraft can perform its internal navigation and control to follow \(\pi\) while looking for infrared hotpots or visual clues as to a static victim's location. This detection can rely on computer vision or the operator to supervise video feeds, however this is outside this paper's scope.
### _Related Work_
Given the information in the motivating mission, an officer directing operations gains awareness of the victim's physical condition, potential equipment, last-know-point, mindset, and behavior. These factors of their evolving mental model are then grounded in the mission area, which informs the resulting search methods and tasking [2]. If the UAS management system can minimize the need for input and monitoring, the operator can move across the environment and maintain situational awareness, resulting in a faster rescue time. However, this requires interpreting and then incorporating the mental model into the overall autonomy architecture.
While robots interpret mental models in multiple ways [3] and teams that use them improve their performance [4], they often rely on static, pre-defined architectures [5],[6]. Autonomous behavior has also been effectively guided by operators using a variety of methods, including implementing a POMDP to couple the operator's inputs with the search task planning [7], leveraging set plays [8], or using Bayesian priors on geological knowledge to define operator preferences [9]. While these methods effectively leverage the operator's knowledge to guide behavior, they require limited, specialized, or mission specific inputs. We aim to decouple the fusion of the operator's inputs with the vehicle planning and execution to enable a more flexible, 'plug-and-play', approach that accommodates diverse underlying autonomous architectures, such as interchangeable input types, fusion methods, or planners.
An operator's preferences have also been learned using Inverse Reinforcement Learning (IRL). However, whereas IRL relies on observed expected behaviour, such as near optimal reference trajectories, to directly infer reward functions [10], we infer higher-level preference distributions by fusing multi-modal inputs and readily available geographic database information. This distribution is factored into a reward function, complementing a planner's baseline performance with end user expertise, addressing the challenges found with rewards designed by engineers [11],[12].
Leveraging autonomy to aid rescuers in efficiently searching an area provides obvious benefits. Recent work in automated UAS flight planning [13] provides effective search patterns (e.g. lawnmower) often used by U.S. maritime SAR personnel [14]. While this can be improved with more explicit modeling of target location uncertainty [15], obtaining an accurate and informed prior is challenging. The fusion of our inputs into a preference distribution could be applied as a target location estimate, however, we aim to preserve mission level uncertainty of target location while accounting for dynamic operator preferences.
POMDPs are an effective tool for modeling these competing objectives (finding the victim and satisfying their operator) in their ability to reason over aleatoric and epistemic uncertainty [16]. POMDPs have already been applied to SAR, but these explicitly model domain information such as cell signals and crowd reports [17]. We don't assume this information is available and instead use a POMDP to model expert-provided domain knowledge as dynamic reward, allowing observations to inform and shape the target belief over time. This allows us to flexibly balance the operator's preferences against the agent's primary goal of finding the victim.
Methods for searching cluttered environments, such as kitchens, have applied POMDP planners. These complex spaces have been navigated by condensing them into multiple resolutions [18], incorporating natural language from the user [19], or sequentially decluttering the space [20]. While these studies focus on close quarters, we investigate planning over multiple square kilometers and flexibly integrating expressive human input from semantic language and sketches.
Compared to previous work, we rely on the operator to interpret the vast set of unstructured data involved in a SAR mission, which is conveyed through intuitive methods and fused with geographical data. During execution, our approach allows for _human-on-the-loop_ collaboration, where the human can provide inputs at any time during the search, but the system can still function competently without input. This paradigm enables the autonomous system to effectively balance the overarching goal of finding the victim while being informed and guided by dynamic operator preferences.
## III Methodology
We implement a two-step approach towards interpreting user inputs and generating an aircraft policy, as shown in Figure 1. An operator's inputs are captured and fused via a probabilistic graphical model to define a geospatial preference distribution. This distribution augments a POMDP reward function, generating a dynamic policy to inform aircraft guidance.
### _Input Fusion_
We define a grid \(\mathcal{G}\subset S\), which discretizes the operational region within \(S\subseteq\mathbb{R}^{2}\) into an \(n\times m\) grid, \(\mathcal{G}\), with areas \(g\in\mathcal{G}\) of uniform resolution. Each \(g\) contains a set of defining features compactly represented as vectors. This includes a static set of geographic features, \(\Phi\in\mathbb{R}^{n\bullet}\), which describe how \(g\) is related to the built and natural environment including trails or buildings. Each \(g\) is augmented with a set of user-defined features, \(\Psi\in\mathbb{R}^{n_{\mathcal{O}}}\), which describe \(g\)'s spatial context within a particular mission, such as its proximity to a staging area or location within a specific map sector.
We assume that the operator's preferences can be modeled as a set of visitation preferences over \(\Phi\) and \(\Psi\), which are represented as an unnormalized set of associated feature weights for geographic, \(\theta\in\mathbb{R}^{n\bullet}\), and semantic, \(\Delta\in\mathbb{R}^{n_{\mathcal{O}}}\), components. For a given \(g\) with operator reward, \(r\), the associated set of features, \(\Phi_{g}\) and \(\Psi_{g}\), are modeled to be linearly related to their respective weights, \(\theta\) and \(\Delta\),
\[r_{g}=\theta^{T}\Phi_{g}+\Delta^{T}\Psi_{g} \tag{1}\]
This linear relation enables the different features to be appropriately accounted for while being flexible to changing \(n_{\mathcal{O}}\). To account for uncertainty from imperfect mapping, limited or mistaken operator input and other unknown error sources, we seek to infer a probability distribution over \(r_{g}\), \(p(r_{g}|\mathcal{I})\). This inference is accomplished through the graphical model shown in Figure 2, which the operator engages with through a set of three possible inputs.
We define the set of \(\mathcal{I}\) to include \(n_{\mathcal{P}}\) priorities \(\mathcal{P}\), \(n_{\mathcal{O}}\) semantic geospatial observations \(\mathcal{O}\), and \(n_{\mathcal{S}}\) waypoints, \(\mathcal{S}\). All three types of \(\mathcal{I}\) can be provided at the start and, while they are amenable to modification during the mission, this is not explicitly shown in our experiments. Each \(\mathcal{P}\) acts as a positive, equally weighted prior on the respective geographic or semantic feature's weight. An input of \(\mathcal{O}\) is built with a combination of a 2D convex sketch, \(\mathcal{K}\), and a geospatial semantic reference label, where each additional \(\mathcal{O}\) increases the dimensions of \(\Delta\) and \(\Psi\). Finally, the set of \(\mathcal{S}\) includes locations for the aircraft to visit \((S=1)\) and those that should be avoided \((S=0)\).
Within the context of our reference mission, these inputs may take the following form: \(\mathcal{P}=[\mathrm{Trails},\,\mathrm{Section\,A}]\), \(\mathcal{O}=[\text{``Search north of the Bridge'', ``Search inside section A''}]\), \(\mathcal{K}=[\text{``Bridge'', ``Section A''}]\), \(\mathcal{S}_{\mathrm{visit},\,(S=1)}=[\text{Set of waypoints along a river}]\), \(\mathcal{S}_{\mathrm{avoid},\,(S=0)}=[\text{Set of waypoints near structures}]\).
While the set of \(\mathcal{P}\) and \(\mathcal{O}\) help define the estimates and add features to the environment, we need the observable set of \(\mathcal{S}\) to tie the known feature components \(\Phi\) and \(\Psi\) to their respective unknown weightings \(\theta\) and \(\Delta\). We assume that each \(g\) must reach a specific threshold of an operator's optimal positive or negative preference for it to be provided as a reference. We therefore model \(p(S_{g}|\theta,\Delta,\Phi_{g},\Psi_{g})\) as a logistic function in equations 2 and 3 with \(r_{g}\) defined as in equation 1,
\[p(S_{g}=1|\theta,\Delta,\Phi_{g},\Psi_{g})=\frac{\exp(r_{g})}{1+\exp(r_{g})}, \tag{2}\]
\[p(S_{g}=0|\theta,\Delta,\Phi_{g},\Psi_{g})=\frac{1}{1+\exp(r_{g})}. \tag{3}\]
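A minimal sketch of this waypoint model, assuming the feature vectors and weights are available as NumPy arrays (the function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def waypoint_likelihood(theta, delta, phi_g, psi_g, visit=True):
    """p(S_g | theta, Delta, Phi_g, Psi_g) from Eqs. (1)-(3).

    theta, delta : geographic and semantic feature weights
    phi_g, psi_g : geographic and semantic feature vectors of cell g
    visit        : True for a 'visit' waypoint (S=1), False for 'avoid' (S=0)
    """
    r_g = theta @ phi_g + delta @ psi_g      # Eq. (1)
    p_visit = 1.0 / (1.0 + np.exp(-r_g))     # exp(r)/(1 + exp(r)), written stably
    return p_visit if visit else 1.0 - p_visit
```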
To define \(\Phi\), geographic features can be extracted using publicly available government and open source datasets. A geographical information system software, such as ArcGIS, integrates a selected set of information, which is chosen to contain roads, trails, structures, stream lines, water bodies, and tree canopy. We aim to create a set of \(\Phi_{g}\) that accurately maps an operator's perspective of their respective value, which includes each \(g\)'s proximity to a relevant feature. Therefore for each \(g\) with a given resolution, we determine the distance, \(d\), to the closest respective feature, \(f\), to inform an adjacency metric, calculated as \(\Phi_{f,g}=\exp(\frac{-d}{\text{resolution}})\). All of the geoprocessing can be performed offline and saved into a resolution specific database.
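A sketch of this offline feature construction, assuming the mapped features are available as sampled point sets rather than through a GIS package (the use of `scipy.spatial.cKDTree` and the names below are assumptions for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def adjacency_features(cell_centers, feature_points, resolution):
    """Distance-decayed adjacency Phi_{f,g} = exp(-d / resolution) for one feature layer.

    cell_centers   : (n*m, 2) coordinates of grid-cell centers
    feature_points : (N_f, 2) points sampled along the feature (trail, stream, road, ...)
    resolution     : grid-cell size, in the same units as the coordinates
    """
    d, _ = cKDTree(feature_points).query(cell_centers)   # distance to nearest feature point
    return np.exp(-d / resolution)
```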
To define \(\Psi\), we leverage prior work [21, 22, 23, 24] that probabilistically relates a geospatial semantic label to a given sketch, \(\mathcal{K}_{k}\). To reduce computational loads, we sequentially reduce the number of vertices to five points that maximize the overall area to form the resulting convex polytope. The available semantic labels include a comprehensive set of canonical bearing labels {"N","NE","E","SE","S","SW","W","NW"} and discrete ranges {"Inside", "Near", "Outside"}. Given a particular grid area \(g\), with a center location \(x_{g}\in\mathbb{R}^{2}\), we can approximate the likelihood of the semantic label using the softmax function, where each class contains a set of parameters \(w\in\mathbb{R}^{2}\) and \(b\in\mathbb{R}^{1}\), which constrain their boundaries along the sketch border [23]. Once defined, a Monte Carlo approximation is used to correlate the softmax classes with specific semantic labels [24], resulting in \(p(\mathrm{label}|\mathrm{class})\). We therefore define each \(\Psi_{i,g}\) as \(p(\mathrm{label}|g)\), the probability of the given grid point being represented by a certain label,
\[\Psi_{i,g}=p(\mathrm{label}|g)=\sum_{class}p(\mathrm{label}|\mathrm{class})p( \mathrm{class}|g) \tag{4}\]
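A sketch of Eq. (4), assuming the sketch-conditioned softmax parameters and the Monte Carlo label table are given (names are illustrative):

```python
import numpy as np

def semantic_feature(x_g, W, b, p_label_given_class):
    """Psi_{i,g} = p(label | g) = sum_c p(label | c) p(c | g) for one observation.

    x_g                 : (2,) grid-cell center location
    W, b                : softmax parameters, shapes (C, 2) and (C,), one row per class
    p_label_given_class : (C,) Monte Carlo estimate of p(label | class)
    """
    logits = W @ x_g + b
    logits -= logits.max()                           # numerical stability
    p_class = np.exp(logits) / np.exp(logits).sum()  # p(class | g)
    return float(p_label_given_class @ p_class)
```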
Fig. 2: Graphical model of the presented algorithm with observed variables (white) and unobserved variables (grey).
Having defined the specific components of the graphical model, we must now perform inference over \(p(\theta,\Delta|\Phi,\Psi,\mathcal{P},\mathcal{S})\) to find the resulting reward. The weights, \(\theta\) and \(\Delta\), are each represented as multivariate Gaussians with respective means \(\mu_{\theta},\mu_{\Delta}\) and covariances \(\Sigma_{\theta},\Sigma_{\Delta}\), allowing us to quickly approximate the distribution through the Laplace approximation [25]. We take the resulting expected value of \(\theta\) and \(\Delta\) to estimate the posterior reward:
\[\hat{r}_{g} =\mu_{\theta}^{T}\Phi_{g}+\mu_{\Delta}^{T}\Psi_{g} \tag{5}\] \[\mathrm{var}(\hat{r}_{g}) =\Phi_{g}^{T}\Sigma_{\theta}\Phi_{g}+\Psi_{g}^{T}\Sigma_{\Delta} \Psi_{g}. \tag{6}\]
Additional derivations and details on model inference are discussed in the extended version of our paper [26].
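As a sketch of the inference step, the code below implements a generic Laplace approximation for Bayesian logistic regression: Newton iterations locate the posterior mode, the negative inverse Hessian provides the covariance, and Eqs. (5)-(6) then give the per-cell reward estimate. Treating \(\theta\) and \(\Delta\) as one stacked weight vector and the specific prior construction are simplifications relative to the model above.

```python
import numpy as np

def laplace_logistic(X, y, mu0, Sigma0, iters=25):
    """Laplace approximation of p(w | X, y) for logistic observations with a Gaussian prior.

    X           : (N, D) stacked features [Phi_g, Psi_g] of the waypoint cells
    y           : (N,) labels, 1 = visit, 0 = avoid
    mu0, Sigma0 : prior mean (set by the priorities P) and covariance
    Returns the posterior mean and covariance of the weights w = [theta, Delta].
    """
    P0 = np.linalg.inv(Sigma0)
    w = mu0.copy()
    for _ in range(iters):                        # Newton ascent to the MAP point
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (y - p) - P0 @ (w - mu0)
        H = -(X.T * (p * (1.0 - p))) @ X - P0     # Hessian of the log posterior
        w = w - np.linalg.solve(H, grad)
    return w, np.linalg.inv(-H)                   # posterior mean and covariance

def cell_reward(mu, Sigma, x_g):
    """Posterior reward estimate and variance for cell g, Eqs. (5)-(6)."""
    return mu @ x_g, x_g @ Sigma @ x_g
```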
### _Adaptive-Reward Target Search POMDP_
The objective for the planning module is to search for a static target with an unknown location while accounting for operator preferences. We do this by formulating and then solving a general target search POMDP that leverages the operator's geospatial preference distribution to find the target within the previously defined \(n\times m\) grid, \(\mathcal{G}\). We introduce our POMDP formulation below:
A _Partially Observable Markov Decision Process_ (POMDP) is a tuple \(\mathcal{P}=(\mathcal{S},\mathcal{A},\mathcal{O},\mathcal{T},\mathcal{R}, \mathcal{Z},\gamma)\), where: \(\mathcal{S},\mathcal{A}\), and \(\mathcal{O}\) are finite sets of states, actions and observations, respectively, \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the transition probability function, \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the immediate reward function, \(\mathcal{Z}:\mathcal{S}\times\mathcal{A}\times\mathcal{O}\rightarrow[0,1]\) is the probabilistic observation function, \(\gamma\in[0,1)\) is the discount factor. [16]
**State space**\(\mathcal{S}\): Four components: the 2D robot position, the 2D target position, the battery life of the search agent, and an \(|n\times m|\) bitvector encoding whether the robot has visited each grid cell.
**Observations**\(\mathcal{O}\): The target location uncertainty is represented as a one-hot vector encoding of target occupancy in the \(\mathcal{M}\) cells adjacent to the true target cell. Additionally, the agent may receive a _target-not-found_ observation represented as a vector of zeros, leading to a cardinality of \(|\mathcal{M}+1|\).
The observation function, \(\mathcal{Z}\), models the agent's ability to accurately observe the target, informed by factors such as sensor capabilities or terrain complexity. This nuance can be expressed by choosing a Manhattan distance, \(D_{obs}\), to express how close the agent must be to receive a positive target observation. The agent receives a noisy observation of the target at each timestep. If within \(D_{obs}\), the agent can receive one of the following three observations given the true location of the target. The agent can receive a true positive observation with probability \(Z_{true}\). The agent can also receive a _proximal observation_ which returns a target observation near the true target location. Each cell defined as proximal to the true target cell will receive this observation with probability \(Z_{prox}\). Lastly, the agent can receive a false negative observation with probability \(Z_{neg}\) as shown in Figure 3. Otherwise, the agent receives the _target-not-found_ observation with 100% probability.
In the example shown in Figure 3, the agent receives a positive observation only if adjacent to the true target. Thus, \(\mathcal{O}\) is presented with \(\mathcal{M}=5\) and \(\mathcal{Z}\) being distributed over the true target cell and the 2 adjacent cells.
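A sketch of this observation function as a sampler is given below; the tie-breaking for "proximal" cells and the parameter names are assumptions, and with the default values the false-negative branch has zero probability.

```python
import numpy as np

def sample_observation(agent, target, grid_shape, D_obs=1, Z_true=0.8, Z_prox=0.1,
                       rng=np.random.default_rng()):
    """Return the observed target cell, or None for the all-zeros target-not-found observation."""
    if abs(agent[0] - target[0]) + abs(agent[1] - target[1]) > D_obs:
        return None                                    # outside sensing range
    u = rng.random()
    if u < Z_true:
        return target                                  # true positive
    if u < Z_true + 2 * Z_prox:                        # proximal observation near the true cell
        dx, dy = int(np.sign(agent[0] - target[0])), int(np.sign(agent[1] - target[1]))
        cand = [(target[0] + dx, target[1]), (target[0], target[1] + dy)]
        r, c = cand[rng.integers(len(cand))]
        return (int(np.clip(r, 0, grid_shape[0] - 1)), int(np.clip(c, 0, grid_shape[1] - 1)))
    return None                                        # false negative
```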
**Transitions**\(\mathcal{T}\): The agent can deterministically move in four cardinal directions: _up, down, left, right_, as well as _stay_ in the grid cell. As the target is assumed to be static, its position remains the same. The state's battery component decreases by a fixed \(B_{cost}\) amount each timestep. When the agent transitions from state \(s\) to \(s^{\prime}\), the occupancy history is updated, ensuring a record of the cells \(g\) that the agent has visited.
**Rewards**\(\mathcal{R}(s,a,s^{\prime})=R_{time}+R_{target}+R_{op}\)
* \(R_{time}\): Time penalty for each step taken.
* \(R_{target}\): Reward collected for finding the target, i.e. if the agent and target positions in \(s^{\prime}\) are equivalent.
* \(R_{op}\): Each grid cell \(g\) has a corresponding reward \(\hat{r}_{g}\) obtained from the operator's geospatial preference distribution. To incorporate operator preferences, \(R_{op}\) is only collected if the robot location in \(s^{\prime}\) has yet to be visited: \(R_{op}=\hat{r}_{g}\times\mathds{1}[s^{\prime}\text{ not previously visited}]\).
To incentivize the agent to travel to the regions that have a concentrated belief over the target location, the reward for finding the target needs to be higher than the highest value in the operator's geospatial preference distribution. Otherwise, the agent is more likely to seek out the high-preference regions while ignoring valuable information about the target's potential location.
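A sketch of this reward function, with a minimal state container; the field names and the indicator convention (operator reward collected only on the first visit) are assumptions consistent with the description above.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class State:
    robot: tuple        # (row, col) agent cell
    target: tuple       # (row, col) target cell
    battery: int
    visited: frozenset = field(default_factory=frozenset)

def reward(s, s_next, r_hat, R_time=-1.0, R_target=1000.0):
    """R(s, a, s') = R_time + R_target + R_op.

    r_hat : mapping from grid cell (row, col) to operator preference reward r_hat_g
    """
    r = R_time                                    # per-step time penalty
    if s_next.robot == s_next.target:
        r += R_target                             # target found
    if s_next.robot not in s.visited:
        r += r_hat[s_next.robot]                  # operator preference, first visit only
    return r
```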
**Termination:** A _terminal_ or _absorbing_ state is a state from which no further transitions to other states are possible. Conditions for termination are: (i) the current robot position equals the target position, and (ii) the current battery equals the battery required to return to the initial state, with the initial battery life \(B_{max}\) tuned to represent operational constraints. Modeling the battery life as a terminal state ensures that the aircraft can return to base before the battery depletes and, in doing so, incentivizes exploration closer to base.
### _Planning_
Our POMDP formulation results in a very large state space (\(|\mathcal{S}|=B_{max}\times(n\times m)^{2}\times 2^{(n\times m)}\)) to plan over. As SAR requires fast and efficient planning, offline POMDP solvers are too slow for practical application. Our method is therefore amenable to any online Monte Carlo based solver, though we choose to implement it with POMCP [27]. Capturing the belief over target location using a particle filter additionally improves the formulation's robustness to large problem dimensionality.
Fig. 3: POMDP observation model.
**Rollout Policy:** A simple and effective rollout policy is a practical method for evaluating nodes of an online POMDP solver. Our rollout policy relies on an MDP abstraction of the larger POMDP by removing observations and only considering the agent's position, battery, and a randomly sampled target position. We solve the abstraction using discrete value iteration upon problem initialization. During the execution of the POMDP, this policy is evaluated for candidate methods of exploring the state space and, once completed, collects the full reward including the operator's preference distribution. While not accounting for agent's current belief or operator preferences, this approach retains acceptable grid exploration with low computational cost.
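A sketch of such a rollout policy: value iteration over agent positions for one sampled target, ignoring battery, observations, and operator reward, followed by a greedy one-step lookahead. Grid-boundary clipping and the parameter values are assumptions.

```python
import numpy as np

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]   # up, down, left, right, stay

def rollout_values(grid_shape, target, R_time=-1.0, R_target=1000.0, gamma=0.95, iters=200):
    """Value iteration on the simplified MDP abstraction used to guide rollouts."""
    n, m = grid_shape
    V = np.zeros((n, m))
    for _ in range(iters):
        V_new = np.empty_like(V)
        for i in range(n):
            for j in range(m):
                best = -np.inf
                for di, dj in MOVES:
                    ni, nj = min(max(i + di, 0), n - 1), min(max(j + dj, 0), m - 1)
                    r = R_time + (R_target if (ni, nj) == target else 0.0)
                    best = max(best, r + gamma * V[ni, nj])
                V_new[i, j] = best
        V = V_new
    return V

def rollout_action(V, pos, target, R_time=-1.0, R_target=1000.0, gamma=0.95):
    """Greedy action with respect to the precomputed value table."""
    n, m = V.shape
    scores = []
    for di, dj in MOVES:
        ni, nj = min(max(pos[0] + di, 0), n - 1), min(max(pos[1] + dj, 0), m - 1)
        r = R_time + (R_target if (ni, nj) == target else 0.0)
        scores.append(r + gamma * V[ni, nj])
    return MOVES[int(np.argmax(scores))]
```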
## IV User Evaluation Results and Discussion
In this section, we validate our approach against an operational baseline in a realistic simulation environment. We show that our approach can reliably interpret and plan over an operator's inputs to more efficiently find a missing person than current operational standards for autonomous flight.
To that end, we collected inputs from five first responders from the Boulder Emergency Squad (BES), a volunteer technical rescue team in Boulder, CO that has been using UAS for over eight years in SAR, firefighting, and law enforcement operations [1]. Subjects were ages 24-61 and were either UAS pilots or had supplementary accreditation in search management, reflecting subject matter expertise. Each rescuer provided a set of inputs, \(\mathcal{I}\), based on the Lost Fisherman scenario discussed previously. Their inputs informed randomized simulations with multiple vehicle launch locations and target positions, whereby we compare our system's performance against an operational baseline.
### _Data Collection_
To collect subject data, we walked the subjects through a testing scenario detailing the inputs that they could provide. We guided subjects towards appropriate inputs when necessary, such as pointing operators toward the priority definition input when they wanted certain features to be focused on. We aimed to replicate an intuitive interaction method uninhibited by software interface limitations and therefore collected inputs on a paper map using pens and markers, which was then translated into our reward model and inference routine post-hoc. Figure 4 shows the progressive fusion of rescuer 3's inputs and the final fusion results for rescuers 1,2, 4 and 5. All rescuers provided unique inputs, although often promoting similar features, such as trail intersections and primary streams. In the lower part of Figure 4, we see how rescuer 3's inputs are progressively fused, with their preferences concentrating in the prioritized areas and where the trail travels close to streams.
### _Input Fusion Validation_
Before evaluating the overall framework, we ensure we can accurately fuse an operator's inputs, aligning our estimate with their true geographic and feature preferences. We accomplish this by computing an error comparing the uncertainty-weighted reward estimate of 21 distributed locations against an operator's quartile ranking. We additionally evaluate the algorithm's internal positive and negative feature prioritization to understand the underlying motivation for alignment. This evaluation adapts a search-engine ranking metric known as normalized Discounted Cumulative Gain (nDCG) [28], which normalizes the estimated ranking order by the subject-provided ranking to provide an optimal value of 1. This metric is repeatedly evaluated with a set of weights sampled from \(p(\theta,\Delta|\Phi,\Psi,\mathcal{P},\mathcal{S})\).
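For reference, the core of the ranking metric can be written as below; the repeated evaluation with weights sampled from the posterior is omitted, and the variable names are illustrative.

```python
import numpy as np

def ndcg(estimated_scores, subject_relevance):
    """Normalized discounted cumulative gain of the estimated ranking (optimal value 1).

    estimated_scores  : model reward estimates for the probe locations
    subject_relevance : operator-provided relevance (e.g. quartile rank) for the same locations
    """
    order = np.argsort(estimated_scores)[::-1]            # ranking induced by the estimates
    gains = np.asarray(subject_relevance, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = np.sum(gains[order] * discounts)
    idcg = np.sum(np.sort(gains)[::-1] * discounts)       # best achievable ordering
    return dcg / idcg
```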
Input validation results are shown in Table I, where each metric is compared to a random baseline, shown in parentheses, which is used because no comparable baseline exists. Results across all subjects show very low error values. Positive feature rankings mostly show good alignment, though the alignment rank suffered when limited inputs were provided or when subjects provided feature weightings that were close together, such as for Rescuer 4. Negative feature rankings proved to be more effective, thereby highlighting that the system can accurately discriminate features that should be avoided.

Fig. 4: Inputs from five rescuers overlay the resulting geospatial preference distribution. Lower row shows the progressive addition of Rescuer 3's inputs and resulting concentration of reward around trails and streams within prioritized areas.
### _Search and Rescue Simulation_
Based on the value maps generated from the rescuers' inputs, we simulate our planner and compare its performance to operational methods. The environment consists of a 44 \(\times\) 59 grid, with each grid cell spanning 100 \(\times\) 100 meters. We initialize simulations across four expert-informed start locations and sample target locations from a truth model, shown in our extended version [26], which was informed by BES Chief Andy Amalfitano, an expert in SAR.
**Baseline Implementation:** The baseline search algorithm, informed by current methods [1], receives the operator's sketch set \(\mathcal{K}\), positive ("Inside" and "Near") observations, \(\mathcal{O}\), and waypoints \(\mathcal{S}_{visit}\). The search agent starts at one of four locations and follows a shortest path algorithm to traverse all waypoints. Once complete, the agent travels to each sketched polygon and executes a lawnmower search pattern of the space. If the target is still not found, the agent proceeds to the exhaustive search stage and conducts a lawnmower search over the entire environment.
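A sketch of the coverage stage of this baseline, generating a boustrophedon ("lawnmower") ordering over the cells of a sketched polygon; how the polygon is rasterized into cells is assumed to be handled elsewhere.

```python
def lawnmower_waypoints(cells):
    """Order grid cells (row, col) row by row, alternating sweep direction."""
    rows = {}
    for r, c in cells:
        rows.setdefault(r, []).append(c)
    path = []
    for i, r in enumerate(sorted(rows)):
        cols = sorted(rows[r], reverse=(i % 2 == 1))   # flip direction every other row
        path.extend((r, c) for c in cols)
    return path
```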
**Our Approach:** We solve the POMDP described in Sec. III-B with the following details. We define a strict target observation condition \(D_{obs}=1\), allowing positive observations of the target only when within \(1\) grid cell. If within \(D_{obs}\), the UAS receives a true positive observation with \(Z_{true}=80\%\) and a proximal positive observation with \(Z_{prox}=10\%\) for the two adjacent cells in the direction of the target. We set our reward model's running cost \(R_{time}=-1\) and the reward for finding the target \(R_{target}=1000\), with \(\hat{r}_{g}\) taken from the geospatial preference distribution. Finally, we set the UAS initial battery \(B_{max}=1000\).
The resulting UAS behavior is represented in Figure 5. Our approach efficiently explores high value areas, autonomously searching in and outside of the defined sectors and waypoints, and eventually finds the target 2.6 times faster. We comprehensively evaluate our approach and the baseline using two metrics. The first, _Localization Ratio_, is the fraction of simulation runs in which the target is found within the given battery life. The second metric, _Reward/Timestep_, expresses the discounted reward accumulated per timestep. If the agent fails to find the target under the battery constraints, the simulation is terminated. We run 100 simulations across the four start locations and report the mean and the standard error of the mean in Table II.
Our method results in statistically significant \((p<0.05)\) improvement over the baseline approach for all rescuers' inputs and both metrics. Evaluating the localization ratio, our method finds the target 18 percentage points more often than the baseline within the available battery life; we assess statistical significance with a two-tailed Binomial test (\(N=400\)). More importantly, the system collects 15.4 times more reward per timestep, reflecting efficient path planning that follows the operator's geospatial context drawn from a limited set of inputs. Statistical significance for this metric is evaluated using a two-sample Z-test (\(N=400\)).
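As a worked example of these tests, the sketch below applies them to the Rescuer 1 numbers reported in Table II; the exact null hypotheses used in the evaluation are assumptions here.

```python
# Sketch of the two significance tests, applied to the Rescuer 1 entries of
# Table II; treating the baseline values as the null hypotheses is our assumption.
import math
from scipy import stats

# Two-tailed binomial test (N = 400): our 75.3% localization ratio versus the
# baseline's 54.8%, used here as the null proportion.
n = 400
k_ours = round(0.753 * n)
print(stats.binomtest(k_ours, n, p=0.548, alternative="two-sided").pvalue)

# Two-sample Z-test on Reward/Timestep, from the reported means and SEMs
# (4.56 +/- 1.21 for ours vs 0.490 +/- 0.043 for the baseline).
m1, se1 = 4.56, 1.21
m2, se2 = 0.490, 0.043
z = (m1 - m2) / math.sqrt(se1**2 + se2**2)
p = 2 * stats.norm.sf(abs(z))
print(z, p)  # z is about 3.4, p < 0.05, consistent with the reported significance
```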
## V Conclusion
We introduced a human-centered autonomy framework tailored to UAS deployed in dynamic and uncertain environments, particularly in SAR missions. Our approach hinges on inferring geospatial context and operator preferences using minimal operator inputs to guide a probabilistic target search policy. We take an operator's priority definition, spatial semantic observations over ad-hoc geographical areas, and waypoints as input, which provide valuable context of the operator's mental model. The system infers a reward function from these inputs and plans online using this reward signal and its own belief about target location. We proved effective alignment of inferred and true operator preferences through a spatial error metric and feature ranking. Simulated Monte Carlo trials revealed that our inference and planning pipeline significantly outperforms an operational baseline in both target localization and reward accumulation.
While we focus our application on SAR, the ability to estimate an operator's geospatial preferences for guided planning shows promise in enabling expert-directed autonomy in any data-driven environment. However, our method is limited to areas with mapped geography. Future work will seek ways to address this limitation, such as incorporating online aerial perception techniques, and deploying fielded hardware experiments with vehicle and interface integration.
## Acknowledgements
We thank the members of the Boulder Emergency Squad for their thoughtful contributions and input.
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Rescuer} & \multicolumn{2}{c|}{Localization Ratio} & \multicolumn{2}{c}{Reward/Timestep} \\ & Ours & Base & Ours & Base \\ \hline
1 & **75.3\%** & 54.8\% & **4.56 \(\pm\) 1.21** & 0.490 \(\pm\) 0.043 \\
2 & **54.5\%** & 23.3\% & **2.05 \(\pm\) 0.60** & 0.195 \(\pm\) 0.027 \\
3 & **63.0\%** & 46.3\% & **4.72 \(\pm\) 0.97** & 0.173 \(\pm\) 0.023 \\
4 & **44.3\%** & 25.5\% & **4.70 \(\pm\) 1.36** & 0.101 \(\pm\) 0.015 \\
5 & **51.0\%** & 45.8\% & **1.85 \(\pm\) 0.72** & 0.201 \(\pm\) 0.027 \\ \hline Average & **57.6\%** & 39.1\% & **3.58 \(\pm\) 0.67** & 0.232 \(\pm\) 0.067 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Simulation Results - 100 runs per start position
Fig. 5: Example trajectories of our planner and the baseline, simulated using Rescuer 1 input data. Our approach finds the target in 329 timesteps, while the baseline finds it in 858 timesteps. Trajectory opacity decreases with search time.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline \multirow{2}{*}{Rescuer} & \multicolumn{1}{c|}{Error} & Pos. Feature Rank & Neg. Feature Rank \\ & (Optimal: 0) & (Optimal: 1) & (Optimal: 1) \\ \hline
1 & **0.015** (2.95) & **0.93** (0.63) & **1.21** (3.66) \\
2 & **0.015** (2.19) & **0.97** (0.89) & **1.07** (1.21) \\
3 & **0.013** (2.41) & **0.95** (0.72) & **1.28** (2.17) \\
4 & **0.025** (2.41) & **0.84** (0.55) & **1.85** (2.02) \\
5 & **0.009** (2.06) & **0.94** (0.76) & **1.47** (1.81) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Alignment results (with random baseline) |
2305.00357 | Finding Galois splitting models to compute local invariants | For prime $p$ and small $n$, Jones and Roberts have developed a database
recording invariants for $p$-adic extensions of degree $n$. We contributed to
this database by computing the Galois slope content, Galois mean slope, and
inertia subgroup for a variety of wildly ramified extensions of composite
degree using the idea of Galois splitting models. We will describe a number of
strategies to find Galois splitting models including an original technique
using generic polynomials and Panayi's root finding algorithm. | Benjamin Carrillo | 2023-04-30T00:03:58Z | http://arxiv.org/abs/2305.00357v1 | # Finding Galois splitting models to compute local invariants
###### Abstract.
For prime \(p\) and small \(n\), Jones and Roberts have developed a database recording invariants for \(p\)-adic extensions of degree \(n\). We contributed to this database by computing the Galois slope content, Galois mean slope, and inertia subgroup for a variety of wildly ramified extensions of composite degree using the idea of _Galois splitting models_. We will describe a number of strategies to find Galois splitting models including an original technique using generic polynomials and Panayi's root finding algorithm.
## 1. Introduction
For a number field \(K\), i.e. a finite extension of \(\mathbb{Q}\), and for each prime \(p\), we have an associated \(p\)-adic algebra \(K\otimes\mathbb{Q}_{p}\cong\prod_{i=1}^{g}K_{p,i}\), where each \(K_{p,i}\) is a finite extension of \(\mathbb{Q}_{p}\).
We can answer a variety of questions about \(K\) using basic invariants of the \(K_{p,i}\), such as their ramification index and residue field degree. For more advanced questions, we need more detailed information about each \(p\)-adic extension, such as its Galois group and ramification groups, which allow us to measure the wild ramification of the extension.
Let \(\mathcal{K}(p,n)\) be the set of isomorphism classes of degree \(n\) extensions of \(\mathbb{Q}_{p}\); this set is finite as a consequence of Krasner's Lemma [9]. We want to record and store information that encapsulates the filtration of the Galois group by its ramification groups for each extension in a particular \(\mathcal{K}(p,n)\), as well as some companion results. To compute this data easily we use the idea of _Galois splitting models_.
Formally, a _Galois splitting model_ of a \(p\)-adic extension \(K/\mathbb{Q}_{p}\) is a polynomial \(f(x)\in\mathbb{Z}[x]\) that is irreducible in \(\mathbb{Z}_{p}[x]\) with \(\mathbb{Q}_{p}(\alpha)\cong K\) and \(\operatorname{Gal}(\mathbb{Q}_{p}(\alpha)/\mathbb{Q}_{p})=\operatorname{Gal}(\mathbb{Q}(\alpha)/\mathbb{Q})\), where \(\alpha\) is a root of \(f(x)\). With a Galois splitting model \(f(x)\), the various invariants of \(K\) that we are interested in can be computed easily using \(f(x)\) and its corresponding number field.
For a given \(p\)-adic extension we use techniques coming from inverse Galois theory, such as class field theory and generic polynomials, to find a Galois splitting model. In particular, we will describe an original technique to find our desired polynomial using generic polynomials and Panayi's root finding algorithm [12].
In keeping with Jones and Roberts [7], all data will be available at [https://hobbes.la.asu.edu/LocalFields/](https://hobbes.la.asu.edu/LocalFields/), as well as [http://www.lmfdb.org/LocalNumberField/](http://www.lmfdb.org/LocalNumberField/), so all computations are recorded once and are freely available to those who are interested. Also, an implementation of the main algorithm
described here will be available at [https://github.com/bcarrill/gsm_panayi](https://github.com/bcarrill/gsm_panayi).
## 2. Preliminaries
### Panayi's Root Finding Algorithm
We now describe Panayi's root finding algorithm [12][11]. Let \(\varphi(x)\in\mathcal{O}_{K}[x]\), where \(K\) is a finite \(p\)-adic extension with uniformizer \(\pi\); a schematic Python version of the procedure, specialized to \(K=\mathbb{Q}_{p}\), is sketched at the end of this subsection.
* Let \(\varphi(x)=c_{n}x^{n}+c_{n-1}x^{n-1}+\ldots+c_{1}x+c_{0}\in\mathcal{O}_{K}[x]\) and define the valuation \(\nu_{K}(\varphi):=\min\{\nu_{K}(c_{n}),\ldots,\nu_{K}(c_{0})\}\) where the initial \(\nu_{K}\) is the \(\pi\)-adic valuation of \(K\) and \(\varphi^{\#}(x):=\varphi(x)/\pi^{\nu_{K}(\varphi)}\). For \(\alpha\in\mathcal{O}_{K}\) denote its representative in the residue field \(k\) by \(\overline{\alpha}\), and for \(\beta\in k\), denote a lift of \(\beta\) to \(\mathcal{O}_{K}\) by \(\hat{\beta}\).
* To count the number of roots of \(\varphi(x)\) in \(K\) we define two sequences \((\varphi_{i}(x))_{i}\) and \((\delta_{i})_{i}\).
* Set \(\varphi_{0}(x):=\varphi^{\#}(x)\) and let \(\delta_{0}\in\mathcal{O}_{K}\) be the lift of a root of \(\overline{\varphi_{0}(x)}\).
* If \(\overline{\varphi_{i}^{\#}(x)}\) has a root \(\beta_{i}\) then define \(\varphi_{i+1}(x):=\varphi_{i}^{\#}(x\pi+\hat{\beta}_{i})\) with \(\delta_{i+1}:=\hat{\beta}_{i}\pi^{i+1}+\delta_{i}\).
At some point, one of the following cases must occur:
1. \(\deg(\overline{\varphi_{i}^{\#}})=1\) then \(\delta_{i-1}\) is an approximation of one root of \(\varphi(x)\)
2. \(\deg(\overline{\varphi_{i}^{\#}})=0\) then \(\delta_{i-1}\) is not an approximation of a root of \(\varphi(x)\)
3. \(\overline{\varphi_{i}^{\#}}\) has no roots and thus \(\delta_{i-1}\) is not an approximation of a root of \(\varphi(x)\)
If at any step of the process there exist multiple roots \(\beta_{i}\) for \(\overline{\varphi_{i}(x)}\), we split the sequence and proceed.
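The following is an illustrative Python sketch of this root-counting recursion, specialized to \(K=\mathbb{Q}_{p}\) (so \(\pi=p\) and the residue field is \(\mathbb{F}_{p}\)). It is not the implementation referenced in the introduction; the helper names and the depth bound, used in place of a precision analysis, are our own.

```python
# Minimal sketch of Panayi's root counting for K = Q_p.  Polynomials are lists
# of integer coefficients [c_0, c_1, ..., c_n]; exact integer arithmetic only.

def val(c, p):
    """p-adic valuation of an integer (infinity for 0)."""
    if c == 0:
        return float("inf")
    v = 0
    while c % p == 0:
        c //= p
        v += 1
    return v

def normalize(phi, p):
    """phi^# = phi / p^{v(phi)}."""
    v = min(val(c, p) for c in phi)
    return [c // p**v for c in phi]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def substitute(phi, beta, p):
    """Return phi(beta + p*x) as a coefficient list (Horner's scheme)."""
    res = [phi[-1]]
    for c in reversed(phi[:-1]):
        res = poly_mul(res, [beta, p])
        res[0] += c
    return res

def residue_data(phi, p):
    """Degree of phi mod p and its roots in F_p (brute force)."""
    red = [c % p for c in phi]
    deg = max((i for i, c in enumerate(red) if c), default=0)
    roots = [b for b in range(p)
             if sum(c * pow(b, i, p) for i, c in enumerate(red)) % p == 0]
    return deg, roots

def count_roots(phi, p, max_depth=25):
    """Count roots of phi in Z_p following the branching recursion above."""
    stack = [(normalize(phi, p), 0)]
    found = 0
    while stack:
        f, depth = stack.pop()
        deg, roots = residue_data(f, p)
        if deg == 1:
            found += 1                      # case (1): one approximated root
        elif deg == 0 or not roots or depth >= max_depth:
            continue                        # cases (2)/(3): this branch dies out
        else:
            for beta in roots:              # several residue roots: split the sequence
                stack.append((normalize(substitute(f, beta, p)), depth + 1))
    return found

# x^2 - 2 has two roots in Z_7 (2 is a square mod 7) and none in Z_5.
print(count_roots([-2, 0, 1], 7), count_roots([-2, 0, 1], 5))
```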
### Parametric/Generic polynomials
The Inverse Problem of Galois Theory asks whether, for a finite group \(G\) and a field \(K\), there exists a finite Galois extension \(L\) such that \(G=\operatorname{Gal}(L/K)\). An additional question is: if \(G\) can be realized as the Galois group of a field extension of \(K\), can we construct a family of polynomials over \(K\) whose Galois group over \(K\) is \(G\)?
The idea of _parametric polynomials_ is an attempt to answer the second question. Following Jensen, Ledet, and Yui, consider the polynomial \(P(\mathbf{t},x)\in K(\mathbf{t})[x]\), where \(\mathbf{t}=(t_{1},\ldots,t_{n})\) and the \(t_{i}\) are indeterminants. Let \(\mathbb{L}\) be the splitting field of \(P(\mathbf{t},x)\) over \(K(\mathbf{t})\). We say that \(P(\mathbf{t},x)\) parametrizes \(G\)-extensions and is a _parametric polynomial_ if \(P\) satisfies the following two properties:
1. \(\mathbb{L}/K(\mathbf{t})\) is Galois with Galois group \(G\)
2. Every Galois extension \(L/K\) with Galois group \(G\) is the splitting field of a polynomial \(P(\mathbf{a},x)\) for some \(\mathbf{a}\in K^{n}\).
If \(P(\mathbf{t},x)\) has the additional property of parametrizing \(G\)-extensions for any field containing \(K\) then we say \(P(\mathbf{t},x)\) is a _generic polynomial_. Discussion on generic polynomials up to degree 7 is readily available, for example, see [5].
The Galois group \(G\) of a degree \(n\) field can be identified with a transitive subgroup of \(S_{n}\). When referring to Galois groups we will use standard notation (\(S_{n}\), \(A_{n}\), \(C_{n}\), \(D_{n}\)) as well as the T-numbering that was
introduced in [2], writing \(n\mathrm{T}j\) for a degree \(n\) field whose Galois closure has Galois group \(\mathrm{T}j\) of \(S_{n}\).
### Confirming Galois splitting models
Given a degree \(n\) extension \(F/\mathbb{Q}_{p}\) with defining polynomial \(f(x)\) and Galois group \(G\) with \(\mathrm{T}\)-number \(\mathrm{T}j\), as we produce polynomials we naturally need to confirm whether any of them is a Galois splitting model for \(F/\mathbb{Q}_{p}\). If we have a degree \(n\) polynomial \(g(x)\) such that \(K/\mathbb{Q}=\mathbb{Q}[x]/\langle g(x)\rangle\) has Galois group with \(\mathrm{T}\)-number \(\mathrm{T}j\), then \(g(x)\) is a Galois splitting model for \(F/\mathbb{Q}_{p}\) if \(f(x)\) has a root in \(\hat{K}/\mathbb{Q}_{p}\), the completion of \(K/\mathbb{Q}\); this root test is carried out with Panayi's algorithm.
If \(f(x)\) does indeed have a root in \(\hat{K}/\mathbb{Q}_{p}\), then \(F^{\mathrm{gal}}\subseteq\hat{K}^{\mathrm{gal}}\). But the \(\mathrm{T}\)-number of the Galois group of \(F/\mathbb{Q}_{p}\) is \(\mathrm{T}j\) and the \(\mathrm{T}\)-number of the Galois group of \(\hat{K}/\mathbb{Q}_{p}\) is at most \(\mathrm{T}j\); therefore both Galois groups must have the same \(\mathrm{T}\)-number, so \(F^{\mathrm{gal}}=\hat{K}^{\mathrm{gal}}\) and thus \(F=\hat{K}\), and the Galois groups of \(K\) and \(\hat{K}\) share the same \(\mathrm{T}\)-number.
## 3. Constructing Galois splitting models
In this section, we describe four strategies to find candidates for Galois splitting models.
### Using a Database Search
Our initial attempt to find Galois splitting models is to use the various databases of number fields, namely the databases of Jones and Roberts [6], Kluners and Malle [8], and the LMFDB (\(L\)-function and Modular Forms Database) [10] to find any initial matches. We quickly filter out number fields where the prime \(p\) splits within the number field and with the remaining polynomials we check if any are Galois splitting models for some \(p\)-adic extension. This strategy is useful for finding quick matches in our initial step to find Galois splitting models for \(p\)-adic extensions of a given degree.
### Using Galois Theory
The next strategy for finding Galois splitting models is to use group theoretic facts about the Galois group of a \(p\)-adic extension to construct a Galois splitting model using composita of smaller fields.
Given a degree \(n\) field extension \(K/F\) with Galois group \(G\), we want to determine if there exists a subfield \(L\) of \(K^{\mathrm{gal}}\) such that \(L^{\mathrm{gal}}=K^{\mathrm{gal}}\) and \(L\) is the compositum of two smaller subfields. This can be done easily using group theoretic arguments: namely, we search for two non-trivial subgroups of the Galois group, each of index less than \(n\), whose intersection is trivial. In the case of multiple pairs of non-trivial subgroups satisfying this condition, we pick the pair that generates the largest group, as the generated group corresponds to a common subfield of the fixed fields of the pair, and we want to minimize the degree of that common subfield.
We will show an example of this process using a field extension \(K/F\) with Galois group 14T9. Using Magma we find that there exists a pair of subgroups \(H\) and \(K\) (of index 2 and 8, respectively) whose intersection is trivial and such that \(\langle H,K\rangle\) is the group 14T9. Their corresponding fixed fields have Galois groups 2T1 and 8T25.
This means there exists a degree 16 subfield of \(K^{\mathrm{gal}}\) that is the compositum of a degree 2 and a degree 8 extension with no common subfield. The Galois closure of this degree 16 extension has Galois group isomorphic to the 14T9 group. We can then use Magma to recover the 14T9 extension
using resolvents.
Once we identify a suitable pair of subgroups for a particular Galois group for an extension of \(\mathbb{Q}_{p}\), we need to find a Galois splitting model for the fixed field of each subgroup. To find these Galois splitting models with a common subfield, we again refer back to the Galois splitting models already found, the various number field databases, or the other methods described below. We choose to find Galois splitting models for the lower degree extensions, as this can be easier due to the greater number of lower degree number fields in the various databases and can be quicker computationally with the methods described next.
### Using Class Field Theory
This strategy uses class field theory to find Galois splitting models. We know that solvable Galois extensions can be constructed by a chain of abelian extensions and we use this idea to construct a Galois splitting model.
For a number field \(K\) we can use class field theory to construct a cyclic extension of \(K\) of prime order whose conductor divides an ideal of \(\mathbb{Z}\) that we specify. We use two implementations: one in Pari/GP [13], implementing algorithms from [3], and the other in Magma [4].
Here is an example with a field extension \(K/\mathbb{Q}\) with Galois group 15T26. We cannot find a Galois splitting model for this type of extension using composita of proper subfields of \(K^{\mathrm{gal}}\). It can be calculated that \(|\mathrm{Aut}_{\mathbb{Q}}(K)|=3\). Thus, using standard Galois theory, there exists a unique degree 5 subfield \(F\) of \(K\), and it can be shown that \(\mathrm{Gal}(F/\mathbb{Q})\cong C_{5}\). Now \(K\) over \(F\) is a Galois extension of degree 3, therefore \(\mathrm{Gal}(K/F)\cong C_{3}\). So finding \(C_{3}\) extensions of \(F\) will give us degree 15 extensions over \(\mathbb{Q}\), which may include a 15T26 extension. Thus, for a 15T26 \(p\)-adic extension, we will find a Galois splitting model for the degree 5 subfield and use this process to try to construct a Galois splitting model for the full extension.
Another more complicated example is a 15T33 extension of \(\mathbb{Q}\). Now \(|\mathrm{Aut}_{\mathbb{Q}}(K)|=1\) and there exists a degree 5 subfield \(F\) with Galois group \(C_{5}\) over \(\mathbb{Q}\). So \(K/F\) is a degree 3 extension that is not Galois, which means the Galois group of \(K/F\) must be \(S_{3}\). The Galois closure \(K^{\prime}\) of \(K\) over \(F\)
Figure 1. Example using 15T26
has degree 6, so there exists a degree 2 extension \(L_{1}\) over \(F\), which is always cyclic. Additionally, since \(K^{\prime}\) over \(F\) is Galois, then \(K^{\prime}\) over \(L_{1}\) is a degree 3 Galois extension, which means it must also be cyclic.
We can use our process in two steps. The first step is to find quadratic extensions of \(F\), and the second is to find \(C_{3}\) extensions of the resulting fields. This will give us degree 30 extensions that could possibly contain a 15T33 extension. From a degree 30 extension we can use Pari/GP or Magma to try to find a 15T33 field extension. A way to optimize this process is to identify that \(\operatorname{Gal}(L_{1}/\mathbb{Q})\cong C_{10}\), and then find \(C_{3}\) extensions of \(L_{1}\) in one iteration rather than two. Again, for a 15T33 \(p\)-adic extension we will find a Galois splitting model for the \(C_{10}\)-extension and use this process to try to find a Galois splitting model for the 15T33 \(p\)-adic extension.
Using group theory we can tell whether a number field with a particular Galois group \(G\) is a subfield of an extension whose Galois group is a wreath product of two groups, with the first group cyclic of prime order; that is, the group theory provides the base extension and the order of the cyclic extension over that base. Kluners and Malle's database provides us this information, so we have a starting point for applying this technique to find a Galois splitting model.
### Using Generic Polynomials
We will describe our strategy to find Galois splitting models using generic polynomials. Given an extension \(K/\mathbb{Q}_{p}\), there will exist a subfield \(F\) (which could be trivial) such that \(F^{unram}=K^{unram}\), where \(K^{unram}\) is the maximal unramified subextension of \(K\). If there exist multiple such subfields of \(K\), we simply pick the subfield \(F\) such that \([K:F]\) is minimal. If we know \(\operatorname{Gal}(K^{\operatorname{gal}}/\mathbb{Q}_{p})\), then we can determine \(G^{\prime}=\operatorname{Gal}(K^{\operatorname{gal}}/F)\). Let \(P(\mathbf{t},x)\) be a generic polynomial that parametrizes \(G^{\prime}\)-extensions and let \(f(x)\) be a Galois splitting model for \(F/\mathbb{Q}_{p}\) with \(L=\mathbb{Q}[x]/\langle f(x)\rangle\).
We then calculate \(P(\mathbf{a},x)\) for \(\mathbf{a}\in L^{n}\), which gives us a relative field extension over \(L\) that we can then view as an extension of \(\mathbb{Q}\). Once we calculate a defining polynomial for the absolute
Figure 2. Example using 15T33
extension, we determine whether it is a Galois splitting model for the full extension \(K/\mathbb{Q}_{p}\).
To find suitable values of \(\mathbf{a}\in L^{n}\) when we have an explicit parametric or generic polynomial, we developed an algorithm based on Panayi's root finding algorithm. The idea is to "zero in" on the correct values of the indeterminants of \(P(\mathbf{t},x)\) by using Panayi's algorithm together with appropriate substitutions of the indeterminants. We describe the algorithm now:
Let \(\pi_{F}\) be a uniformizer of \(F/\mathbb{Q}_{p}\cong\hat{L}/\mathbb{Q}_{p}\) and \(\pi_{K}\) be a uniformizer of \(K/\mathbb{Q}_{p}\). Note we can represent \(\pi_{F}\) in terms of \(\pi_{K}\) by factoring the minimal polynomial of \(\pi_{F}\) over \(\mathbb{Q}_{p}(\pi_{K})\).
**Algorithm 1**.:
* Let \(\mathbf{b}=(b_{1},\ldots,b_{n-1},t)\) with \(b_{i}\in L\) and define \[\varphi(x,t):=P(\mathbf{b},x)=c_{n}x^{n}+c_{n-1}x^{n-1}+\ldots+c_{1}x+c_{0}\in \left(\mathcal{O}_{L}[t]\right)[x]\] where \(c_{i}=a_{i,0}+a_{i,1}t+\ldots+a_{i,m_{i}}t^{m_{i}}\).
* Define \(\nu_{K}(\varphi):=\min\{\nu_{K}(a_{0,0}),\ldots,\nu_{K}(a_{n,m_{n}})\}\) and \(\varphi^{\#}(x,t):=\varphi(x,t)/\pi_{K}^{\nu_{K}(\varphi)}\). For \(\alpha\in\mathcal{O}_{K}\) denote its representative in the residue field \(k\) by \(\overline{\alpha}\), and for \(\beta\in k\), denote a lift of \(\beta\) to \(\mathcal{O}_{K}\) by \(\hat{\beta}\).
* We initially create a set \(S=\{\{\{\},\varphi^{\#}(x,t)\}\}\) and, for \(\{s,\varphi(x,t)\}\in S\), if at any point \(\nu_{K}(a_{i,0})\geq\min\{\nu_{K}(a_{i,1}),\ldots,\nu_{K}(a_{i,m_{i}})\}\) for some \(i\), we replace \(\{s,\varphi(x,t)\}\) with \(\bigcup_{\beta\in k}\{\{s\cup\{\hat{\beta}\},\,\varphi(x,\hat{\beta}+\pi_{F}t)\}\}\) in \(S\).
We choose to make the substitution when \(\nu_{K}(a_{i,0})\geq\min\{\nu_{K}(a_{i,1}),\ldots,\nu_{K}(a_{i,m_{i}})\}\) for some \(i\), because otherwise the choice of \(t\) may affect the value of \(\nu_{K}(\varphi)\). Thus we substitute all possible values for the first \(\pi_{F}\)-adic digit of \(t\) and then proceed with the algorithm.
* If \(\overline{\varphi}(x,t)\) has a root \(\beta\) then define \(\varphi^{\prime}(x,t):=\varphi^{\#}(x\pi_{K}+\hat{\beta},t)\) and we replace \(\{s,\varphi(x,t)\}\) with \(\{s,\,\varphi^{\prime}(x,t)\}\).
When one of the following cases occurs:
1. \(\deg_{t}(\overline{\varphi}(x,t))=0\) and \(\deg_{x}(\overline{\varphi})=1\), then \(P(\mathbf{a},x)\) with \(\mathbf{a}=(b_{1},\ldots,b_{n-1},\sum_{i=1}s_{i}\pi_{F}^{i})\) has a root in \(K\).
2. \(\deg_{t}(\overline{\varphi}(x,t))=0\) and \(\deg_{x}(\overline{\varphi})=0\), then \(P(\mathbf{a},x)\) with \(\mathbf{a}=(b_{1},\ldots,b_{n-1},\sum_{i=1}s_{i}\pi_{F}^{i})\) does not have a root in \(K\).
3. \(\deg_{t}(\overline{\varphi}(x,t))=0\) and \(\overline{\varphi}(x,t)\) has no roots, then \(P(\mathbf{a},x)\) with \(\mathbf{a}=(b_{1},\ldots,b_{n-1},\sum_{i=1}s_{i}\pi_{F}^{i})\) does not have a root in \(K\).
4. The cardinality of \(s\) is larger than a predefined bound.
Once a list of polynomials \(P(\mathbf{a},x)\) is produced from Algorithm 1, we can then create a list of polynomials over \(\mathbb{Q}\). From this list of polynomials we search for a Galois splitting model for the extension \(K/\mathbb{Q}_{p}\). Note we chose to give values from \(L\) to all but one indeterminant, but this is not necessary as one can easily modify the algorithm to solve for multiple indeterminants. In general only having one indeterminant does make the computations quicker.
#### 3.4.1. Examples
For our first example we want to find a Galois splitting model for every \(D_{5}\)-extension of \(\mathbb{Q}_{5}\), of which there are \(3\). So let \(F=\mathbb{Q}_{5}\), and we choose the Galois splitting model for \(F\) to be \(f(x)=x\), hence \(\pi_{F}=5\). From [5] a generic polynomial for \(D_{5}\)-extensions is \(P(s,t,x)=x^{5}+(t-3)x^{4}+(s-t+3)x^{3}+(t^{2}-t-2s-1)x^{2}+sx+t\). Since the polynomial \(P\) has two parameters, we will choose \(s=5\).
We will walk through a full example now. For the first extension \(K/\mathbb{Q}_{5}\) its defining polynomial is \(g(x)=x^{5}+15x^{2}+5\) and for the first iteration we let \(\varphi(x,t)=x^{5}+(t-3)x^{4}+(-t+8)x^{3}+(t^{2}-t-11)x^{2}+5x+t\) and \(s=\{\}\).
Since \(\nu_{L}(a_{0,0})=\nu_{L}(0)>\nu_{L}(a_{0,1})=\nu_{L}(1)\) we will do the substitution step and substitute \(3+\pi_{F}t\) for \(t\) and therefore we get \(\varphi(x,t)=x^{5}+5tx^{4}+(-5t+5)x^{3}+(25t^{2}+25t-5)x^{2}+5x+(5t+3)\) and \(s=\{3\}\).
Now \(\overline{\varphi}(x,t)\) has a root of \(2\), and thus \(\varphi(x\pi_{K}+2,t)=(-15\pi_{K}^{2}-5)x^{5}+(5t+10)\pi_{K}^{4}x^{4}+(35t+45) \pi_{K}^{3}x^{3}+(25t^{2}+115t+105)\pi_{K}^{2}x^{2}+(100t^{2}+200t+125)\pi_{K} x+(100t^{2}+145t+65)\).
But \(\nu_{L}(a_{0,0})=\nu_{L}(65)\geq\nu_{L}(a_{0,1})=\nu_{L}(145)\), so for this substitution step we will substitute \(\pi_{F}t\) and we will get \(\varphi(x,t)=(-15\pi_{K}^{2}-5)x^{5}+(25t+10)\pi_{K}^{4}x^{4}+(175t+45)\pi_{K}^ {3}x^{3}+(625t^{2}+575t+105)\pi_{K}^{2}x^{2}+(2500t^{2}+1000t+125)\pi_{K}x+(250 0t^{2}+725t+65)\) therefore \(\varphi^{\prime}(x,t)=\varphi^{\#}(x\pi_{K}+2,t)=(-3\pi_{K}^{2}-1)x^{5}+(5t+2) \pi_{K}^{4}x^{4}+(35t+9)\pi_{K}^{3}x^{3}+(125t^{2}+115t+21)\pi_{K}^{2}x^{2}+(50 0t^{2}+200t+25)\pi_{K}x+(500t^{2}+145t+13)\) and \(s=\{3,0\}\).
Now \(\varphi^{\prime}(x,t)\) has a root of \(3\), therefore \(\varphi^{\prime}(x\pi_{K}+3,t)=(225\pi_{K}^{4}+150\pi_{K}^{2}+25)x^{5}+(-75\pi_{ K}^{4}+(-125t+3325)\pi_{K}^{3}+(5625t+2250)\pi_{K}^{2}+1125\pi_{K}+(1875t+750))x^{4}+( (-4500t-1800)\pi_{K}^{4}+(-2625t-1125)\pi_{K}^{3}+(-1500t+19650)\pi_{K}^{2}+(-8 75t-225)\pi_{K}+6750)x^{3}+((625t^{2}+575t-3945)\pi_{K}^{4}+(-20250t-8100)\pi_{K }^{3}+(-23625t-7425)\pi_{K}^{2}+(-6750t-2700)\pi_{K}+(-7875t-2025))x^{2}+((4725t +1215)\pi_{K}^{4}+(3750t^{2}+3450t-5445)\pi_{K}^{3}+(2500t^{2}-39500t-16075)\pi_ {K}^{2}-2025\pi_{K}+(-13500t-5400))x+((2025t+810)\pi_{K}^{4}+(4725t+1215)\pi_{K }^{3}+(5625t^{2}+5175t-2700)\pi_{K}^{2}+(7500t^{2}+3000t+375)\pi_{K}+(2500t^{2} +725t-1150))\).
And \(\varphi^{\prime\prime}(x,t)=\varphi^{\prime\#}(x\pi_{K}+3,t)=(-3\pi_{K}^{4}- \pi_{K}^{2})x^{5}+(-45\pi_{K}^{3}+(-75t-30)\pi_{K}^{2}-15\pi_{K}+(-25t-10))x^{4} +((60t+24)\pi_{K}^{4}+(35t+9)\pi_{K}^{3}-270\pi_{K}^{2}-90)x^{3}+(54\pi_{K}^{4} +(270t+108)\pi_{K}^{3}+(315t+81)\pi_{K}^{2}+(125t^{2}+115t+21)\pi_{K})x^{2}+((-1 00t^{2}-40t-5)\pi_{K}^{4}+81\pi_{K}^{3}+(540t+216)\pi_{K}^{2}+(-1500t^{2}+345t+ 168)\pi_{K}+(750t^{2}+690t+126))x+((75t^{2}-120t-30)\pi_{K}^{4}+(-300t^{2}-120 t-15)\pi_{K}^{3}+(-100t^{2}-29t+46)\pi_{K}^{2}+(1125t^{2}-1395t-288)\pi_{K}+(-4500t^{2}-855 t+18))\).
But now \(\overline{\varphi^{\prime\prime}}(x,t)=x+3\), which is a degree \(1\) polynomial. Thus \(\varphi(x,3+0\pi_{F})=P(5,3+0\pi_{F},x)=x^{5}+5x^{3}-5x^{2}+5x+3\) has a root in \(K\). And we find that \(x^{5}+5x^{3}-5x^{2}+5x+3\) is truly a Galois splitting model for \(K\).
Similarly, for the second extension its defining polynomial is \(x^{5}+10x^{2}+5\) and using Algorithm 1 we find that \(t=13\) and thus its Galois splitting model is \(x^{5}+10x^{4}-5x^{3}+145x^{2}+5x+13\). For the last extension we find its defining polynomial is \(x^{5}+5x^{4}+5\) and \(t=18\) for a Galois splitting model of \(x^{5}+15x^{4}-10x^{3}+295x^{2}+5x+18\).
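As a quick sanity check of these parameter values, the three specializations can be reproduced symbolically. The minimal sketch below (sympy is our choice here, not part of the original computation) only expands \(P(5,t,x)\) at the values found by Algorithm 1 and does not redo the \(5\)-adic root test.

```python
# Reproduce the D_5 specializations quoted above by expanding P(5, t, x).
from sympy import symbols, expand

s, t, x = symbols("s t x")
P = x**5 + (t - 3)*x**4 + (s - t + 3)*x**3 + (t**2 - t - 2*s - 1)*x**2 + s*x + t

for t_val in (3, 13, 18):
    print(t_val, expand(P.subs({s: 5, t: t_val})))
# t = 3  -> x**5 + 5*x**3 - 5*x**2 + 5*x + 3
# t = 13 -> x**5 + 10*x**4 - 5*x**3 + 145*x**2 + 5*x + 13
# t = 18 -> x**5 + 15*x**4 - 10*x**3 + 295*x**2 + 5*x + 18
```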
For a more advanced example, let \(p=3\) and let \(K/\mathbb{Q}_{p}\) be a \(C_{3}\wr C_{4}\) extension with \(F/\mathbb{Q}_{p}\) a \(C_{4}\)-subextension with defining polynomial \(f(x)=x^{4}-3x^{2}+18\). There are 16 such extensions. The field \(F\) has a Galois splitting model \(g(x)=x^{4}+3x^{3}-6x^{2}-18x-9\), and \(F^{unram}/\mathbb{Q}_{p}\) is a degree 2 extension with Galois splitting model \(u(x)=x^{2}-x-1\). We can let a root of \(\overline{u(x)}\) be the generator of \(k\) and a root of \(f(x)\) be the uniformizer \(\pi_{F}\). Again from [5], a \(C_{3}\) generic polynomial is \(Q(t,x)=x^{3}-tx^{2}+(t-3)x+1\). Using Algorithm 1, the values of the parameter of \(Q(t,x)\) that generate the Galois splitting models for all 16 \(C_{3}\wr C_{4}\) extensions, where \(\alpha\) is a root of \(g(x)\), are listed below in Table 1.
### Conclusion
The cases in which computing the filtration of a Galois group by its ramification groups is most interesting and difficult, and hence where Galois splitting models are needed, are the wildly ramified extensions of composite degree. For wildly ramified extensions of degree 11 and lower, Jones and Roberts have computed all data relating to their ramification groups. The next interesting cases that we computed were \(\mathcal{K}(2,12)\), \(\mathcal{K}(3,12)\), \(\mathcal{K}(2,14)\), \(\mathcal{K}(7,14)\), \(\mathcal{K}(3,15)\), \(\mathcal{K}(5,15)\), and \(\mathcal{K}(2,18)\). For these cases we computed the Galois slope content, Galois mean slope, and inertia subgroup; see [6] for how these values are computed using the Galois splitting models. Once again, all computed data are available at [https://hobbes.la.asu.edu/LocalFields/](https://hobbes.la.asu.edu/LocalFields/) and [http://www.lmfdb.org/LocalNumberField/](http://www.lmfdb.org/LocalNumberField/), and an implementation of Algorithm 1 is located at [https://github.com/bcarrill/gsm_panayi](https://github.com/bcarrill/gsm_panayi).
\begin{table}
\begin{tabular}{||c|l|l|} \hline Parameter Value & Defining Polynomial & Galois Splitting Model \\ \hline \hline \(-\frac{1}{3}\alpha^{8}-\frac{2}{3}\alpha^{7}+4\alpha^{6}+\) & \(x^{12}+12x^{11}-12x^{10}-6x^{9}-9x^{7}+\) & \(x^{12}-2673x^{11}+1199940x^{10}-22068644x^{9}+\) \\ \(\frac{10}{3}\alpha^{5}-\frac{1}{3}\alpha^{4}+6\alpha^{3}+\) & \(6x^{6}+9x^{5}+9x^{4}+9x^{3}-9\) & \(91973115x^{8}-138890646x^{7}+55022127x^{6}+54465408x^{5}-53087931x^{4}+10216039x ^{3}+1170603x^{2}+2661x+1\) \\ \hline \(-\frac{2}{3}\alpha^{8}-\frac{4}{3}\alpha^{7}+\frac{17}{3}\alpha^{6}+\frac{2 3}{3}\alpha^{5}+\frac{5}{3}\alpha^{4}+\frac{5}{3}\alpha^{4}+11\alpha^{3}+9 \alpha^{2}\) & \(9x^{4}-9\) & \(747684x^{7}-134418x^{6}-1037412x^{5}+1077174x^{4}-\) \\ \(\frac{3}{3}\alpha^{4}+11\alpha^{3}+9\alpha^{2}\) & & \(392936x^{3}+46728x^{2}+636x+1\) \\ \hline \(-\frac{1}{3}\alpha^{6}-\frac{4}{3}\alpha^{5}+\frac{5}{3}\alpha^{4}+\frac{1}{ 2}\alpha^{3}+8\alpha^{2}\) & \(x^{12}-27x^{11}+21x^{10}+39x^{9}-27x^{8}+\) & \(x^{12}-6x^{11}-6000x^{10}-23405x^{9}+148536x^{8}-133020x^{7}-239511x^{6}+508500x ^{5}-332604x^{4}+\) \\ & \(27x+18\) & \(83515x^{3}-6000x^{2}-6x+1\) \\ \hline \(\alpha^{2}\) & \(x^{12}-9x^{11}+12x^{10}-9x^{9}+9x^{8}-9x^{7}+\) & \(x^{12}-21x^{11}+135x^{10}-275x^{9}-99x^{8}+900x^{7}-12x^{6}+9x^{5}+9x^{4}+9x^{ 3}-9\) & \(741x^{6}-270x^{5}+531x^{4}-140x^{3}-30x^{2}+9x+1\) \\ \hline \(-\frac{1}{3}\alpha^{6}-\frac{2}{3}\alpha^{5}+3\alpha^{4}+\) & \(x^{12}+24x^{11}-39x^{10}-3x^{9}-36x^{8}+\) & \(x^{12}-48x^{11}-135x^{10}+886x^{9}+90x^{8}-3906x^{7}+3687x^{6}+2538x^{5}-5436x^{ 4}+2884x^{3}-597x^{2}+36x+1\) \\ & \(27x-36\) & & \\ \hline \(-\frac{1}{3}\alpha^{8}-\frac{2}{3}\alpha^{7}+5\alpha^{6}+\) & \(x^{12}-12x^{11}-3x^{10}+9x^{9}+9x^{8}+6x^{6}+\) & \(x^{12}-4452x^{11}+3497394x^{10}-87848339x^{9}+9x^{3}-9\) & \(350436510x^{8}-452194092x^{7}+20067603x^{6}+393023304x^{5}-283549896x^{4}+53119 039x^{3}+3448488x^{2}+4440x+1\) \\ \hline \(-\frac{1}{3}\alpha^{7}-\frac{4}{3}\alpha^{6}+\alpha^{5}+\) & \(x^{12}-6x^{11}+6x^{10}+9x^{9}+9x^{7}-3x^{6}+\) & \(x^{12}-117x^{11}-5736x^{10}+24646x^{9}-12645x^{8}-65502x^{7}+89493x^{6}+5544x^{ 5}-67761x^{4}+38929x^{3}-6957x^{2}+105x+1\) \\ \hline \(-\frac{2}{3}\alpha^{7}-\frac{4}{3}\alpha^{6}+6\alpha^{5}+\) & \(x^{12}-3x^{11}+6x^{10}-12x^{9}+9x^{8}-3x^{6}-\) & \(x^{12}-6x^{11}-23370x^{10}+70330x^{9}+86346x^{8}-487980x^{7}+476484x^{6}+70920x^{5 }-332829x^{4}+163480x^{3}-23370x^{2}-6x+1\) \\ & \(x^{12}-276x^{11}+12x^{9}-9x^{8}-9x^{7}+\) & \(x^{12}-2763x^{11}+1714650x^{10}-24252134x^{9}+3x^{6}-3x^{6}-9\) & \(91860525x^{8}-127673766x^{7}+38094477x^{6}+61019388x^{5}-49704831x^{4}+7257379x^{3}+1684 323x^{2}+2751x+1\) \\ \hline \(-\frac{1}{3}\alpha^{7}-\frac{4}{3}\alpha^{6}+\alpha^{5}+\) & \(x^{12}-12x^{11}+3x^{11}-33x^{10}-3x^{9}+9x^{8}-3x^{6}-\) & \(x^{12}-6x^{11}-23370x^{10}+70330x^{9}+86346x^{8}-487980x^{7}+476484x^{6}+70920x^{5 }-332829x^{4}+163480x^{3}-23370x^{2}-6x+1\) \\ \hline \(-\frac{2}{3}\alpha^{8}-\frac{2}{3}\alpha^{7}+\frac{19}{3}\alpha^{6}+\frac{34}{ 3}\alpha^{5}+\frac{19}{3}\alpha^{4}+6\alpha^{3}+9\alpha^{2}\) & \(x^{12}+3x^{11}-6x^{10}+12x^{9}-9x^{8}-9x^{7}+\) & \(x^{12}-2763x^{11}+1714650x^{10}-24252134x^{9}+3x^{6}-9x^{5}+9x^{4}-9\) & \(91860525x^{8}-127673766x^{7}+38094477x^{6}+61019388x^{5}-49704831x^{4}+7257379x^{3}+1684 323x^{2}+27512x+1\) \\ \hline \(-\frac{1}{3}\alpha^{7}-\frac{4}{3}\alpha^{6}+\alpha^{5}+\) & \(x^{12}-30x^{11}-33x^{10}-3x^{9}+9x^{8}-\) & \(x^{12}-6x^{11}-23370x^{10}+70330x^{9}+86346x^{8}-487980x^{7}+4776484x^{6}+70920x^{5 
}-332829x^{4}+163480x^{3}-23370x^{2}-6x+1\) \\ \hline \(-\frac{2}{3}\alpha^{8}-\frac{2}{3}\alpha^{7}+\frac{19}{3}\alpha^{7}+\frac{19}{ 3}\alpha^{6}+\frac{34}{3}\alpha^{5}+\frac{19}{3}\alpha^{4}+6\alpha^{3}+9 \alpha^{2}\) & \(x^{12}+3x^{11}-6x^{10}+12x^{9}-9x^{8}-9x^{7}+\) & \(x^{12}-2763x^{11}+1714650x^{10}-24252134x^{9}+3x^{6}-9x^{5}+9x^{4}-9\) & \(91860525x^{8}-127673766x^{7}+38094477x^{6}+61019388x^{5}-49704831x^{4}+7257379x^{3}+1684 323x^{2}+27512+1\) \\ \hline \(-\frac{1}{3}\alpha^{7}-\frac{4}{3}\alpha^{6}+\alpha^{5}+\) & \(x^{12}-30x^{11}-33x^ |
2305.20087 | Too Large; Data Reduction for Vision-Language Pre-Training | This paper examines the problems of severe image-text misalignment and high
redundancy in the widely-used large-scale Vision-Language Pre-Training (VLP)
datasets. To address these issues, we propose an efficient and straightforward
Vision-Language learning algorithm called TL;DR, which aims to compress the
existing large VLP data into a small, high-quality set. Our approach consists
of two major steps. First, a codebook-based encoder-decoder captioner is
developed to select representative samples. Second, a new caption is generated
to complement the original captions for selected samples, mitigating the
text-image misalignment problem while maintaining uniqueness. As the result,
TL;DR enables us to reduce the large dataset into a small set of high-quality
data, which can serve as an alternative pre-training dataset. This algorithm
significantly speeds up the time-consuming pretraining process. Specifically,
TL;DR can compress the mainstream VLP datasets at a high ratio, e.g., reduce
well-cleaned CC3M dataset from 2.82M to 0.67M ($\sim$24\%) and noisy YFCC15M
from 15M to 2.5M ($\sim$16.7\%). Extensive experiments with three popular VLP
models over seven downstream tasks show that VLP model trained on the
compressed dataset provided by TL;DR can perform similar or even better results
compared with training on the full-scale dataset. The code will be made
available at \url{https://github.com/showlab/datacentric.vlp}. | Alex Jinpeng Wang, Kevin Qinghong Lin, David Junhao Zhang, Stan Weixian Lei, Mike Zheng Shou | 2023-05-31T17:59:03Z | http://arxiv.org/abs/2305.20087v3 | (\mathcal{T}\)**oo \(\mathcal{L}\)arge\(;\)**Data \(\mathcal{R}\)eduction for Vision-Language Pre-Training
###### Abstract
This paper examines the problems of severe image-text misalignment and high redundancy in the widely-used large-scale Vision-Language Pre-Training (VLP) datasets. To address these issues, we propose an efficient and straightforward Vision-Language learning algorithm called TL;DR, which aims to compress the existing large VLP data into a small, high-quality set. Our approach consists of two major steps. First, a codebook-based encoder-decoder captioner is developed to select representative samples. Second, a new caption is generated to complement the original captions for selected samples, mitigating the text-image misalignment problem while maintaining uniqueness. As the result, TL;DR enables us to reduce the large dataset into a small set of high-quality data, which can serve as an alternative pre-training dataset. This algorithm significantly speeds up the time-consuming pretraining process. Specifically, TL;DR can compress the mainstream VLP datasets at a high ratio, e.g., reduce well-cleaned CC3M dataset from 2.82M to 0.67M (\(\sim\)24%) and noisy YFCC15M from 15M to 2.5M (\(\sim\)16.7%). Extensive experiments with three popular VLP models over seven downstream tasks show that VLP model trained on the compressed dataset provided by TL;DR can perform similar or even better results compared with training on the full-scale dataset. The code will be made available at [https://github.com/showlab/data-centric.vlp](https://github.com/showlab/data-centric.vlp).
## 1 Introduction
The recent "scale-is-everything" viewpoint has become a widely accepted notion in the Vision-language Pre-training (VLP) community [1, 7, 17, 32, 37]. According to this view, the scale of the data has increased from the original tens of thousands-level (e.g., COCO [25] and VG [20]) to millions-level (e.g., CC3M [37] and CC12M [7]), and even up to billions-level (e.g., YFCC100M [40], WIT400M [32], and LAION400M [36]). Approaches [17, 32, 53] trained on these large-scale data show remarkable performance improvements in various downstream tasks.
However, simply scaling up the data brings two critical challenges: \(i\). Larger image-text datasets lead to higher training cost (e.g., pre-training CoCa takes about 5 days on 2,048 CloudTPUv4 chips [53]) and storage overhead, which is difficult to afford. \(ii\). Obtaining high-quality VLP data requires massive raw data and a well-designed collection/filtering pipeline, which is expensive. For instance, the CC3M [37] data was obtained after filtering 5 billion collected images. These challenges are daunting and may impede the participation of numerous researchers in the VLP community.
In this study, we stop hunting for larger-scale data blindly and ask an important question: _Does employing a larger dataset always result in better performance in VLP?_ To explore and answer this question, we begin with a simple experiment. First, we utilize a pre-trained BLIP [22] model to calculate the Image-Text Matching (ITM) scores for all samples in the clean CC3M dataset. Subsequently, we remove a portion of the samples with the lowest ITM scores and evaluate the transfer learning results, as shown in Figure 1. Surprisingly, discarding 50% of the samples slightly improves performance. This remarkable finding challenges the prevailing belief that employing larger amounts of data invariably leads to superior VLP outcomes.
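The filtering procedure in this experiment is conceptually simple, as sketched below; `itm_score` is a stand-in for the matching probability produced by a pre-trained BLIP ITM head, not its actual API, and only the sort-and-threshold logic is meant literally.

```python
# Sketch of the introductory experiment: score every image-text pair with an
# ITM model and drop the lowest-scoring fraction before pre-training.
import numpy as np

def itm_score(image, text):
    # placeholder for a pre-trained BLIP image-text matching head
    return np.random.rand()

def filter_by_itm(pairs, keep_ratio=0.5):
    """Keep the `keep_ratio` fraction of (image, text) pairs with highest ITM score."""
    scores = np.array([itm_score(img, txt) for img, txt in pairs])
    n_keep = int(len(pairs) * keep_ratio)
    keep_idx = np.argsort(-scores)[:n_keep]
    return [pairs[i] for i in keep_idx]

pairs = [(f"img_{i}.jpg", f"caption {i}") for i in range(10)]
print(len(filter_by_itm(pairs, keep_ratio=0.5)))  # -> 5
```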
This experiment suggests removing certain data points
Figure 1: **Does using more data really lead to better performance in VLP?** Instead of training on the full-scale CC3M dataset, we delete data with low image-text matching score. We find that BLIP [22] model pretrained on 50% reserved data even obtains better result than full-scale dataset on downstream COCO retrieval [25]. This observation exposes there exists serious _mis-alignment_ between text&visual modalities and data redundancy in dataset.
can actually improve the model's ability to learn and generalize. Moreover, considering the performance improvements after removing the low ITM score data, we can infer the existence of significant misalignment between the textual and visual modalities in many text-image data pairs (see Figure 7 and the supplementary material for more evidence). These discoveries present promising potential for enhancing the performance of models that depend on a smaller volume of VLP data.
Driven by the above analysis and recent advances in dataset pruning [38], we present a simple, effective and scalable algorithm called _TL;DR_ that aims to improve data efficiency for vision-language pretraining. _TL;DR_ has a powerful codebook-based captioner, which contains a visual encoder, a look-up codebook and a text decoder. Here is how it works: first, _TL;DR_ feeds each image into the visual encoder and determines the corresponding codes of the image by measuring the similarity between the codebook and the embedding generated by the encoder. Given a large pool of image-text pairs, _TL;DR_ clusters the samples based on their corresponding image codes and selects a representative subset of samples from each cluster. Then, _TL;DR_ further refines the captions of the selected samples via the text decoder to reduce text-image misalignment. By doing so, _TL;DR_ is able to significantly reduce the size of the training dataset while maintaining high quality.
In this work, we employ _TL;DR_ on the widely-used CC3M, CC12M, YFCC100M and LAION400M datasets and evaluate the reduced data on three widely-used frameworks, CLIP [32], ViLT [19], and BLIP [22], for data-efficient pretraining across seven representative vision-language downstream tasks. The results show that, with only \(10\%-25\%\) of the data obtained by _TL;DR_, these frameworks achieve similar or even better performance compared with the full-scale dataset. We hope our findings can inspire the community to reconsider data efficiency for VLP rather than blindly utilizing increasingly massive datasets.
## 2 Related Work
### Data-Efficient Learning
Recent successes in deep learning are largely attributed to the vast amount of data [10, 32]. However, collecting massive amounts of data is expensive and raises concerns about privacy and copyright [55]. As a result, the research community has become increasingly interested in data-efficient learning, which includes:
**Dataset Distillation**[46, 47, 57] compress a large dataset into a small set of synthetic samples, enabling models trained on the smaller dataset to achieve competitive performance with those trained on the original dataset. However, these techniques are only effective on relatively small datasets at low resolutions, such as CIFAR [21], and their performance deteriorates significantly when applied to larger-scale datasets. For example, the accuracy of a model trained on the state-of-the-art MMT's generated data is only 33.8% on the ImageNet-1K [10] test result [6], while pre-training on real ImageNet-1K achieves over 80% accuracy [9]. Furthermore, these methods necessitate supervised class labels, which are not suitable for multimodal data.
**Data Pruning**[30, 41] assumes high redundancy in large datasets and selects only a subset of challenging samples. [28, 30] observed that during the entire training process, some examples are learned early and never forgotten, while others can be repeatedly learned and forgotten. The related work [38] uses a hard sample selection method to select 80% of the samples of the ImageNet dataset, and the model trained on the selected samples approximates training on all data. Another recent work, CiT [48], also proposes to train models with dynamic training data.
**Neural Data Server** (NDS) [5, 26, 49] proposes a large-scale search engine to identify the most useful transfer learning data from large corpus. While these methods can be extended to multi-modality data, a similar idea has also been applied in NLP [50]. However, this setting assumes that the user has access to all downstream data and needs to train the downstream task using additional retrieval data.
In this work, we are different from previous techniques in that we attempt to compress large-scale multi-modal data for the first time, leading to comparable performance between the compressed and original vision-language datasets. We provide a comparison of our approach with these related works in Table 1.
### Visual-Language Pre-training
Large-scale Vision-Language Pre-training (VLP) involves training on extensive multi-modality data and evaluating performance on various downstream vision-language tasks. Conventional frameworks include the dual-stream architecture [32], the one-stream architecture [19, 24], and
\begin{table}
\begin{tabular}{l l l l l l l l} \hline Method & Year & Data Type & Compression Ratio\(\uparrow\) & Task Agnostic & Large-scale & Supervision & Generation/Selection \\ \hline Dataset Distillation [47] & 2017 & Image & 99\%-99.99\% & ✗ & ✗ & Class Label & Generation \\ Data Pruning [38] & 2022 & Image & 20\%-30\% & ✗ & ✓ & Class Label & Selection \\ Neural Data Server [49] & 2020 & Multi-modality & 94\%-98\% & ✗ & ✓ & Image-text Pairs & Selection \\ _TL:DR_ (ours) & - & Multi-modality & 75\%-90\% & ✓ & ✓ & Image-text Pairs & Generation+Selection \\ \hline \end{tabular}
\end{table}
Table 1: **Data-efficient learning methods**. “Large-scale” means that the methods are effective when used on datasets that are very large in size. The “task agnostic” means that the methods can be used regardless of the specific downstream task, and without any prior exposure to the associated data.
the encoder-decoder architecture [22]. Previous works have relied on high-quality, human-annotated datasets such as COCO [25] (110K images) and Visual Genome [20] (100K). As model sizes continue to increase, pre-training requires even more data than before [17, 25, 45], resulting in an extremely high computational cost. However, obtaining large and high-quality multi-modality data is challenging due to the difficulties in annotation. In this paper, we aim to democratize VLP research by proposing a general compression method for existing VLP data.
## 3 Method
Our _TL;DR_ is a simple yet effective approach for compressing the Vision-Language Pre-training dataset, leading to further reduction of the training cost. Our approach consists of two stages: (1) codebook-based captioner training and (2) data reduction including samples selection and caption refining. Figure 2 illustrates the idea, introduced next.
### Codebook-based Captioner
The captioner consists of a visual encoder, a codebook and a text decoder. The visual encoder is employed to extract image features. Inspired by the vector quantisation technique [42, 52], we quantize the image features for further clustering by utilizing a learnable codebook. The codebook comprises \(K\) learnable embedding vectors, each of which can be regarded as a code. Each token of the image features conducts a nearest neighbor look-up in the codebook and finds its corresponding code. In this way, image features are quantized into a set of codes (quantized vectors). The quantized vectors are then sent into a text decoder, generating a caption. In order to enhance the quality of text generation, we initialize the codebook with the text embeddings of the \(K\) most frequently occurring keywords/keyphrases in the entire dataset, which enables the codebook to contain meaningful and intuitively understandable semantics.
To train the whole captioner, we utilize a Language Modeling loss [11], which maximizes the likelihood of the text in an autoregressive manner, and a symmetric commitment loss [52], which is specifically designed for the codebook. We initially train this captioner on noisy source data and subsequently fine-tune it on smaller-scale datasets, such as COCO [25] and VisualGenome [20].
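A minimal PyTorch sketch of the nearest-neighbour code lookup and a symmetric commitment loss is given below. The tensor shapes, the straight-through estimator and all names are standard vector-quantization choices and assumptions on our part, not a verbatim reproduction of the implementation.

```python
# Sketch of the codebook lookup and symmetric commitment loss for one batch.
import torch
import torch.nn.functional as F

def quantize(tokens, codebook):
    """tokens: (B, L, D) encoder outputs; codebook: (K, D) code embeddings."""
    # squared distances via ||a - b||^2 = |a|^2 + |b|^2 - 2 a.b
    d = (tokens.pow(2).sum(-1, keepdim=True)
         + codebook.pow(2).sum(-1)
         - 2 * tokens @ codebook.t())
    idx = d.argmin(dim=-1)                  # (B, L) code index per token
    quantized = codebook[idx]               # (B, L, D) quantized vectors
    # symmetric commitment loss: pull codes toward tokens and tokens toward codes
    commit = F.mse_loss(quantized, tokens.detach()) + F.mse_loss(tokens, quantized.detach())
    # straight-through estimator so gradients still reach the visual encoder
    quantized = tokens + (quantized - tokens).detach()
    return quantized, idx, commit

B, L, D, K = 2, 196, 768, 3000
tokens = torch.randn(B, L, D)
codebook = torch.randn(K, D)   # in the paper, initialized from keyword text embeddings
q, idx, commit_loss = quantize(tokens, codebook)
print(q.shape, idx.shape, commit_loss.item())
```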
### Data Reduction
Current large-scale datasets suffer from serious redundancy [38]. Meanwhile, a large portion of the texts in VLP data are noisy and misaligned with their images; see Figure 2 for an example (the caption "You need to think twice before buying a pet as present" does not match the image). To overcome these limitations, we use the learned codebook to condense the large-scale noisy data and the learned captioner to reduce the misalignment of image-text pairs.
**Sample selection.** For an encoded image feature with \(L\) tokens, we compute an index vector of length \(L\). Each entry is the index of the code closest to the corresponding token. This vector maps the features from image space to semantic space, reducing the complexity of the image and thereby benefiting and accelerating the clustering process. Subsequently, each image sample in the dataset is equipped with an index vector according to the above process, and we cluster these vectors into \(N\) clusters with K-Means (sped up by Faiss [18]). Then we uniformly sample \(M\%\) of the data points from each cluster, producing a small subset of the dataset. We examine various sampling methods and observe that uniform sampling is stable across different scales.
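The following sketch illustrates this step, assuming each image has already been mapped to its code-index vector; the Faiss K-means call follows the library's standard usage, while the specific parameter values are illustrative.

```python
# Sketch of sample selection: cluster the code-index vectors with Faiss K-means
# and keep a uniform fraction of each cluster.
import numpy as np
import faiss

def select_samples(index_vectors, n_clusters=3000, keep_fraction=0.25, seed=0):
    """index_vectors: (num_images, L) array of per-token code indices."""
    x = np.ascontiguousarray(index_vectors, dtype="float32")
    kmeans = faiss.Kmeans(x.shape[1], n_clusters, niter=20, verbose=False)
    kmeans.train(x)
    _, assign = kmeans.index.search(x, 1)      # nearest centroid per image
    assign = assign.ravel()

    rng = np.random.default_rng(seed)
    selected = []
    for c in range(n_clusters):
        members = np.where(assign == c)[0]
        if len(members) == 0:
            continue
        n_keep = max(1, int(len(members) * keep_fraction))
        selected.extend(rng.choice(members, size=n_keep, replace=False))
    return np.array(selected)
```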
**Caption refining.** To alleviate the misalignment problem, we want to improve the text quality using the generated caption. The generated text \(T_{g}\) comes from the text decoder, which takes the quantized vectors of the image as input. We simply concatenate \(T_{g}\) with the original text \(T_{o}\), denoted as \(T=T_{o}+T_{g}\), which refines the caption while preserving the original caption's uniqueness and maintaining data diversity.
The compressed small-scale dataset with refined captions is recorded as dataset \(D_{c}\). Finally, we train VLP models on this high-quality dataset \(D_{c}\) and expect the model
Figure 2: **Our _TL;DR_ architecture. We first train a codebook-based captioner in Stage1. Then the learned codebook and captioner are used to reduce VLP data in Stage 2. Pre-training on the reduced dataset achieves similar performance to the original full-scale dataset across downstream tasks.**
to achieve performance comparable to the original full-scale dataset \(D\) on downstream vision-language tasks.
**Discussion.** Considering the serious misalignment problem, it seems quite straightforward to use the generated high-quality caption \(T_{g}\) alone to replace the original noisy text. Driven by this idea, we pretrain BLIP [22] models with \(T_{o}\), \(T_{g}\) and \(T_{o}+T_{g}\) independently and show the training curve of the Image-Text Contrastive (ITC) loss in Figure 3. However, we find the model trained with \(T_{g}\) falls into model collapse [34]. This phenomenon can be explained by captioning collapse [43, 44] and the one-to-many problem [51] in image captioning. That is, the trained captioner will generate fixed or similar captions for different images, which limits diversity in the output and easily leads to trivial solutions for the contrastive loss. On the contrary, the ITC loss for both \(T_{o}\) and \(T_{o}+T_{g}\) works well, and \(T_{o}+T_{g}\) converges better. We also observe that the loss of \(T_{g}\) is smaller than the other two variants at epochs 0-2, which indicates the generated caption matches well with the image. Note that this simple stitching operation on the caption does not bring additional computation cost for VLP, as the maximum text length in the BERT [11] text encoder remains unchanged for all settings.
### Technical Details.
Our _TL;DR_ can be implemented efficiently, and importantly, does not require any large auxiliary model. The codebook size \(K\) is 3000 as default. The selection of keywords/phrases is implemented using the NLTK 1. We adopt ViT-B/16 [12] as image encoder and BertLMHead Model [11] as text decoder. In this way, the token length \(L\) is 196 as default. The cross-attention is computed over image embedding and text embedding. To show the generality of compressed dataset, we test \(D_{c}\) on three different and representative VLP architectures: dual-stream CLIP [32], one-stream ViLT [19] and Fusion-encoder Blip [22] on various downstream tasks. All these models are trained under the same setting with different datasets.
Footnote 1: [https://github.com/nltk/nltk](https://github.com/nltk/nltk)
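For concreteness, the codebook initialization from frequent keywords could be prepared roughly as sketched below; restricting to noun-like tags is our assumption, and the subsequent text-embedding step (with a pre-trained BERT) is omitted.

```python
# Sketch of selecting the K most frequent keywords from the caption corpus with NLTK.
import nltk
from collections import Counter

nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

def top_keywords(captions, k=3000):
    counts = Counter()
    for cap in captions:
        tokens = nltk.word_tokenize(cap.lower())
        for word, tag in nltk.pos_tag(tokens):
            if tag.startswith("NN"):        # keep noun-like keywords (our choice)
                counts[word] += 1
    return [w for w, _ in counts.most_common(k)]

print(top_keywords(["a dog plays with a ball", "two dogs in the park"], k=5))
```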
## 4 CC3M Experiments
We first study dataset reduction on well-cleaned _CC3M_[37] which heavily filters web crawled pairs and only keeps 0.1% of the raw data. This dataset contains a total of 2.8 million images. We employ our _TL;DR_ to compress the CC3M dataset, then conduct pre-training and fine-tuning evaluations on both original and compressed datasets. Following our ablation study, we transfer the pre-trained model to seven Vision-Language tasks downstream and fine-tune it through end-to-end training to evaluate its performance.
**Training.** We utilize PyTorch [29] to implement our models and train them on 8 NVIDIA A100 GPUs to reduce the data samples. For Vision-Language Pre-training, we utilize 2 nodes, each equipped with 16 GPUs. The model is pre-trained for 20 epochs with a batch size of 1260 and an AdamW [27] optimizer with a weight decay of 0.05. During training, we apply a learning rate warm-up to 3e-4 and a linear decay with a rate of 0.85.
\begin{table}
\begin{tabular}{c c|c c} \hline \hline sampling & refining & TR@1 & IR@1 \\ \hline \hline & & 65.3 & 49.8 \\ ✓ & & 68.5 & 51.9 \\ & ✓ & 69.4 & 52.3 \\ ✓ & ✓ & **72.8** & **54.8** \\ \hline \end{tabular}
\end{table}
Table 2: _TL;DR ablation experiments_ with BLIP model [22] on CC3M. We report image-to-text retrieval top-1 (TR@1) and text-to-image retrieval top-1 (IR@1) accuracy (%) on COCO [25] dataset. If not specified, the default baseline is: pre-training BLIP model based on ViT-B/16 with 25% sample of CC3M. Default settings are marked in gray.
Figure 3: Training curve with CC3M dataset. Simply stitching generated text and original text together solved the model collapse problem in Image-text Contrastive Loss.
For image augmentation, we utilize RandAugment [8] and apply all of the original policies except color inversion. This decision is based on the crucial role that color information plays in the data. For pre-training, images are randomly cropped to a resolution of \(224\times 224\). We then increase this to \(384\times 384\) for fine-tuning downstream tasks. Further information about the training hyperparameters for downstream tasks can be found in the supplementary material.
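A minimal sketch of this optimizer setup follows; interpreting "linear decay with a rate of 0.85" as a per-epoch multiplicative factor, and the one-epoch warm-up, are assumptions on our part.

```python
# Sketch of the pre-training optimizer schedule (AdamW, wd 0.05, warm-up to 3e-4,
# decay rate 0.85); the exact decay interpretation is assumed.
import torch

model = torch.nn.Linear(768, 768)   # stand-in for the VLP model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.05)

warmup_epochs, total_epochs, decay = 1, 20, 0.85

def lr_lambda(epoch):
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs   # linear warm-up to the peak lr
    return decay ** (epoch - warmup_epochs)  # decay by 0.85 per epoch (assumed)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(total_epochs):
    out = model(torch.randn(4, 768)).sum()   # dummy step standing in for training
    optimizer.zero_grad(); out.backward(); optimizer.step()
    scheduler.step()
```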
### Main Properties
We ablate our _TL;DR_ using the default setting in Table 2 (see caption). Several intriguing properties are observed.
**Module deconstruction.** In Table 2a we analyze the impact of the different components of _TL;DR_. We establish a baseline by randomly selecting 25% of the data from CC3M (first row). Our results show that codebook-based sampling outperforms random selection by 3.2% in TR@1. We also observe that both _codebook-based sampling_ and _caption refinement_ are crucial, and their combination achieves optimal downstream performance.
**Sample selection.** In Table 2b we study the sample selection strategy in Stage 2. We sample 25% of the data in each cluster by default. For _Gradient-based_, we train a tiny network to conduct VLP pre-training with the ITC [24], ITM [24] and LM [11] losses, and then select the samples that contribute most to the gradients in each cluster. _Large distance:_ Another perspective is that data points on the border of each cluster are more important than those at the center [4]. We therefore first compute the center of each cluster and then choose the samples that have the largest distance from the center of their cluster. We also report the result of _hard-sample_ selection from [38]. We observe that all these variants produce similar results except _large distance_. This suggests that the clustering step, rather than the selection step, plays a key role in data compression during Stage 2. To maintain simplicity, we choose uniform sampling as the default method.
**Codebook initialization.** In Table 2c we compare different initialization strategies. _Xavier_ means that all parameters in the codebook are initialized with Xavier initialization [14]. For the object-tags initialization, following previous works [2, 56], we use the 1600 object tags from Visual Genome [20] and extract text features with a pre-trained BERT [11]. With the same training setting, the keywords achieve a 0.8% TR@1 improvement and a 0.7% IR@1 improvement over Xavier. This result is expected, as the text embeddings provide contextual information and simplify the learning process.
**Codebook vs. Image embedding.** In Table 2d, we investigate different ways of clustering samples. First, we remove the codebook from Stage 1 and use the image embedding instead. Alternatively, we directly cluster images using the
\begin{table}
\begin{tabular}{l l|c c c c c c c c c c c c} \multicolumn{1}{c}{Method} & Dataset & \multicolumn{1}{c|}{\#Samples} & \multicolumn{5}{c|}{MSCOCO (5K test set)} & \multicolumn{5}{c}{Flick30K (1K test set)} \\ \multicolumn{1}{c}{} & \multicolumn{5}{c|}{Image\(\rightarrow\) Text} & \multicolumn{5}{c|}{Text\(\rightarrow\) Image} & \multicolumn{5}{c}{Image\(\rightarrow\) Text} & \multicolumn{5}{c}{Text\(\rightarrow\) Image} \\ \multicolumn{1}{c}{} & & & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline \multirow{4}{*}{CLIP [32]} & _CC3M_[37] & 2.82M & **60.4** & 85.3 & 93.2 & 48.9 & 75.4 & 84.7 & 77.3 & 91.1 & **93.2** & 71.6 & 90.1 & 91.4 \\ & _TL:DR-CC3M_ & 0.67M & 60.3 & **85.6** & **93.8** & **49.4** & **77.4** & **86.0** & **82.5** & **91.8** & 92.2 & **72.0** & **90.5** & **92.1** \\ \cline{2-13} & _CC3M_[37] & 2.82M & 36.2 & 64.3 & 80.1 & 29.9 & 57.9 & 66.9 & 67.4 & 83.2 & **92.4** & 54.3 & 84.1 & **90.8** \\ & _TL:DR-CC3M_ & 0.67M & **37.7** & **64.6** & **80.8** & **30.7** & **58.4** & **68.2** & **68.5** & **85.4** & 92.0 & **55.6** & **82.1** & **90.8** \\ \hline \multirow{4}{*}{ViLT [19]} & _CC3M_[37] & 2.82M & 66.7 & **89.2** & 93.8 & 52.5 & **79.3** & 87.1 & 83.8 & 92.0 & 93.2 & 74.0 & 92.0 & **92.8** \\ & _TL:DR-CC3M_ & 0.67M & **67.1** & 88.7 & **94.1** & **53.1** & 78.9 & **88.2** & **85.3** & **92.4** & **93.6** & **75.6** & **92.1** & 92.5 \\ \cline{1-1} & _CC3M_[37] & 2.82M & 39.2 & 68.6 & 77.8 & 30.4 & 53.2 & 66.1 & 70.5 & 88.7 & 92.1 & 57.6 & **84.9** & **92.6** \\ \cline{1-1} & _TL:DR-CC3M_ & 0.67M & **43.5** & **70.8** & **81.4** & **33.9** & **57.9** & **66.8** & **73.2** & **90.5** & **93.3** & **58.6** & 84.7 & 92.4 \\ \hline \multirow{4}{*}{BLIP [22]} & _CC3M_[37] & 2.82M & 70.9 & 91.3 & **96.1** & 54.3 & 80.2 & 88.0 & 86.3 & 94.1 & 94.8 & 74.8 & 91.6 & 92.6 \\ \cline{1-1} & _TL:DR-CC3M_ & 0.67M & **72.8** & **91.9** & 95.9 & **54.8** & **80.6** & **89.4** & **87.5** & **94.8** & **95.3** & **75.7** & **92.2** & **93.4** \\ \cline{1-1} & _CL:DR-CC3M_ & 2.82M & 42.3 & 67.8 & 77.4 & 31.5 & 55.7 & 66.3 & 75.1 & 91.2 & 93.6 & 60.6 & 85.9 & 91.8 \\ \cline{1-1} & _TL:DR-CC3M_ & 0.67M & **48.7** & **73.1** & **82.7** & **36.7** & **60.6** & **70.4** & **76.3** & **91.9** & **93.9** & **61.0** & **87.7** & **93.0** \\ \end{tabular}
\end{table}
Table 3: **Fine-tuning and zero-shot image-text retrieval** results on the MSCOCO and Flickr30K datasets.
\begin{table}
\begin{tabular}{l|c|c c|c c|c c c|c c} Dataset & \#Samples & \multicolumn{2}{c|}{VQA} & \multicolumn{2}{c|}{NLVR\({}^{2}\)} & \multicolumn{3}{c|}{RefCOCO+} & \multicolumn{2}{c}{COCO Caption} \\ & & test-dev & test-std & dev & test-P & val & testA & testB & B@4 & CIDEr \\ \hline Random-_CC3M_ & 0.67M & 68.3 & 66.2 & 73.6 & 73.8 & 68.6 & 71.8 & 62.8 & 35.9 & 118.8 \\ _CC3M_[37] & 2.8M & 71.5 & 71.8 & 76.0 & 76.2 & 72.4 & 76.1 & 65.3 & 36.8 & 121.6 \\ _TL;DR-CC3M_ & 0.67M & 73.1\({}_{+1.6}\) & 73.2\({}_{+1.4}\) & 77.7\({}_{+1.7}\) & 78.0\({}_{+1.8}\) & 75.1\({}_{+2.7}\) & 78.5\({}_{+2.4}\) & 68.4\({}_{+3.1}\) & 37.6\({}_{+0.8}\) & 123.8\({}_{+2.2}\) \\ \end{tabular}
\end{table}
Table 4: **Comparison with the BLIP model pre-trained on different data sources** for VQA, NLVR\({}^{2}\), RefCOCO+ and COCO Captioning. The ViLT and CLIP architectures cannot be evaluated on some of these tasks due to structural limitations.
\begin{table}
\begin{tabular}{l l|c c c c} \multicolumn{1}{c}{Method} & Dataset & \multicolumn{1}{c|}{R@1\(\uparrow\)} & R@5\(\uparrow\) & R@10\(\uparrow\) & \multicolumn{1}{c}{MdR\(\downarrow\)} \\ \hline \multirow{4}{*}{CLIP [32]} & Rand-_CC3M_[37] & 15.3 & 34.8 & 46.3 & 13.0 \\ & _TL:DR-CC3M_[37] & 19.4 & **37.3** & **47.5** & **11.0** \\ _TL:DR-CC3M_ & **21.8** & **38.6** & **48.5** & **10.0** \\ \hline \multirow{4}{*}{ViLT [19]} & Rand-_CC3M_[37] & 18.8 & 38.2 & 49.5 & 1
image embedding [22] of images from the BLIP model (pre-trained on 200M image-text pairs). We observe that the image embedding leads to much better results than the text embedding. This is reasonable, because clustering visually similar samples with text alone is difficult. We also observe that clustering based on our codebook performs better than both the image embedding and the text embedding. This demonstrates that our codebook efficiently projects image embeddings into a semantic space, which benefits the clustering process.
**Cluster sampling ratio.** Table 2e varies the sampling ratio of each cluster from 10% to 100%. We are surprised to find that a low sampling ratio still produces effective results. With only 25% of the data, the _TL;DR_ model achieves a 1.9% improvement on TR@1 and a 0.8% improvement on IR@1 over the full-scale baseline. Additionally, we observe that larger sampling ratios lead to even better results. Since our focus is on _achieving similar transfer learning results with fewer samples_, we use a default sampling ratio of 25% to minimize computation costs.
**Cluster numbers.** In Table 2f, we investigate the impact of the number of clusters on Stage 2 by increasing it from 300 to 30K. Using more clusters yields a slight improvement at first, and performance becomes stable once the number of clusters exceeds 3K. Moreover, all results consistently outperform the random-selection baseline. We therefore use 3K clusters as the default in this work, as it performs well on the fine-tuning tasks.
### Transfer Learning Experiments
We conduct an extensive evaluation of transfer learning on downstream tasks, using models pre-trained on our compressed _TL;DR-CC3M_ and on the source _CC3M_ with three architectures. Our evaluation focuses on core tasks from three categories that examine: (1) cross-modality alignment, (2) image captioning and multi-modality understanding, and (3) visual recognition. The baseline in this section is the model trained on the full CC3M dataset.
#### 4.2.1 Cross-modality Alignment Task
**Image-Text retrieval.** Fine-grained word-region alignment plays a critical role in this task. We report both image-to-text retrieval (TR) and text-to-image retrieval (IR) on the COCO [25] and Flickr30K [31] benchmarks. For the BLIP [22] model, we adopt an additional re-ranking strategy, following the original implementation. In Table 3, we also report zero-shot retrieval results. _TL;DR_ achieves results comparable to the baselines on all metrics and performs surprisingly well in the zero-shot setting. For example, with the BLIP [22] architecture, our method yields a 6.4% improvement (from 42.3% to 48.7%) in Recall@1 of image-to-text retrieval on MSCOCO. These results suggest that a small set of refined image-text pairs is enough to learn good alignment.
**Zero-shot video retrieval.** In this experiment, we analyze the generalization ability of our method to video-language tasks. Specifically, we perform zero-shot transfer to text-to-video retrieval and evaluate the models trained on COCO retrieval in Table 5. To process the video input, we uniformly sample 8 frames from each video and concatenate the frame features into a single sequence. The models trained on our compressed dataset outperform the baseline on all metrics, demonstrating the generality of _TL;DR_.
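The frame-sampling pipeline described above can be sketched as follows; `image_encoder` is a placeholder for whichever image backbone is being evaluated, and the tensor shapes are assumptions for illustration only.

```python
# Sketch: uniformly sample 8 frames and concatenate their features into one sequence.
import torch

def video_to_feature_sequence(frames, image_encoder, n_frames=8):
    """frames: (T, 3, H, W) decoded video; returns a (1, n_frames * L, D) feature sequence."""
    t = frames.shape[0]
    idx = torch.linspace(0, t - 1, n_frames).long()               # uniform temporal sampling
    feats = [image_encoder(frames[i].unsqueeze(0)) for i in idx]   # each assumed (1, L, D)
    return torch.cat(feats, dim=1)
```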
#### 4.2.2 Image Captioning and Multi-modality Understanding Tasks
**Image captioning.** This task involves describing an input image; we evaluate on the NoCaps and COCO datasets, fine-tuning on COCO with the language modeling (LM) loss in both cases. We adopt a zero-shot setting for NoCaps and start each caption with the phrase "a picture of" for the BLIP architecture. We do not pre-train on COCO, to avoid information leakage. Our
\begin{table}
\begin{tabular}{l l|c|c c c} Model & Dataset & \#Samples & ImNet & ImNet-A & ImNet-R \\ \hline \multirow{3}{*}{CLIP [32]} & Rand-_CC3M_ & 0.67M & 58.3 & 61.8 & 62.3 \\ & _CC3M_[37] & 2.82M & **62.2** & **65.2** & **66.9** \\ & _TL;DR-CC3M_ & 0.67M & 61.4 & 65.0 & 65.7 \\ \hline \multirow{3}{*}{ViLT [19]} & Rand-_CC3M_ & 0.67M & 54.3 & 59.8 & 58.4 \\ & _CC3M_[37] & 2.82M & 58.6 & 62.9 & **64.2** \\ & _TL;DR-CC3M_ & 0.67M & **59.1** & **63.3** & 64.0 \\ \hline \multirow{3}{*}{BLIP [22]} & Rand-_CC3M_ & 0.67M & 57.3 & 61.8 & 65.2 \\ & _CC3M_[37] & 2.82M & **62.5** & **65.5** & **68.1** \\ & _TL;DR-CC3M_ & 0.67M & 62.0 & 63.9 & 67.4 \\ \end{tabular}
\end{table}
Table 6: **Zero-shot image classification results on ImageNet [10], ImageNet-A [16], ImageNet-R [15].** _There is no free lunch_, as selecting partial samples reduces the visual diversity crucial for classification. Despite this, _TL;DR_ still performs significantly better than random selection.
Figure 4: **The generated captions** match the images well.
Figure 5: **Codebook-based cluster visualization.** The samples within each cluster exhibit similar contextual characteristics, as opposed to mere visual appearance; see, for example, the “Christmas elements” cluster on the right.
results outperform the baseline with a much smaller quantity of pre-training data, as shown in Table 4.
**Visual question answering (VQA).** We evaluate our model on the VQA task [3], where the model must provide an answer given an image and a question. Following previous works [22, 23], we treat it as an answer-generation task, which allows open-vocabulary VQA and yields better results. The results are presented in Table 4. BLIP trained on _TL;DR-CC3M_ outperforms the baseline by 1.6% on the test-dev split, demonstrating the effectiveness of our compressed dataset for improving VQA performance.
**Visual reasoning.** Natural Language for Visual Reasoning (NLVR\({}^{2}\)) [39] is a binary classification task that requires the model to reason over two images and a sentence in natural language. Multi-modal reasoning is crucial for this task. BLIP trained on our dataset achieves 78.0% accuracy, compared to 76.2% for the model trained on CC3M, as shown in Table 4.
**Cross-modality grounding.** Referring Expression (RE) Comprehension requires the model to select the target object from a set of image region proposals, given a query description. This task relies heavily on visual grounding ability. The models are evaluated on ground-truth objects, and we evaluate RE Comprehension on RefCOCO+ [54]. The results are reported in Table 4; _TL;DR-CC3M_ achieves better results.
#### 4.2.3 Visual Recognition Tasks
Besides cross-modality tasks, we also explore a uni-modal task: image classification. Specifically, we fix the image encoder and perform zero-shot image classification. The results are shown in Table 6. _TL;DR_ shows steady improvement over random selection for all architectures. Unfortunately, the classification performance for _TL;DR-CC3M_ is slightly worse than for the full-scale CC3M with the CLIP and BLIP architectures. Both of these architectures have independent image encoders (e.g., ViT) to extract image embeddings. This indicates that the task relies heavily on visual diversity, unlike the multi-modal tasks, and that our method potentially reduces visual diversity. The ViLT model, which adopts a shared backbone for vision and text, behaves slightly differently; we conjecture that the multi-modal interaction in early fusion affects the classification result.
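For reference, zero-shot classification with a frozen dual-encoder can be sketched as below; the prompt template and encoder interfaces are generic assumptions and not the exact evaluation code.

```python
# Sketch of prompt-based zero-shot classification with frozen encoders
# (the encoders are placeholders for whichever pre-trained model is evaluated).
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(images, class_names, image_encoder, text_encoder):
    img = F.normalize(image_encoder(images), dim=-1)                           # (B, D)
    txt = F.normalize(text_encoder([f"a photo of a {c}" for c in class_names]),
                      dim=-1)                                                   # (C, D)
    return (img @ txt.t()).argmax(dim=-1)                                       # (B,) predicted class ids
```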
### Visualization
**Generated caption visualization.** We show generated captions in Figure 4. The original captions can be highly abstract and difficult to match to their respective images, sometimes even for human observers. For instance, when the ITM score is as low as 0.04, matching the image with its caption becomes arduous. Such challenging cases can harm cross-modality alignment. In contrast, the generated captions describe the images well and sometimes offer helpful complementary information, such as "bus" and "castle" in the middle example.
**Codebook-based cluster visualization.** Figure 5 displays the grouping results obtained with simple K-Means on the codebook features. Clusters are sets of data points with similar characteristics, often defined by their features or attributes. Interestingly, we observe that the model clusters samples "accurately", in the sense that samples in the same cluster are semantically similar rather than merely similar in appearance. For instance, the model groups "dollars" and "piggy bank" together, even though they differ significantly in appearance.
### More Investigation
**Is image generation possible?** To ease the misalignment problem of image-text pairs, instead of simply selecting representative samples, a potential and naive idea is to generate images from text. To this end, we randomly sample a 0.3M
\begin{table}
\begin{tabular}{l|c c} \hline Method & TR@1 & IR@1 \\ \hline real data & 58.3 & 44.0 \\ VQ-GAN [13] & 35.2 & 32.4 \\ DALLE2 [33] (implement from 2) & 44.3 & 38.3 \\ Stable Diffusion [35] (implement from 3) & **52.4** & **40.7** \\ \hline \end{tabular}
\end{table}
Table 7: **Comparison of different sample generation methods** on a 0.3M subset of CC3M. We first pre-train the BLIP model on the generated data and then evaluate on COCO.
Figure 6: **Image generation results** with strong text-to-image models. The generation time is also reported.
Figure 7: **ITM score distribution.**_TL;DR_ alleviates the issue of misalignment in VLP data.
subset of CC3M and generate images from text with three popular text-to-image generation models: VQ-GAN [52], DALLE 2 [33] and Stable Diffusion [35]. We display the generated samples in Figure 6. The generative models struggle with complex scenarios but handle simple prompts like "dog" proficiently. In addition, generation methods only produce visual cues within a fixed vocabulary, potentially reducing data diversity.
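For the Stable Diffusion branch of this experiment, a minimal generation loop with the `diffusers` library would look roughly like the sketch below; the checkpoint name, number of denoising steps, and output handling are our assumptions rather than the exact setup used here.

```python
# Hedged sketch: regenerate images from captions with Stable Diffusion (diffusers).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

def regenerate(captions, out_dir="generated"):
    for i, caption in enumerate(captions):
        image = pipe(caption, num_inference_steps=50).images[0]   # PIL image
        image.save(f"{out_dir}/{i:07d}.jpg")
```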
Next, we pre-train BLIP models on these generated data and evaluate them on COCO retrieval. In Table 7, we observe that the transfer learning results depend on the quality of the generated samples, with those from Stable Diffusion being particularly effective. However, a significant gap remains between the generated data and the real dataset (e.g., 52.4% vs. 58.3% on TR@1). We believe that higher-quality and more diverse generated images may lead to results comparable with real images in the near future.
**Explore the misalignment problem.** Figure 7 shows the image-text matching (ITM) score distribution for both _CC3M_ and our _TL;DR-CC3M_ data (visualizations for more datasets are reported in the supplementary). Many samples of the original _CC3M_ have low matching scores, some close to zero, which indicates that the dataset suffers from a serious misalignment problem. Since image-text matching (ITM) loss and image-text contrastive (ITC) loss are used in all architectures, these samples damage multi-modal representation learning. With our _TL;DR_, the matching scores tend to be higher, and very few samples have low ITM scores.
## 5 Transfer to other VLP datasets
We study data compression on two categories of data: clean data that has gone through human-designed offline filtering pipelines, and raw data that has not been cleaned. For clean data, in addition to _CC3M_, we explore the well-cleaned, high-quality dataset _CC12M_[7]. For raw data, we study YFCC100M [40] and LAION400M [36]. _CC12M_[7] contains 12 million image-text pairs specifically meant for vision-and-language pre-training; it is collected by relaxing the data collection pipeline of CC3M. YFCC15M [32] is the subset of the multilingual and noisy YFCC100M [40] that contains English captions. LAION400M [36] is a large-scale noisy dataset that provides URLs with captions for download. To control the computation cost and reduce the storage overhead, we randomly sample a 40M subset of LAION400M and download images at a resolution of 128 \(\times\) 128. We record the compressed dataset as _TL;DR-LAION40M(128)_; its performance on downstream tasks could improve with higher resolution. More exploration of video-text datasets is reported in the supplementary material.
We use BLIP as the default architecture, evaluate our _TL;DR_ on these datasets, and show the results in Table 8. Surprisingly, with only 2.5M (16.7%) of the data, _TL;DR-YFCC15M_ achieves results similar to the 15M raw data on all metrics except ImageNet. More results with different backbones are reported in the supplementary material. For _LAION40M(128)_, when using 8M data (20%), the model trained on our dataset consistently outperforms the baseline on the six downstream tasks. We notice that the compression rate for _LAION40M(128)_ is lower than that for _YFCC15M_. This may be because _LAION40M(128)_ has already been filtered with CLIP similarity during collection, reducing the impact of the misalignment problem.
## 6 Conclusion and Discussion
This paper presents _TL;DR_, a novel algorithm for selecting and generating high-quality image-text pairs from noisy vision-language pre-training (VLP) data. _TL;DR_ incorporates a text generation process to reduce the severe misalignment problem. Our experiments demonstrate that three widely-used architectures achieve comparable results at much smaller training cost when learning from our compressed dataset. Additionally, we demonstrate that the misalignment problem can be effectively addressed by our simple _TL;DR_. However, the highest compression ratio is currently chosen manually rather than learned. Furthermore, achieving even higher compression ratios for VLP models remains a challenge, and text-to-image generation models may be helpful in this regard. We hope that this perspective will inspire future research.
\begin{table}
\begin{tabular}{l l l|l l l l l l l} \hline \hline
**Dataset** & \#Samples Time & VQA & NLVR2 & ReFCOCO & Nocaps Captioning & \multicolumn{2}{c}{Flickr30K Retrieval} & \multicolumn{2}{c}{Imagenet} \\ & & test-dev & test-P & val & B@4 & CIDEr & TR@1 & IR@1 & Acc \\ \hline Rand-_CC12M_ & 2.4M & 14h & 71.8 & 76.2 & 72.5 & 36.8 & 121.0 & 82.9 & 73.3 & 61.2 \\ _CC12M_[7] & 10.8M & 65h & 73.5 & 78.9 & 74.1 & 37.5 & 122.9 & 84.7 & 75.3 & 65.3 \\ _TL;DR-CC12M_ & 2.4M & 14h & 74.1\({}_{+0.6}\) & 78.5\({}_{-0.4}\) & 74.0\({}_{-0.1}\) & 38.1\({}_{+0.6}\) & 124.1\({}_{+1.2}\) & 85.5\({}_{+0.8}\) & 76.3\({}_{+1.0}\) & 63.8\({}_{-1.5}\) \\ \hline Rand-_YFCC15M_ & 2.5M & 15h & 67.2 & 70.5 & 68.1 & 35.2 & 116.3 & 78.8 & 70.5 & 65.4 \\ _YFCC15M_[40] & 15M & 90h & 70.5 & 74.2 & 70.6 & 35.9 & 118.4 & 81.5 & 72.4 & 67.8 \\ _TL;DR-YFCC15M_ & 2.5M & 15h & 70.3\({}_{-0.2}\) & 75.3\({}_{+1.1}\) & 72.6\({}_{+2.0}\) & 37.2\({}_{+1.3}\) & 122.5\({}_{+4.1}\) & 82.3\({}_{+0.8}\) & 74.3\({}_{+1.9}\) & 67.3\({}_{-0.5}\) \\ \hline Rand-_LAION40M(128)_ & 8M & 48h & 70.7 & 75.3 & 73.4 & 34.8 & 113.2 & 80.4 & 72.5 & 68.5 \\ _LAION40M(128)_[36] & 40M & 120h & 74.5 & 79.1 & 76.6 & 35.2 & 117.4 & 83.2 & 74.9 & 71.3 \\ _TL;DR-LAION40M(128)_ & 8M & 48h & 76.3\({}_{+1.8}\) & 80.5\({}_{+1.4}\) & 77.4\({}_{+0.8}\) & 36.8\({}_{+1.6}\) & 120.9\({}_{+3.5}\) & 82.8\({}_{-0.4}\) & 76.1\({}_{+1.2}\) & 70.4\({}_{-0.9}\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Comparison with different sources of data on six downstream tasks.** BLIP [22] is adopted as the baseline, and (128) means the image resolution is 128\(\times\)128. We also list the pre-training time, which is significantly reduced via _TL;DR_.
2309.00082 | RePo: Resilient Model-Based Reinforcement Learning by Regularizing
Posterior Predictability | Visual model-based RL methods typically encode image observations into
low-dimensional representations in a manner that does not eliminate redundant
information. This leaves them susceptible to spurious variations -- changes in
task-irrelevant components such as background distractors or lighting
conditions. In this paper, we propose a visual model-based RL method that
learns a latent representation resilient to such spurious variations. Our
training objective encourages the representation to be maximally predictive of
dynamics and reward, while constraining the information flow from the
observation to the latent representation. We demonstrate that this objective
significantly bolsters the resilience of visual model-based RL methods to
visual distractors, allowing them to operate in dynamic environments. We then
show that while the learned encoder is resilient to spurious variations, it is
not invariant under significant distribution shift. To address this, we propose
a simple reward-free alignment procedure that enables test time adaptation of
the encoder. This allows for quick adaptation to widely differing environments
without having to relearn the dynamics and policy. Our effort is a step towards
making model-based RL a practical and useful tool for dynamic, diverse domains.
We show its effectiveness in simulation benchmarks with significant spurious
variations as well as a real-world egocentric navigation task with noisy TVs in
the background. Videos and code at https://zchuning.github.io/repo-website/. | Chuning Zhu, Max Simchowitz, Siri Gadipudi, Abhishek Gupta | 2023-08-31T18:43:04Z | http://arxiv.org/abs/2309.00082v2 | # RePo: Resilient Model-Based Reinforcement Learning by Regularizing Posterior Predictability
###### Abstract
Visual model-based RL methods typically encode image observations into low-dimensional representations in a manner that does not eliminate redundant information. This leaves them susceptible to _spurious variations_ - changes in task-irrelevant components such as background distractors or lighting conditions. In this paper, we propose a visual model-based RL method that learns a latent representation resilient to such spurious variations. Our training objective encourages the representation to be maximally predictive of dynamics and reward, while constraining the information flow from the observation to the latent representation. We demonstrate that this objective significantly bolsters the resilience of visual model-based RL methods to visual distractors, allowing them to operate in dynamic environments. We then show that while the learned encoder is resilient to spurious variations, it is not invariant under significant distribution shift. To address this, we propose a simple reward-free alignment procedure that enables test time adaptation of the encoder. This allows for quick adaptation to widely differing environments without having to relearn the dynamics and policy. Our effort is a step towards making model-based RL a practical and useful tool for dynamic, diverse domains. We show its effectiveness in simulation benchmarks with significant spurious variations as well as a real-world egocentric navigation task with noisy TVs in the background. Videos and code: [https://zchuning.github.io/repo-website/](https://zchuning.github.io/repo-website/).
## 1 Introduction
Consider the difference between training a single robot arm against a plain background with reinforcement learning (RL), and learning to operate the same arm amidst plentiful dynamic distractors - uncontrollable elements such as changing lighting and disturbances in the scene. The latter must contend with _spurious variations_ - differences in environments which are irrelevant for the task but potentially confusing for a vision-based RL agent - resilience to which is indispensable for truly versatile embodied agents deployed in real world settings.
Standard end-to-end techniques for visual RL struggle in the presence of spurious variations [64; 48], in part because they fail to discard task-irrelevant elements. To improve generalization [38; 59], self-supervised representation learning methods [23; 39; 55; 54; 17; 31] pre-train visual encoders that compress visual observations. These methods aim for lossless compression of how image observations evolve in time (e.g. by minimizing reconstruction error). Unaware of the demands of downstream
tasks, these methods also cannot determine which elements of an environment can be discarded. As such, they often struggle in dynamic and diverse scenes [64; 48; 17] - ones where significant portions of the observations are both unpredictable and irrelevant - despite being remarkably successful in static domains.
This paper proposes Resilient Model-Based RL by **R**egularizing **P**osterior Predictability (RePo) - an algorithm for learning lossy latent representations resilient to spurious variations. A representation is satisfactory if it (a) predicts its own dynamics and (b) accurately predicts the reward. To satisfy these criteria, RePo jointly learns (i) a visual encoder mapping high-dimensional observations to intermediate image "encodings", (ii) a latent encoder which compresses histories of intermediate image encodings into compressed _latent representations_, (iii) a dynamics model in the latent representation space, and (iv) a reward predictor to most accurately predict current and future rewards. What distinguishes us from past work [63; 12; 17] is a new desideratum of _predictability:_ that, conditioned on past latents and actions, future latent dynamics should look _as deterministic as possible_. This is because an agent should try to maximize its control over task-relevant parts of the state, whilst neglecting aspects of the environment that it cannot influence [20; 60]. RePo optimizes a novel loss which encourages _predictability_, thereby discarding a broad range of spurious variations in aspects of the environment which are out of the agent's control (e.g. changes in background, lighting, or visual traffic in the background). At the same time, by penalizing reward prediction error, we capture the _task-relevant_ aspects of the dynamics necessary for learning performant policies.
RePo implements a deceptively simple modification to recurrent state-space models for model-based RL [17; 59; 46]. We maximize mutual information (MI) between the current representation and _all_ future rewards, while minimizing the mutual information between the representation and observation. Instead of minimizing image reconstruction error, we optimize a variational lower bound on the MI-objective which tractably enforces that the learned observation encoder, latent dynamics and reward predictors are highly informative of reward, while ensuring latents are as _predictable_ as possible (in the sense described above). We demonstrate that the representations, and the policies built thereupon, learned through RePo succeed in environments with significant amounts of dynamic and uncontrollable distractors, as well as across domains with significant amounts of variability and complexity. Through ablations, we also validate the necessity of our careful algorithm design and optimization decisions.
While these learned representations enable more effective reinforcement learning in dynamic, complex environments, the visual encoders (point (i) above) mapping from observations into intermediate encodings suffer from distribution shift in new environments with novel visual features (e.g. a new background not seen at train time.) We propose a simple test-time adaptation scheme which uses (mostly) unlabeled test-time data to adapt the _visual encoders_ only, whilst keeping all other aspects of the RePo model fixed. Because RePo ensures resilience of the compressed latent representation at training time, modifying only the test-time visual encoders to match training time representations allows representations to recover optimal performance with only minor amounts of adaptation.
Concretely, the key contributions of this work are: **(1)** We propose a simple representation learning algorithm RePo for learning representations that are informative of rewards, while being as predictable as possible. This allows model-based RL to scale to dynamic, cluttered environments, avoiding reconstruction. **(2)** We show that while the learned encoders may be susceptible to distribution shift, they are amenable to a simple test-time adaptation scheme that can allow for quick adaptation in new environments. **(3)** We demonstrate the efficacy of RePo on a number of simulation and real-world domains with dynamic and diverse environments.
## 2 Related Work
Our work is related to a number of techniques for visual model-based reinforcement learning, but differs in crucial elements that allow it to scale to dynamic environments with spurious variations.
Figure 1: Reinforcement learning in environments with spurious variations - including dynamic elements like humans, changes in lighting and training across a range of visual appearances.
**Model-Based RL.** Though model-based RL began with low-dimensional, compact state spaces [26; 37; 27; 57], advances in visual model-based reinforcement learning [17; 19; 18; 44; 42; 21] learn latent representations and dynamics models from high dimensional visual feedback (typically via recurrent state-space models). Perhaps most relevant to RePo is Dreamer [17]. Section 4 explains the salient differences between Dreamer and RePo; notably, we eschew a reconstruction loss in pursuit of resilience to spurious variations. A closely related work is TD-MPC [22], which learns a task-oriented latent representation by predicting the value function. However, its representation may not discard irrelevant information and necessarily contains information about the policy.
**Representation Learning for Control.** There is a plethora of techniques for pretraining visual representations using unsupervised learning objectives [38; 34; 30; 32; 41; 49; 47; 13]. While these can be effective on certain domains, they do not take downstream tasks into account. Task-relevant representation learning for RL uses the reward function to guide representation learning, typically in pursuit of _value-equivalence_ (e.g. via bisimulation) [63; 8; 62; 12; 50; 22]. However, these approaches do little to explicitly counteract spurious variations. Our work aligns with a line of work that disentangles task-relevant and task-irrelevant components of the MDP. [7; 6] obtain provable guarantees for representation learning with exogenous distractors - parts of the state space whose dynamics is independent of the agent's actions. [56] introduces a more granular decomposition of the MDP across the task relevance and controllability axes. Our work, in contrast, does not impose a specific form on the spurious variations.
**Domain Adaptation.** Unsupervised domain adaptation adapts representations across visually different source and target domains [66; 58; 45; 11; 25]. These techniques predominantly adapt visual encoders by minimizing a distribution measure across source and training distributions, such as MMD [5; 33; 53; 24], KL divergence [67; 35] or Jensen-Shannon divergence [11; 52]. In [61], distribution matching was extended to sequential decision making. While domain adaptation settings typically assume that the source and target share an underlying marginal or joint distribution in a latent space, this assumption does not hold in online RL because the data is being collected incrementally through exploration, and hence the marginals may not match. Hence, our test-time adaptation technique, as outlined in Section 4.1, introduces a novel support matching objective that enforces the test distribution to be in support of the train distribution, without trying to make the distributions identical.
## 3 Preliminaries
**MDPs.** A (discounted) MDP \(\mathcal{M}=(\mathcal{S},\mathcal{A},\gamma,P,P_{0},r)\) consists of a state space \(\mathcal{S}\), action space \(\mathcal{A}\), discount factor \(\gamma\in(0,1)\), transition kernel \(P(\cdot,\cdot):\mathcal{S}\times\mathcal{A}\rightarrow\triangle(\mathcal{S})\), initial state distribution \(P_{0}\in\triangle(\mathcal{S})\), and reward function \(r(\cdot,\cdot):\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) (assumed deterministic for simplicity). A policy \(\pi:\mathcal{S}\rightarrow\triangle(\mathcal{A})\) is a mapping from states to distributions over actions. We let \(\mathbb{E}_{\mathcal{M}}^{\pi}\) denote expectations under \(s_{0}\sim P_{0}\), \(a_{t}\sim\pi(s_{t})\), and \(s_{t+1}\sim P(s_{t},a_{t})\); the value is \(V_{\mathcal{M}}^{\pi}(s):=\mathbb{E}_{\mathcal{M}}^{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\mid s_{0}=s\right]\), and \(V_{\mathcal{M}}^{\pi}=\mathbb{E}_{s_{0}\sim P_{0}}[V_{\mathcal{M}}^{\pi}(s_{0})]\). The goal is to learn a policy \(\pi\) that maximizes the expected discounted return \(\mathbb{E}_{\mathcal{M}}^{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\right]\), as in most RL problems, but we do so based on a belief state as explained below.
**Visual RL and Representations.** For our purposes, we take states \(s_{t}\) to be visual observations \(s_{t}\equiv o_{t}\in\mathcal{O}\); for simplicity, we avoid explicitly describing a POMDP formulation - this can be subsumed either by introducing a belief-state [68], or by assuming that images (or sequences thereof, e.g. to estimate velocities) are sufficient to determine rewards and transitions [36]. The states \(o_{t}\) may be high-dimensional, so we learn encoders \(h:\mathcal{O}\rightarrow\mathcal{X}\) to an encoding space \(\mathcal{X}\). We compress these encodings \(x_{t}\) further into latent states \(z_{t}\), described at length in our method in Section 4.
**Spurious variation.** By _spurious variation_, we informally mean the presence of features of the states \(s_{t}\) which are irrelevant to our task, but which do vary across trajectories. These can take the form of explicit _distractors_ - either _static_ objects (e.g. background wall-paper) or _dynamic_ processes (e.g. video coming from a television) that do not affect the part of the state space involved in our task [7; 6]. Spurious variation can also encompass processes which are not so easy to disentangle with the state: for example, lighting conditions will affect all observations, and hence will affect the appearance of transition dynamics.
Consider the following canonical example: an MDP with state space \(\mathcal{S}_{1}\times\mathcal{S}_{2}\), where for \(s=(s^{(1)},s^{(2)})\in\mathcal{S}_{1}\times\mathcal{S}_{2}\), the reward \(r(s,a)\) is a function \(\bar{r}(s^{(1)},a)\) only of the projection onto \(\mathcal{S}_{1}\). Moreover, suppose that \(\mathbb{P}[(s^{+})^{(1)}\in\cdot\mid s,a]\), where \(s^{+}\sim P(s,a)\), is a distribution \(\bar{P}(s^{(1)},a)\) again depending only on \(s^{(1)}\). Then, the states \(s^{(2)}\) can be viewed as exhibiting spurious variation. For example, if
\(s^{(1)}\) is a Lagrangian state and \(s^{(2)}\) is a static background, then it is clear that transitions of Lagrangian state and reward do not depend on \(s^{(2)}\). Our template also encompasses dynamic distractors; e.g. a television show in the background has its own dynamics, and these also do not affect reward or physical dynamics. Even varying lighting conditions can be encompassed in this framework: the shadows in a scene or brightness of the environment should not affect reward or physics, even though these visual features themselves evolve dynamically in response to actions and changes in state. That is, there are examples of spurious variation where \(s^{(1)}\) (e.g. Lagrangian state) affect \(s^{(2)}\) (e.g. certain visual features), but not the other way round. In all cases, "spurious" implies that states \((s^{(2)}_{t})_{t\geq 0}\), and their possible variations due to different environments, have no bearing on optimal actions.
## 4 RePo: Parsimonious Representation Learning without Reconstruction
We propose a simple technique for learning task-relevant representations that encourages parsimony by removing all information that is neither pertinent to the reward nor the dynamics. Such representations discard information about spurious variations, while retaining the information actually needed for decision making.
To describe our method formally, we introduce some notation (also shown in Fig 2). Let \(\mathcal{O}\) be the space of image observations, \(\mathcal{X}\) the space of encoded observations, where \(h:\mathcal{O}\rightarrow\mathcal{X}\) is the encoding function from image observations to encoded observations, and \(\mathcal{Z}\) the space of latent representations. Note that \(x_{t+1}\) is simply the instantaneous encoding of the image \(o_{t+1}\), i.e. \(x_{t+1}=h(o_{t+1})\), whereas the latent representation \(z_{t+1}\) at time step \(t+1\) is an aggregation of the current encoding \(x_{t+1}\), the previous latent \(z_{t}\), and the action \(a_{t}\). Let \(\mathscr{P}_{\mathrm{post}}\) denote the space of "posteriors" on latent dynamics \(z\) of the form \(p(z_{t+1}\in\cdot\mid z_{t},a_{t},x_{t+1})\), where \(z_{t},z_{t+1}\in\mathcal{Z}\), \(a_{t}\in\mathcal{A}\), \(x_{t+1}\in\mathcal{X}\), and where the initial latent \(z_{0}\sim p_{0}\) has some initial distribution \(p_{0}\). In words, the latent posterior uses the past latent state and action, in addition to the _current encoding_, to determine the current latent. Control policies and learned dynamics models act on this latent representation \(z_{t+1}\), and not simply the image encoding \(x_{t+1}\), so as to incorporate historical information.
Let \(\mathcal{D}_{\mathrm{buf}}\) denote the distribution over experienced actions, observations and rewards from the environment (\((a_{1:T},o_{1:T},r_{1:T})\sim\mathcal{D}_{\mathrm{buf}}\)). For \(p\in\mathscr{P}_{\mathrm{post}}\), let \(\mathbb{E}_{p,h}\) denote expectation of \((a_{1:T},o_{1:T},r_{1:T})\sim\mathcal{D}_{\mathrm{buf}}\), \(x_{t}=h(o_{t})\) and the latents \(z_{t+1}\sim p(\cdot\mid z_{t},a_{t},x_{t+1})\) drawn from the latent posterior, with the initial latent \(z_{0}\sim p_{0}\). Our starting proposal is to optimize the latent posterior \(p\) and image encoder \(h\) such that information between the latent representation and future reward is maximized, while bottlenecking [1] the information between the latent and the observation:
\[\max_{p,h}\mathrm{I}_{p,h}(z_{1:T};r_{1:T}\mid a_{1:T})\;\;\text{s.t.}\;\; \mathrm{I}_{p,h}(z_{1:T};o_{1:T}\mid a_{1:T})<\epsilon. \tag{4.1}\]
Above, \(\mathrm{I}_{p,h}(z_{1:T};r_{1:T}\mid a_{1:T})\) denotes the mutual information between latents and rewards, conditioned on actions, under the \(\mathbb{E}_{p,h}\) distribution, and \(\mathrm{I}_{p,h}(z_{1:T};o_{1:T}\mid a_{1:T})\) measures the information
Figure 2: RePo learns a latent representation resilient to spurious variations by predicting the dynamics and the reward while constraining the information flow from images.
between latents and observations under \(\mathbb{E}_{p,h}\) as well. Thus, (4.1) aims to preserve large mutual information with rewards whilst minimizing information stored from observations.
Optimizing mutual information is intractable in general, so we propose variational relaxations of both quantities (proven in Appendix B):
\[\mathrm{I}_{p,h}(z_{1:T};r_{1:T}\mid a_{1:T}) \geq\mathbb{E}_{p,h}\left[\sum_{t=1}^{T}\log q_{\mathrm{r}}(r_{t} \mid z_{t})\right] \tag{4.2}\] \[\mathrm{I}_{p,h}(z_{1:T};o_{1:T}\mid a_{1:T}) \leq\mathbb{E}_{p,h}\left[\sum_{t=0}^{T-1}\mathrm{D}_{\mathrm{KL} }(p(\cdot\mid z_{t},a_{t},x_{t+1})\parallel q_{\mathrm{z}}(\cdot\mid z_{t},a_ {t}))\right], \tag{4.3}\]
where \(q_{\mathrm{r}}\) and \(q_{\mathrm{z}}\) are variational families representing beliefs over rewards \(r_{t}\) and latent representations \(z_{t+1}\), respectively. We refer to \(z_{t+1}\sim p(\cdot\mid z_{t},a_{t},x_{t+1})\) as the _latent posterior_, because it conditions on the latest encoded observation \(x_{t+1}=h(o_{t+1})\). We call the variational approximation \(q_{\mathrm{z}}(\cdot\mid z_{t},a_{t})\) the _latent prior_ because it does not use the current observation \(o_{t+1}\) (or it's encoding \(x_{t+1}\)) to determine \(z_{t+1}\). Note that the right hand side of Eq. (4.3) depends on \(h\) through \(x_{t+1}=h(o_{t+1})\), and thus gradients of this expression incorporate gradients through \(h\).
**The magic of Eq. (4.3).** The upper bound in (4.3) reveals a striking feature which is at the core of our method: that, in order to reduce extraneous information in the latents \(z_{t}\) about observations \(o_{t}\), it is enough to match the latent posterior \(z_{t+1}\sim p(\cdot\mid z_{t},a_{t},x_{t+1})\) to our latent prior \(q_{\mathrm{z}}(\cdot\mid z_{t},a_{t})\) that _does not condition on current \(x_{t+1}\)_. Elements that are spurious variations can be captured by \(p(\cdot\mid z_{t},a_{t},x_{t+1})\), but not by \(q_{\mathrm{z}}(\cdot\mid z_{t},a_{t})\), since \(q_{\mathrm{z}}\) is not informed by the latest observation encoding \(x_{t+1}\), and spurious variations are not predictable. To match the latent posterior and the latent prior, the latent representation must omit these spurious variations. For example, in an environment with a TV in the background, removing the TV images reduces next-step stochasticity of the environment. Thus, (4.3) encourages representations to omit television images.
**The relaxed bottleneck.** The above discussion may make it seem as if we suffer in the presence of task-relevant stochasticity. However, by replacing the terms in Eq. (4.1) with their relaxations in Eqs. (4.2) and (4.3), we only omit the stochasticity that is not useful for reward-prediction. We make these substitutions, and move to a penalty-formulation amenable to constrained optimization methods like dual-gradient descent [2]. The resulting objective we optimize to learn the latent posterior \(p\), latent prior \(q_{\mathrm{z}}\), reward predictor \(q_{\mathrm{r}}\) and observation encoder \(h\) jointly is:
\[\max_{p,q_{\mathrm{r}},q_{\mathrm{z}},h}\min_{\beta}\mathbb{E}_{p,h}\left[\sum_ {t=1}^{T}\log q_{\mathrm{r}}(r_{t}\mid z_{t})\right]+\beta\left(\mathbb{E}_{p, h}\left[\sum_{t=0}^{T-1}\mathrm{D}_{\mathrm{KL}}(p(\cdot\mid z_{t},a_{t},x_{t+1}) \parallel q_{\mathrm{z}}(\cdot\mid z_{t},a_{t}))\right]-\epsilon\right). \tag{4.4}\]
**Implementation details.** We parameterize \(p\) and \(q\) using a recurrent state-space model (RSSM) [17]. The RSSM consists of an encoder \(h_{\theta}(x_{t}\mid o_{t})\), a latent dynamics model \(q_{\theta}(z_{t+1}\mid z_{t},a_{t})\) corresponding to the prior, a representation model \(p_{\theta}(z_{t+1}\mid z_{t},a_{t},x_{t+1})\) corresponding to the posterior, and a reward predictor \(q_{\theta}(r_{t}\mid z_{t})\). We optimize (4.4) using dual gradient descent. In addition, we use the KL balancing technique introduced in Dreamer V2 [19] to balance the learning of the prior and the posterior. Concretely, we compute the KL divergence in Eq. (4.4) as \(\mathrm{D}_{\mathrm{KL}}(p\parallel q)=\alpha\mathrm{D}_{\mathrm{KL}}(\lfloor p\rfloor\parallel q)+(1-\alpha)\mathrm{D}_{\mathrm{KL}}(p\parallel\lfloor q\rfloor)\), where \(\lfloor\cdot\rfloor\) denotes the stop-gradient operator and \(\alpha\in[0,1]\) is the balancing parameter. With the removal of reconstruction, the KL balancing parameter becomes especially important, as shown by our ablation in Sec. 5.
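To make the optimization concrete, the sketch below shows one plausible way to compute the reward likelihood term, the KL-balanced divergence, and a dual update for \(\beta\) in PyTorch. The distribution interfaces, the \(\exp(\log\beta)\) parameterization of the multiplier, and all names are our assumptions, not the authors' released code.

```python
# Hedged PyTorch sketch of the Eq. (4.4) loss with KL balancing and a dual
# update of the multiplier beta (kept non-negative via beta = exp(log_beta)).
import torch
import torch.distributions as D

def repo_losses(post, prior, reward_dist, rewards, log_beta, eps, alpha=0.8):
    """post, prior: Normal distributions over z_t; reward_dist: Normal over r_t."""
    reward_ll = reward_dist.log_prob(rewards).mean()
    sg = lambda d: D.Normal(d.mean.detach(), d.stddev.detach())   # stop-gradient copy
    kl = (alpha * D.kl_divergence(sg(post), prior)
          + (1 - alpha) * D.kl_divergence(post, sg(prior))).mean()
    beta = log_beta.exp()
    model_loss = -reward_ll + beta.detach() * kl    # model maximizes reward LL, pays KL penalty
    beta_loss = -log_beta * (kl.detach() - eps)     # gradient descent on this raises beta when KL > eps
    return model_loss, beta_loss
```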
**Policy learning.** As is common in the literature on model-based reinforcement learning [19, 17, 18], our training procedure alternates between (1) _Representation Learning:_ learning a representation \(z\) by solving the optimization problem outlined in Eq. (4.4) to infer a latent posterior \(p(z_{t+1}\mid z_{t},a_{t},x_{t+1})\), a latent prior \(q_{\mathrm{z}}(z_{t+1}\mid z_{t},a_{t})\), an encoder \(x_{t}=h(o_{t})\) and a reward predictor \(q_{r}(r_{t}\mid z_{t})\), and (2) _Policy Learning:_ using the inferred representation, dynamics model and reward predictor to learn a policy \(\pi_{\phi}(a_{t}\mid z_{t})\) for control. With the latent representation and dynamics model, we perform actor-critic policy learning [16, 10] by rolling out trajectories in the latent space. The critic \(V_{\psi}(z)\) is trained to predict the discounted cumulative reward given a latent state, and the actor \(\pi_{\phi}(a\mid z)\) is trained to take the action that maximizes the critic's prediction. While policy learning is carried out entirely using the latent prior as the dynamics model, during policy execution (referred to as inference in Fig. 2), we infer the posterior distribution \(p(z_{t+1}\mid z_{t},a_{t},x_{t+1})\) over latent representations from the current observation, and use this to condition the policy acting in the world. We refer readers to Appendix C for further details.
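The imagination-based policy learning step can be sketched as follows. This is a simplifying illustration only: it assumes the world-model parameters are frozen, uses plain bootstrapped discounted returns rather than any particular return estimator, and all module interfaces (actor and prior returning distributions with `rsample`, critic and reward model returning per-latent scalars) are our assumptions.

```python
# Simplified sketch of actor-critic learning in latent imagination.
import torch

def imagination_losses(z0, prior, reward_model, actor, critic, horizon=15, gamma=0.99):
    zs, rewards = [z0], []
    for _ in range(horizon):
        a = actor(zs[-1]).rsample()                # reparameterized action sample
        zs.append(prior(zs[-1], a).rsample())      # imagined next latent from the prior
        rewards.append(reward_model(zs[-1]))       # predicted reward, shape (B,)
    ret = critic(zs[-1]).detach()                  # bootstrap value at the horizon
    returns = []
    for r in reversed(rewards):                    # discounted returns, backwards in time
        ret = r + gamma * ret
        returns.insert(0, ret)
    returns = torch.stack(returns)                 # (H, B)
    states = torch.stack(zs[:-1])                  # (H, B, D)
    critic_loss = ((critic(states.detach()) - returns.detach()) ** 2).mean()
    actor_loss = -returns.mean()                   # actor maximizes imagined return
    return actor_loss, critic_loss
```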
**Comparison to Dreamer, DeepMDP, and Bisimulation.** Dreamer [17] was first derived to optimize pixel-reconstruction, leading to high-fidelity dynamics but susceptibility to spurious variations. Naively removing pixel reconstruction from Dreamer, however, leads to poor performance [17]. Our objective can be interpreted as modifying Dreamer so as to maintain sufficiently accurate dynamics, but without the fragility of pixel-reconstruction. DeepMDP [12] sets the latents \(z_{t}\) to exactly the image encodings \(x_{t}=h(o_{t})\). It learns a dynamics \(\bar{P}:\mathcal{X}\times\mathcal{A}\rightarrow\triangle(\mathcal{X})\) such that the distribution \(\bar{x}_{t+1}\sim\bar{P}(h(o_{t}),a_{t})\) is close to \(x_{t+1}\sim h(o_{t+1})\), \(o_{t+1}\sim P^{\star}(o_{t},a_{t})\), where \(P^{\star}\) denotes a ground-truth transition dynamics; this enforces consistency of dynamics under encoding. The above distributions are viewed as conditional on _past_ observation and action, and as a result, highly non-parsimonious representations such as the identity are valid under this objective. Bisimulation [63] learns an optimal representation in the sense that a perfect bisimulation metric does not discard any relevant information about an MDP. However, there is no guarantee that it will disregard irrelevant information. Indeed, the identity mapping induces a trivial bisimulation metric. Hence, Bisimulation compresses only by reducing the dimensionality of the latent space. In contrast, we further compress the encodings \(x_{t}\) into latents \(z_{t}\) so as to enforce that the latent prior \(q_{\text{z}}(\cdot\mid a_{t},z_{t})\) is close to the latest observation-dependent posterior distribution \(p(\cdot\mid z_{t},a_{t},x_{t+1})\). As mentioned in Eq. (4.3), this ensures information compression and invalidates degenerate representations such as the identity mapping.
### Transferring Invariant Latent Representations via Test-Time Adaptation
While resilient to spurious variations seen during training, our learned latents \(z_{t}\) - and hence the policies which depend on them - may not generalize to new environments which exhibit systematic distribution shift, e.g. lighting changes or background changes. The main source of degradation is that the encoder \(h:\mathcal{O}\rightarrow\mathcal{X}\) may observe images that it has not seen at train time; thus the latents, which depend on observations through \(x_{t}=h(o_{t})\), may behave erratically, even when the system dynamics remain unchanged.
Relying on the resilience of our posteriors \(p\) over latents \(z_{t}\) introduced by RePo, we propose a test-time adaptation strategy that adjusts only the encoder \(h\) to the new environment, whilst leaving \(p\) fixed. A natural approach is to apply unsupervised domain adaptation methods [66; 58] to adapt the visual encoder \(h\) to \(h_{\text{test}}\). These domain adaptation techniques typically operate in supervised learning settings and impose distributional constraints between source and target domains [61; 25], where the distributions of training and test data are stationary and assumed to be the same in _some_ feature space. A distribution matching constraint would be:
\[\min_{h_{\text{test}}(\cdot)}\mathrm{D}(\mathcal{P}_{\text{train}}\parallel \mathcal{P}_{\text{test}})\text{ s.t. }\mathcal{P}_{\text{test}}=h_{\text{test}}\circ\mathcal{D}_{\text{test}}, \mathcal{P}_{\text{train}}=h\circ\mathcal{D}_{\text{train}}. \tag{4.5}\]
In Eq. (4.5), we consider matching the distributions over encodings \(x\) of observations \(o\). Specifically, we assume \(\mathcal{D}_{\text{train}}\) and \(\mathcal{D}_{\text{test}}\) denote training and test-buffer distributions over observations \(o\), \(\mathcal{P}_{\text{train}}=h_{\text{train}}\circ\mathcal{D}_{\text{train}}\) denotes the distribution of \(x=h_{\text{train}}(o)\) where \(o\sim\mathcal{D}_{\text{train}}\) is encoded by the train-time
Figure 3: Depiction of test-time adaptation scheme for latent alignment via support constraints. During exploration, the marginal distributions may not match perfectly, so we match the supports of the latent features instead, using a _reweighted_ distribution constraint.
encoder \(h_{\mathrm{train}}\), and \(\mathcal{P}_{\mathrm{test}}=h_{\mathrm{test}}\circ\mathcal{D}_{\mathrm{test}}\) denotes encodings under a test-time encoder \(h_{\mathrm{test}}(\cdot)\) over which we optimize. Here, \(\mathrm{D}(\cdot,\cdot)\) denotes an \(f\)-divergence, such as the \(\chi^{2}\)-divergence.
**Support Constraint.** (4.5) fails to capture that the encoded distributions at train and test time _differ_ at the start of our adaptation phase: suboptimal encoder performance at the start of the adaptation phase causes the policy to visit sub-optimal regions of state space not seen at train time. Thus, it may be impossible to match the distribution as in standard unsupervised domain adaptation. We therefore propose to replace (4.5) with a _support constraint_, enforcing that the distribution of \(h_{\mathrm{test}}\circ\mathcal{D}_{\mathrm{test}}\) is contained in the _support_ of \(h_{\mathrm{train}}\circ\mathcal{D}_{\mathrm{train}}\). We consider the following idealized objective:
\[\min_{\tau(\cdot)\geq 0,h_{\mathrm{test}}(\cdot)}\mathrm{D}(\tau\cdot \mathcal{P}_{\mathrm{train}}\parallel\mathcal{P}_{\mathrm{test}})\;\;\text{ s.t.}\;\;\mathbb{E}_{x\sim\mathcal{P}_{\mathrm{train}}}[\tau(x)]=1. \tag{4.6}\]
Here, by \(\tau\cdot\mathcal{P}_{\mathrm{train}}\), we mean the density of \(\mathcal{P}_{\mathrm{train}}=h_{\mathrm{train}}\circ\mathcal{D}_{\mathrm{train}}\) re-weighted by a function \(\tau(x)\). The constraints \(\mathbb{E}_{\mathcal{P}_{\mathrm{train}}}[\tau(x)]=1\) and \(\tau(\cdot)\geq 0\) ensure that this reweighted distribution is also a valid probability distribution. The reweighting operation \(\tau\cdot\mathcal{P}_{\mathrm{train}}\) seems intractable at first, but we show that if we take \(\mathrm{D}(\cdot,\cdot)=\chi^{2}(\cdot,\cdot)\) to be the \(\chi^{2}\) divergence, then Eq. (4.6) admits the following tractable Lagrangian formulation (we refer readers to [65] and Appendix B for a thorough derivation)
\[\min_{\tau(\cdot)\geq 0,h_{\mathrm{test}}(\cdot)}\max_{f(\cdot),\lambda}\mathbb{ E}_{\mathcal{P}_{\mathrm{train}}}[\tau(x)\cdot f(x)]-\mathbb{E}_{\mathcal{P}_{ \mathrm{test}}}\left[f(x)+\frac{1}{4}f(x)^{2}\right]+\lambda(\mathbb{E}_{ \mathcal{P}_{\mathrm{train}}}[\tau(x)]-1), \tag{4.7}\]
where above, \(\lambda\in\mathbb{R}\), \(f:\mathcal{X}\rightarrow\mathbb{R}\), and the objective depends on \(h_{\mathrm{test}}\) through the definition \(\mathcal{P}_{\mathrm{test}}=h_{\mathrm{test}}\circ\mathcal{D}_{\mathrm{test}}\). This objective is now a tractable saddle point optimization, which can be solved with standard stochastic optimization techniques. The optimization alternates between optimizing the reweighting \(\tau\) and the visual encoder \(h_{\mathrm{test}}\), and the dual variables \(f,\lambda\). Throughout adaptation, we freeze all other parts of the recurrent state space model and only optimize the encoder. We provide more intuition for the support constraint in Appendix E.
**Calibration.** We note that naively reweighting by \(\tau(\cdot)\) can cause degenerate encodings that collapse into one point. To prevent this, we regularize the support constraint by also ensuring that some set of paired "calibration" states across training and testing domains share the same encoding. We collect paired trajectories in the training and testing domains using actions generated by an exploration policy, and minimize the \(\ell_{2}\) loss between the training and testing encoding of each pair of observations. We defer the details of the complete optimization to Appendix C.
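A rough sketch of the resulting test-time objective, combining the \(\chi^{2}\)-based support constraint of Eq. (4.7) with the calibration term, is given below. The softplus parameterization of \(\tau\geq 0\), the network names, and the loss weighting are our assumptions; in practice, separate optimizers would alternate between the primal variables (\(h_{\mathrm{test}}\), \(\tau\)) and the dual variables (\(f\), \(\lambda\)), with all other world-model components frozen.

```python
# Hedged sketch of test-time encoder adaptation: chi^2 support constraint
# (Eq. 4.7) plus an L2 calibration term on paired train/test observations.
import torch
import torch.nn.functional as F

def adaptation_losses(o_train, o_test, o_cal_train, o_cal_test,
                      h_train, h_test, tau_net, f_net, lam, cal_weight=1.0):
    x_tr = h_train(o_train).detach()                  # frozen train-time encodings
    x_te = h_test(o_test)                             # test-time encodings being adapted
    tau = F.softplus(tau_net(x_tr))                   # enforce tau(x) >= 0
    f_tr, f_te = f_net(x_tr), f_net(x_te)
    lagrangian = (tau * f_tr).mean() \
                 - (f_te + 0.25 * f_te ** 2).mean() \
                 + lam * (tau.mean() - 1.0)
    calibration = ((h_test(o_cal_test) - h_train(o_cal_train).detach()) ** 2).mean()
    primal_loss = lagrangian + cal_weight * calibration   # minimized w.r.t. h_test and tau_net
    dual_loss = -lagrangian                               # minimized w.r.t. f_net and lam
    return primal_loss, dual_loss
```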
## 5 Experimental Evaluation
We conduct empirical experiments to answer the following research questions: (1) Does RePo enable learning in dynamic, distracted environments with spurious variations? (2) Do representations learned by RePo quickly adapt to new environments with test time adaptation? (3) Does RePo help learning in static, but diverse and cluttered environments?
Figure 4: Depiction of the environments being used for evaluation. **(Left):** the Distracted DeepMind Control suite [64], **(Top Right)**: Maniskill2 [15] environments with realistic backgrounds from Matterport [3]. **(Bottom Right)**: TurtleBot environment with two TVs playing random videos in the background.
**Evaluation domains.** We evaluate our method primarily in three different settings. (1) **Distracted DeepMind Control Suite**[64, 63] is a variant of DeepMind Control Suite where the static background is replaced with natural videos (Fig. 4). For adaptation experiments, we train agents on static undistracted backgrounds and adapt them to distracted variants. (2) **Realistic Maniskill** is a benchmark we constructed based on the Maniskill2 benchmark [15], but with realistic backgrounds from [3] to simulate learning in a diverse range of human homes. We solve three tasks - LiftCube, PushCube, and TurnFaucet - in a variety of background settings. (3) **Lazy TurtleBot** is a real-world robotic setup where a TurtleBot has to reach a goal location from egocentric observations in a furnished room. However, there are two TVs playing random videos to distract the "lazy" robot. We provide more details about the evaluation domains in Appendix D.
**Baselines.** We compare our method with a number of techniques that explicitly learn representations and use them for learning control policies. (1) **Dreamer**[17] is a state-of-the-art visual model-based RL method that learns a latent representation by reconstructing images. (2) **TIA**[9] renders Dreamer more robust to visual distractors by using a separate dynamics model to capture the task-irrelevant components in the environment. (3) **Denoised MDP**[56] further learns a factorized latent dynamics model that disentangles controllability and reward relevance. (4) **TD-MPC**[22] trains a latent dynamics model to predict the value function and uses a hybrid planning method to extract a policy. (5) **DeepMDP**[12] is a model-free method that learns a representation by predicting dynamics and reward, and then performs actor-critic policy learning on the learned representation. (6) Deep Bisimulation for Control (**DBC**) [63] is a model-free algorithm which encodes images into a latent space that preserves the bisimulation metric.
We also compare with a number of techniques for test-time adaptation of these representations. (1) **calibrated distribution matching**, a variant of the method proposed in Section 4.1, using a distribution matching constraint rather than a support matching one, (2) **uncalibrated support matching**, a variant of the method proposed in Section 4.1, using a support matching constraint but without using paired examples, (3) **uncalibrated distribution matching**, a variant of the method proposed in Section 4.1, using a distribution matching constraint, but without using paired examples, (4) invariance through latent alignment **ILA**[61], a technique for test-time adaptation of representations with distribution matching and enforcing consistency in latent dynamics, (5) **calibration**, a baseline that only matches the encodings of paired examples.
**Does RePo learn behaviors in environments with spurious variations?** We evaluate our method's ability to ignore spurious variations on a suite of simulated benchmark environments with dynamic visual backgrounds (Fig. 4); these are challenging because uncontrollable elements of the environment visually dominate a significant portion of the scene. Fig. 5 shows our method outperforms the baselines across six Distracted DeepMind Control environments, both in terms of learning speed and asymptotic performance. This implies that our method successfully learns latent representations resilient to spurious variations. Dreamer [17] attempts to reconstruct the dynamic visual distractors which is challenging in these domains. TIA [9] and Denoised MDP [56] see occasional success when
Figure 5: Results on distracted DeepMind control environments. These environments have spurious variations, and RePo is able to successfully learn in all of them, both faster and achieving higher asymptotic returns than prior representation learning methods.
they dissociate the task-relevant and irrelevant components, but they suffer from high variance and optimization failures. TD-MPC [22] is affected by spurious variations as its representations are not minimal. The model-free baselines DeepMDP [12] and DBC [63] exhibit lower sample efficiency on the more complex domains despite performing well on simpler ones.
To further validate RePo's ability to handle spurious variations in the real world, we evaluate its performance on Lazy TurtleBot, where a mobile robot has to navigate around a furnished room to reach the goal from egocentric observations (Fig. 4). To introduce spurious variations, we place two TVs playing random Youtube videos along the critical paths to the goal. As shown in Table 1, RePo is able to reach the goal with nontrivial success within 15K environment steps, whereas Dreamer fails to reach the goal. We provide details about the setup in Appendix D.
**Do representations learned by** RePo **transfer under distribution shift?** We evaluate the effectiveness of the test-time adaptation method described in Section 4.1 on three DeepMind Control domains: Walker Stand, Walker Walk, and Cheetah Run. We train the representation in environments with _static backgrounds_, and adapt the representation to domains with _natural video distractors_ (as shown in Fig. 4). For methods that use calibration between the source and target environments, we collect 10 trajectories of paired observations. Results are shown in Fig. 6. RePo adapts quickly across all three domains, nearly recovering the full training performance within 50k steps. Performance degrades if we replace the support constraint with a distribution matching objective, as it is infeasible to match distributions when the test-time distribution has insufficient exploration. We also observe that, without the calibration examples, both the support-constraint and distribution-matching variants perform worse, as the distributions tend to collapse. We found the addition of dynamics consistency in ILA to be ineffective, nor is calibration alone sufficient for adaptation.
**Does** RePo **learn across diverse environments with varying visual features?** While the previous two sections studied learning and adaptation in dynamic environments with uncontrollable elements, we also evaluate RePo on its ability to learn in a _diverse_ range of environments, each with a realistic and cluttered static background. Learning more effectively in these domains suggests that RePo focuses its representation capacity on the important elements of the task across environments, rather than trying to reconstruct the entire background for every environment.
\begin{table}
\begin{tabular}{l c c} \hline \hline & Success & Return \\ \hline RePo (Ours) & **62.5\%** & **-24.3** \\ \hline Dreamer [17] & 0.0\% & -61.7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on Lazy TurtleBot at 15K environment steps. RePo achieves nontrivial success whereas Dreamer fails to reach the goal.
Figure 6: Results on adaptation from static environments to dynamic environments in Deepmind control. RePo with calibrated support constraints outperforms ablations and previous techniques for domain adaptation.
Figure 7: Results of training agents on varying static environments in Maniskill [15]. RePo is able to learn more quickly and efficiently than alternatives even in static domains.
We test on three robotic manipulation tasks - LiftCube, PushCube, and TurnFaucet with realistic backgrounds depicted in Fig. 4. As shown in Fig. 7, our method achieves saturating performance across all three tasks. Dreamer [17] spends its representation capacity memorizing backgrounds and is unable to reach optimal task performance. TIA [9] suffers from high variance and occasionally fails to dissociate task-relevant from task-irrelevant features. Denoised MDP [56], TD-MPC [22], and DBC [63] learn to ignore the background in two of the tasks but generally lag behind RePo in terms of sample efficiency. DeepMDP [12] fails to learn meaningful behavior in any task.
**Visualizing representations learned by RePo.** To decipher our representation learning objective, we probe the learned representations by post-hoc training a separate image decoder to reconstruct image observations from the latents. We visualize the results in Fig. 9 and compare them with Dreamer reconstructions [17]. Our representation contains little information about background but is capable of reconstructing the agent, implying that it contains only task-relevant information.
In addition to probing, we qualitatively compare the latent states of RePo and Dreamer by visualizing their top two principal components. We collect the same trajectory across all backgrounds in Maniskill and visualize the final recurrent latent state inferred by RePo and Dreamer respectively. As shown in Fig. 10, RePo produces more compact latent representations than Dreamer, meaning the latent states encode less information about background variations. This enables RePo to share data across different backgrounds, explaining its superior sample efficiency compared to baselines.
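A minimal sketch of this principal-component probe is given below. It is our own illustration rather than the paper's code; the latent dimensions and the use of scikit-learn's PCA are assumptions.

```python
# Sketch (ours): project per-background latent states onto their top two
# principal components, as in the qualitative comparison above.
import numpy as np
from sklearn.decomposition import PCA

def top2_components(latents):
    """latents: array of shape (num_backgrounds, latent_dim)."""
    return PCA(n_components=2).fit_transform(np.asarray(latents))

# Placeholder latents standing in for each method's final recurrent state per background.
repo_2d = top2_components(np.random.randn(12, 200))
dreamer_2d = top2_components(np.random.randn(12, 1024))
```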
**Ablation experiments.** We conduct ablation experiments to determine the effect of hyperparameters in Fig. 8. As we can see, the performance is crucially dependent on the information bottleneck \(\epsilon\), as well as KL balancing. We refer readers to Appendix E for a more thorough discussion.
## 6 Discussion
This work presents RePo, a technique for learning parsimonious representations that are resilient to spurious variations. Our representation is effective for learning in dynamic, distracted environments. While the representation is subject to degradation under distribution shift, it can be quickly adapted to new domains by a semi-supervised test-time adaptation procedure. A limitation of our method is that the learned dynamics model is no longer task-agnostic, as it only captures task-relevant information. This can potentially be addressed by simultaneously predicting multiple reward objectives. Our framework opens up several interesting directions for future research, such as: can a multi-task variant of RePo allow for representations applicable to a whole distribution of tasks? Can we apply our algorithm in a continual learning setup? We believe our method holds promise in these more general settings, especially for real robots deployed into dynamic, human-centric environments.
Figure 8: Ablating objectives showing the importance of information bottleneck and KL balancing described in Section 4.
Figure 10: Top two principal components of RePo and Dreamer’s latent representations across different backgrounds. RePo’s latent representation is more compact than Dreamer’s, which enables data sharing.
### Acknowledgments and Disclosure of Funding
We would like to thank Marius Memmel, Max Balsells, and many other members of the WEIRD Lab at University of Washington for valuable feedback and discussions.
|
2305.19606 | Lattice paths in Young diagrams | Fill each box in a Young diagram with the number of paths from the bottom of
its column to the end of its row, using steps north and east. Then, any square
sub-matrix of this array starting on the south-east boundary has determinant
one. We provide a - to our knowledge - new bijective argument for this result.
Using the same ideas, we prove further identities involving these numbers which
correspond to an integral orthonormal basis of the inner product space with
Gram matrix given by the array in question. This provides an explicit answer to
a question (listed as unsolved) raised in Exercise 6.27 c) of Stanley's
Enumerative Combinatorics. | Thomas K. Waring | 2023-05-31T07:23:05Z | http://arxiv.org/abs/2305.19606v1 | # Lattice paths in Young diagrams
###### Abstract
Fill each box in a Young diagram with the number of paths from the bottom of its column to the end of its row, using steps north and east. Then, any square sub-matrix of this array starting on the south-east boundary has determinant one. We provide a -- to our knowledge -- new bijective argument for this result. Using the same ideas, we prove further identities involving these numbers, which correspond to an integral orthonormal basis of the inner product space with Gram matrix given by the array in question. This provides an explicit answer to a question (listed as unsolved1) raised in Exercise 6.27 c) of Stanley's Enumerative Combinatorics.
Footnote 1: In an addendum [14, p. 584], Stanley notes that Robin Chapman settled the _existence_ problem stated in the exercise. This argument doesn’t appear to be available anywhere, and in this note we provide the required object explicitly.
Here we consider a problem raised in Exercises 6.26 and 6.27 of [14, p. 232]. These problems are solved (see [13, §3 Theorem 2], [15] and the solutions on [14, p. 267]), but here we give a concise bijective proof, using the Lindstrom-Gessel-Viennot lemma.
Let \(D\) be a Young diagram of a partition \(\lambda\), and fill each box \((i,j)\in D\) (numbering "matrix-wise": down then across) with the number of paths from \((\lambda^{\prime}_{j},j)\) to \((i,\lambda_{i})\), using steps north and east, and staying within the diagram \(D\). That is, \((i,j)\) is filled with the number of paths from the lowest square in its column to the rightmost square in its row. Call this number \(D_{i,j}\). For example, with \(\lambda=(5,4,3,3)\):
\[\begin{array}{|c|c|c|c|c|}\hline 16&7&2&1&1\\ \hline 6&3&1&1\\ \hline 3&2&1\\ \hline 1&1&1\\ \hline\end{array} \tag{1}\]
Then, the matrix formed by any square sub-array with a 1 in the lower right has determinant 1. The same array of integers arises in discussions of so-called ballot sequences [13, §1], and of Young's lattice of partitions [14, p. 223]. For instance, from the diagram in eq. (1) we have:
\[\det\begin{pmatrix}16&7&2\\ 6&3&1\\ 3&2&1\\ \end{pmatrix}=1.\]
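As a quick sanity check (ours, not part of the original argument), the array \(D\) and the determinant above can be reproduced with a short computation; the helper names below are hypothetical.

```python
from functools import lru_cache

lam = (5, 4, 3, 3)                                    # the running example
lam_conj = tuple(sum(1 for p in lam if p >= c) for c in range(1, lam[0] + 1))

def in_diagram(i, j):                                 # matrix-wise (row, column), 1-indexed
    return 1 <= i <= len(lam) and 1 <= j <= lam[i - 1]

@lru_cache(maxsize=None)
def paths(i, j, r, c):
    """North/east lattice paths from box (i, j) to box (r, c) staying inside the diagram."""
    if not in_diagram(i, j):
        return 0
    if (i, j) == (r, c):
        return 1
    return paths(i - 1, j, r, c) + paths(i, j + 1, r, c)   # step north or east

D = [[paths(lam_conj[j - 1], j, i, lam[i - 1]) for j in range(1, lam[i - 1] + 1)]
     for i in range(1, len(lam) + 1)]                 # [[16, 7, 2, 1, 1], [6, 3, 1, 1], [3, 2, 1], [1, 1, 1]]

def det(M):                                           # integer determinant by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k] * det([row[:k] + row[k + 1:] for row in M[1:]])
               for k in range(len(M)))

print(det([row[:3] for row in D[:3]]))                # 1, as in the worked example
```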
This result follows directly from the following lemma, due to Lindstrom-Gessel-Viennot.
**Lemma 1**.: Let \(G\) be a locally finite directed acyclic graph, and \(A=\{a_{1},\ldots,a_{n}\}\) and \(B=\{b_{1},\ldots,b_{n}\}\) sets of _source_ and _destination_ vertices, respectively. Write \(e(a,b)\) for the number of paths from \(a\) to \(b\) in \(G\), and define a matrix \(M\) by \(M_{i,j}=e(a_{i},b_{j})\). Then,
\[\det M=\sum_{P=P_{1},\ldots,P_{n}}\operatorname{sgn}(\sigma_{P}),\]
where the sum is over the collection of \(n\)-tuples of vertex-disjoint paths \((P_{1},\dots,P_{n})\) in \(G\), where \(\sigma_{P}\) is a permutation of \([n]\), and \(P_{i}\) is a path from \(a_{i}\) to \(b_{\sigma_{P}(i)}\).
Proof.: See [1, Chapter 29].
To see this, let \(G\) be the graph with the boxes of \(D\) as vertices, and directed edges from each box to its northern and eastern neighbours. Given a square \(n\times n\) sub-array as above, let \(a_{1},\ldots,a_{n}\) be the "feet" of the columns of \(D\) corresponding to the columns of \(M\), and \(b_{1},\ldots,b_{n}\) the ends of the rows. Any path system \(P\) as above must have \(\sigma_{P}=\operatorname{id}\), as a pair of paths \(a_{i}\to b_{j}\) and \(a_{j}\to b_{i}\) (with \(i\neq j\)) must share a vertex. Moreover, there is exactly one vertex-disjoint tuple \(P\) of paths with \(\sigma_{P}=\operatorname{id}\). The \(1\) in the lower right of \(M\) forces the path \(a_{n}\to b_{n}\) to be a "hook" up then right. This implies the same of the path \(a_{n-1}\to b_{n-1}\) and so forth. The unique collection of paths in our running example is (poorly) rendered in eq. (2).
\[\begin{array}{|c|c|c|c|c|}\hline\Rsh&\to&\to&\to&\to\\ \hline\uparrow&\Rsh&\to&\to&\\ \hline\uparrow&\uparrow&\Rsh&\\ \hline\uparrow&\uparrow&\uparrow\\ \hline\end{array} \tag{2}\]
Exercise 6.27 offers an extension, which is also resolved by our method. Suppose that \(D\) is self-conjugate (i.e. \(\lambda=\lambda^{\prime}\)), and let \(n\) be the size of the Durfee square of the diagram \(D\) -- that is, the largest \(n\) such that \(\lambda_{n}\geq n\). Let \(x_{1},\dots,x_{n}\) be a basis for a real vector space \(V\), and define an inner product on \(V\) by
\[\langle x_{i},x_{j}\rangle=D_{i,j}.\]
We exhibit an integral orthonormal basis for \(V\). If \(G_{k}=\det[D_{i,j}]_{k\leq i,j\leq n}\) is the "Gram determinant", then, using Cramer's rule, the result of applying the Gram-Schmidt process to the vectors \(x_{n},x_{n-1},\dots\) (in that order) is a basis \(y_{n},\dots,y_{1}\) of \(V\) given by:
\[G_{j+1}\cdot y_{j}=\det\begin{pmatrix}x_{j}&\langle j,j+1\rangle&\dots&\langle j,n\rangle\\ x_{j+1}&\langle j+1,j+1\rangle&\dots&\langle j+1,n\rangle\\ \vdots&\vdots&&\vdots\\ x_{n}&\langle n,j+1\rangle&\dots&\langle n,n\rangle\end{pmatrix}=\det\begin{pmatrix}x_{j}&D_{j,j+1}&\dots&D_{j,n}\\ x_{j+1}&D_{j+1,j+1}&\dots&D_{j+1,n}\\ \vdots&\vdots&&\vdots\\ x_{n}&D_{n,j+1}&\dots&D_{n,n}\end{pmatrix}\]
Observe that the matrix in the formal determinant given here is the \((n-j+1)\times(n-j+1)\) submatrix of the Durfee square of \(D\), with the first column replaced by \(x_{j},\dots,x_{n}\). The result above implies that every Gram determinant \(G_{k}\) equals \(1\); in particular \(G_{j+1}=1\), so the basis \(y_{1},\dots,y_{n}\) is integral. The squared norm of \(y_{j}\) is \(G_{j}/G_{j+1}=1\).
Using the above interpretation of determinants in terms of lattice paths, we can derive the coefficients explicitly. Expanding our expression by cofactors, we obtain an expression of the form \(y_{j}=\sum_{i=j}^{n}(-1)^{i-j}c_{ij}x_{i}\), with coefficients
\[c_{ij}=\det\begin{pmatrix}D_{j,j+1}&\dots&D_{j,n}\\ \vdots&&\vdots\\ \widehat{D_{i,j+1}}&\dots&\widehat{D_{i,n}}\\ \vdots&&\vdots\\ D_{n,j+1}&\dots&D_{n,n}\end{pmatrix},\]
where the hat denotes omitting that row. This is the path matrix from \(a_{j+1},\ldots,a_{n}\) to \(b_{j},\ldots,\widehat{b_{i}},\ldots,b_{n}\). For example, with \(j=1\) and \(i=2\), using the tableau given above, \(c_{ij}\) is the path determinant \(\det\begin{pmatrix}7&2\\ 2&1\end{pmatrix}=3\).
First observe that, again, we can restrict ourselves to tuples \((P_{j},\ldots,P_{n})\) with \(\sigma_{P}=\mathrm{id}\), for the same reason as above. Secondly, for any \(k>i\), the path \(P_{k}\) from \(a_{k}\to b_{k}\) is uniquely determined (indeed, it is the same "hook" described in the original problem). For each \(k<i\), the path \(P_{k}:a_{k+1}\to b_{k}\) is determined by a number \(m_{k}\) so that it has the form:
\[(\lambda_{k+1}^{\prime},k+1),\ldots,(k+1,k+1),\ldots,(k+1,m_{k}),(k,m_{k}), \ldots,(k,\lambda_{k}),\]
where \(k+1\leq m_{k}\leq\lambda_{k}\). In the above example, we have \(m_{k}\in\{2,3,4\}\), corresponding to the three possible paths \(a_{2}\to b_{1}\), which turn north at column \(2\), \(3\) or \(4\) respectively.
Since \(P_{k}\) cannot intersect \(P_{k+1}\), we have \(m_{k}<m_{k+1}\), and to avoid going outside the Young diagram, we must have \(m_{k}\leq\lambda_{k+1}\). In fact, since \(\lambda_{k}\geq\lambda_{k+1}\), applying the second requirement to \(m_{i-1}\) is sufficient. Therefore, the sequence \(m_{j},\ldots,m_{i-1}\) is uniquely determined by an \((i-j)\)-subset of \(\{j+1,\ldots,\lambda_{i}\}\). Since any such sequence determines a unique tuple \(P_{j},\ldots,P_{n}\), we have:
\[c_{ij}=\binom{\lambda_{i}-j}{i-j}.\]
Explicitly, this gives us the expansion:
\[y_{j}=\sum_{i=j}^{n}(-1)^{i-j}\binom{\lambda_{i}-j}{i-j}x_{i}.\]
In this example, we glossed over the requirement that \(\lambda\) be self-conjugate, which allows for the interpretation of the above as an inner product. The argument goes through regardless, demonstrating the following identity for \(i\geq j\):
\[\langle y_{j},x_{i}\rangle=\sum_{k=j}^{n}(-1)^{j-k}D_{ki}\binom{\lambda_{k}-j }{k-j}=\delta_{ij}. \tag{3}\]
Applied to the conjugate, we have:
\[\langle y_{j}^{\prime},x_{i}\rangle=\sum_{k=j}^{n}(-1)^{j-k}D_{ik}\binom{ \lambda_{k}^{\prime}-j}{k-j}=\delta_{ij}.\]
Combined, these identities determine the values \(D_{ij}\) for \(1\leq i,j\leq n\). Cutting off initial rows or columns from the Young diagram \(D\), the values of \(D_{ij}\) outside the Durfee square could also be computed.
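These identities are easy to check numerically on the running example; the following snippet (ours) hard-codes the array from eq. (1) and verifies eq. (3) on its Durfee square.

```python
from math import comb

lam = (5, 4, 3, 3)
D = [[16, 7, 2, 1, 1], [6, 3, 1, 1], [3, 2, 1], [1, 1, 1]]      # the array from eq. (1)
n = max(k for k in range(1, len(lam) + 1) if lam[k - 1] >= k)   # Durfee square size (= 3)

for j in range(1, n + 1):
    for i in range(j, n + 1):                                   # eq. (3) is stated for i >= j
        s = sum((-1) ** (k - j) * D[k - 1][i - 1] * comb(lam[k - 1] - j, k - j)
                for k in range(j, n + 1))
        assert s == (1 if i == j else 0)
print("identity (3) holds on the running example")
```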
This result reduces to, and provides a bijective proof of, the special cases of Exercise 6.27 a) and b). If \(\lambda=(2n+1,2n,\ldots,2,1)\), then \(D_{ij}=C_{2n+2-i-j}\), where \(C_{m}\) denotes the \(m\)-th Catalan number, and the orthonormal basis \(y_{j}\) is:
\[y_{j}=\sum_{i=j}^{n+1}(-1)^{i-j}\binom{2n+2-i-j}{i-j}x_{i},\]
If we let primes denote the reflection \(i^{\prime}=(n+1)-i\), we get \(\langle x_{i^{\prime}},x_{j^{\prime}}\rangle=C_{i^{\prime}+j^{\prime}}\) and,
\[y_{j^{\prime}}=\sum_{i^{\prime}=0}^{j^{\prime}}(-1)^{j^{\prime}-i^{\prime}} \binom{i^{\prime}+j^{\prime}}{j^{\prime}-i^{\prime}}x_{i^{\prime}},\]
as expected.
As a final example, if \(\lambda=(n,n,\ldots,n)\) is the partition of \(n^{2}\), then \(D_{ij}=\binom{2n-i-j}{n-i}\), and \(c_{ij}=\binom{n-j}{i-j}\). The identity in question is:
\[\langle y_{j},x_{i}\rangle=\sum_{k=j}^{n}(-1)^{j-k}\binom{2n-i-k}{n-i}\binom{ n-j}{k-j}=\delta_{ij}.\]
Making the substitution \(m\mapsto n-m\) on the indices \(i,j,k\), and extending the sum with terms equal to \(0\), the sum becomes:
\[\sum_{k}(-1)^{k-j}\binom{i+k}{i}\binom{j}{j-k}=\binom{i}{i-j}, \tag{4}\]
where we have used the following, which is [10, §1.2.6 eq. 23]:
\[\sum_{k}(-1)^{r-k}\binom{r}{k}\binom{s+k}{n}=\binom{s}{n-r}.\]
Since we require that \(j\geq i\) (opposite to eq. (3) after the substitution made in eq. (4)), this implies the claimed identity. |
2309.10291 | Koopman Invertible Autoencoder: Leveraging Forward and Backward Dynamics
for Temporal Modeling | Accurate long-term predictions are the foundations for many machine learning
applications and decision-making processes. However, building accurate
long-term prediction models remains challenging due to the limitations of
existing temporal models like recurrent neural networks (RNNs), as they capture
only the statistical connections in the training data and may fail to learn the
underlying dynamics of the target system. To tackle this challenge, we propose
a novel machine learning model based on Koopman operator theory, which we call
Koopman Invertible Autoencoders (KIA), that captures the inherent
characteristic of the system by modeling both forward and backward dynamics in
the infinite-dimensional Hilbert space. This enables us to efficiently learn
low-dimensional representations, resulting in more accurate predictions of
long-term system behavior. Moreover, our method's invertibility design
guarantees reversibility and consistency in both forward and inverse
operations. We illustrate the utility of KIA on pendulum and climate datasets,
demonstrating 300% improvements in long-term prediction capability for pendulum
while maintaining robustness against noise. Additionally, our method excels in
long-term climate prediction, further validating our method's effectiveness. | Kshitij Tayal, Arvind Renganathan, Rahul Ghosh, Xiaowei Jia, Vipin Kumar | 2023-09-19T03:42:55Z | http://arxiv.org/abs/2309.10291v1 | # Koopman Invertible Autoencoder: Leveraging Forward and Backward Dynamics for Temporal Modeling
###### Abstract
Accurate long-term predictions are the foundations for many machine learning applications and decision-making processes. However, building accurate long-term prediction models remains challenging due to the limitations of existing temporal models like recurrent neural networks (RNNs), as they capture only the statistical connections in the training data and may fail to learn the underlying dynamics of the target system. To tackle this challenge, we propose a novel machine learning model based on Koopman operator theory, which we call Koopman Invertible Autoencoders (KIA), that captures the inherent characteristic of the system by modeling both forward and backward dynamics in the infinite-dimensional Hilbert space. This enables us to efficiently learn low-dimensional representations, resulting in more accurate predictions of long-term system behavior. Moreover, our method's invertibility design guarantees reversibility and consistency in both forward and inverse operations. We illustrate the utility of KIA on pendulum and climate datasets, demonstrating 300% improvements in long-term prediction capability for pendulum while maintaining robustness against noise. Additionally, our method excels in long-term climate prediction, further validating our method's effectiveness.
## I Introduction
Temporal data, prevalent in many applications such as climate, finance, and biomedicine, present a challenging problem for accurate long-term prediction and forecasting. Recurrent Neural Networks (RNNs) have gained significant attention for their ability to model sequential data by maintaining an internal time-evolving state. However, a primary concern in training and deploying RNNs is in their degraded performance over extended time horizons, which stems from the problem of exploding and vanishing gradients [32]. This gradient instability can result in slow convergence or even hinder learning completely, making it less suitable for capturing long-term dependencies in the data. To address this issue, researchers have proposed various methods, such as constraining the weight matrix to belong to the orthogonal group [24], using unitary hidden-to-hidden matrices [19], temporal convolutional networks [3], and other solutions [22]. Despite these efforts, achieving long-term memory is still an ongoing challenge and remains an active area of research.
Additionally, applying existing temporal models directly for scientific problems presents multiple obstacles: Firstly, accurate depiction of spatial and temporal processes within physical systems necessitates a substantial amount of training data [37], which is often scarce in real-world situations. Secondly, existing empirical models remain limited in generalizing to scenarios that look different from training data. This is because they only establish statistical connections [30] between input and the targeted system variables but do not consider the inherent characteristics of the processes involved in the target system. Lastly, the relationships learned by these models are only valid for the specific distribution of forcing variables present in the training data, limiting their ability to generalize to scenarios not covered in the training set. In a study by Read et al. [33], it was demonstrated that an RNN model trained solely on data from a water body under current climatic conditions struggled to accurately predict outcomes in different climate scenarios, highlighting the limited generalizability of existing models in such cases.
In recent times, Koopman-based models have gained attention as a promising alternative approach for modeling temporal data [8]. These models are based on the Koopman operator ([21], also see related work), transforming the original nonlinear system into an infinite-dimensional linear space. Koopman operator has three distinctive properties, which make it an ideal choice for temporal modeling (i) _Linearity_[23]: The Koopman operator turns the original nonlinear system into an infinite-dimensional linear system, which simplifies the process of capturing the inherent patterns and trends in the data, which is crucial for effective temporal modeling. (ii) _Global analysis_[31]: Unlike other linearization techniques (e.g., linearization around fixed points or periodic orbits), the Koopman operator provides a global perspective, capturing the overall behavior of the system rather than just local dynamic that can enhance the generalization capabilities of the model. (iii) _Invariant properties_[38]: The eigenfunctions and eigenvalues of the Koopman operator can reveal intrinsic properties of the system that remain unchanged under the system dynamics, which helps uncover hidden structures and patterns and makes it more robust to noise. These three distinctive properties of the Koopman operator make it a powerful tool for modeling temporal data.
However, using the Koopman operator for practical computations can be challenging because it is an infinite-dimensional operator [29]. Recently, researchers have developed techniques to approximate the Koopman operator using finite-dimensional representations extracted by autoencoder-based models, e.g., the Koopman Autoencoder (KAE) [27]. These models effectively
reduce the complexity by creating a low-dimensional representation space in which the Koopman operator can be suitably approximated with a linear layer that accurately captures the underlying dynamics of the system. However, because of model architecture, the information gleaned from Koopman-based models often is primarily based on forward dynamics, which overlooks the scope to acquire knowledge from backward dynamics. The fundamental goal of the forward run is to move from the present state to the subsequent state. Conversely, a backward run aims to go from the current state to the one that preceded it. Modeling backward dynamics ensures linearity in the low-dimensional space and also regularizes the forward run to be consistent.
One straightforward strategy is to have two separate linear layers [2], each dedicated to independently modeling the forward and backward runs. However, such an approach may have a limited capacity to accurately capture the intrinsic dynamics of the process due to its need for knowledge sharing between the forward and backward states. This paper aims to build a long-term predictive model by leveraging the Koopman analysis to capture both forward and backward dynamics in a unified model. In particular, we model the forward and backward dynamics in the low-dimensional space using an invertible neural networks model [20], which can establish explicit invertible mappings between the input and output spaces. As a result of this integrated approach, a single layer can be trained to learn both forward and backward processes. This unified model can leverage common knowledge between the two directions and enhances the ability to capture the process dynamics fully. To the best of our knowledge, this paper is the first to present an invertibility approach for learning the Koopman operator. While analogous concepts have been utilized in the field of inverse modeling, i.e., using observable data to infer hidden characteristics, and vice versa [39], our research lays the foundation for predictive models that incorporate the essential underlying dynamics and go beyond mere trend prediction, which is of great significance in temporal modeling.
Our main contributions are as follows.
* In this work, we present Koopman Invertible Autoencoders (KIA), a novel approach that harnesses both forward and backward dynamics for learning low-dimensional representations of temporal data and illustrates its utility on the pendulum and climate datasets.
* We accurately extracted the pendulum system's dynamics, handling both clean and noisy scenarios, and achieved a remarkable 300% improvement in long-term prediction accuracy.
* We demonstrated the capability of our model to comprehend the intricate dynamics of the climate dataset, which enables generalization across diverse weather scenarios and makes accurate long-term predictions.
## II Related Work
RNNs [45] and their variants have proved indispensable in dealing with sequential data, making strides in numerous applications such as language modeling [17], speech recognition [18], and time-series prediction [25]. However, despite the versatility of RNNs, they are plagued by issues such as vanishing and exploding gradients [5], hindering their ability to model long-term dependencies. Variants of RNNs such as Long Short-Term Memory (LSTM) [14], Gated Recurrent Unit (GRU) [10], and Quasi-Recurrent Neural Networks (QRNN) [7] have been proposed to overcome this difficulty and have achieved remarkable results. These networks address the limitations of RNN by utilizing various gating mechanisms allowing them to control the information flow, retain long-term dependencies, and mitigate vanishing/exploding gradient problems.
However, RNN-based models remain limited for long-term prediction as they rely on complex non-linear temporal structures (e.g., LSTM unit) and can easily accumulate errors over time. A recent trend in machine learning is to explore transferring physical theories from existing physics-based models to enhance the capabilities and generalization of RNNs [42]. It combines the best of both worlds: the structure and explanatory power of physics-based models and the learning and predictive capabilities of RNNs. For instance, a physics-guided recurrent neural network model (PGRNN) [16] integrated the heat transfer process with the RNN model to effectively captured the long-term dependencies in the data, a task at which traditional RNNs failed. Other works also leveraged physics to enhance the long-period prediction of turbulent flows [4, 41]. However, these methods rely on thorough knowledge of the system's physics, which limits their applicability. Furthermore, current model architectures, both generic and physics-based, predominantly rely on using forward dynamics for training, neglecting the potential of backward dynamics. This lack of integration due to network architecture presents an overlooked opportunity for enhancing these systems. In contrast, our approach builds upon the dynamical systems theory to assimilate both forward and backward dynamics through which it learns the true underlying processes. This significantly improves long-term prediction capabilities and offers an alternative to prevailing methodologies.
The theory of Koopman operators [21], established in 1931, has recently emerged as a groundbreaking framework for systematic linearization of complex dynamical system within an infinite-dimensional Hilbert space. Dynamic Mode Decomposition (DMD) [35] and Extended Dynamic Mode Decomposition (EDMD) [43] are two popular approaches for approximating this operator; however, they face issues of computational feasibility [8]. To tackle these challenges, researchers have begun to utilize data-driven approaches through deep learning to determine the Koopman operator from observed data [27, 38]. Most of these strategies employ autoencoders [40] to transition from nonlinear to linear Koopman subspaces in order to identify the Koopman operator. For instance, the VAMP architecture [28] utilizes a time-lagged autoencoder and a custom variational score to find Koopman coordinates on a protein folding example. Similarly, researchers [6] used koopman operator for sequence disentanglement by assuming that the underlying complex dynamics can be represented
linearly in latent space. The focus of our work is on long-term temporal modeling whereby we want to model both forward and backward dynamics, which is significantly more manageable in a linearized space as backward dynamics are the inverse of forward space. A method most relevant to our approach is C-KAE [1], where the authors developed two separate networks for learning forward and backward dynamics, which are then trained together using consistency loss. Nevertheless, training two separate networks overlooks the connection between forward and backward dynamics and consequently loses the opportunity to exploit shared knowledge. Additionally, optimizing two neural network to be inverse of each other is computationally difficult and unstable due to their stochastic nature. Recently, a new category of neural networks, called Invertible Neural Networks (INNs), was introduced [11, 20], based on the principles of normalizing flow. INNs are bijective functions that, by design, can simultaneously be trained on both forward and backward dynamics and exploit shared knowledge, making them an ideal choice for long-term temporal modeling.
## III Problem Formulation & Preliminaries
### _Problem Formulation_
Temporal data, denoted as \(\{\mathbf{x}_{t}\}_{t=1}^{T}\), can be interpreted as a series of observations from a dynamical system. This could represent anything that varies over time, for example, weather predictions. stock price etc. Consider the following discrete form of the system dynamics
\[\mathbf{x}_{t+1}=\mathscr{F}(\mathbf{x}_{t})+\mathbf{r}_{t}\,\quad\mathbf{x}\in\mathcal{M} \subset\mathbb{R}^{m}\, \tag{1}\]
where \(\mathbf{x}_{t+1}\) represents the state of the system at the next time step, given its current state \(\mathbf{x}_{t}\) and \(\mathbf{r}_{t}\in\mathcal{M}\) represents deviation from the true dynamics due to e.g., measurement errors or missing values. Here, the function \(\mathscr{F}(\mathbf{x}_{t})\) is an (possibly non-linear) update rule which describes how the state of the system evolves from one time step to the next. Additionally, \(\mathcal{M}\) represents a finite-dimensional manifold, embedded in a higher-dimensional Euclidean space, denoted by \(\mathbb{R}^{m}\). In this work, we focus on multi-step forecasting task of predicting the future observations given the current observation. Formally, we seek to learn function map \(\mathscr{F}\) such that
\[\mathbf{x}_{t+l}=\mathbf{x}_{t}\circ\mathscr{F}^{l}\,\quad l=1,2,...\, \tag{2}\]
where \(\mathbf{x}_{t+l}\) signifies the state of the system at a future time step and \(\circ\) denotes function composition. \(\mathscr{F}^{l}\) indicates that we're applying the system dynamics repeatedly, \(l\) times in total, to the current state \(\mathbf{x}_{t}\). The above model assumes that future states \(\mathbf{x}_{t+l}\) depend only on the current observation \(\mathbf{x}_{t}\) and not on the information from a sequence of previous observations (between \(t\) and \(t+l\)).
### _Linear Invertible Neural Network (INNs)_
INNs are bijective functions with a forward mapping \(\mathcal{K}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) and an inverse mapping \(\mathcal{K}^{-1}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\). These mappings can be computed in closed-form manner, building upon the foundational works by Dinh et al., [11, 20]. To construct a linearly invertible neural network, we utilize the framework of the real non-volume preserving architecture proposed by [12]. The basic unit of this network comprises a reversible bijective network, incorporating two interconnected linear coupling layers. During the forward processing stage, as described by equation (3), the input vector, denoted as \(\mathbf{u}\), is partitioned into \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\). These halves are then linearly transformed using translation functions \(t_{i}(\cdot)\), where \(i\in{1,2}\), and they are modeled by linear neural networks trained in the forward pass. The output are the concatenation \([\mathbf{v}_{1},\mathbf{v}_{2}]\), which are computed as
\[\mathbf{v}_{1} =\mathbf{u}_{1}+t_{2}(\mathbf{u}_{2}) \tag{3}\] \[\mathbf{v}_{2} =\mathbf{u}_{2}+t_{1}(\mathbf{v}_{1})\]
Given the output \(\mathbf{v}=[\mathbf{v}_{1},\mathbf{v}_{2}]\), the above expressions are easily invertible as follows:
\[\mathbf{u}_{2} =\mathbf{v}_{2}-t_{1}(\mathbf{v}_{1}) \tag{4}\] \[\mathbf{u}_{1} =\mathbf{v}_{1}-t_{2}(\mathbf{u}_{2})\]
Importantly, the mappings \(t_{i}\) can be arbitrarily complex functions. In our implementation, we implement bijectivity by a fully connected linear layers with linear activations and use deep INN which is composed of a sequence of these reversible blocks. One advantage of INNs is that can we can train them on the well-understood forward process \(\mathcal{K}\) and get the inverse \(\mathcal{K}^{-1}\) for free by running them backward. This provides a unique opportunity to develop a unified model that integrates both forward and inverse dynamical processes seamlessly.
### _Koopman Operator Theory_
The Koopman theory proposes a framework that allows us to study complex systems in a linear space. According to this theory, every nonlinear dynamical system can be transformed into a form where the evolution of observations can be described by an infinite-dimensional Koopman operator. This operator operates on the space of all possible measurement functions, which can be fully represented through a linear map. Formally, we define the Koopman operator as \(\mathcal{K}^{\infty}_{\mathscr{F}}:\mathcal{G}(\mathcal{M})\rightarrow\mathcal{ G}(\mathcal{M})\), where \(\mathcal{G}(\mathcal{M})\) is the space of real-valued measurement functions, represented by \(g:\mathcal{M}\rightarrow\mathbb{R}\). The Koopman operator maps between function spaces and transforms the observations of the state to the next time step. Mathematically, the action of the Koopman operator on a measurement function \(g\) at time \(t\) is given by:
\[\mathcal{K}^{\infty}_{\mathscr{F}}g(\mathbf{x}_{t})=g(\mathcal{K}^{\infty}_{\mathscr{ F}}(\mathbf{x}_{t})), \tag{5}\]
where \(\mathscr{F}(\mathbf{x}_{t})\) represents the transformed state of the system to the next time step. Also, the Koopman operator is a linear operator. This means that for any real numbers \(\alpha\) and \(\beta\), the following linearity property holds:
\[\mathcal{K}^{\infty}_{\mathscr{F}}(\alpha g_{1}+\beta g_{2}) =(\alpha g_{1}+\beta g_{2})\circ\mathcal{K}^{\infty}_{\mathscr{F}}\] \[=\alpha g_{1}\circ\mathcal{K}^{\infty}_{\mathscr{F}}+\beta g_{2} \circ\mathcal{K}^{\infty}_{\mathscr{F}}\] \[=\alpha\mathcal{K}^{\infty}_{\mathscr{F}}g_{1}+\beta\mathcal{K}^{ \infty}_{\mathscr{F}}g_{2}\]
This property ensures that the Koopman operator preserves the linear structure of the measurement functions' space.
## IV Koopman Invertible Autoencoder (KIA)
For forecasting future observations, an intuitive route is to design a neural network that effectively learns an approximation of the mapping denoted by \(\mathcal{F}\). However, a model developed using this approach ignores the underlying dynamics and can be hard to analyze. To address these challenges, we propose to develop the Koopman Invertible Autoencoder (KIA), which is built upon the Koopman theory and INNs. In the following, we first provide an overview of the Koopman Autoencoder and how it incorporates the concepts from the Koopman theory. Subsequently, we will describe the integration of INNs, focusing on how they facilitate bidirectional modeling and contribute to the robustness of our proposed model.
### _Koopman Autoencoder (KAE)_
Due to the infinite-dimensional nature of \(\mathcal{K}^{\infty}_{\mathcal{F}}\) in Eq. (5), the practical use of the Koopman theory requires creating a finite-dimensional approximation that is capable of encapsulating the majority of the dynamics at play. We denote finite-dimensional Koopman operator by \(\mathcal{K}_{\mathcal{F}}\). The primary objective of the KAE model lies in the discovery of the matrices \(\mathcal{K}_{\mathcal{F}}\) and the nonlinear transformation of observation to the Koopman subspace that allows for an accurate recovery of the underlying dynamics. The Koopman autoencoder consists of an encoder \(\gamma_{e}\), which maps observations to the Koopman subspace, and a decoder \(\gamma_{d}\), which maps the Koopman subspace back to the observation space. Both encoder and decoder are represented by neural networks. The training of the networks is conducted by minimizing the discrepancy between the input (\(\mathbf{x}_{t}\)) and its corresponding output (\(\tilde{\mathbf{x}}_{t}\&=\gamma_{d}\circ\gamma_{e}(\mathbf{x}_{t})\)). This guarantees that the encoder and decoder are solely responsible for encoding and decoding processes and do not learn any dynamics. This can be formally represented as \(\gamma_{d}\circ\gamma_{e}\approx\mathbb{I}\), where \(\mathbb{I}\) stands for the identity function. The loss function to train this network is given by:
\[\mathcal{L}_{Recom}=\frac{1}{n}\sum_{t=1}^{n}\|\gamma_{d}\circ\gamma_{e}(\mathbf{x} _{t})-\mathbf{x}_{t}\|_{2}^{2}\, \tag{6}\]
which measures the mean squared error between the reconstructed and original data points over a dataset of size \(n\).
Modeling Forward DynamicsIn general, \(\mathcal{K}_{\mathcal{F}}\) prescribes a rule to move forward in time i.e. \(\hat{\mathbf{x}}_{t+1}\&=\gamma_{d}\circ\mathcal{K}_{\mathcal{F}}\circ\gamma_{e}( \mathbf{x}_{t})\). We can leverage this relationship for multi-step forecasting as : \(\gamma_{d}\circ\mathcal{K}_{\mathcal{F}}^{l}\circ\gamma_{e}(\mathbf{x}_{t})\approx \mathbf{x}_{t}\circ\mathcal{F}^{l}\), i.e., we iteratively obtain forward estimates. In order to accurately represent these forward dynamics, we apply a linear invertible neural network to serve as the approximate Koopman operator, denoted as \(\mathcal{K}_{\mathcal{F}}\). Our encoding network, denoted as \(\gamma_{e}\), processes observations \(\mathbf{x}_{t}\) to acquire a latent representation \(\mathbf{z}_{t}\), in which the dynamics become linear through the encoder structure. During the forward evolution step (Eq 3), the input vector, referred to as \(\mathbf{z}_{t}\), is divided into two equal parts, \(\mathbf{z}_{t_{1}}\) and \(\mathbf{z}_{t_{2}}\), which are then propagated forward as follow:
\[\begin{split}\mathbf{z}_{t+1,1}&=\mathbf{z}_{t,1}+t_{2}(\bm {z}_{t,2})\\ \mathbf{z}_{t+1,2}&=\mathbf{z}_{t,2}+t_{1}(\mathbf{z}_{t+1,1}) \end{split} \tag{7}\]
where the concatenation of output \([\mathbf{z}_{t+1,1},\mathbf{z}_{t+1,2}]\) from INNs represent the forward evolution in Koopman subspace. In our tests, we noticed that our models predict as well as generalize better in multi-step forecasting, compared with the traditional auto-regressive approach that computes one step forward at a time. Given a choice of \(k\) forward prediction steps, we define the following forward dynamical loss term:
\[\mathcal{L}_{\text{fwd}}=\frac{1}{k*n}\sum_{l=1}^{k}\sum_{t=1}^{n}\|\gamma_ {d}\circ\mathcal{K}_{\mathcal{F}}^{l}\circ\gamma_{e}(\mathbf{x}_{t})-\mathbf{x}_{t+l }\|_{2}^{2}\, \tag{8}\]
Fig. 1: The left figure depicts our network architecture (KIA). The right figure showcases the various losses employed where observations \(\mathbf{x}_{t}\) are inputted and transformed into a latent representation \(\mathbf{z}_{t}\) through an encoder and propagated forward and backward using INNs
### _Bidirectional Modeling_
The majority of Koopman-based networks currently available [27, 38] primarily focus on modeling forward dynamics. A benefit of preserving linearity in the latent space is that the evolution matrix \(\mathcal{K}_{\mathscr{F}}\) can also be exploited for backward prediction via its inverse: \(\mathcal{K}_{\mathscr{F}}^{-1}\), i.e., \(\bar{\mathbf{x}}_{t-1}=\gamma_{d}\circ\mathcal{K}_{\mathscr{F}}^{-1}\circ\gamma_{ e}(\mathbf{x}_{t})\). KAE architectures that employ the liner layer [27, 38] to learn forward dynamics cannot learn the backward dynamics due to the high computational cost of matrix inversion, which is typically \(O(n^{3})\). Another approach to implementing backward dynamics is through independent linear layers [1], but this may limit the capacity to capture the intrinsic dynamics due to the lack of knowledge sharing between the forward and backward states. Earlier methods have also examined the integration of backward dynamics into their non-linear model, such as in bi-directional RNN [36]. However, the inherent nonlinearities of a typical neural network make it difficult to constrain the forward and backward models.
Our tests found that models trained for forward prediction typically produce poor, backward predictions. In contrast, considering both forward and backward dynamics can contribute to more effective training of KAE. Specifically, incorporating backward dynamics ensures linearity in the low-dimensional space while regularizing the consistency with the forward run. To address this limitation, we propose modeling the forward and backward dynamics in the low-dimensional space using INNs which are invertible by design and very efficient. In contrast to existing methods that use separate structures and parameters for backward modeling, our model allows for the direct back prediction through the INN structure (Eq. 4). Specifically, the latent representation \(\mathbf{z}_{t}\) (\(\gamma_{e}\circ\mathbf{x}_{t}\)) is divided into two parts, \(\mathbf{z}_{t_{1}}\) and \(\mathbf{z}_{t_{2}}\), which are then propagated backward as follow:
\[\begin{split}\mathbf{z}_{t-1,2}=\mathbf{z}_{t,2}-t_{1}(\mathbf{z}_{t,1})\\ \mathbf{z}_{t-1,1}=\mathbf{z}_{t,1}-t_{2}(\mathbf{z}_{t-1,2})\end{split} \tag{9}\]
where the concatenation of output \([\mathbf{z}_{t-1,1},\mathbf{z}_{t-1,2}]\) from INNs represent the backward evolution in Koopman subspace. As with forward dynamics, we noticed that our models predict as well as generalize better in multi-step backcasting. Given a choice of \(k\) backward prediction steps, we define the following backward dynamical loss term:
\[\mathcal{L}_{\text{bwd}}=\frac{1}{k*n}\sum_{l=1}^{k}\sum_{t=1}^{n}\|\gamma_{d} \circ\mathcal{K}_{\mathscr{F}}^{-l}\circ\gamma_{e}(\mathbf{x}_{t})-\mathbf{x}_{t-l}\| _{2}^{2}\, \tag{10}\]
We derive our KIA framework for analyzing temporal data by integrating all the above components. Our model undergoes training by minimizing a loss function whose minimizers guarantee that we achieve an optimized autoencoder and get accurate predictions over time by effectively capturing both forward and backward dynamics. We define the complete training loss as
\[\mathcal{L}=\lambda_{\text{Recon}}\mathcal{L}_{\text{Recon}}+\lambda_{\text{ fwd}}\mathcal{L}_{\text{fwd}}+\lambda_{\text{bwd}}\mathcal{L}_{\text{bwd}}\, \tag{11}\]
where \(\lambda_{\text{Recon}},\lambda_{\text{fwd}},\lambda_{\text{bwd}}\), are parameters that balance between reconstruction, forward and backward prediction. Figure 1 (left) depicts our network design with an encoder, decoder, and an INN-based Koopman module for forward and backward dynamics computation. Figure 1 (right) illustrates our loss landscape.
## V Experiments
To evaluate our proposed approach for long term temporal modeling, we perform a comprehensive study on two datasets, i.e., the pendulum dataset and the sea surface temperature
Fig. 2: Prediction errors over a time horizon of 2000 steps for clean and noisy pendulum observations with initial conditions \(\theta=0.8\) (top row) and \(\theta=2.4\) (bottom row). The first column shows the clean results, the second column shows results with small noise, and the third column shows results with large noise. (Best seen in color)
dataset, and compare with state of the art Koopman-based approaches as well as baseline sequential models.
### _Nonlinear Pendulum_
**Dataset Description:** We evaluate our models on a non-linear pendulum, whose behavior is described by a second-order ordinary differential equation (ODE), where the angular displacement from equilibrium, denoted by \(\theta\), follows the equation \(\frac{d^{2}\theta}{dt^{2}}=-\frac{q}{l}\sin(\theta)\). We use standard parameters for the length, \(l\) (1), and gravity's acceleration, \(g\) (\(9.8~{}m/s^{2}\)). The pendulum's behavior is different based on the initial angle. A small initial angle results in simple harmonic motion, while a larger initial angle introduces non-linearity and complexity. With this in mind, we conducted experiments using two specific initial angles for oscillations, \(\theta=0.8\) and \(\theta=2.4\), over a period ranging from \(0\) to \(400\). We also employ a random orthogonal transformation in \(\mathbb{R}^{64\times 2}\) to convert our input data points into a high-dimensional space. This transformation results in observations that reside in \(\mathbb{R}^{64}\), enhancing the representation of the original data. We collected 4,000 evenly spaced points in \(\mathbb{R}^{2}\) over the time interval \(t=[0,400]\). This dataset was divided into training (400 points), validation (1,500 points), and testing (2,100 points) sets. The model was trained on the training set, hyperparameter optimized using the validation set, and the final resulting model was evaluated on the test set.
**Baselines:** _Koopman Auto Encoder (KAE)_[27]: This network model approximates the Koopman operator by only looking at its forward dynamics. _Consistent Koopman Auto Encoder (C-KAE)_[1]: This network model builds upon the original KAE and incorporates both forward and backward dynamics by using two separate Koopman operators. Further they enforce consistency between these operators by approximating one to be inverse of the other through consistency loss. _Long Short-Term Memory (LSTM)_: We further conducted a comparison against an LSTM, a variant of a recurrent neural network (RNN). This model is designed to learn a nonlinear function \(\mathbf{x}_{t}\rightarrow\mathbf{x}_{t+1}\) directly. During the inference phase, this function takes \(\mathbf{x}_{t}\) as input to predict \(\mathbf{x}_{t+1}\), \(\mathbf{x}_{t+2}\)...\(\mathbf{x}_{t+k}\) recursively. We use two LSTM layers, each with a hidden dimension of 64.
All Koopman-based models, including our proposed method, utilize the same encoder-decoder architecture. Specifically, we employ a three-layer feed-forward network (128, 64, 8) with non-linear activations. While specialized architectures like convolutional neural networks with non-linear layers are another option, previous research has shown that these architectures do not offer any advantages for Koopman-based approaches [34]. All models underwent training for 500 epochs, utilizing Adam optimizers with identical learning rates. The training, validation, and test sets were consistent across all models. An early stopping criterion was also applied, terminating training if the validation loss did not decrease for 20 consecutive epochs. Moreover, the models were trained using the same random seeds to ensure consistency. In order to determine the optimal hyperparameters, we conducted an extensive grid search, exploring a range of parameter values specified in Table II.
of 2,000 steps, using 30 different initial observations. The mean and standard deviation of prediction errors for these initial observations are presented in Table I. Additionally, we provide insights into prediction capabilities across different time spans by reporting the average prediction error over all timesteps, the first 100 timesteps, and the last 100 timesteps.
The average predictions for condition \(\theta_{0}=0.8\) are consistently lower than for \(\theta_{0}=2.4\), indicating that \(\theta_{0}=2.4\) introduces more complexity and non-linearity. The RNN model performs better in the first 100 timesteps, suggesting its strength in capturing short-term temporal dynamics. Still, the KIA model shows a remarkable 300% improvement in long-term prediction accuracy over RNN on average. We attribute this improvement to the KIA model's ability to capture long term trends by learning inherent characteristics via mapping forward and backward dynamics in a single network. C-KAE, which considers both forward and backward, outperforms the KAE, which emphasizes the importance of studying backward dynamics but lacks consistency as it uses separate layers.
_Robustness comparison:_ Temporal physical processes often have noisy measurements due to sensor limitations and environmental conditions. We simulated the behavior by applying additive white gaussian noise with zero mean and varying standard. Specifically, to test model robustness, we evaluated KIA with perturbed inputs using small (0.1 std) and large (0.2 std) noise levels, and the results are depicted in Table III and Table IV respectively. The noise disrupts the regular oscillation of the pendulum, causing slight deviations in its motion. Consequently, the system becomes more chaotic and difficult to accurately predict, resulting in increased errors as the noise levels rise. Interestingly, the KAE and C-KAE models shows more performance deterioration compared to the RNN and KIA models, especially over longer time horizons. Also, in table IV, we observe signs of overfitting in the KAE and C-KAE models, as indicated by lower errors in the initial \(100\) predictions and higher errors in the last \(100\) predictions. Figure 2 illustrates the prediction error of the pendulum over 2,000 steps for clean and noisy observations. Comparing the RNN and KIA models, we find that KIA outperforms RNN, particularly over longer time horizons, even in the presence of noise. This highlights the robustness of KIA in effectively handling variations and disturbances in the input signals while capturing the true dynamics.
_Effect of the size of the Training Data:_ Furthermore, to evaluate the influence of the labeled data size, we conducted experiments by utilizing different proportions of the training data on the top-performing RNN and KIA models. The results are presented in Table V, where we compare the average prediction error obtained using subsamples of 200 and 300 instances from the training datasets. The KIA model consistently outperformed the RNN model in test error accuracy, even with fewer labeled data points, indicating its superior robustness and efficiency in using available data for precise predictions.
_Visualization of Prediction Trajectories:_ To gain further insights, we visualize the prediction trajectories of our models for an initial condition of \(\theta_{0}=2.4\). Figure 3 shows the trajectories generated using clean and noisy inputs, with the true trajectory shown in violet. Both KIA and RNN models are shown to excel in capturing the pendulum dynamics, regardless of the presence of noise. The red and green lines align closely with the true trajectory, indicating their effectiveness. However, the predicted trajectories of the KAE and C-KAE models exhibit increasing deviations from the true trajectory, especially over longer time periods. The introduction of noisy inputs further amplifies these deviations, leading to a more pronounced divergence from the actual trajectory. This highlights the advantage of using KIA which utilize both forward and backward dynamics to learn underlying dynamics.
Fig. 4: Example SST regions of the Persian Gulf (left) and Southeast Asia (right). The Persian Gulf, has a high temperature range of 23-35\({}^{\circ}\)C, displaying high variability, while Southeast Asia, has a range of 26-31\({}^{\circ}\)C, represents a low variability region.
Fig. 3: Visualization of Prediction Trajectories on clean (top) and noisy input(bottom).
### _Sea Surface Temperature Data_
**Dataset Description:** In this experiment, we analyzed the NOAA Optimal Interpolation SST High Resolution dataset [15], which provides daily sea-surface temperature measurements at a spatial resolution of 0.25\({}^{\circ}\). We focused on two specific regions: the Persian Gulf (Lat: \(12.5^{\circ}\) to \(30^{\circ}\) N, Long: \(31.25^{\circ}\) to \(68.75^{\circ}\) E) and Southeast Asia (Lat: \(-10^{\circ}\) to \(7.5^{\circ}\) N, Long: \(105^{\circ}\) to \(142.5^{\circ}\) E) (see Fig 4). The Persian Gulf has higher sea surface temperatures and greater seasonal variations compared to Southeast Asia, making it more nonlinear from a modeling perspective. We used five years of data, with three years for training, one year for validation, and one year for testing. The performance of our models was evaluated using Celsius MAE as the error metric. This metric calculates the average absolute difference, in degrees Celsius, between the predicted and the actual temperatures, regardless of their direction (positive or negative). Forecasting climate patterns is an inherently difficult task [9] due to the intricate interactions of a complex system. However, the input dynamics exhibit non-stationary periodic structures [1], suggesting the use of Koopman-based methods.
**Baselines:** We incorporates all Koopman-based baselines from Pendulum experiments, along with Persistence, Climate Mean, and replace LSTM with ConvLSTM. Persistence and Climate Mean are commonly used as baseline methods for climate predictions, offering different advantages based on the context and time scale. Persistence is effective for short-term forecasting, assuming stable climate conditions, while Climate Mean incorporates historical data for more reliable long-term predictions. Any useful model is expected to outperform these baselines. _Persistence:_ This forecasting model assumes that future climate conditions will be similar to the current one, making identical predictions for each time step. _Climate Mean:_ This model derives the average climate state from historical data of the same date. It suggests that the future climatic conditions will closely resemble the mean state derived from historical observations spanning previous years. _ConvLSTM:_ is a spatiotemporal model specifically designed for data like SST, combining spatial processing with temporal dependencies using convolutional neural networks and LSTMs. It is widely adopted and used as an alternative to LSTM for spatio-temporal modeling.
**Experimental results:** In _Long-term Prediction Testing_ (as done in [2, 34]), we generate predictions for the next 180 days on 30 different initial days of the year to evaluate the accuracy and reliability of the model over an extended period. Table VI shows prediction error averages, and Fig. 6 visually compares different model outputs. The results show that our model's predictions for the 60-day, 120-day, and 180-day forecasts are closer to the ground truth compared to other models. To enhance differentiation in the model's output, we use a color scheme that represents the temperature difference with the ground truth. Grey indicates a perfect prediction, while red and blue represent warmer and colder predictions, respectively. The initial state and ground truth temperatures are in Celsius. The oval, rectangle, and triangle regions were used for clear differentiation.The visual comparison shows that our model, KIA, performs better than CKAE and KAE models, which tend to overpredict. The prediction results show that KIA can generalize to unseen climate scenarios, which is significant considering the challenges involved in predicting climate data. Furthermore, Fig 5 provides a visualization of the prediction error spanning a 180-day horizon, revealing interesting insights about different models' performance. While the persistence model is more accurate in short-term predictions, the KIA model outperforms the persistence and mean models when evaluating their overall performance. This indicates that the KIA model offers better long-term prediction capabilities, making it a more reliable choice for forecasting.
Additionally, we visualize the prediction error over the 180-day horizon in Fig 5, showing that while short-term predictions are more accurate with the persistence model, the KIA model outperforms both the persistence model and the mean model
Fig. 5: Prediction errors over a time horizon of 180 days for Persian Gulf (left) and Southeast Asia (right)
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model** & **Regions** & **Prediction Error (Avg)** \\ \hline KAE & PG & 0.99 \(\pm\) 0.06 \\ C-KAE & PG & 1.04 \(\pm\) 0.06 \\ ConvLSTM & PG & 1.93 \(\pm\) 0.10 \\ Mean & PG & 0.86 \(\pm\) 0.04 \\ Persistence & PG & 2.13 \(\pm\) 0.05 \\ KIA & PG & **0.73**\(\pm\) 0.05 \\ \hline KAE & SE & 0.76 \(\pm\) 0.06 \\ C-KAE & SE & 0.75 \(\pm\) 0.04 \\ ConvLSTM & SE & 1.08 \(\pm\) 0.03 \\ Mean & SE & 0.84 \(\pm\) 0.03 \\ Persistence & SE & 1.10 \(\pm\) 0.02 \\ KIA & SE & **0.67**\(\pm\) 0.04 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Sea surface temperature prediction error statistics: mean and standard deviation.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Regions**} & \multicolumn{5}{c}{**Prediction Error (Avg)**} \\ \cline{3-7} & & 1 day & 7 day & 14 day & 21 day & 30 day \\ \hline KAE & PG & \(0.59\) & \(0.61\) & \(0.65\) & \(0.69\) & \(0.77\) \\ C-KAE & PG & \(0.56\) & \(0.60\) & **0.63** & **0.67** & \(0.76\) \\ ConvLSTM & PG & \(0.51\) & \(0.67\) & \(0.82\) & \(0.91\) & \(0.98\) \\ Mean & PG & \(0.81\) & \(0.81\) & \(0.81\) & \(0.81\) & \(0.81\) \\ Persistence & PG & **0.21** & \(0.59\) & \(0.78\) & \(0.96\) & \(1.19\) \\ KIA & PG & \(0.48\) & **0.57** & **0.63** & **0.67** & **0.74** \\ \hline KAE & SE & \(0.55\) & \(0.57\) & \(0.59\) & \(0.63\) & \(0.65\) \\ C-KAE & SE & \(0.54\) & **0.56** & \(0.59\) & \(0.62\) & \(0.64\) \\ ConvLSTM & SE & \(0.55\) & \(0.65\) & \(0.76\) & \(0.88\) & \(0.92\) \\ Mean & SE & \(0.75\) & \(0.75\) & \(0.75\) & \(0.75\) & \(0.75\) \\ Persistence & SE & **0.24** & \(0.61\) & \(0.68\) & \(0.72\) & \(0.77\) \\ KIA & SE & \(0.54\) & **0.56** & **0.58** & **0.61** & **0.63** \\ \hline \hline \end{tabular}
\end{table} TABLE VII: Mean error of K-day ahead sea surface temperature prediction.
_Impact of Timescales on K-Day Ahead Predictions:_ On some days of the year, accurate forecasting can be more challenging due to factors like seasonality, irregular events, or volatility. Analyzing different timescales [13] helps identify these days and assess their impact on model prediction. This information aids in resource allocation and the utilization of alternative forecasting methods during those periods. For this, we employ \(K\)-day-ahead forecasts for each day in a test year, where each prediction is based on input from \(K\) days in the past. In our case, we evaluated our model for \(K\) in {1, 7, 14, 21, and 30} days. Table VII showcases the mean prediction error values (standard deviation is omitted for brevity). We observe that all Koopman-based models do an equally good job for short-term prediction. Also, the persistence model is the best if we are interested in a 1-day ahead forecast. However, even in the short term, from K=7 days onwards, our approach surpasses the persistence and mean models. Fig. 7 displays the 30-day KIA forecast for randomly selected initial states, which is visually compared with the mean and persistence models. We can observe that for K=30, the mean is overpredicting, persistence is underpredicting, and the KIA output is considerably closer to the actual data. The prediction results are highly significant, as successfully predicting climate data and surpassing the mean and persistence models in the short term is challenging. Fig. 8 further visualizes the prediction error for the entire year for \(K=1\) and \(K=30\) days. From the \(K=30\) plot, we can identify that the latter part of the year is difficult, where the models struggle to provide better predictions than the mean. In addition, from the plot, we can also identify dates like the \(150^{th}\) day of the year, where the mean is doing better than all the models. From these results, we can see that KIA captures the true essence of the system dynamics and can be helpful for domain scientists for long-term prediction. The code used in this paper is available at Google Drive 1.
Footnote 1: [https://www.google.com/data/1_vs8g/gthk2/gthk2/gthk2/gthk2.html](https://www.google.com/data/1_vs8g/gthk2/gthk2/gthk2/gthk2.html)
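The \(K\)-day-ahead evaluation loop described above can be made concrete with a short sketch; the array layout and the `model.predict(state, k)` interface below are assumptions made here for illustration, not the actual code released with the paper:

```python
import numpy as np

def k_day_ahead_errors(data, model, horizons=(1, 7, 14, 21, 30)):
    """Mean absolute K-day-ahead error over a test period.

    data:  array of shape (num_days, H, W) with daily SST fields (layout assumed here).
    model: object exposing predict(state, k) -> forecast of shape (H, W) (assumed interface).
    """
    clim_mean = data.mean(axis=0)             # "mean" baseline: climatological average
    errors = {k: {"model": [], "persistence": [], "mean": []} for k in horizons}
    for k in horizons:
        for t in range(len(data) - k):
            truth = data[t + k]
            errors[k]["model"].append(np.abs(model.predict(data[t], k) - truth).mean())
            errors[k]["persistence"].append(np.abs(data[t] - truth).mean())   # repeat last observed state
            errors[k]["mean"].append(np.abs(clim_mean - truth).mean())        # always predict the average
    return {k: {name: float(np.mean(v)) for name, v in e.items()} for k, e in errors.items()}
```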
## VI Discussion and Conclusion
This paper proposes KIA, a long-term predictive model for temporal data. Our approach is built upon the principles of the Koopman theory, which enables the approximation of dynamical systems using linear evolution matrices. The key contribution of our method lies in its incorporation of both forward and backward dynamics, achieved through a meticulously designed invertible neural network architecture that ensures consistency between these two directions. We successfully demonstrated the model's capability to accurately capture long-term trends, recover underlying dynamics, and generalize to unseen scenarios through experiments on pendulum and climate datasets. Although our improvements have been validated using these specific datasets, our architecture's versatility allows
Fig. 8: Prediction error for 1-day ahead (left) versus 30-day ahead (right) forecasts for Southeast Asia
Fig. 6: We depict the future predictions for a single initial state over a horizon of 60, 120, and 180 days. The first column represents the consistent initial state, the second column shows the ground truth, and the remaining columns provide the model’s predictions for days 60, 120, and 180. (best seen in color).
Fig. 7: We illustrate 30-day future forecasts for randomly selected initial states. The first column shows the initial state, the second column presents the 30-day ahead ground truth, and the rest represent the Mean output, Persistence output, and KIA output. (best seen in color).
for its application in various other forecasting tasks that involve backward dynamics. For instance, consider the video captioning task [46], where the temporal order of video frames in both forward and backward directions plays a crucial role in capturing the movements of significant objects from different perspectives. Previous studies have employed graph-based networks to capture the relationships between objects in videos. In this context, our approach presents an alternative solution, thereby enabling a comprehensive understanding and captioning of the visual content. Furthermore, our method exhibits potential [26] for integration within Transformer models [44], a popular architecture widely used for sequence modeling tasks. The integration holds promise for addressing more complex temporal problems across diverse domains, further improving the accuracy and generalizability of forecasting systems.
|
2309.13798 | Matching-Logic-Based Understanding of Polynomial Functors and their
Initial/Final Models | In this paper, we investigate how the initial models and the final models for
the polynomial functors can be uniformly specified in matching logic. | Dorel Lucanu | 2023-09-25T01:14:11Z | http://arxiv.org/abs/2309.13798v1 | # Matching-Logic-Based Understanding of Polynomial
###### Abstract
In this paper, we investigate how the initial models and the final models for the polynomial functors can be uniformly specified in matching logic.
## 1 Introduction
It is known that many data types used in programming are defined as initial algebra or final coalgebra for an appropriate functor \(F:\mathbb{C}\to\mathbb{C}\), where \(\mathbb{C}\) is a category of data types. In this paper we assume that \(\mathbb{C}\) is the category of sets (see, e.g., [3]). If \(F\) is bicontinuous, i.e., it preserves the colimits of \(\omega\)-sequences and the limits of \(\omega^{op}\)-sequences, then the initial algebra (model) is obtained via the colimit of the \(\omega\)-sequence
\[\mathbf{0}\stackrel{{\mathrm{i}}}{{\to}}F\ \mathbf{0} \stackrel{{ F\ \mathrm{i}}}{{\longrightarrow}}F\ F\ \mathbf{0}=F^{2}\ \mathbf{0}\stackrel{{ F^{2}\ \mathrm{i}}}{{\longrightarrow}}F^{3}\ \mathbf{0}\stackrel{{ F^{3}\ \mathrm{i}}}{{\longrightarrow}}...\] (Ini)
where \(\mathbf{0}\) is the initial object in \(\mathbb{C}\), and \(\mathbf{0}\stackrel{{\mathrm{i}}}{{\to}}X\) is the unique arrow from the initial object, and the final coalgebra (model) is the limit of the \(\omega^{op}\)-sequence
\[\mathbf{1}\stackrel{{\mathrm{!}}}{{\leftarrow}}F\ \mathbf{1} \stackrel{{ F\ \mathrm{!}}}{{\longleftarrow}}F\ F\ \mathbf{1}=F^{2}\ \mathbf{1} \stackrel{{ F^{2}\ \mathrm{!}}}{{\longleftarrow}}F^{3}\ \mathbf{1} \stackrel{{ F^{3}\ \mathrm{!}}}{{\longleftarrow}}...\] (Fin)
where \(\mathbf{1}\) is the final object in \(\mathbb{C}\), and \(\mathbf{1}\stackrel{{\mathrm{!}}}{{\leftarrow}}X\) is the unique arrow to the final object [3].
This is a nice abstract framework, but, as we know, the devil is hidden in the details. What do the elements of the initial and final models look like for various concrete functors? How can they be handled in practice?
A possible answer can be obtained by capturing these objects in Matching Logic (ML), the logical foundation of the K Framework, where programming languages and the properties of their programs can be specified in a uniform way (see, e.g., [13, 9, 14, 8, 12]). First steps were taken in [7], where the initial algebra semantics is captured in ML, and in [8], where it is shown how examples of inductive/coinductive data types are fully specified in ML. We say that ML captures an (inductive/coinductive) data type \(DT\) if there is an ML theory \(\mathsf{Th}^{\mathsf{ML}}(DT)\) such that:
* from each \(\mathsf{Th}^{\mathsf{ML}}(DT)\)-model \(M\) we may extract a structure \(\alpha(M)\) that is isomorphic to \(DT\), and
* each deduction principle for \(DT\) (e.g., induction or coinduction) can be expressed as a theorem within ML using its proof system.
In this paper, we investigate how data types specified as initial \(F\)-algebras or as a final \(F\)-coalgebras, where \(F\) is a polynomial functor, can be captured in ML. The polynomial functors can be defined in two ways:
1. Using the "classical" inductive definition of polynomials (see, e.g., [15]): the class of polynomial functors is the smallest class that includes the constant and the identity functors and is closed under sum, product, and constant-exponent functors.
2. Using unary container functors (see, e.g.,[4]): a polynomial functor is of the form \(X\mapsto\sum_{\alpha A}X^{B[a]}\), where \(a:A\vdash B[a]\) is an \(A\)-indexed family.
The constant and the identity functors, together with their initial and final models, can be easily captured in ML. Moreover, if we exclude the exponent functor, then the initial algebra can be captured using an approach similar to that from [7]. The exponent functor complicates things. A possible approach for the classical definition is as follows: supposing that we have captured \(F_{1}\,X\) and \(F_{2}\,X\) by the ML theories (specifications) \(\operatorname{SPEC}(F_{1}\,X)\) and resp. \(\operatorname{SPEC}(F_{2}\,X)\), then use these specifications to build \(\operatorname{SPEC}(F_{1}\,X\,\,op\,F_{2}\,X)\), i.e., to obtain something like
\[\operatorname{SPEC}(F_{1}\,X\,\,op\,F_{2}\,X)=\operatorname{SPEC}(F_{1}\,X)\, \,\overline{op}\,\,\operatorname{SPEC}(F_{2}\,X)\]
where \(\overline{op}\) lifts \(op\) to the level of ML specifications and remains to be defined. In order to accomplish that, we need a "uniform standard" definition for the specifications \(\operatorname{SPEC}(F\,X)\).
The container functors already have a uniform standard definition, and therefore they are more tempting for our investigation. This approach is a work-in-progress, and the results obtained up to now show that:
* it is possible to specify unary container functors, together with their initial algebras and final coalgebras, as ML theories;
* it is possible to derive the induction principle and the coinduction principle as theorems of the corresponding theories;
* the ML reasoning can be used to understand better the intimate structure of the initial algebra and the final coalgebra.
The approach is instantiated on the lists example, in order to see the relationship with the classical approach to these data types, and on that of Moore machines, in order to see the (constant) exponential functor at work.
The paper is structured as follows. Section 2 recalls the definitions of polynomial functors and of unary container functors, and the relationship between them. Section 3 briefly recalls the main elements of matching logic, together with the theories of equality and of sorts. Section 4 contains the main contribution, showing how the unary container functors and their related concepts can be specified in matching logic. The instantiation of the general approach on the examples of lists and Moore machines is included in Section 5 and Section 6, respectively. The paper ends with some concluding remarks.
## 2 Polynomial Functors
This section briefly recalls the definition of the polynomial functors and their (co)algebras. We consider only the particular case of the functors defined over the category of sets \(\mathbb{C}\).
**Definition 2.1**.: [11] Given a functor \(F:\mathbb{C}\to\mathbb{C}\), an \(F\)_-algebra_ consists of an object \(X\) in \(\mathbb{C}\) and an arrow \(\alpha:F\,X\to X\). An algebra morphism \((X,\alpha)\to(X^{\prime},\alpha^{\prime})\) is an arrow \(h:X\to X^{\prime}\) in \(\mathbb{C}\) such that \(h\circ\alpha=\alpha^{\prime}\circ F\)\(h\). An _initial algebra_ for the functor \(F\) is an initial object in the category of \(F\)-algebras and \(F\)-algebra morphisms.
**Example 2.1** (Lists).: The lists over a set of elements \(E\) can be defined as an \(L\)-algebra \(\alpha:L\,X\to X\) for the functor \(L:\mathbb{C}\to\mathbb{C}\) given by \(L\,X=\mathbf{1}+E\times X\). Usually, such an algebra is defined by a constant \(nil:\mathbf{1}\to X\) and a binary operation \(cons:E\times X\to X\). The initial \(L\)-algebra is isomorphic to the finite lists inductively defined [16] by the grammar
\[List:=nil\mid cons(E,List)\] (Lst)
We use \(\mu X\). \(L\,X\) or \(\mu L\) to denote the initial model of the functor \(L\).
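As a concrete illustration (a worked unfolding added here for this example), the colimit chain (Ini) instantiated to \(L\) builds the finite lists length by length:

\[
\begin{aligned}
L^{0}\,\mathbf{0} &= \mathbf{0}, \\
L\,\mathbf{0} &= \mathbf{1}+E\times\mathbf{0} \;\cong\; \mathbf{1} && \{\,nil\,\}, \\
L^{2}\,\mathbf{0} &= \mathbf{1}+E\times L\,\mathbf{0} \;\cong\; \mathbf{1}+E && \{\,nil\,\}\cup\{\,cons(e,nil)\mid e\in E\,\}, \\
L^{3}\,\mathbf{0} &\cong \mathbf{1}+E+E^{2} && \text{lists of length at most }2, \\
&\;\;\vdots \\
\mu L &\cong \textstyle\coprod_{n\geq 0}E^{n} && \text{all finite lists over }E.
\end{aligned}
\]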
**Definition 2.2**.: [11] Given a functor \(F:\mathbb{C}\to\mathbb{C}\), an \(F\)_-coalgebra_ consists of an object \(X\) in \(\mathbb{C}\) and an arrow \(\gamma:X\to F\,X\). A coalgebra morphism \((X,\gamma)\to(X^{\prime},\gamma^{\prime})\) is an arrow \(h:X\to X^{\prime}\) in \(\mathbb{C}\) such that \(F\,h\circ\gamma=\gamma^{\prime}\circ h\). A _final coalgebra_ for the functor \(F\) is a final object in the category of \(F\)-coalgebras and \(F\)-coalgebra morphisms.
**Example 2.2** (Colists).: The colists over a set of elements \(E\) can be defined as an \(L\)-coalgebra \(\gamma:X\to LX\), where \(L\) is the functor used for lists (see Example 2.1). Usually, such a coalgebra is defined by
* a total operation \(\_?:X\to\{1,2\}\) such that \(x.?=\text{if }\gamma(x)=\iota_{1}(\star)\) then 1 else 2 fi, where \(\star\) is the unique element of \(\mathbf{1}\), and
* two partial operations: \(\_.\,hd:X\to E\), and \(\_.\,tl:X\to X\) such that \(x.\,\,hd=e\) and \(x.\,\,tl=x^{\prime}\) iff \(x.?=2\) and \(\gamma(x)=\iota_{2}\big{(}(e,x^{\prime})\big{)}\).
The final coalgebra is isomorphic to the possible infinite lists coinductively defined by the grammar (Lst) [16]. We use \(\nu X\). \(L\,X\) or \(\nu L\) to denote the final model of the functor \(L\).
**Example 2.3** (Moore machines).: A Moore machine is an \(M\)-coalgebra \(\alpha:X\to M\,X\), where \(M\) is the functor \(M:X\mapsto O\times X^{I}\). Usually, a Moore machine is defined by an _output_ function \(out:X\to O\) and a _transition_ function \(tr:X\to X^{I}\)[15]. The final coalgebra \(\nu X\). \(M\,X\) is isomorphic to \((\overline{out,tr}):O^{I^{*}}\to O\times(O^{I^{*}})^{I}\), \(\overline{out}(f)=f(\varepsilon)\), \(\overline{tr}(f)=g\) with \(g(i)(w)=f((i)\cdot w)\) for \(i\in I\) and \(w\in I^{*}\), where \(f\) is a function \(f:I^{*}\to O\), \(g\) a function \(g:I\to O^{I^{*}}\), \(\varepsilon\) the empty sequence, and \(\cdot\) the concatenation of sequences.
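A minimal sketch of this isomorphism (ours, assuming a Moore machine given concretely by two dictionaries named `out` and `tr`): unfolding a state into its behaviour \(f\colon I^{*}\to O\).

```python
def behaviour(out, tr, state):
    """Unfold a Moore machine state into its behaviour f : I* -> O.

    out: dict mapping state -> output          (the output function)
    tr:  dict mapping (state, input) -> state  (the transition function)
    Returns a function taking a finite input word (a tuple of inputs).
    """
    def f(word):
        s = state
        for i in word:          # run the machine along the word
            s = tr[(s, i)]
        return out[s]           # observe the output of the reached state
    return f

# Example: two states, inputs {0, 1}, outputs {'a', 'b'}
out = {'s0': 'a', 's1': 'b'}
tr = {('s0', 0): 's0', ('s0', 1): 's1', ('s1', 0): 's1', ('s1', 1): 's0'}
f = behaviour(out, tr, 's0')
assert f(()) == 'a' and f((1,)) == 'b' and f((1, 1)) == 'a'
```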
**Definition 2.3**.: [15] The class of _polynomial functors_ is inductively defined as follows:
* the constant functor \(A\) (where \(A\) is an object in \(\mathbb{C}\)) is a polynomial functor;
* the identity functor \(ID\) is a polynomial functor;
* the sum \(F_{1}+F_{2}\) of two polynomial functors \(F_{1}\) and \(F_{2}\) is a polynomial functor;
* the product \(F_{1}\times F_{2}\) of two polynomial functors \(F_{1}\) and \(F_{2}\) is a polynomial functor; and
* the (constant-exponent) function space functor \(F(X)=X^{A}\), where \(A\) is an arbitrary object, is a polynomial functor.
An alternative to define polynomial functors is given by the (unary) container functors [2, 5, 4].
**Definition 2.4**.: An \(A\)_-indexed family_\(a:A\vdash B[a]\) is a family of objects of \(\mathbb{C}\) indexed by elements of \(A\). Categorically, it is an object \(B\) of \(\mathbb{C}/A\) and \(B[a]\) denotes the elements of \(B\) mapped to \(a\).
**Definition 2.5**.: Given an \(A\)-indexed family \(a:A\vdash B[a]\), the _dependent product_\(\prod_{a:A}B[a]\) is the object of the _dependent functions_, which maps an \(a:A\) into a \(b:B[a]\). Set theoretically, we have
\[\prod_{a:A}B[a]=\left\{f\in(\bigcup_{a:A}B[a])^{A}\middle|\forall a:A.f(a)\in B[ a]\right\}\]
The _dependent sum_\(\sum_{a:A}B[a]\) is the dual of the dependent product and it consists of the pairs \((a,b)\) with \(a:A\) and \(b:B[a]\).
**Definition 2.6**.: A functor \(F:\mathbb{C}\to\mathbb{C}\) is a _(unary) container functor_ iff it is naturally isomorphic to a functor of the form \(F\,X=\sum_{a:A}X^{B[a]}\), for some objects \(A\) in \(\mathbb{C}\) and an \(A\)-indexed family \(a:A\vdash B[a]\).
_Remark_.: An element of \(\sum_{a:A}X^{B[a]}\) is a pair \((a,f)\), where \(a:A\) is the _shape_ and \(f:B[a]\to X\) is the function that labels the _positions_\(B[a]\) with elements from \(X\). Another way to define \(B\) is as an object in \(\mathbb{C}/A\).
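For finite shapes and positions the extension \(F\,X\) can be enumerated directly; the following sketch and its example family are ours, chosen only to make the pairs \((a,f)\) tangible:

```python
from itertools import product

def container_extension(A, B, X):
    """All elements (a, f) of  Sigma_{a:A} X^{B[a]}  for finite sets.

    A: iterable of shapes
    B: dict mapping each shape a to its (finite) set of positions B[a]
    X: finite set of labels; f is represented as a dict from positions to X.
    """
    elems = []
    for a in A:
        positions = sorted(B[a])
        for labels in product(sorted(X), repeat=len(positions)):
            elems.append((a, dict(zip(positions, labels))))
    return elems

# Shapes {'nil', 'cons'}: 'nil' has no positions, 'cons' has one position.
A = ['nil', 'cons']
B = {'nil': set(), 'cons': {'tail'}}
print(container_extension(A, B, X={0, 1}))
# [('nil', {}), ('cons', {'tail': 0}), ('cons', {'tail': 1})]
```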
### Polynomial Functors as Container Functors
Here we recall the relationship between polynomial functors and container functors (see, e.g., [2]).
#### Constant Functor
The main idea is to identify the constant value with the shapes \(A\). Consider \(B\) as being \(a\colon A\vdash\mathbf{0}\) (no positions of shape \(a\)), where \(\mathbf{0}\) denotes the initial object of \(\mathbb{C}\). We get \(\sum_{a:A}X^{B[a]}\approx\sum_{a:A}X^{\mathbf{0}}\approx A\).
_Remark_.: The elements of \(\sum_{a:A}X^{\mathbf{0}}\) are pairs \(\langle a,f\colon\mathbf{0}\to X\rangle\), where \(a\in A\). Since \(f\colon\mathbf{0}\to X\) is unique, we obtain \(\langle a,f:\mathbf{0}\to X\rangle\cong a\).
#### Identity Functor
Consider \(A\) as being \(\mathbf{1}\) (just one shape) and \(B\) as being \(\star\colon\mathbf{1}\vdash\mathbf{1}\) (just one position), where \(\star\) is the unique element in \(\mathbf{1}\). It follows that \(\sum_{a:A}X^{B[a]}\approx\sum_{\star}X^{\mathbf{1}}\approx X\).
_Remark_.: The elements of \(\sum_{\star}X^{\mathbf{1}}\) are pairs \(\langle\star,f\colon\mathbf{1}\to X\rangle\). Since \(f\) selects just one element \(x\) in \(X\), it follows that \(\langle\star,f:\mathbf{1}\to X\rangle\approx x\).
#### Sum
Assume that \(F\,X=\sum_{a:A}X^{B[a]}\) and \(F^{\prime}\,X=\sum_{a^{\prime}:A^{\prime}}X^{B^{\prime}[a]^{\prime}}\). Then \((F+F^{\prime})\,X\) is \(\sum_{a^{\prime\prime}:A+A^{\prime}}X^{[B,B^{\prime}][a^{\prime\prime}]}\), where \([B,B^{\prime}][a^{\prime\prime}]=B[a]\) if \(a^{\prime\prime}=\iota_{1}\,a\), and \([B,B^{\prime}][a^{\prime\prime}]=B^{\prime}[a^{\prime}]\) if \(a^{\prime\prime}=\iota_{2}\,a^{\prime}\).
_Remark_.: The following commutative diagram may help to understand the definition of \(a^{\prime\prime}\colon A+A^{\prime}\vdash[B,B^{\prime}][a^{\prime\prime}]\):
The arrow \(B+B^{\prime}\to A+A^{\prime}\) is equivalently written as the \(A+A^{\prime}\)-indexed set \(a^{\prime\prime}\colon A+A^{\prime}\vdash[B,B^{\prime}][a^{\prime\prime}]\). Set theoretically, \(\sum_{a^{\prime\prime}:A+A^{\prime}}X^{[B,B^{\prime}][a^{\prime\prime}]}\) is the set of pairs \(\langle a^{\prime\prime},[f,f^{\prime}]\colon[B,B^{\prime}][a^{\prime\prime}]\to X\rangle\), where
* either \(a^{\prime\prime}=\iota_{1}\,a\), \(\langle a,f\colon B[a]\to X\rangle\) in \(F\,X\), and \(\forall x\colon B[a]\). \([f,f^{\prime}]\,x=f\,x\), or
* \(a^{\prime\prime}=\iota_{2}\,a^{\prime}\), \(\langle a^{\prime},f^{\prime}\colon B^{\prime}[a^{\prime}]\to X\rangle\) in \(F^{\prime}\,X\), and \(\forall x^{\prime}\colon B^{\prime}[a^{\prime}]\). \([f,f^{\prime}]\,x^{\prime}=f^{\prime}\,x^{\prime}\).
In other words, the shape of the sum is the sum of the component shapes, and a labelling function for the sum is the sum of two corresponding component labelling functions.
#### Product
We have \((F\times F^{\prime})\,X=\sum_{\langle a,a^{\prime}\rangle:A\times A^{\prime}}X^{ \langle B,B^{\prime}\rangle[\langle a,a^{\prime}\rangle]}\), where \(F\,X\) and \(F^{\prime}\,X\) are similar to those from the sum, and \(\langle B,B^{\prime}\rangle[\langle a,a^{\prime}\rangle]=B[a]+B^{\prime}[a^{ \prime}]\) (the positions of the product are the disjoint union of the component positions).
_Remark_.: We prefer to write \(\langle B,B^{\prime}\rangle[a,a^{\prime}]\) for \(\langle B,B^{\prime}\rangle[\langle a,a^{\prime}\rangle]\). Set theoretically, \(\sum_{\langle a,a^{\prime}\rangle:A\times A^{\prime}}X^{\langle B,B^{\prime} \rangle[a,a^{\prime}]}\) is the set of pairs \(\langle\langle a,a^{\prime}\rangle,[f,f^{\prime}]:B[a]+B^{\prime}[a^{\prime}] \to X\rangle\), where \(f:B[a]\to X\) and \(f^{\prime}:B^{\prime}[a^{\prime}]\to X\). In other words, the shape of the product is the product of the component shapes and a labelling function for the product is the sum of two corresponding component labelling functions.
#### Exponentiation
Assuming \(F\ X=\sum_{a\,:A}X^{B[a]}\), the (constant) exponent functor \(\left(F\ X\right)^{C}\) is \(\sum_{gC\to A}X^{\sum_{c\,:c}B[g\ c]}\).
_Remark_.: An element of \(\sum_{gC\to A}X^{\sum_{c\,:c}B[g\ c]}\) is a pair \(\left(g,f\right)\) consisting of a function \(g\,:C\to A\) assigning shapes to \(C\) and a \(C\)-indexed function \(c\,:C\vdash f_{c}\,:B[g\ c]\to X\) labelling the positions \(B[g\ c]\) for each \(c\) in \(C\).
## 3 Matching Logic (ML)
Matching logic [13, 9, 8] provides a unifying framework for defining semantics of programming languages. A programming language is defined in matching logic as a _logical theory_, i.e., a set of axioms. The key concept in matching logic is that of _patterns_, which can be _matched_ by certain elements. By building complex patterns, we can match elements that have complex structures or certain properties, or both. The presentation of matching logic in this review section follows [8].
**Definition 3.1**.: Let us fix two sets \(EV\) and \(SV\). The set \(EV\) includes _element variables_\(x,y,\dots\). The set \(SV\) includes _set variables_\(X,Y,\dots\). A _matching logic signature_\(\Sigma\) is a set of _(constant) symbols_, denoted \(\sigma,\sigma_{1},\sigma_{2},\dots\). Let us fix a signature \(\Sigma\). The set of (\(\Sigma\)-)_patterns_ is inductively defined as follows:
\[\varphi::=x\mid X\mid\sigma\mid\varphi_{1}\ \varphi_{2}\mid\bot\mid\varphi_{1} \rightarrow\varphi_{2}\mid\exists x.\ \varphi\mid\mu X.\ \varphi\]
where in \(\mu X.\ \varphi\), called a least-fixpoint pattern, we require that \(\varphi\) is positive in \(X\), i.e., \(X\) does not occur under an odd number of left-hand sides of implications \(\varphi_{1}\rightarrow\varphi_{2}\).
**Definition 3.2**.: A _(matching logic) \(\Sigma\)-model \(M\)_ consists of
1. a nonempty _carrier set_, which we also denote \(M\);
2. an _application function_ \(\_\,\boldsymbol{\cdot}\,\_\colon M\times M\rightarrow\mathbb{P}(M)\), where \(\mathbb{P}(M)\) is the powerset of \(M\); and
3. a _symbol interpretation_\(\sigma_{M}\subseteq M\) as a subset for \(\sigma\in\Sigma\).
**Definition 3.3**.: Given \(M\) and a _variable valuation_\(\rho\,:(EV\cup SV)\to M\cup\mathbb{P}(M)\) such that \(\rho(x)\in M\) for \(x\in EV\) and \(\rho(X)\subseteq M\) for \(X\in SV\), we inductively define _pattern valuation_\(\left|\varphi\right|_{M,\rho}\) as follows:
1. \(\left|x\right|_{M,\rho}=\left\{\rho(x)\right\}\) for \(x\in EV\)
2. \(\left|X\right|_{M,\rho}=\rho(X)\) for \(X\in SV\)
3. \(\left|\sigma\right|_{M,\rho}=\sigma_{M}\) for \(\sigma\in\Sigma\)
4. \(\left|\varphi_{1}\,\varphi_{2}\right|_{M,\rho}=\bigcup_{a_{1}\in\left|\varphi_{1}\right|_{M,\rho},\,a_{2}\in\left|\varphi_{2}\right|_{M,\rho}}a_{1}\,\boldsymbol{\cdot}\,a_{2}\); note that \(a_{1}\,\boldsymbol{\cdot}\,a_{2}\) is a subset of \(M\).
5. \(\left|\bot\right|_{M,\rho}=\varnothing\)
6. \(\left|\varphi_{1}\rightarrow\varphi_{2}\right|_{M,\rho}=M\smallsetminus\left( \left|\varphi_{1}\right|_{M,\rho}\smallsetminus\left|\varphi_{2}\right|_{M,\rho}\right)\)
7. \(\left|\exists x.\ \varphi\right|_{M,\rho}=\bigcup_{a\in M}\left|\varphi \right|_{M,\rho\,\{a/x\}}\)
8. \(\left|\mu X.\ \varphi\right|_{M,\rho}=\mathbf{lfp}(A\mapsto\left|\varphi \right|_{M,\rho\,\{A/X\}})\)
where \(\rho\,\{a/x\}\) (resp. \(\rho\,\{A/X\}\)) is the valuation \(\rho^{\prime}\) with \(\rho^{\prime}(x)=a\) (resp. \(\rho^{\prime}(X)=A\)) and agreeing with \(\rho\) on all the other variables in \(EV\cup SV\smallsetminus\{x\}\) (resp. \(EV\cup SV\smallsetminus\{X\}\)). We use \(\mathbf{lfp}(A\mapsto\left|\varphi\right|_{M,\rho\,\{A/X\}})\) to denote the smallest set \(A\) such that \(A=\left|\varphi\right|_{M,\rho\,\{A/X\}}\). The existence of such an \(A\) is guaranteed by the requirement that \(\varphi\) is positive in \(X\). We abbreviate \(\left|\varphi\right|_{M,\rho}\) as \(\left|\varphi\right|_{\rho}\) and further as \(\left|\varphi\right|\) if \(\varphi\) is closed.
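A minimal executable reading of this semantics over a finite carrier, with patterns encoded as nested tuples and the least fixpoint computed by Kleene iteration, may help; the encoding below is ours and is only a sketch of Definition 3.3:

```python
def evaluate(phi, M, app, rho):
    """Pattern valuation |phi|_{M,rho} over a finite carrier set M.

    app: function (a, b) -> set of elements   (the application function)
    phi: ('evar', x) | ('svar', X) | ('sym', name, interp) | ('app', p1, p2)
         | ('bot',) | ('imp', p1, p2) | ('exists', x, p) | ('mu', X, p)
    rho: dict mapping element variables to elements and set variables to sets
    """
    tag = phi[0]
    if tag == 'evar':
        return {rho[phi[1]]}
    if tag == 'svar':
        return set(rho[phi[1]])
    if tag == 'sym':
        return set(phi[2])                      # symbol interpretation
    if tag == 'app':
        s1, s2 = evaluate(phi[1], M, app, rho), evaluate(phi[2], M, app, rho)
        return set().union(*(app(a, b) for a in s1 for b in s2))
    if tag == 'bot':
        return set()
    if tag == 'imp':
        s1, s2 = evaluate(phi[1], M, app, rho), evaluate(phi[2], M, app, rho)
        return M - (s1 - s2)
    if tag == 'exists':
        x, body = phi[1], phi[2]
        return set().union(*(evaluate(body, M, app, {**rho, x: a}) for a in M))
    if tag == 'mu':                             # Kleene iteration up to the least fixpoint
        X, body, A = phi[1], phi[2], set()
        while True:
            nxt = evaluate(body, M, app, {**rho, X: A})
            if nxt == A:
                return A
            A = nxt

M = {0, 1, 2}
top = ('imp', ('bot',), ('bot',))               # top is bot -> bot
assert evaluate(top, M, lambda a, b: set(), {}) == M
```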
**Definition 3.4**.: We say that \(\varphi\)_holds_ in \(M\), written \(M\vDash\varphi\), if \(\left|\varphi\right|_{M,\rho}=M\) for all \(\rho\). For a pattern set \(\Gamma\), we write \(M\vDash\Gamma\), if \(M\vDash\psi\) for all \(\psi\in\Gamma\). We write \(\Gamma\vDash\varphi\), if \(M\vDash\Gamma\) implies \(M\vDash\varphi\) for all \(M\).
The following common constructs can be defined from the basic pattern syntax as syntactic sugar in the usual way:
\[\begin{array}{ll}\neg\varphi\equiv\varphi\rightarrow\bot&\varphi_{1}\lor \varphi_{2}\equiv\neg\varphi_{1}\rightarrow\varphi_{2}&\varphi_{1}\wedge\varphi_ {2}\equiv\neg(\neg\varphi_{1}\vee\neg\varphi_{2})\\ \top\equiv\neg\bot&\forall x.\ \varphi\equiv\neg\exists x.\ \neg\varphi&\nu X.\ \varphi\equiv\neg\mu X.\ \neg\varphi[\neg X/X]\end{array}\]
We assume the standard precedence between the above constructs.
#### Equality
Equality can be defined as a derived construct (see [13, 8]): the pattern \(\varphi_{1}=\varphi_{2}\) is equivalent to \(\top\) if the two patterns are matched by the same elements, and to \(\bot\) otherwise. To express that in ML, a new symbol \(\mathit{def}\in\Sigma\), called the _definedness_ symbol, is introduced and specified by the axiom (Definedness). The resulting theory can be described as follows:
```
spec EQUALITY
Symbols: def
Notations: \(\lceil\varphi\rceil\equiv def\ \varphi\)
Axioms: (Definedness) \(\forall x.\ \lceil x\rceil\)
Notations:
  \(\lfloor\varphi\rfloor\equiv\neg\lceil\neg\varphi\rceil\)   // totality
  \(\varphi_{1}=\varphi_{2}\equiv\lfloor\varphi_{1}\leftrightarrow\varphi_{2}\rfloor\)   // equality
  \(\varphi_{1}\subseteq\varphi_{2}\equiv\lfloor\varphi_{1}\rightarrow\varphi_{2}\rfloor\)   // set inclusion
  \(x\in\varphi\equiv x\subseteq\varphi\)   // membership
endspec
```
#### Sorts
Matching logic has no builtin support for sorts. Instead, we define a _theory of sorts_ to support arbitrary sort structures following the "sort-as-predicate" paradigm. A _sort_ has a name and is associated with a set of its _inhabitants_. In matching logic, we use a symbol \(s\in\Sigma\) to represent the sort name and use \((\mathsf{inh}\ s)\) to represent all its inhabitants, where \(\mathsf{inh}\in\Sigma\) is an ordinary symbol. For better readability, we define the notation \(\mathsf{T}_{s}\equiv\mathsf{inh}\ s\).
```
spec SORT
Imports: EQUALITY
Symbols: \(\mathit{inh}\), \(\mathit{Sort}\)
Notations:
  \(\mathsf{T}_{s}\equiv\mathit{inh}\ s\)   // inhabitants of sort \(s\)
  \(s_{1}\leq s_{2}\equiv\mathsf{T}_{s_{1}}\subseteq\mathsf{T}_{s_{2}}\)   // subsort relation
  \(\neg_{s}\varphi\equiv(\neg\varphi)\wedge\mathsf{T}_{s}\)   // negation within sort \(s\)
  \(\forall x{:}s.\ \varphi\equiv\forall x.\ x\in\mathsf{T}_{s}\rightarrow\varphi\)   // \(\forall\) within sort \(s\)
  \(\exists x{:}s.\ \varphi\equiv\exists x.\ x\in\mathsf{T}_{s}\wedge\varphi\)   // \(\exists\) within sort \(s\)
  \(\mu X{:}s.\ \varphi\equiv\mu X.\ X\subseteq\mathsf{T}_{s}\wedge\varphi\)   // \(\mu\) within sort \(s\)
  \(\nu X{:}s.\ \varphi\equiv\nu X.\ X\subseteq\mathsf{T}_{s}\wedge\varphi\)   // \(\nu\) within sort \(s\)
  \(\varphi{:}s\equiv\exists z{:}s.\ \varphi=z\)   // "typing"
  \(f{:}s_{1}\otimes\cdots\otimes s_{n}\to s\equiv\forall x_{1}{:}s_{1}\ldots\forall x_{n}{:}s_{n}.\ \exists y{:}s.\ f\ x_{1}\ldots x_{n}=y\)   // functional
  \(f{:}s_{1}\otimes\cdots\otimes s_{n}\rightharpoonup s\equiv\forall x_{1}{:}s_{1}\ldots\forall x_{n}{:}s_{n}.\ \exists y{:}s.\ f\ x_{1}\ldots x_{n}\subseteq y\)   // partially functional
Axioms:
  \(\exists x.\ Sort=x\)
  \(Sort\in\mathsf{T}_{Sort}\)
endspec
```
The ML specifications for the sum, product, and function sorts are given in [8]. For reader convenience, we recall them in Appendix A.
## 4 Specifying Initial Algebra and Final Coalgebra for Container Functors in ML
In this section we show how to specify a unary container functor in ML and how to extract the structures for their initial algebra and the final coalgebra.
### Capturing Elements from the Category \(\mathbb{C}\)
* **0** is specified by \(\bot\);
* \(\mathbf{1}=\{\star\}\) is specified by
* a symbol \(star\in\Sigma\);
* a notation: \(\star\equiv star\); and
* an axiom \(\exists y\). \(\star=y\) (singleton)
* \(\mathbf{0}\xrightarrow{\mathrm{i}}X\) is specified by
* a symbol \(iniMor\in\Sigma\);
* a notation: \(\mathsf{i}\equiv iniMor\); and
* an axiom \(\forall x\). \(\mathsf{i}\ x=\bot\) (captures \(\mathbf{0}\xrightarrow{\mathrm{i}}X\))
* **1**\(\stackrel{{!}}{{\leftarrow}}X\) is specified by
* a symbol \(finMor\in\Sigma\);
* a notation: \(!\equiv finMor\); and
* an axiom \(\forall x\). \(!\ \mathsf{x}=\star\) (captures **1**\(\stackrel{{!}}{{\leftarrow}}X\))
_Remark_.: An alternative way to specify **1** is by \(\top\), in which case \(\mathbf{1}\stackrel{{!}}{{\leftarrow}}X\) is the inclusion: \(\forall x\). \(x\in X\rightarrow!\,x=x\). This version is used when we compute the greatest fixpoint.
### Expressing Indexed Families in ML
An \(A\)-indexed family \(a\):\(A\vdash B[a]\) is specified by a constant symbol \(depOf\in\Sigma\), a notation \(B[a]\equiv depOf\ B\ a\), and two axioms:
* \(A\) is a sort: \(A\):\(Sort\) (equivalent to \(A\in\mathbb{T}_{Sort}\)), and
* for each \(a\):\(A\), \(B[a]\) is a sort: \(\forall a\):\(A\). \(B[a]\):\(Sort\)
where we assumed that \(A\) and \(B\) are specified as functional patterns.
### Expressing Dependent Products/Sums in ML
A dependent product \(\prod_{a:A}B[a]\) is specified by a constant symbol \(\mathit{DepProd}\in\Sigma\), a notation \(\prod_{a:A}B[a]\equiv\mathit{DepProd}\ A\ B\) (\(a\) plays a local role and its name can be changed1), and by adding the axioms:
Footnote 1: Actually, \(\Pi\) should be captured as a binder [9], but this is not needed for the purpose of this paper.
* \(\prod_{a:A}B[a]\) is a sort: \(\prod_{a:A}B[a]\):\(\mathit{Sort}\)
* \(\top_{\prod_{a:A}B[a]}\) is the set of dependent functions: \[\top_{\prod_{a:A}B[a]}=\exists f.\ f\wedge\left(\left(\lceil\top_{A}\rceil \wedge\forall a\colon A.\ \exists b\colon B[a].\ f\ a=b\right)\vee \left(\neg\lceil\top_{A}\rceil\wedge f=\mathrm{i}\right)\right)\] The above axiom distinguishes between two cases: the sort \(A\) is non-empty, in which case \(\top_{\prod_{a:A}B[a]}\) includes the dependent functions that map an \(a\colon A\) into a \(b\colon B[a]\), and when the sort \(A\) is empty, in which case \(\top_{\prod_{a:A}B[a]}\) consists of the function given by the initial morphism.
Similarly, a dependent sum \(\sum_{a:A}B[a]\) is specified by a constant \(\mathit{DepSum}\in\Sigma\), a notation \(\sum_{a:A}B[a]\equiv\mathit{DepSum}\ A\ B\) (again, \(a\) plays a local role and its name can be changed), and by adding the axioms:
* \(\sum_{a:A}B[a]\) is a sort: \(\sum_{a:A}B[a]\colon\mathit{Sort}\)
* \(\top_{\sum_{a:A}B[a]}\) is the set of dependent pairs: \[\top_{\sum_{a:A}B[a]}=\exists a\colon A.\ \exists b\colon B[a].\ \langle a,b\rangle\]
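For finite families both constructions can be enumerated directly; a small sketch (the example family is ours):

```python
from itertools import product

def dependent_sum(B):
    """Sigma_{a:A} B[a]: pairs (a, b) with b in B[a]; B maps shapes to finite sets."""
    return [(a, b) for a in B for b in sorted(B[a])]

def dependent_product(B):
    """Pi_{a:A} B[a]: choice functions f with f(a) in B[a], represented as dicts."""
    keys = list(B)
    return [dict(zip(keys, choice)) for choice in product(*(sorted(B[a]) for a in keys))]

B = {'x': {0, 1}, 'y': {7}}
print(dependent_sum(B))       # [('x', 0), ('x', 1), ('y', 7)]
print(dependent_product(B))   # [{'x': 0, 'y': 7}, {'x': 1, 'y': 7}]
```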
### Expressing Unary Container Functors in ML
Let \(X\mapsto F\ X=\sum_{a:A}X^{B[a]}\) be a container functor. Recall that \(F\ X\) is the set of pairs \(\langle a,f\rangle\) with \(a\colon A\) and \(f\colon X^{B[a]}\) (or, equivalently, \(f\colon B[a]\to X\)). If \(X\) is specified as the set of inhabitants \(\top_{s}\) of a sort \(s\colon\mathit{Sort}\) and \(\top_{B[a]}\neq\bot\) (that is equivalent to \(B[a]\not\cong\mathbf{0}\) in \(\mathbb{C}\)), then \(X^{B[a]}\) is specified by the sort \(B[a]\bigodot s\)[7] (see also Appendix A.3) and the specification of \(\sum_{a:A}X^{B[a]}\) is a particular case of dependent sum specification. Otherwise, we have to explicitly specify \(X^{B[a]}\) by the notation
\[X^{B[a]}\equiv\exists f\cdot f\wedge\left(\left(\left(\top_{B[a]}\not\perp \right)\wedge\forall b\colon B[a]\colon\exists x.\ f\ b=x\wedge x\in X\right) \vee\left(\left(\top_{B[a]}=\perp\right)\wedge f=i\right)\right)\]
and use it directly in the specification of \(\sum_{a:A}X^{B[a]}\):
\[\exists a\colon A\colon\exists f\cdot\langle a,f\rangle\wedge f\in X^{B[a]}\]
which is equivalent to
\[\exists a\colon A\colon\exists f\cdot\langle a,f\rangle\wedge\left(\left( \top_{B[a]}\not\perp\right)\wedge\forall b\colon B[a]\colon\exists x.\ f\ b=x \wedge x\in X\right)\vee\left(\left(\top_{B[a]}=\perp\right)\wedge f=i \right)\right)\]
Recall that if \(\top_{B[a]}=\perp\) (\(B[a]\cong\mathbf{0}\)) then there is just one function \(\mathbf{0}\xrightarrow{i}X\) specified by \(i\equiv\mathit{iniMor}\).
### Specifying Initial Algebra
Let \(X\mapsto F\ X=\sum_{a:A}X^{B[a]}\) be a container functor. The initial \(F\)-algebra is specified by using the characterization given by the "no junk and no confusion" properties for the constructors [7]:
* a constructor \(\mathit{cons}\in\Sigma\) specified by the following axioms: \[\forall a\colon A\ \forall f\colon f\in X^{B[a]}\rightarrow\exists x\cdot x \in X\wedge\mathit{cons}\ \langle a,f\rangle=x\] (Functional) \[\forall a,a^{\prime}\colon A\ \forall f,f^{\prime}\cdot\mathit{cons}\ \langle a,f\rangle=\mathit{cons}\ \langle a^{\prime},f^{\prime}\rangle \to a=a^{\prime}\wedge f=f^{\prime}\] (No Confusion)
* a sort \(\mu F\) with initial semantics: \[\top_{\mu F}=\mu X.\ \exists a\colon A.\ \mathit{cons}\ \left\langle a,X^{B[a]}\right\rangle\] (No Junk)
Computing the least fixpoint. Let \(\varphi_{F}(X)\) denote the pattern \(\exists a\colon A.\ {cons}\ \left\langle a,X^{B[a]}\right\rangle\). Given a model \(M\) and a valuation \(\rho\), we have \(|\mu X.\ \varphi_{F}(X)|_{M,\rho}=\mathbf{lfp}(A\mapsto|\varphi_{F}(X)|_{M,\rho[A/X]})\). We also denote by \(A\mapsto\phi_{F}(A)\) the function \(A\mapsto|\varphi_{F}(X)|_{M,\rho[A/X]}\). If \(\phi_{F}\) is continuous, then \(\mathbf{lfp}(\phi_{F})=\bigcup_{n\geq 0}\phi_{F}^{n}(\varnothing)=\varnothing \cup\phi_{F}(\varnothing)\cup\phi_{F}^{2}(\varnothing)\cup\cdots\). Since \(\phi_{F}(\varnothing)=|\varphi_{F}(\bot)|_{M,\rho}\) and writing \(\varphi(\psi)\) for \(\varphi[\psi/X]\), we (informally) obtain that \(\mathbf{lfp}(\phi_{F})\) is the interpretation of the infinite disjunction
\[\bot\vee\varphi_{F}(\bot)\vee\varphi_{F}^{2}(\bot)\vee\varphi_{F}^{3}(\bot) \vee\cdots\] (Lfp)
according to \(M\) and \(\rho\). We have \(\varphi_{F}^{n}(\bot)\rightarrow\varphi_{F}^{n+1}(\bot)\) and each \(\varphi_{F}^{n}(\bot)\) gives an approximation of \(\top_{\mu F}\cong\mathbf{lfp}(\phi_{F})\). So, in order to understand what the elements of the initial algebra look like, we have to investigate these ML patterns.
\(\varphi_{F}(\bot)\). We have \(X^{B[a]}=\bot^{B[a]}\neq\bot\) iff \(\top_{B[a]}=\bot\), because otherwise we have \((\forall b\colon B[a].\ \exists y.\ f\ b=y\wedge y\in\bot)=\bot\). It follows that \(\bot^{B[a]}\) consists of the unique function \(\mathsf{i}\). We obtain
\[\varphi_{F}(\bot)=\exists a_{1}\colon A.\ {cons}\ \left\langle a_{1},\mathsf{i} \right\rangle\land(\top_{B[a_{1}]}=\bot)\]
i.e., each \(a_{1}\colon A\), with its corresponding dependent sort \(B[a_{1}]\) empty, defines a constant constructor.
If \(\forall a\colon A.\ \top_{B[a]}\neq\bot\) then \((\exists a\colon A.\ {cons}\ \left\langle a,\bot^{B[a]}\right\rangle)=\bot\) and hence \(\top_{\mu F}=\bot\).
\(\varphi_{F}^{2}(\bot)=\exists a_{2}\colon A.\ {cons}\ \left\langle a_{2},\varphi_{F}(\bot)^{B[a_{2}]}\right\rangle\). If \(\top_{B[a_{2}]}=\bot\) then \(\varphi_{F}(\bot)^{B[a_{2}]}\) consists of the constant constructor \({cons}\ \left\langle a_{2},\mathsf{i}\right\rangle\), i.e., \(\varphi_{F}(\bot)^{B[a_{2}]}=\bot^{B[a_{2}]}\). If \(\top_{B[a_{2}]}\neq\bot\) then we have
\[\varphi_{F}(\bot)^{B[a_{2}]}=\exists f.\ f\wedge\forall b\colon B[a_{2}].\ \exists a_{1}\colon A.\ f\ b={cons}\left\langle a_{1},\mathsf{i} \right\rangle\land(\top_{B[a_{1}]}=\bot)\]
We obtain
\[\varphi_{F}^{2}(\bot)=\varphi_{F}(\bot)\vee\exists a_{2}\colon A.\ {cons}\ \left\langle a_{2},\varphi_{F}(\bot)^{B[a_{2}]}\right\rangle\land(\top_{B[a_{2 }]}\neq\bot)\]
...
_Remark_.: The ML pattern (Lfp) can be seen as an informal translation in ML of the colimit (Ini).
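The successive approximations \(\varphi_{F}^{n}(\bot)\) can also be enumerated mechanically for a container with finitely many shapes; the following sketch (ours) uses the list-like container with shapes \(\mathbf{1}+E\), where the \(\star\) shape has no positions and each \(e\in E\) has exactly one position:

```python
def phi(prev, E):
    """phi_F for the list-like container: cons<*, i> plus cons<e, f> where the
    single position of shape e is labelled by an element of prev."""
    return {('*', None)} | {(e, x) for e in E for x in prev}

approx, E = set(), {0, 1}
for n in range(1, 4):
    approx = phi(approx, E)          # phi_F^n(bot)
    print(n, sorted(approx, key=repr))
# n=1: [('*', None)]                                  -- just the empty list
# n=2: adds (0, ('*', None)) and (1, ('*', None))     -- the one-element lists
# n=3: adds the two-element lists, and so on
```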
Deriving Induction Principle. Once we have seen how the least fixpoint is computed, we may derive the following _Induction Principle_:
\[\frac{\forall a\colon A.\ {cons}\ \left\langle a,\psi\right\rangle \rightarrow\psi}{\top_{\mu F}\rightarrow\psi}\]
The justification for this principle is similar to that for lists given in [8].
### Specifying Final Coalgebra
Let \(X\mapsto F\ X=\sum_{a\colon A}X^{B[a]}\) be a container functor. The final \(F\)-coalgebra is specified by:
* the constructor \({cons}\) together with its axioms;
* a sort \(\nu F\) with final semantics: \(\top_{\nu F}=\nu X\). \(\exists a\colon A.\ {cons}\ \left\langle a,X^{B[a]}\right\rangle\) (No Redundancy (Cojunk))
* two destructors \({out},{nxt}\in\Sigma\) together with the notations: \({x}.{out}\equiv{out}\ x\) \({x}.{nxt}\equiv{nxt}\ x\) and the axioms: \(\forall a\colon A.\ \forall f.\ ({cons}\ \left\langle a,f\right\rangle).{out}=a\) (No Ambiguity (Coconfusion).1)) \(\forall a\colon A.\ \forall f.\ ({cons}\ \left\langle a,f\right\rangle).{nxt}=f\) (No Ambiguity (Coconfusion).2)) \(\forall x\colon\nu F.\ ({cons}\ \left\langle x.{out},x.{nxt}\right\rangle)=x\) (No Ambiguity (Coconfusion).3))
Computing the greatest fixpoint. Since \(\mathbf{gfp}(\phi_{F})=\bigcap_{n\geq 0}\phi_{F}^{n}(M)=M\cap\phi_{F}(M)\cap\phi_{F} ^{2}(M)\cap\cdots\), we have to investigate the infinite conjunction
\[\top\wedge\phi_{F}(\top)\wedge\phi_{F}^{2}(\top)\wedge\phi_{F}^{3}(\top)\wedge\cdots\] (Gfp)
in order to understand what the elements of the final coalgebra look like.
\(\varphi_{F}(\top)\)**.**: We have \(cons\ \left\langle a,\top^{B[a]}\right\rangle=\exists x,y.\ x\wedge x.out=a\wedge x. nxt=y\).
\(\varphi_{F}^{2}(\top)\)**.**: We have
\[\varphi_{F}^{2}(\top) =\exists a\colon A.\ cons\ \left\langle a,\varphi_{F}(\top)^{B[a]}\right\rangle\] \[=(\exists a\colon A.\ \exists x.\ \exists y\colon\varphi_{F}\ \top.\ x \wedge x.out=a\wedge x.nxt=y)\] \[=(\exists a,a^{\prime}\colon A.\ \exists x,z.\ \exists y\colon\varphi_{F}\ \top.\ x \wedge x.out=a\wedge x.nxt=y\wedge y.out=a^{\prime}\wedge y.nxt=z)\]
\(\cdots\)
_Remark_.: The ML pattern (Gfp) can be seen as an informal ML translation of the limit (Fin).
Deriving Coinduction Principle. Once we have seen how the greatest fixpoint is computed, we may derive the following _Coinduction Principle_:
\[\frac{\forall a\colon A.\ \psi\to cons\ \left\langle a,\psi\right\rangle}{ \psi\rightarrow\top_{\nu F}}\]
The justification for this principle is similar to that for streams given in [8].
## 5 Case Study: Lists Using Container Functors
First, we express \(L\,X=\mathbf{1}+E\times X\) as a unary container functor, using \(\mathbf{1}\approxeq\sum_{*\colon 1}X^{\emptyset[*]}\), \(E\approxeq\sum_{e\colon E}X^{\emptyset[e]}\), and \(X\approxeq\sum_{*\colon 1}X^{1[*]}\):
\[L\,X =\sum_{*\colon 1}X^{\emptyset[*]}+\sum_{e\colon E}X^{\emptyset[e] }\times\sum_{*\colon 1}X^{\llbracket*\rrbracket}\] \[=\sum_{*\colon 1}X^{\emptyset[*]}+\sum_{\left\langle e,*\right\rangle \colon E\times 1}X^{\emptyset[e]+1[*]}\] \[=\sum_{a\colon\mathbf{1}+E\times 1}X^{\llbracket\emptyset, \left\langle\mathbf{0},\mathbf{1}\right\rangle\rrbracket[a]}\]
Using the isomorphisms \(E\times\mathbf{1}\approxeq E\) and \(\left\langle\mathbf{0},\mathbf{1}\right\rangle\llbracket\left\langle e,* \right\rangle\rrbracket=\mathbf{0}[e]+\mathbf{1}[*]=\mathbf{0}+\mathbf{1}\approxeq \mathbf{1}\) in the category \(\mathbb{C}\), we obtain the reduced form \(L^{c}\,X=\sum_{a\colon\mathbf{1}+E}X^{\llbracket\mathbf{0},\mathbf{1}\rrbracket [a]}\) of the container functor \(L\). From the definition of the sum of container functors we deduce that the elements of \(\sum_{a\colon\mathbf{1}+E}X^{\llbracket\mathbf{0},\mathbf{1}\rrbracket[a]}\) are pairs \(\{\left\langle*,f\right\rangle|f\colon\mathbf{0}\to X\}\uplus\{\left\langle e,f^{\prime}\right\rangle|e\colon E,f^{\prime}\colon\mathbf{1}\to X\}\). The only function \(\mathbf{0}\to X\) is \(\mathrm{i}\), and \(f^{\prime}\colon\mathbf{1}\to X\approxeq f^{\prime}\in X\).
The specification in ML of the initial algebra \(\mu L^{c}\) includes:
* a constructor symbol \(cons\) and a sort symbol \(\mu F\);
* the axioms: \(\exists y\colon\mu L^{c}.\ cons\ \left\langle\star,\mathrm{i}\right\rangle=y\) (Functional.1) \(\forall e\colon E.\ \forall f^{\prime}\colon\mu L^{c}.\ \exists y\colon\mu L ^{c}.\ cons\ \left\langle e,f^{\prime}\right\rangle=y\) (Functional.2) \(\forall e\colon E.\ \forall f^{\prime}\colon\mu L^{c}.\ cons\ \left\langle\star,\mathrm{i}\right\rangle \neq cons\ \left\langle e,f^{\prime}\right\rangle\) (No Confusion.1) \(\forall e,e^{\prime}\colon E.\ \forall f,f^{\prime}\colon\mu L^{c}.\ cons\ \left\langle e,f\right\rangle= cons\ \left\langle e^{\prime},f^{\prime}\right\rangle\to e=e^{\prime}\wedge f=f^{\prime}\) (No Confusion.2) \(\top_{\mu L^{c}}=\mu X\). (\(cons\ \left\langle\star,\mathrm{i}\right\rangle\lor cons\ \left\langle E,X\right\rangle\)) (No Junk)
Comparing with the specification from [8], we obviously have the equivalences \(nil\cong cons\ \left\langle\star,\mathrm{i}\right\rangle\) and \(cons\ e\ \ell\cong cons\ \left\langle e,\ell\right\rangle\).
The ML specification of the final coalgebra further includes:
* the destructor symbols \(out,nxt\in\Sigma\);
* the axioms \((cons\ \left\langle\star,\mathrm{i}\right\rangle).out=\star\) (No Coconfusion.1.1)) \(\forall e\colon E.\ \forall f\colon\nu L^{c}.\ (cons\ \left\langle e,f\right\rangle).out=e\) (No Coconfusion.1.2)) \((cons\ \left\langle\star,\mathrm{i}\right\rangle).nxt=\mathrm{i}\) (No Coconfusion.2.1)) \(\forall e\colon E.\ \forall f\colon\nu L^{c}.\ (cons\ \left\langle e,f\right\rangle).nxt=f\) (No Coconfusion.2.2)) \(\forall x\colon\nu L^{c}.\ (cons\ \left\langle x.out,x.nxt\right\rangle)=x\) (No Coconfusion.3)) \(\top_{\nu L^{c}}=\nu X.\ (cons\ \left\langle\star,\mathrm{i}\right\rangle\lor cons\ \left\langle E,X\right\rangle)\) (No Cojunk)
Comparing with the specification of streams (infinite lists) from [8], we obviously have the equivalences \(\ell.out\cong hd\ \ell\) and \(\ell.nxt\cong tl\ \ell\). Using \(nil\cong cons\ \left\langle\star,\mathrm{i}\right\rangle\), we get \(hd\ nil=\star\) and \(tl\ nil=\mathrm{i}\), which is different from the usual approach, where \(hd\) and \(tl\) are partial operations.
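The coinductive reading can be mimicked with lazily unfolded states; a small sketch (ours) of an \(L^{c}\)-coalgebra whose states unfold, via the destructors, to a finite list and to an infinite (constant) colist:

```python
def unfold(gamma, state, n):
    """Observe the first n destructor steps of the colist generated from `state`.

    gamma maps a state either to ('*',) (the nil shape) or to (e, next_state).
    """
    out = []
    for _ in range(n):
        obs = gamma(state)
        if obs == ('*',):           # cons<*, i>: the colist ends here
            return out
        e, state = obs              # cons<e, f>: emit e and continue from f
        out.append(e)
    return out

finite = lambda k: ('*',) if k == 0 else (k, k - 1)       # unfolds to [3, 2, 1], then stops
constant = lambda s: ('a', s)                             # the infinite colist a, a, a, ...
print(unfold(finite, 3, 10))     # [3, 2, 1]
print(unfold(constant, 's', 4))  # ['a', 'a', 'a', 'a']
```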
## 6 Case Study: Moore Machines
Here we consider an example of (constant) exponential functor, whose ML specification is more tricky. Moore machines (automata) have the signature given by the functor \(M\ X=O\times X^{I}\), where \(O\) is for outputs and \(I\) for inputs.
#### Capturing Final M-Coalgebra in ML Using Container Functors
We first express \(M\) as a unary container functor:
\[M\ X =O\times X^{I}\] \[\cong\sum_{o:O}X^{\mathbf{0}}\times(\sum_{\star:\ \mathbf{1}}X^{ \mathbf{1}})^{\sum_{i:I}X^{\mathbf{0}}}\] \[=\sum_{o:O}X^{\mathbf{0}}\times\sum_{g:I\to\mathbf{1}}X^{\sum_{i:I} \mathbf{1}\left[g\ i\right]}\] \[=\sum_{\left\langle o,g\right\rangle:(O\times(I\to\mathbf{1}))}X ^{\mathbf{0}\left[o\right]+\sum_{i:I}\mathbf{1}\left[g\ i\right]}\] \[\cong\sum_{o:O}X^{\sum_{i:I}\mathbf{1}}\] \[\cong\sum_{o:O}X^{I}\] \[=M^{c}\ X\]
The instantiation of the ML specification for the final coalgebra is as follows:
* the constructor \(cons\in\Sigma\) is specified by the following axioms: \(\forall o\colon O.\ \forall f.\ f\in X^{I}\rightarrow\exists x.\ x\in X\wedge cons\ \left\langle o,f\right\rangle=x\) (Functional) \(\forall o,o^{\prime}\colon O.\ \forall f,f^{\prime}.\ cons\ \left\langle o,f\right\rangle=cons\ \left\langle o^{\prime},f^{\prime}\right\rangle\to o=o^{\prime}\wedge f=f^{\prime}\) (No Confusion)
* the destructors \(out,nxt\in\Sigma\) are specified by the axioms: \(\forall o\colon O.\ \forall f.\ f\in X^{I}\rightarrow(cons\ \left\langle o,f\right\rangle).out=o\) (No Coconfusion.1)) \(\forall o\colon O.\ \forall f.\ f\in X^{I}\rightarrow(cons\ \left\langle o,f\right\rangle).nxt=f\) (No Coconfusion.2)) \(\forall x\colon\nu M^{c}.\ (cons\ \left\langle x.out,x.nxt\right\rangle)=x\) (No Coconfusion.3))
* the sort \(\nu M^{c}\) with final semantics: \(\top_{\nu M^{c}}=\nu X.\ \exists o\colon O.\ cons\ \left\langle o,X^{I}\right\rangle\) (No Cojunk)
We should have
\[\top_{\nu M^{c}}\cong\nu X.\ M^{c}\ X\]
Computing \(\top\wedge\varphi_{M}(\top)\wedge\varphi_{M}^{2}(\top)\wedge\varphi_{M}^{3}(\top) \wedge\cdots\), where \(\varphi_{M}(X)\equiv\exists o\colon O\). \(cons\)\(\left\langle o,X^{I}\right\rangle\):
\(\varphi_{M}(\top)=\exists o_{0}\colon O.\ cons\ \left\langle o_{0},\top^{I}\right\rangle=\exists o_{0}\colon O.\ \exists f_{0}.\ cons\ \left\langle o_{0},f_{0}\right\rangle\wedge\left(\forall i\colon I.\ \exists x_{1}.\ f_{0}\ i=x_{1}\right)\). When describing a dynamical system, the use of destructors is more intuitive:
\(\varphi_{M}(\top)=\exists x_{0}\). \(\exists o_{0}\colon O\). \(x_{0}\wedge\left(x_{0}.out=o_{0}\wedge\forall i\colon I\). \(\exists y\). \(x_{0}.nxt\)\(i=y\right)\)
It is easy to see that \(x_{0}=cons\)\(\left\langle o_{0},f_{0}\right\rangle\) and \(f_{0}\)\(i=x_{0}.nxt\)\(i\).
\(\varphi_{M}^{2}(\top)=\exists o_{1}\colon O.\ cons\ \left\langle o_{1},\varphi_{M}(\top)^{I}\right\rangle=\exists o_{1}\colon O.\ \exists f_{1}.\ cons\ \left\langle o_{1},f_{1}\right\rangle\wedge\left(\forall i\colon I.\ \exists x_{2}.\ f_{1}\ i=x_{2}\wedge x_{2}\in\varphi_{M}(\top)\right)\). Again, it becomes more suggestive using the destructors:
\(\varphi_{M}^{2}(\top)=\exists x_{1}\). \(\exists o_{1}\colon O\). \(x_{1}\wedge\left(x_{1}.out=o_{1}\wedge\forall i\colon I\). \(\exists x_{0}\). \(x_{1}.nxt\)\(i=x_{0}\wedge x_{0}\in\varphi_{M}(\top)\right)\)
\(=\exists x_{1}.\ \exists o_{1}\colon O.\ x_{1}\wedge\left(x_{1}.out=o_{1}\wedge\forall i\colon I.\ \exists x_{0}.\ \exists o_{0}\colon O.\ x_{1}.nxt\ i=x_{0}\wedge x_{0}.out=o_{0}\wedge\forall i\colon I.\ \exists y.\ x_{0}.nxt\ i=y\right)\)
\(\cdots\)
We have \(\top_{\nu M^{c}}\ni x\cong f\in O^{I^{*}}\) iff \(x.out=f\ \varepsilon\) and \(\forall i\colon I.\ x.nxt\ i\cong\overline{tr}(f)(i)\).
## 7 Conclusion
The technical experiments reported in this paper show that both the initial models and the final models for polynomial functors can be fully captured in matching logic (ML) using their representation as unary container functors. The ML specification of the polynomial functors is possible due to the fact that the sum, product, and function sorts can be specified in ML [8], and these specifications are part of capturing the category of sets \(\mathbb{C}\) in ML.
A functor represented as a "classical" polynomial can be translated into a container functor shape using the fact that the latter are closed under sum, product, and exponentiation. However, the result could be cumbersome and not easy to handle in matching logic because the construction starts from constant and identity functors. Therefore it is preferable to simplify it using the isomorphisms in the category of sets.
This result can help in defining, in ML, programming languages that use both inductive and coinductive data types. Another advantage is a better understanding of the abstract constructions from category theory. A possible use is as follows:
* define in the front-end a suitable syntax for data types intended to be defined as initial algebras or final coalgebras;
* extract the canonical form of the functor underlying the front-end definition;
* generate the corresponding ML theory;
* derive the proof principles needed to soundly handle the defined data type.
Future work will focus on the following aspects of the proposed approach:
* a more formal presentation of the approach;
* how to capture in ML the iteration principle and the primitive recursive principle;
* extending the approach to larger classes of functors admitting initial algebras and final coalgebras, e.g., indexed containers [4], the bounded natural functors (BNFs) underlying Isabelle/HOL's datatypes [17], or the quotients of polynomial functors, experimentally implemented in Lean [6].
|
2310.20225 | A Multi-Modal Foundation Model to Assist People with Blindness and Low
Vision in Environmental Interaction | People with blindness and low vision (pBLV) encounter substantial challenges
when it comes to comprehensive scene recognition and precise object
identification in unfamiliar environments. Additionally, due to the vision
loss, pBLV have difficulty in accessing and identifying potential tripping
hazards on their own. In this paper, we present a pioneering approach that
leverages a large vision-language model to enhance visual perception for pBLV,
offering detailed and comprehensive descriptions of the surrounding
environments and providing warnings about the potential risks. Our method
begins by leveraging a large image tagging model (i.e., Recognize Anything
(RAM)) to identify all common objects present in the captured images. The
recognition results and user query are then integrated into a prompt, tailored
specifically for pBLV using prompt engineering. By combining the prompt and
input image, a large vision-language model (i.e., InstructBLIP) generates
detailed and comprehensive descriptions of the environment and identifies
potential risks in the environment by analyzing the environmental objects and
scenes, relevant to the prompt. We evaluate our approach through experiments
conducted on both indoor and outdoor datasets. Our results demonstrate that our
method is able to recognize objects accurately and provide insightful
descriptions and analysis of the environment for pBLV. | Yu Hao, Fan Yang, Hao Huang, Shuaihang Yuan, Sundeep Rangan, John-Ross Rizzo, Yao Wang, Yi Fang | 2023-10-31T06:56:51Z | http://arxiv.org/abs/2310.20225v2 | VisPercep: A Vision-Language Approach to Enhance Visual Perception for People with Blindness and Low Vision
###### Abstract
People with blindness and low vision (pBLV) encounter substantial challenges when it comes to comprehensive scene recognition and precise object identification in unfamiliar environments. Additionally, due to the vision loss, pBLV have difficulty in accessing and identifying potential tripping hazards on their own. In this paper, we present a pioneering approach that leverages a large vision-language model to enhance visual perception for pBLV, offering detailed and comprehensive descriptions of the surrounding environments and providing warnings about the potential risks. Our method begins by leveraging a large image tagging model (i.e., Recognize Anything (RAM)) to identify all common objects present in the captured images. The recognition results and user query are then integrated into a prompt, tailored specifically for pBLV using prompt engineering. By combining the prompt and input image, a large vision-language model (i.e., InstructBLIP) generates detailed and comprehensive descriptions of the environment and identifies potential risks in the environment by analyzing the environmental objects and scenes, relevant to the prompt. We evaluate our approach through experiments conducted on both indoor and outdoor datasets. Our results demonstrate that our method is able to recognize objects accurately and provide insightful descriptions and analysis of the environment for pBLV.
## 1 Introduction
The prevalence of visual impairment has reached alarming levels, affecting millions of individuals worldwide, as highlighted by recent estimates from the World Health Organization (WHO) [24, 11]. The number of people experiencing moderate to severe visual impairment or complete blindness continues to rise steadily, with projections indicating a further surge in these numbers by 2050 [23]. Visual impairment, whether partial or complete, presents significant challenges that profoundly impact various aspects of the daily life of the pBLV [20]. Among the critical tasks that pose difficulties for pBLV is visual search, which involves actively scanning the environment and locating a specific target among distracting elements [30]. Even for individuals with normal vision, visual search can be demanding, especially in complex environments. However, for individuals with blindness or low vision, these challenges are further compounded [19]. Those with peripheral vision loss, central vision loss, or hemi-field vision loss struggle to pinpoint a particular location or search for objects due to the reduced field of view. They often require assistance to accurately identify the environment or locate objects of interest. Similarly, individuals experiencing blurred vision or near-sightedness encounter difficulties in identifying objects at varying distances. Color-deficient vision and low-contrast vision further exacerbate the challenges of distinguishing objects from the background when they share similar colors. In addition to understanding their surroundings and locating objects of interest, assessing potential risks and hazards within the visual environment becomes an intricate task, demanding a comprehensive analysis to ensure personal safety [6]. The magnitude of these challenges emphasizes the need for innovative solutions to enhance visual perception and empower visually impaired individuals in their daily lives.
Current assistive technologies [13, 29, 18] driven by computer vision approaches and wearable devices have led to the development of assistive systems that utilize object recognition [27], GPS navigation [8], and text-to-speech tools [16]. While these technologies have provided valuable assistance to visually impaired individuals [22], they still face certain challenges and limitations. One of the primary challenges with existing assistive technologies is their limited ability to provide comprehensive scene understanding and guidance to address the specific needs of visually impaired individuals. For instance, while many tools focus on specific functionalities, such as obstacle detection or route planning, they often fall short in delivering detailed descriptions and guidance based on user questions. The current solutions lack the capability to generate contextually relevant information about objects, scenes, and potential risks in the environment, limiting an in-depth understanding of the environment for visually impaired individuals. This limitation hinders their ability to fully perceive and understand their surroundings, resulting in reduced independence and increased reliance on external assistance.
In this work, as shown in Figure 1, we present a novel approach named VisPercep that leverages the advanced large vision-language model to enhance visual perception for individuals with blindness and low vision including scene understanding, object localization, and risk assessment. Our work addresses the challenges faced by pBLV by providing them with detailed and comprehensive scene descriptions and risk guidance based on user questions, enabling an in-depth understanding of their surroundings, locating objects of interest, and identifying potential risks.
Our system includes three main modules, as illustrated in Figure 2: image tagging module, prompt engineering module, and vision-language module. The image tagging module, implemented using Recognize Anything Model (RAM) [33], recognizes all objects in the captured image. We then integrate the recognized objects and user questions into a customized prompt designed for visually impaired individuals through prompt engineering. Finally, the vision-language model utilizes InstructBLIP [15] to generate detailed and contextually relevant text, facilitating comprehensive scene understanding, object recognition, and risk assessment for visually impaired individuals. Our experiments demonstrate that our system is able to recognize objects of interest and provide detailed answers to user questions, significantly enhancing the visual understanding of visually impaired individuals about their surroundings.
## 2 Related Works
Several assistive technologies and applications have been developed to support individuals with visual impairments in understanding their environments and enhancing their visual perception [20, 7, 32]. Traditional tools such as white canes [21] and guide dogs [31] have long been used to aid in mobility and spatial awareness. Additionally, advancements in technology have led to the development of various assistive devices, including wearable cameras [10], GPS navigation systems, and object recognition technologies.
Wearable camera systems, such as OrCam MyEye and Seeing AI [9], offer real-time text reading and text-to-speech capabilities to provide auditory feedback to visually impaired individuals. These systems assist in object identification, text reading, and facial recognition, enhancing their ability to interact with their surroundings. GPS navigation systems, such as BlindSquare [14] and Lazarillo [3], utilize location-based services to provide audio instructions and guidance for navigation in both indoor and outdoor environments.
Computer vision-based technologies have also been explored for visual perception enhancement. These include object detection systems using deep learning models like YOLO [26] and Faster R-CNN [28], which provide real-time identification of objects in the environment. Detect and Approach [12] proposes a real-time monocular-based navigation solution based on YOLO for pBLV. Additionally, vision-language models like VizWiz [2] and SoundScape [1] incorporate natural language processing to describe
Figure 1: Our method employs a large vision-language model to provide visually impaired individuals a comprehensive and detailed description of object composition in the camera view, and a risk assessment based on the user query. Our system utilizes a smartphone to capture images and record user questions (left). Based on the input image and user question, VisPercep generates detailed and comprehensive scene descriptions and risk assessments (right). [The camera input image is from Visual7W dataset [34]]
visual scenes, answer questions, and provide context-aware information.
While these existing assistive technologies have made significant advancements, they still face limitations. Many systems provide partial solutions focused on specific functionalities such as object recognition or detection, but often fall short in delivering comprehensive scene understanding and detailed descriptions. Moreover, these technologies may lack the ability to provide guidance based on user questions, limiting their effectiveness in addressing the specific needs and queries of visually impaired individuals [29]. Furthermore, these technologies often require multiple devices or interfaces, leading to complexity and decreased usability for visually impaired individuals [7]. In contrast to these existing approaches, our proposed method offers a comprehensive and integrated solution. By combining advanced vision-language models, image tagging, and prompt engineering, our approach enhances visual perception, provides real-time guidance, and offers context-aware prompts tailored specifically for visually impaired individuals.
## 3 Method
Our method aims to overcome the limitations of existing assistive technologies and empower visually impaired individuals with improved guidance. In section 3.1, we introduce our image tagging module. Section 3.2 illustrates the prompt engineering tailored specifically for visually impaired individuals. We explain the large vision-language module in section 3.3.
### Image Tagging Module
As shown in the yellow box of Figure 2, the image tagging module is utilized to generate tags for each object present in the captured images, which is crucial as it provides a comprehensive understanding of the visual scene by accurately recognizing various objects. By incorporating the image tagging module, we obtain a catalog of objects present in the environment, facilitating a more precise and comprehensive environment description. We employ the Recognize Anything Model (RAM) [33] as our image tagging module, which has demonstrated the zero-shot ability to recognize any common category with high accuracy.
Specifically, the image tagging module begins with a pre-trained image encoder, which processes an input image and extracts high-level visual features. These features capture important characteristics and representations of the objects in the image. After the initial feature extraction stage, an attention mechanism [17] is employed to focus on the most salient regions within the image. This attention mechanism allows the model to pay more attention to relevant objects and suppress irrelevant ones. Thus, the image tagging module can generate accurate and informative tags for the recognized objects. The final stage involves mapping the extracted features to a set of object categories or tags by the image-tag recognition decoder. This mapping is learned
Figure 2: Overview. Our method includes three key modules: image tagging module, prompt engineering module and vision-language module. Firstly, the image tagging module identifies all common objects present in the captured image. Secondly, using prompt engineering, we integrate the recognized objects and user queries to create customized prompts tailored for visually impaired individuals. Lastly, the vision-language module generates detailed and contextually relevant output text, enabling comprehensive and precise scene understanding, object localization, and risk assessment for visually impaired individuals. [The input image is from Visual7W dataset [34]].
through a training process that leverages large-scale annotated datasets, ensuring the model's ability to generalize to various objects and scenes. The trained RAM model can then be applied to new images, accurately recognizing and generating tags for the objects present in the environment.
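As a concrete illustration, the snippet below sketches how such a tagging step could be wired up in Python. The import paths, checkpoint file name, and call signatures are assumptions modeled on the public Recognize Anything repository and may need adjusting to the installed version; this is a sketch rather than the authors' released code.

```python
import torch
from PIL import Image
from ram import get_transform, inference_ram   # assumed import paths
from ram.models import ram

device = "cuda" if torch.cuda.is_available() else "cpu"
transform = get_transform(image_size=384)

# Assumed checkpoint name; the pretrained weights are distributed separately.
model = ram(pretrained="ram_swin_large_14m.pth", image_size=384, vit="swin_l")
model = model.eval().to(device)

image = transform(Image.open("street_scene.jpg")).unsqueeze(0).to(device)
english_tags, _ = inference_ram(image, model)   # English / Chinese tag strings
tags = [t.strip() for t in english_tags.split("|")]
print(tags)   # e.g. ["crosswalk", "pedestrian", "traffic light", ...]
```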
### Prompt Engineering for pBLV
We incorporate prompt engineering, as shown in the green box of Figure 2, to create customized prompts tailored specifically for visually impaired individuals. This involves integrating the output of the image tagging module with user questions to form contextually relevant and informative prompts. Moreover, the use of prompt engineering eliminates the need for traditional machine learning approaches that require training models on labeled datasets, as prompt engineering focuses on generating effective prompts rather than optimizing the model's parameters.
The RAM generates a set of tags that represent the recognized objects within the captured images. We utilize these object {_tags_} to enhance the final prompt. We include the prompt "_The image may contain elements of {tags}_" to seamlessly integrate the object recognition results into the prompt. By incorporating these recognized object tags into the prompt, we ensure that the vision-language module receives specific and accurate information about the objects in the user's surroundings. This approach significantly enhances the understanding and awareness of the visual environment for the users.
Furthermore, we consider user questions as a vital input for prompt engineering. By incorporating user questions into the prompts, we address the individual's specific needs for environment understanding and ensure that the prompts are highly relevant to their current situation. This personalized approach allows visually impaired individuals to obtain the targeted information about their environment and the objects of interest. For example, in the case of risk assessment, we employ a specific prompt that guides the model to act as an assistant for visually impaired individuals, providing comprehensive analysis. The prompt we use is "_I am visually impaired. You are the assistant for visually impaired individuals. Your role is to provide helpful information and assistance based on my query. Your task is to provide a clear and concise response that addresses my needs effectively. Don't mention that I am visually impaired to offend me. Now, please answer my questions: [user_query]. Your answer should be like a daily conversation with me._" where {user_query} is the user question. This prompt enables the model to deliver detailed and accurate explanations regarding potential risks, ensuring that the information is communicated in a respectful and informative manner.
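Since this step is plain string templating, it can be made concrete with a short sketch; the helper name `build_prompt` is illustrative, and the template strings simply mirror the prompts quoted above.

```python
def build_prompt(tags, user_query):
    """Combine RAM tags and the user question into a pBLV-tailored prompt."""
    tag_hint = "The image may contain elements of {}. ".format(", ".join(tags))
    instruction = (
        "I am visually impaired. You are the assistant for visually impaired "
        "individuals. Your role is to provide helpful information and assistance "
        "based on my query. Your task is to provide a clear and concise response "
        "that addresses my needs effectively. Don't mention that I am visually "
        "impaired to offend me. Now, please answer my questions: "
        + user_query +
        ". Your answer should be like a daily conversation with me."
    )
    return tag_hint + instruction


# Example usage with a hypothetical tag list and query.
print(build_prompt(["crosswalk", "traffic light", "car"],
                   "Is there a risk for me to continue moving forward?"))
```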
### Vision-Language Module
To generate descriptive output text based on the prompts obtained by the prompt engineering module, we employ InstructBLIP [15], a powerful large vision language model for comprehensive scene understanding and analysis as shown in the right blue box of Figure 2.
Specifically, InstructBLIP begins by encoding the input image using the frozen Vision Transformer (VIT) [5], which captures high-level visual representations of the image. The input prompt is also encoded as the token by the tokenizer. Then, the encoded prompt tokens and visual features are fed into the Q-Former [15] together to generate contextualized image tokens through cross-attention. Then, a linear projection layer is employed to convert the image tokens into the tokens that Large Language Model (LLM) can understand. We further utilize a LLM, i.e., Vicuna-13B [4], to generate the final output text. The LLM incorporates both the generated image token and the text token from the user question to generate rich and comprehensive textual descriptions. We demonstrate the algorithm of our VisPercep in Algorithm 1.
```
Input:  Image      - the captured image
        UserQuery  - the user question
Output: OutputText - the generated output text

Step 1: Predict Tags
        Image \(\longrightarrow\) Image Tagging Module \(\longrightarrow\) Tags
Step 2: Prompt Engineering for pBLV
        Tags + UserQuery \(\longrightarrow\) Prompt Engineering for pBLV \(\longrightarrow\) Prompt
Step 3: Generate OutputText
        Image + Prompt \(\longrightarrow\) Vision-Language Module \(\longrightarrow\) OutputText
```
**Algorithm 1** Algorithm of VisPercep
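To make Algorithm 1 concrete, the sketch below chains the recognized tags, an abbreviated version of the engineered prompt, and InstructBLIP through the Hugging Face `transformers` port. The checkpoint identifier, generation settings, and the shortened prompt are illustrative assumptions, not the authors' released pipeline.

```python
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "Salesforce/instructblip-vicuna-13b"   # assumed checkpoint name

processor = InstructBlipProcessor.from_pretrained(model_id)
# Half-precision or 8-bit loading is advisable in practice for a 13B model.
model = InstructBlipForConditionalGeneration.from_pretrained(model_id).eval().to(device)

image = Image.open("street_scene.jpg").convert("RGB")
tags = ["crosswalk", "traffic light", "car"]      # output of the tagging module
user_query = "Is there a risk for me to continue moving forward?"
# Abbreviated form of the pBLV prompt described in Sec. 3.2.
prompt = ("The image may contain elements of " + ", ".join(tags) + ". "
          "I am visually impaired. You are the assistant for visually impaired "
          "individuals. Now, please answer my questions: " + user_query + ". "
          "Your answer should be like a daily conversation with me.")

inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
generated = model.generate(**inputs, max_new_tokens=256, num_beams=5)
output_text = processor.batch_decode(generated, skip_special_tokens=True)[0].strip()
print(output_text)
```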
## 4 Experiment
### Implementation Details
Our system leverages the capabilities of a smartphone, employing a monocular phone camera to capture images and the phone's microphone to receive user voice questions, creating a seamless interaction between the user and the system as shown in Figure 3. The image and voice input
Figure 3: Client-server architecture.
are then transferred to our server, where the processing and generation of comprehensive descriptions take place. To convert the user's voice question into text for further processing, we employ Whisper [25], a powerful speech recognition system. This technology accurately transcribes the user's voice question into a textual form, enabling seamless integration with our vision-language model. After the input text is obtained, our system processes the image and text to generate detailed and contextually relevant output descriptions. The system selects the corresponding image frame once the user question is detected, ensuring accurate and timely responses. The output text is then transformed into audio format to provide a more accessible experience for visually impaired individuals. For text-to-speech conversion, we utilize the robust system Azure [16]. This allows us to transform the output text into clear and natural-sounding audio. The synthesized audio is then sent from the server to the user's phone, enabling real-time delivery of the enhanced visual perception information. By implementing this client-server architecture and incorporating speech recognition and synthesis technologies, our system facilitates seamless interaction between the user and our system.
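A compact sketch of this speech plumbing is given below: Whisper handles speech-to-text and the Azure Speech SDK handles text-to-speech. The subscription key, region, voice name, and file paths are placeholders, and the glue code is an illustration of the flow described above rather than the deployed server.

```python
import whisper
import azure.cognitiveservices.speech as speechsdk


def transcribe_question(audio_path):
    """Convert the user's recorded question into text with Whisper."""
    stt_model = whisper.load_model("base")
    return stt_model.transcribe(audio_path)["text"].strip()


def speak_answer(text, out_wav="answer.wav"):
    """Synthesize the generated answer to a wav file with Azure text-to-speech."""
    config = speechsdk.SpeechConfig(subscription="YOUR_AZURE_KEY", region="eastus")
    config.speech_synthesis_voice_name = "en-US-JennyNeural"
    audio_out = speechsdk.audio.AudioOutputConfig(filename=out_wav)
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=config, audio_config=audio_out)
    synthesizer.speak_text_async(text).get()


user_query = transcribe_question("user_question.wav")
# ... run the tagging / prompt / vision-language pipeline on the paired frame ...
speak_answer("No, it is not risky for you to go ahead.")
```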
### Tests on Visual7W dataset
#### 4.2.1 Dataset Preparation
We evaluate our proposed approach on the Visual7W dataset [34]. Compared with previous studies that solely relied on textual answers, Visual7W introduces a novel form of question answering that includes visual answers [34]. This is achieved by establishing a semantic connection through object-level grounding between textual descriptions and image regions [34].
We notice that there are strong connections between objects in images, both in terms of spatial location and semantic meaning. To test our model in assisting people with visual impairment, we selected some images from specific perspectives in this dataset. From these perspectives, blind people often require additional assistance to better understand the current environment.
#### 4.2.2 Ablation Study
We conduct ablation studies to verify the effectiveness of each individual module in our model. The experimental settings are listed in Figure 4, where "/" denotes that the module is enabled. In the first experimental setting, we only
Figure 4: Ablation study with different model settings on Visual7W dataset.
utilize the vision-language module, which directly sends user questions and images to InstructBLIP. In the second experimental setting, we employ the image tagging module to generate tags for the input image, which are then integrated into the user question. Then, both the modified question and the input image are fed into the vision-language module. In the third experimental setting, we employ prompt engineering specifically designed for visually impaired individuals to further refine the prompt by incorporating the generated tags and user questions.
As shown in Figure 4, for the left scene, when only using the vision-language module, the model provides some answers that do not match the facts, such as "_cat sitting inside an open suitcase_", "_possibly a table or a shelf_" and "_such as a pillow_". These are more likely to be inferred from the large language model due to its bias (learned from the training data) than grounded in what actually exists in the image. After combining with the image tagging module, the model drops the answers that do not match the facts in the image, and the generated answer correctly describes the current scenario. Furthermore, when prompt engineering for pBLV is applied, the answers become more precise, e.g., the model also accurately describes the location of the cat.
In the case of the right scene, the model that only uses the vision-language module does provide a detailed description of the scene, but there are still errors. The description "_There are several cars parked on the street, and a traffic light can be seen in the background._" is inconsistent with the facts shown in the image, as there is no traffic light, only a white street light and an orange fire hydrant. When adding the image tagging module, the model gives a more factual description but lacks details. In contrast, prompt engineering for pBLV makes the answer both more precise and more detailed.
The example in Figure 4 demonstrates that the integration of the vision-language module, image tagging module, and prompt engineering yield the most accurate and detailed descriptions. In Figure 5, we further present some randomly selected results of the image tagging module. As depicted in the figure, the module successfully recognizes common objects within the images, demonstrating its ability to provide a comprehensive understanding of the visual scene by accurately identifying various objects.
#### 4.2.3 Scene Understanding
In this section, we evaluate the effectiveness of our approach on outdoor and indoor scene understanding. Sample results are shown at the top of Figure 6. In our experiment, the user's input is "_Can you describe the environment around?_". For both indoor and outdoor examples, it is evident that the model's output provides a comprehensive and accurate description of the object composition in the environment depicted in the image.
#### 4.2.4 Object Localization
In this section, we evaluate the effectiveness of our approach in addressing object recognition challenges, as demonstrated in the middle of Figure 6. The user question text for this task is "_Where is the {giraffe, sheep, bookshelf, rubbish bin} in the image?_", where "{ }" is what the user wants to find out.
In the outdoor scene, the left image is focused on the giraffe. From the answer, we can see that the results are very detailed, not only describing the location of the giraffe on the grass and under the trees, but also providing additional information, "_The giraffes appear to be enjoying the shade provided by the tree and the lush green environment around them._", to help users better understand the captured environment.
#### 4.2.5 Risk Assessment
For risk assessment, as shown at the bottom of Figure 6, our model provides safety tips for people with visual impairment to help them identify and deal with potential risks according to the current environment. The question is "_Is there a risk for me to continue moving forward?_".
The first picture depicts a scene where a pedestrian crossing has a red light. The model can provide feedback to the user regarding the risk of crossing the street when the traffic signal is red. In the second scene, a train is approaching, which can be extremely dangerous if proper precautions are not taken. The model can send an alert that it is risky to cross the railway at the current time. It demonstrates that our model can effectively analyze risks and provide necessary alerts for visually impaired individuals.
Figure 5: Random selected results of image tagging module on Visual7W dataset.
### Real-world Tests
We also conducted experiments to evaluate the proposed system in real-world situations, as shown in Figure 7.
Figure 6: Examples of scene understanding (top), object localization (middle), and risk assessment (bottom) on Visual7W dataset.
Specifically, we simulate the walking process of visually impaired people. The main content is a real video of a person walking on the street, entering and then exiting a subway station in New York. Even though this route is easy for ordinary people, it may pose many risks for people with visual impairment. We captured several characteristic images from this video and passed them into our model for evaluation. These scenes are on the street, before entering the station, in the subway station, and after exiting the station.
The first scene shows a street with a crowd. Moreover, there is a shop on the left of the image. Our model returns the answer "_A crowded shopping street is filled with people walking and strolling along the pavement._", which is consistent with the image content. For the second scene, the user is walking on a straight and empty street and asks "_Is there a risk for me to continue moving forward?_" The model answers "_No, it is not risky for you to go ahead._", which is also in line with the actual situation. In the third scene, when the user asks where the subway gate is, the model provides a very detailed explanation of the location of the subway gates using directional words such as front, back, left, and right. From the answer, it is clear that there are two gates, one on the left and one on the right. In the last scenario there is a staircase, and the model reminds the user that there currently exists a certain level of risk, since the stairs make it more difficult for the user to move forward. Therefore, the model provides the answer "_be careful about going up the stairs because they are narrow_".
## 5 Conclusion
In this paper, we present a pioneering approach that addresses the challenges faced by people with blindness and low vision (pBLV) in comprehensive scene understanding, precise object localization, and risk assessment in unfamiliar environments. By leveraging a large vision-language model and integrating it with an image tagging module and prompt engineering tailored for pBLV, our method provides visually impaired individuals with detailed and comprehensive descriptions and guidance that address their specific needs based on the user's questions. We evaluate our approach through experiments conducted on both indoor and outdoor datasets. Our results demonstrate that our method is able to recognize objects accurately and provide insightful descriptions and analysis for visually impaired individuals.
Figure 7: Examples of scene understanding, object detection and localization, and risk assessment under real-world settings. |
2309.12492 | Quantum Computing Perspective for Electromagnetic Wave Propagation in
Cold Magnetized Plasmas | Electromagnetic waves are an inherent part of all plasmas -- laboratory
fusion plasmas or astrophysical plasmas. The conventional methods for studying
properties of electromagnetic waves rely on discretization of Maxwell equations
suitable for implementing on classical, present day, computers. The traditional
methodology is not efficient for quantum computing implementation -- a future
computational source offering a tantalizing possibility of enormous speed up
and a significant reduction in computational cost. This paper addresses two
topics relevant to implementing Maxwell equations on a quantum computer. The
first is on formulating a quantum Schrodinger representation of Maxwell
equations for wave propagation in a cold, inhomogeneous, magnetized plasma.
This representation admits unitary, energy preserving, evolution and
conveniently lends itself to appropriate discretization for a quantum computer.
Riding on the coattails of these results, the second topic is on developing a
sequence of unitary operators which form the basis for a qubit lattice
algorithm (QLA). The QLA, suitable for quantum computers, can be implemented
and tested on existing classical computers for accuracy as well as scaling of
computational time with the number of available processors. In order to
illustrate the QLA for Maxwell equations, results are presented from a time
evolving, full wave simulation of propagation and scattering of an
electromagnetic wave packet by non-dispersive dielectric medium localized in
space. | Efstratios Koukoutsis, Kyriakos Hizanidis, George Vahala, Min Soe, Linda Vahala, Abhay K. Ram | 2023-09-21T21:23:19Z | http://arxiv.org/abs/2309.12492v2 | # Quantum Computing Perspective for Electromagnetic Wave Propagation in Cold Magnetized Plasmas
###### Abstract
Electromagnetic waves are an inherent part of all plasmas - laboratory fusion plasmas or astrophysical plasmas. The conventional methods for studying properties of electromagnetic waves rely on discretization of Maxwell equations suitable for implementing on classical, present day, computers. The traditional methodology is not efficient for quantum computing implementation - a future computational source offering a tantalizing possibility of enormous speed up and a significant reduction in computational cost. This paper addresses two topics relevant to implementing Maxwell equations on a quantum computer. The first is on formulating a quantum Schrodinger representation of Maxwell equations for wave propagation in a cold, inhomogeneous, magnetized plasma. This representation admits unitary, energy preserving, evolution and conveniently lends itself to appropriate discretization for a quantum computer. Riding on the coattails of these results, the second topic is on developing a sequence of unitary operators which form the basis for a qubit lattice algorithm (QLA). The QLA, suitable for quantum computers, can be implemented and tested on existing classical computers for accuracy as well as scaling of computational time with the number of available processors. In order to illustrate the QLA for Maxwell equations, results are presented from a time evolving, full wave simulation of propagation and scattering of an electromagnetic wave packet by non-dispersive dielectric medium localized in space.
## I Introduction
Propagation of electromagnetic waves in thermonuclear fusion plasmas is one of the most significant fields of research in the pursuit for magnetic fusion. In magnetic confinement experiments, electromagnetic waves play a vital role in plasma temperature control, localized non-inductive current drive, heating, and plasma instability control. Therefore, there is an utmost need for understanding the physics and mechanics of wave propagation and scattering inside an inhomogeneous magnetized plasma to enable the optimization for fusion applications.
While the bedrock for the theoretical and analytical studies of wave propagation in plasmas has long been established,[1; 2] penetrating into the complex processes that occur in plasmas and unraveling their physics require a computational treatment. To that end, taking into consideration the aforementioned importance of electromagnetic wave propagation in plasmas, a plethora of computational tools have been developed,[3; 4; 5] ranging from ray-tracing methods to full-wave simulations along with different domains of application.
However, solving the mathematical and physical problem of wave propagation in an actual fusion device poses a challenge even for the most advanced supercomputers. With classical computers eventually reaching their limits and fusion research heavily relying on computational results, we motivate a shift in the traditional computational methods, engaging modern and emerging quantum technologies, and quantum computing in particular.
Quantum computing is one of those computational pathways that can yield faster computations than those achieved on a classical computer,[6; 7] the so called quantum advantage, and has gained significant attention in the plasma physics community. Considerations on general applications in plasma simulation can be found in Ref.[[8]], whereas a fusion oriented review of possible quantum computing applications is Ref.[[9]]. In Refs. [[10]] and [[11]] the authors exploit the Quantum Signal Processing (QSP) protocol[12] for simulation of electrostatic Landau damping and wave propagation in a cold fluid plasma respectively. In addition, a quantum computing treatment for Vlasov equation with collisions has been presented in Ref. [[13]]. Finally, a comprehensive review on quantum computing applications in plasmas can be found in Ref.[[14]].
In this paper, we examine Maxwell equations for wave propagation in cold, inhomogeneous, magnetized plasma amenable to quantum computing without tackling the question of computational advantage over the classical methods. Quantum computers are restricted to unitary
operations following the physical laws of closed quantum systems. Thus, the first step towards a quantum implementation is to reformulate Maxwell equations as a quantum Schrodinger equation with Hermitian structure, extending the results of [15] to encompass the dispersive nature of cold magnetized plasma. Then, the second challenge entails decomposing the relevant unitary operator of evolution into a product sequence of unitary operators that can be encoded efficiently on a quantum computer. We accomplish this by leveraging the tensor product structure of the Hamiltonian, deriving a Trotterized unitary sequence that constitutes the basis for a subsequent Qubit Lattice Algorithm (QLA). The scaling of the quantum encoded QLA has been recently reported [15; 16; 17] to favor quantum implementation on real quantum hardware.
Qubit lattice algorithms along with their predecessors have found extensive computational applications in the fields of Maxwell equations, [18; 19; 20; 21; 22; 23] non-linear optics [24; 25] and quantum simulations. [26; 27; 28; 15]
To assess the capabilities of QLA we present full-wave simulation results from propagation and scattering of an electromagnetic wave packet in a reduced case of our formulation, [23] for a localized inhomogeneous, scalar dielectric. Such wave packet structures in plasma are related to the RF waves of finite spatial extent that are routinely applied for plasma heating. Although these simulations are implemented on classical supercomputers, they can be directly transferred to quantum computers, acting as a precursor and validation step for the proposed QLA generalization to cold magnetized plasma in the near-term future.
This paper is structured in two main sections. Section II sets up the theoretical formulation of Maxwell equations as a quantum Schrodinger equation, followed by a decomposition of the evolution operator into a convenient unitary product sequence for QLA discretization along with the pertinent discussion on complexity. In Sec.II.1 an augmented form of Maxwell equations in magnetized plasma is presented, serving as a stepping stone for the construction of a Schrodinger-Maxwell equation with unitary evolution in Sec.II.2. The importance of initial and boundary conditions is discussed in Sec. II.3. Decomposition of the established unitary evolution into a product formula of simple unitary operators based on Trotterization is the main subject of Sec.II.4. A simple complexity analysis is performed in Sec.II.5 regarding the scaling of QLA implementation in quantum hardware, indicating a polynomial scaling with the number of qubits required for the QLA discretization. Then, a commentary section II.6 follows, containing perspectives on the QLA implementation for wave propagation and scattering in the cold plasma. Section III serves as an indicator of QLA capabilities for the future implementation in the cold plasma case studied in Sec.II. Specifically, in sections III.1 and III.2 we present the algorithmic scheme of QLA along with some initial value simulations for full-wave scattering of an electromagnetic wave-packet from two-dimensional (2D) scalar, non-dispersive inhomogeneous dielectric objects. In particular, we contrast the different scattering characteristics from a local cylindrical dielectric with strong gradients in the finite boundary layer between the dielectric and vacuum, with the scattering from a local conic dielectric with weak boundary layer gradients in the refractive index. Finally, in Sec.IV we discuss our results along with the next necessary steps for an actual QLA implementation in the near future.
## II Quantum implementation of Maxwell equations in cold magnetized plasma
For a non-dispersive, tensorial and inhomgeneous medium, Maxwell equations can be written as a Schrodinger equation with unitary evolution [15]
\[i\frac{\partial\mathbf{\psi}}{\partial t}=\hat{D}_{\rho}\mathbf{\psi},\quad\hat{D}_{ \rho}=\hat{D}_{\rho}^{\dagger},\quad\mathbf{\psi}(\mathbf{r},0)=\mathbf{\psi}_{0}, \tag{1}\]
under a Dyson transformation \(\hat{\rho}\) on the electromagnetic fields \(\mathbf{u}=(\mathbf{E},\mathbf{H})^{T}\), with \(\mathbf{\psi}=\hat{\rho}\mathbf{u}\). In particular, the Hermitian operator \(\hat{D}_{\rho}\)
\[\hat{D}_{\rho}=\hat{\rho}\hat{D}\hat{\rho}^{-1}=\hat{\rho}\hat{W}^{-1}(\mathbf{r} )\hat{M}\hat{\rho}^{-1}, \tag{2}\]
with
\[\hat{M}=i\begin{bmatrix}0_{3\times 3}&\mathbf{\nabla}\times\\ -\mathbf{\nabla}\times&0_{3\times 3}\end{bmatrix},\quad\hat{W}=\begin{bmatrix} \epsilon(\mathbf{r})&0_{3\times 3}\\ 0_{3\times 3}&\mu(\mathbf{r})\end{bmatrix}. \tag{3}\]
In Eq.(3) the \(\hat{M}\) operator is the Maxwell curl operator and the Hermitian, positive definite \(\hat{W}\) matrix represents the constitutive relations of the medium. The explicit form of the Dyson map \(\hat{\rho}\) depends on the structure of the material matrix \(\hat{W}\): \(\hat{\rho}=\sqrt{\hat{W}}\).
On the other hand, the cold magnetized plasma as a dielectric medium is characterized by dispersion. This translates into a frequency dependent permittivity matrix \(\tilde{\epsilon}(\omega)\). Following the Stix notation [1],
\[\tilde{\epsilon}(\omega)=\begin{bmatrix}S&-iD&0\\ iD&S&0\\ 0&0&P\end{bmatrix} \tag{4}\]
with
\[S= \epsilon_{0}\Big{(}1-\sum_{j=i,e}\frac{\omega_{pj}^{2}}{\omega^{ 2}-\omega_{cj}^{2}}\Big{)}\] \[D= \epsilon_{0}\sum_{j=i,e}\frac{\omega_{cj}\omega_{pj}^{2}}{\omega (\omega^{2}-\omega_{cj}^{2})} \tag{5}\] \[P= \epsilon_{0}\Big{(}1-\sum_{j=i,e}\frac{\omega_{pj}^{2}}{\omega^ {2}}\Big{)}.\]
The definition of elements (5) in the Stix permittivity tensor is taken for a two-species, ions (i) and electrons (e), plasma with inhomogeneous plasma frequency
\(\omega_{pj}^{2}(\mathbf{r})=\frac{n_{j}(\mathbf{r})q_{j}^{2}}{m_{j}\epsilon_{0}}\) and cyclotron frequency \(\omega_{cj}=\frac{q_{j}B_{0}}{m_{j}}\). The homogeneous magnetic field \(B_{0}\) is along the \(z\) axis and \(m_{j}\), \(q_{j}\) are the mass and charge of the \(j\)-species respectively. \(n_{j}(\mathbf{r})\) is the \(j^{th}\) species density.
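As a small numerical illustration, the sketch below evaluates the Stix parameters of Eq. (5) for an electron-proton plasma with signed cyclotron frequencies, following the Stix convention; the density, magnetic field, and wave frequency are placeholder values chosen only for demonstration.

```python
import numpy as np

eps0, qe, me, mi = 8.854e-12, 1.602e-19, 9.109e-31, 1.673e-27   # SI units


def stix_SDP(omega, n_e, B0):
    """Return (S, D, P) of Eq. (5); quasi-neutrality n_i = n_e is assumed."""
    species = [(-qe, me), (qe, mi)]          # electrons, protons (signed charges)
    S, D, P = 1.0, 0.0, 1.0
    for q, m in species:
        w_p2 = n_e * q**2 / (m * eps0)       # plasma frequency squared
        w_c = q * B0 / m                     # signed cyclotron frequency
        S -= w_p2 / (omega**2 - w_c**2)
        D += w_c * w_p2 / (omega * (omega**2 - w_c**2))
        P -= w_p2 / omega**2
    return eps0 * S, eps0 * D, eps0 * P


print(stix_SDP(omega=2 * np.pi * 5e9, n_e=1e19, B0=2.0))
```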
### Maxwell equations in temporal domain
In contrast to the optical response case, the temporal domain transformation of \(\tilde{\epsilon}(\omega)\) is expressed through a convolution integral. As a result, the temporal domain, constitutive relations for a cold magnetized plasma are
\[\mathbf{d}=\hat{W}_{0}\mathbf{u}+\frac{1}{2\pi}\int_{0}^{t}\int_{-\infty}^{\infty}( \tilde{\epsilon}(\omega)-\epsilon_{0}I_{3\times 3})e^{-i\omega(t-\tau)}\mathbf{E}( \mathbf{r},\tau)d\,\omega d\,\tau, \tag{6}\]
with \(\mathbf{d}=(\mathbf{D},\mathbf{B})^{T}\). The matrix \(\hat{W}_{0}\) represents the optical response, as in Eq.(3), but now only that of the vacuum. Evaluation of the inner integral term in Eq. (6) requires the Plemelj formula [1] to yield
\[\mathbf{d}=\hat{W}_{0}\mathbf{u}+\int_{0}^{t}\hat{K}(t-\tau)\mathbf{E}(\mathbf{r},\tau)d\,\tau, \tag{7}\]
with the inhomogeneous susceptibility kernel \(\hat{K}(t)\)
\[\hat{K}(t)=\epsilon_{0}\sum_{j=i,e}\left[\begin{matrix}\frac{\omega_{pj}^{2}} {\omega_{cj}}\sin\omega_{cj}t&\frac{\omega_{pj}^{2}}{\omega_{cj}}(\cos\omega_ {cj}t-1)&0\\ \frac{\omega_{pj}^{2}}{\omega_{cj}}(1-\cos\omega_{cj}t)&\frac{\omega_{pj}^{2} }{\omega_{cj}}\sin\omega_{cj}t&0\\ 0&0&\omega_{pj}^{2}t\end{matrix}\right]. \tag{8}\]
From the expressions (7) and (8), Maxwell equations for a cold magnetized plasma now take the form
\[i\frac{\partial\mathbf{u}}{\partial t}=W_{0}^{-1}\hat{M}\mathbf{u}-i\int_{0}^{t}\frac {\partial\hat{G}(t-\tau)}{\partial t}\mathbf{u}(\mathbf{r},\tau)d\,\tau \tag{9}\]
where
\[\frac{\partial\hat{G}(t)}{\partial t}=\left[\begin{matrix}\frac{1}{\epsilon_{ 0}}\frac{\partial\hat{K}}{\partial t}&0_{3\times 3}\\ 0_{3\times 3}&0_{3\times 3}\end{matrix}\right],\quad\frac{1}{ \epsilon_{0}}\frac{\partial\hat{K}}{\partial t}=\sum_{j=i,e}\omega_{pj}^{2}( \mathbf{r})\left[\begin{matrix}\cos\omega_{cj}t&-\sin\omega_{cj}t&0\\ \sin\omega_{cj}t&\cos\omega_{cj}t&0\\ 0&0&1\end{matrix}\right]. \tag{10}\]
### Schrodinger representation
Returning back to \(\tilde{\epsilon}(\omega)\) in Eq. (4), its Hermitian structure ensures that the conductivity current does not produce dissipation inside the plasma, i.e the cold magnetized plasma is a lossless dispersive dielectric. Hence, it is possible to construct a Schrodinger representation of Maxwell equations (9) that admit unitary evolution corresponding to electromagnetic energy conservation. Such mathematical representations of Maxwell equations for lossless dispersive media are well studied in the literature [29; 30].
Defining the total conductivity current density \(\mathbf{J}_{c}\) as
\[\mathbf{J}_{c}=\int_{0}^{t}\frac{\partial\hat{K}}{\partial t}\mathbf{E}(\mathbf{r},\tau)d \,\tau=\mathbf{J}_{ce}+\mathbf{J}_{ci}, \tag{11}\]
we exploit the rotational symmetry of \(\frac{\partial\hat{K}}{\partial t}\) in Eq.(10) to reformulate Maxwell equations (9) as
\[i\frac{\partial\mathbf{E}}{\partial t} =\frac{i}{\epsilon_{0}}\mathbf{\nabla}\times\mathbf{H}-\frac{i}{\epsilon_{ 0}}\mathbf{J}_{c}, \tag{12}\] \[i\frac{\partial\mathbf{H}}{\partial t} =-\frac{i}{\mu_{0}}\mathbf{\nabla}\times\mathbf{E},\] \[i\frac{\partial\mathbf{J}_{cj}}{\partial t} =i\epsilon_{0}\omega_{pj}^{2}(\mathbf{r})\mathbf{E}+\omega_{cj}\hat{S}_{z} \mathbf{J}_{cj},\quad j=i,e.\]
The set of equations (12) represent the augmented Maxwell system which self-consistently describes the behaviour of electromagnetic fields inside a cold magneto-plasma. We point out that Eq.(12) is the basis for FDTD simulations, [31] but for a stationary plasma. The Hermitian matrix \(\hat{S}_{z}\),
\[\hat{S}_{z}=\begin{bmatrix}0&-i&0\\ i&0&0\\ 0&0&0\end{bmatrix} \tag{13}\]
represents the projection of spin-1 onto the \(z\)-axis.
To obtain an explicit Schrodinger representation of Eq.(12) we apply a Dyson transformation [15],
\[\hat{\rho}=diag(\epsilon_{0}^{1/2}I_{3\times 3},\mu_{0}^{1/2}I_{3\times 3},\frac{1}{ \epsilon_{0}^{1/2}\omega_{pi}}I_{3\times 3},\frac{1}{\epsilon_{0}^{1/2}\omega_{pe}}I_{3 \times 3}) \tag{14}\]
resulting in
\[i\frac{\partial}{\partial t}\begin{bmatrix}\epsilon_{0}^{1/2}\mathbf{E}\\ \mu_{0}^{1/2}\mathbf{H}\\ \frac{1}{\epsilon_{0}^{1/2}\omega_{pi}}\mathbf{J}_{ci}\\ \frac{1}{\epsilon_{0}^{1/2}\omega_{pe}}\mathbf{J}_{ce}\end{bmatrix}=\begin{bmatrix} 0_{3\times 3}&ic\mathbf{\nabla}\mathbf{\times}&-i\omega_{pi}&-i\omega_{pe}\\ -ic\mathbf{\nabla}\mathbf{\times}&0_{3\times 3}&0_{3\times 3}&0_{3\times 3}\\ i\omega_{pi}&0_{3\times 3}&\omega_{ci}\hat{S}_{z}&0_{3\times 3}\\ i\omega_{pe}&0_{3\times 3}&0_{3\times 3}&\omega_{ce}\hat{S}_{z}\end{bmatrix} \begin{bmatrix}\epsilon_{0}^{1/2}\mathbf{E}\\ \mu_{0}^{1/2}\mathbf{H}\\ \frac{1}{\epsilon_{0}^{1/2}\omega_{pi}}\mathbf{J}_{ci}\\ \frac{1}{\epsilon_{0}^{1/2}\omega_{pe}}\mathbf{J}_{ce}\end{bmatrix}\Leftrightarrow i \frac{\partial\mathbf{\psi}}{\partial t}=\hat{D}\mathbf{\psi}. \tag{15}\]
It should be noted that we have switched from using the Riemann-Silberstein-Weber [32] field representation to the vacuum field representation, and the plasma inhomogeneity is now thrust into the source terms \(\mathbf{J}_{ci},\mathbf{J}_{ce}\) through the species plasma frequencies \(\omega_{pj}(\mathbf{r})\). Additionally, Eq.(15) can be easily extended to incorporate different ion species by adding the respective ion-species current components in the state vector \(\mathbf{\psi}\). In realistic fusion experiments there will be hydrogen, deuterium and tritium ions, so their contribution must be included in Eq.(15) for a complete description of the total inhomogeneity profiles.
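A direct numerical check of this structure is straightforward: for a single plane-wave mode \(e^{ik_{x}x}\) the curl block reduces to \(k_{x}\hat{S}_{x}\) (see Sec.II.4), and the resulting \(12\times 12\) matrix of Eq. (15) can be assembled and tested for Hermiticity. The sketch below does this with placeholder frequencies; it is an illustration, not part of the QLA code.

```python
import numpy as np

Sx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])   # curl generator for x-propagation
Sz = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])   # spin-1 projection of Eq. (13)
I3, Z3 = np.eye(3), np.zeros((3, 3))


def D_matrix(k, c, w_pi, w_pe, w_ci, w_ce):
    """Assemble the block matrix of Eq. (15) for one Fourier mode exp(ikx)."""
    curl = k * Sx
    return np.block([
        [Z3,             1j * c * curl, -1j * w_pi * I3, -1j * w_pe * I3],
        [-1j * c * curl, Z3,            Z3,              Z3],
        [1j * w_pi * I3, Z3,            w_ci * Sz,       Z3],
        [1j * w_pe * I3, Z3,            Z3,              w_ce * Sz],
    ])


D = D_matrix(k=1.0, c=1.0, w_pi=0.3, w_pe=2.0, w_ci=0.05, w_ce=1.5)
print(np.allclose(D, D.conj().T))    # True: Hermitian generator, unitary evolution
print(np.linalg.eigvalsh(D))         # real eigenfrequencies of the mode
```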
Under suitable Dirichlet boundary conditions the operator \(\hat{D}\) in the Schrodinger-Maxwell Eq.(15) is Hermitian. As a result, the evolution operator \(\hat{\mathcal{U}}=e^{-it\hat{D}}\) is unitary and corresponds to the conservation of an extended electromagnetic energy \(E(t)\) through the inner product,
\[E(t)=\langle\mathbf{\psi}|\mathbf{\psi}\rangle=\int_{\Omega}\Big{(} \epsilon_{0}|\mathbf{E}|^{2}+\frac{|\mathbf{B}|^{2}}{\mu_{0}}\Big{)}d\,\mathbf{r}+\int_{ \Omega}\Big{(}\frac{|\mathbf{J}_{ci}|^{2}}{\epsilon_{0}\omega_{pi}^{2}(\mathbf{r} )}+\frac{|\mathbf{J}_{ce}|^{2}}{\epsilon_{0}\omega_{pe}^{2}(\mathbf{r})}\Big{)}d \,\mathbf{r}=E(0)=\int_{\Omega}\Big{(}\epsilon_{0}|\mathbf{E}_{0}|^{2}+\frac{|\mathbf{B}_ {0}|^{2}}{\mu_{0}}\Big{)}d\,\mathbf{r},\quad\Omega\subset\mathbb{R}^{3}. \tag{16}\]
The extended electromagnetic energy Eq.(16) consists of two terms. The first term is the standard electromagnetic energy in a vacuum whereas the second term reflects the energy associated with the cold plasma response. We have denoted with \(\mathbf{E}_{0}\) and \(\mathbf{B}_{0}\) the initial values of the electromagnetic fields. Notice that due to the causality constraint in the plasma response, the initial values of the conductivity currents according to Eq.(11) are zero, \(\mathbf{J}_{ce,i}(t\leq 0)=0\).
A subtlety related to the extended electromagnetic energy (16) is the smoothness of \(E(t)\) because of the Laplace Transform in Eq.(6). As a result, even for resonant frequencies \(\omega=\omega_{cj}\) we obtain a bounded dispersive electromagnetic energy \(E_{disp}(t)\leq E(0)\). Thus, it is possible to quantify the resonant energization for each plasma population without considering resonant wave-particle interactions or perturbative approximations for the RF field.
### Initial and boundary conditions
In this section we will restate our problem comparing the imposed mathematical conditions with the ones in a plasma fusion device.
The plasma as a dielectric is considered to be confined inside a volume \(\Omega\subset\mathbb{R}^{3}\) with a boundary surface \(\partial\Omega\). By selecting the boundary condition
\[\mathbf{n}\times\mathbf{E}=0,\quad\text{on }\partial\Omega, \tag{17}\]
the "Hamiltonian operator" \(\hat{D}\) in the Maxwell-Schrodinger equation (15) is Hermitian so the standard quantum-mechanical analogies are present. In fusion devices, the plasma is confined by a vacuum vessel at which the Perfect Electric Conductor (PEC) boundary condition (17) no longer holds due to electromagnetic losses in the walls. Alteration of the PEC boundary condition results in the non-Hermiticity of the operator \(\hat{D}\) and subsequently, a break in the unitary evolution. In this case, the quantum simulation of the dynamics becomes troublesome. A solution has been proposed in Ref.[[33]] where instead of the quantum simulation of the Maxwell dynamics, the linear system of equations is solved through quantum singular value decomposition as a boundary value problem. This approach could run into some difficulties as one moves to 2D and 3D plasma wave propagation. Alternatively, one could resort to some dilation by embedding the subsystem into a higher dimensional Hilbert space and thereby recover unitarity within this higher dimensional space.
For completeness, one could eventually introduce into the set of equations (12) the effect of an antenna by coupling the Faraday equation with a monochromatic oscillator [11]\(\mathbf{Q}(\mathbf{r},t)=\mathbf{Q}_{a}(\mathbf{r}_{a})e^{-i\omega_{a}t}\) with frequency \(\omega_{a}\). The subscript \(a\) denotes the antenna-related quantities. In that way, the Faraday equation in (15) is augmented by
\[\begin{split} i\frac{\partial(\mu_{0}^{1/2}\mathbf{H})}{\partial t}& =-ic\mathbf{\nabla}\mathbf{\times}(\epsilon_{0}^{1/2}\mathbf{E})+\beta_{\mathbf{r },\mathbf{r}_{a}}\mathbf{Q}\\ i\frac{\partial\mathbf{Q}}{\partial t}&=\beta_{\mathbf{r},\mathbf{r }_{a}}(\mu_{0}^{1/2}\mathbf{H})+\omega_{a}\mathbf{Q},\end{split} \tag{18}\]
where \(\beta_{\mathbf{r},\mathbf{r}_{a}}=\beta\delta_{\mathbf{r},\mathbf{r}_{a}}\), \(\delta_{\mathbf{r},\mathbf{r}_{a}}\) is the Kronecker symbol and \(\beta\) is the coupling strength between the antenna emitted wave and the magnetic field.
Finally we turn our attention to the initial conditions. The initial state vector of Eq. (15) is
\[\mathbf{\psi}(\mathbf{r},0)=\mathbf{\psi}_{0}=\begin{bmatrix}\epsilon_{0}^{1/2}\mathbf{E}_{0}\\ \mu_{0}^{1/2}\mathbf{H}_{0}\\ 0\\ 0\end{bmatrix}. \tag{19}\]
Inclusion of the antenna coupling Eq. (18) adds to the initial state \(\mathbf{\psi}_{0}\) the term \(\mathbf{Q}(\mathbf{r},0)=\mathbf{Q}_{a}\). The selection of the initial vacuum electromagnetic field profiles is dictated by the satisfaction of the divergence set of Maxwell equations.
\[\mathbf{\nabla}\mathbf{\cdot}\mathbf{D}_{0}=\mathbf{\nabla}\mathbf{\cdot}\mathbf{E}_{0}=0,\quad\mathbf{ \nabla}\mathbf{\cdot}\mathbf{B}_{0}=0. \tag{20}\]
In that way, the divergence Maxwell equations are guaranteed to be satisfied for \(t>0\) along with \(\mathbf{\nabla}\mathbf{\cdot}\mathbf{J}_{cj}=0\) from the charge continuity equation in the continuum limit.
### Trotter Product Evolution Approximation
Application of QLA or any other quantum protocol for simulation of electromagnetic wave propagation in a cold inhomogeneous magnetized plasma requires a decomposition of the \(\hat{D}\) operator in Eq.(15) into simpler matrices,
\[\hat{D}=\hat{D}_{vac}+\sum_{j=i,e}[\hat{D}_{\omega_{pj}}+\hat{D}_{\omega_{cj}}], \tag{21}\]
with
\[\hat{D}_{vac} =-\frac{c}{2}(I_{2\times 2}+\hat{\sigma}_{z})\otimes\hat{\sigma}_ {y}\otimes\mathbf{\nabla}\mathbf{\times} \tag{22}\] \[\hat{D}_{\omega_{pi}} =\frac{1}{2}\hat{\sigma}_{y}\otimes(I_{2\times 2}+\hat{\sigma}_{z}) \otimes\omega_{pi}\] (23) \[\hat{D}_{\omega_{pe}} =\frac{1}{2}(\hat{\sigma}_{x}\otimes\hat{\sigma}_{y}+\hat{\sigma }_{y}\otimes\hat{\sigma}_{x})\otimes\omega_{pe}\] (24) \[\hat{D}_{\omega_{ci}} =\frac{1}{4}(I_{2\times 2}-\hat{\sigma}_{z})\otimes(I_{2\times 2 }+\hat{\sigma}_{z})\otimes\omega_{ci}\hat{S}_{z}\] (25) \[D_{\omega_{ce}} =\frac{1}{4}(I_{2\times 2}-\hat{\sigma}_{z})\otimes(I_{2\times 2 }-\hat{\sigma}_{z})\otimes\omega_{ce}\hat{S}_{z}. \tag{26}\]
For simplicity let us assume that all quantities are only \(x\)-dependent, rendering our model 1D. The inclusion of \(y\)- and \(z\)-dependence is straightforward, following the usual Alternate Direction Iteration (ADI) Cartesian integration procedure with no extraneous couplings of the respective quantum operators. Then, the curl operator in Eq.(22) reads
\[\mathbf{\nabla}\mathbf{\times}=\hat{S}_{x}\hat{p}_{x},\quad\hat{S}_{x}=\begin{bmatrix} 0&0&0\\ 0&0&-i\\ 0&i&0\end{bmatrix},\quad\hat{p}_{x}=-i\frac{\partial}{\partial x}. \tag{27}\]
Trotterizing the total unitary evolution \(e^{-i\delta t\hat{D}}\) whose components are given in Eqs.(21)-(26) we obtain
\[\mathbf{\psi}(\mathbf{r},\delta t)=e^{-i\delta t\hat{D}_{vac}}\prod_{j=i,e}e^{-i \delta t\hat{D}_{\omega_{pj}}}e^{-i\delta t\hat{D}_{\omega_{cj}}}\mathbf{\psi}_{0 }+\textit{O}(\delta t^{2}). \tag{28}\]
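Before turning to the exact factorization of each exponential, it is useful to verify the decomposition (21)-(26) and the Trotter step (28) numerically. The sketch below does this for a single Fourier mode (\(\hat{p}_{x}\to k\) as in Eq. (27)), using illustrative frequencies; the error of the first-order product formula is seen to shrink as \(\delta t^{2}\).

```python
import numpy as np
from scipy.linalg import expm

I2, I3 = np.eye(2), np.eye(3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Sz = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])


def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)


c0, k = 1.0, 1.0                                   # normalized light speed and wavenumber
w_pi, w_pe, w_ci, w_ce = 0.3, 2.0, 0.05, 1.5       # illustrative frequencies

D_vac = -0.5 * c0 * kron3(I2 + sz, sy, k * Sx)                       # Eq. (22), curl -> k*Sx
D_pi = 0.5 * kron3(sy, I2 + sz, w_pi * I3)                           # Eq. (23)
D_pe = 0.5 * (kron3(sx, sy, w_pe * I3) + kron3(sy, sx, w_pe * I3))   # Eq. (24)
D_ci = 0.25 * kron3(I2 - sz, I2 + sz, w_ci * Sz)                     # Eq. (25)
D_ce = 0.25 * kron3(I2 - sz, I2 - sz, w_ce * Sz)                     # Eq. (26)
D = D_vac + D_pi + D_pe + D_ci + D_ce                                # Eq. (21)

print(np.allclose(D, D.conj().T))                                    # Hermitian

for dt in (1e-1, 1e-2, 1e-3):
    exact = expm(-1j * dt * D)
    trotter = (expm(-1j * dt * D_vac) @ expm(-1j * dt * D_pi) @ expm(-1j * dt * D_ci)
               @ expm(-1j * dt * D_pe) @ expm(-1j * dt * D_ce))      # Eq. (28)
    print(dt, np.linalg.norm(exact - trotter))                       # error ~ O(dt**2)
```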
Each of the exponential operators in Eq.(28) can be written as a product of unitary operators based on the their tensor-fold Pauli structure. Specifically, we have the following diagonalization relations for the \(\hat{\sigma}_{y},\hat{\sigma}_{x},\hat{S}_{x},\hat{S}_{z}\) matrices
\[\begin{array}{ll}\hat{\sigma}_{x}=\hat{H}\hat{\sigma}_{z}\hat{H},&\hat{ \sigma}_{y}=\hat{H}_{y}\hat{\sigma}_{z}\hat{H}_{y},\\ \hat{S}_{x}=\hat{H}_{y}^{(x)}\hat{\sigma}_{z}^{(x)}\hat{H}_{y}^{(x)},&\hat{S}_ {z}=\hat{H}_{y}^{(z)}\hat{\sigma}_{z}^{(z)}\hat{H}_{y}^{(z)},\end{array} \tag{29}\]
where \(\hat{H}\) is the unitary Hadamard gate, \(\hat{H}_{y}\) is the unitary variant of Hadamard gate that diagonalizes \(\hat{\sigma}_{y}\) whereas the unitary set of matrices \(\hat{H}_{y}^{(x)},\hat{H}_{y}^{(z)}\) and Hermitian \(\hat{\sigma}_{z}^{(x)},\hat{\sigma}_{z}^{(z)}\) are the three-dimensional extensions of \(\hat{H}_{y}\) and \(\hat{\sigma}_{z}\) for \(x\) and \(z\) axes respectively,
\[\begin{array}{ll}\hat{H}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix},&\hat{H}_{y}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&-i\\ i&-1\end{bmatrix},&\hat{H}_{y}^{(x)}=\frac{1}{\sqrt{2}}\begin{bmatrix}\sqrt{2}&0&0 \\ 0&1&-i\\ 0&i&-1\end{bmatrix},\\ \hat{H}_{y}^{(z)}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&-i&0\\ i&-1&0\\ 0&0&\sqrt{2}\end{bmatrix},&\hat{\sigma}_{z}^{(x)}=\begin{bmatrix}0&0&0\\ 0&1&0\\ 0&0&-1\end{bmatrix},&\hat{\sigma}_{z}^{(z)}=\begin{bmatrix}1&0&0\\ 0&-1&0\\ 0&0&0\end{bmatrix}.\end{array} \tag{30}\]
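These relations are easy to confirm numerically; the short check below reproduces (29) with the matrices of (30) (the equalities are exact, so every `allclose` returns `True`).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Sz = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hy = np.array([[1, -1j], [1j, -1]]) / np.sqrt(2)
Hy_x = np.array([[np.sqrt(2), 0, 0], [0, 1, -1j], [0, 1j, -1]]) / np.sqrt(2)
Hy_z = np.array([[1, -1j, 0], [1j, -1, 0], [0, 0, np.sqrt(2)]]) / np.sqrt(2)
sz_x = np.diag([0, 1, -1]).astype(complex)
sz_z = np.diag([1, -1, 0]).astype(complex)

print(np.allclose(H @ sz @ H, sx))           # sigma_x = H sigma_z H
print(np.allclose(Hy @ sz @ Hy, sy))         # sigma_y = H_y sigma_z H_y
print(np.allclose(Hy_x @ sz_x @ Hy_x, Sx))   # S_x = H_y^(x) sigma_z^(x) H_y^(x)
print(np.allclose(Hy_z @ sz_z @ Hy_z, Sz))   # S_z = H_y^(z) sigma_z^(z) H_y^(z)
```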
This enables us to express the unitary exponential of operators (22)-(26) using the identities:
\[e^{-i\delta t\hat{V}_{1}\hat{A}\hat{V}_{1}^{\dagger}\otimes\hat{V}_{2}\hat{B} \hat{V}_{2}^{\dagger}}=(\hat{V}_{1}\otimes\hat{V}_{2})e^{-i\delta t\hat{A} \otimes\hat{B}}(\hat{V}_{1}^{\dagger}\otimes\hat{V}_{2}^{\dagger}), \tag{31}\]
\[e^{-i\delta tL_{2\times 2}\otimes\hat{A}}=I_{2\times 2}\otimes e^{-i\delta t\hat{A}}, \tag{32}\]
\[e^{-i\frac{\theta}{2}\hat{\sigma}_{i}\otimes\hat{A}}=I_{2\times 2}\otimes\cos \left(\hat{A}\theta/2\right)-i\hat{\sigma}_{i}\sin\left(\hat{A}\theta/2\right). \tag{33}\]
Therefore, the exponential operator \(e^{-i\delta t\hat{D}_{vac}}\) can be written
\[e^{-i\delta t\hat{D}_{vac}}=\hat{C}_{vac}\hat{S}\hat{C}_{vac} \tag{34}\]
where the unitary collision operator \(\hat{C}_{vac}\) has the form
\[\hat{C}_{vac}=I_{2\times 2}\otimes\hat{H}_{y}\otimes\hat{H}_{y}^{(x)}, \tag{35}\]
and the advection operator in \(x\)-direction:
\[\hat{S}=\exp\Bigl{\{}i(I_{2\times 2}+\hat{\sigma}_{z})\otimes\hat{\sigma}_{z} \otimes\hat{\sigma}_{z}^{(x)}c\delta t\hat{p}_{x}/2\Bigr{\}}. \tag{36}\]
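For a single Fourier mode (\(\hat{p}_{x}\to k\)) this splitting can be checked directly: the sketch below confirms that \(e^{-i\delta t\hat{D}_{vac}}=\hat{C}_{vac}\hat{S}\hat{C}_{vac}\) holds exactly (no Trotter error is involved here), with illustrative values of \(c\), \(k\), and \(\delta t\).

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Hy = np.array([[1, -1j], [1j, -1]]) / np.sqrt(2)
Hy_x = np.array([[np.sqrt(2), 0, 0], [0, 1, -1j], [0, 1j, -1]]) / np.sqrt(2)
sz_x = np.diag([0, 1, -1]).astype(complex)


def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)


c0, k, dt = 1.0, 0.7, 0.05                               # illustrative values
D_vac = -0.5 * c0 * kron3(I2 + sz, sy, k * Sx)           # Eq. (22) with curl -> k*Sx
C_vac = kron3(I2, Hy, Hy_x)                              # Eq. (35)
S = expm(0.5j * c0 * dt * k * kron3(I2 + sz, sz, sz_x))  # Eq. (36) with p_x -> k

print(np.allclose(expm(-1j * dt * D_vac), C_vac @ S @ C_vac))   # True
```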
Similarly, we express the rest of the operators in the Trotterized evolution Eq.(28) as follows
\[e^{-i\delta t\hat{D}_{\omega_{pi}}}=\hat{C}_{\omega_{pi}}(\hat{\mathcal{R}}_{ z}^{(pi)}\otimes I_{3\times 3})\hat{C}_{\omega_{pi}}, \tag{37}\]
where \(\theta_{pi}=\omega_{pi}\delta t\), \(\hat{C}_{\omega_{pi}}\) is the collision operator
\[\hat{C}_{\omega_{pi}}=\hat{H}_{y}\otimes I_{2\times 2}\otimes I_{3\times 3} \tag{38}\]
and the \(\hat{\mathcal{R}}_{z}^{(pi)}\) operator is defined through identity (33) which in principle represents a functional \(\hat{R}_{i}(\cdot)\) rotations,
\[\hat{\mathcal{R}}_{z}^{(pi)}=[\hat{R}_{z}(\theta_{pi})\otimes I_{2\times 2 }]\hat{R}_{z}(\hat{\sigma}_{z}\theta_{pi}). \tag{39}\]
For \(e^{-i\delta t\hat{D}_{\omega_{pe}}}\) we obtain
\[e^{-i\delta t\hat{D}_{\omega_{pe}}}=\hat{C}_{\omega_{pe}}^{(1)}(\hat{\mathcal{R}}_{z}^{(pe)}\otimes I_{3\times 3})\hat{C}_{\omega_{pe}}^{(1)}\hat{C}_{\omega_{pe}}^{(2)}(\hat{\mathcal{R}}_{z}^{(pe)}\otimes I_{3\times 3})\hat{C}_{\omega_{pe}}^{(2)} \tag{40}\]
with
\[\hat{C}_{\omega_{pe}}^{(1)} =\hat{H}\otimes\hat{H}_{y}\otimes I_{3\times 3}, \tag{41}\] \[\hat{C}_{\omega_{pe}}^{(2)} =\hat{H}_{y}\otimes\hat{H}\otimes I_{3\times 3},\] (42) \[\hat{\mathcal{R}}_{z}^{(pe)} =\hat{R}_{z}(\hat{\sigma}_{z}\theta_{pe}). \tag{43}\]
We now move to the terms containing the cyclotron angle \(\theta_{cj}\),
\[e^{-i\delta t\hat{D}_{\omega_{ci}}} =\hat{C}_{\omega_{ci}}[I_{4\times 4}\otimes\hat{R}_{z}^{(z)}( \theta_{ci}/2)][I_{2\times 2}\otimes\hat{R}_{z}(\hat{\sigma}_{z}^{(z)}\theta_{ci}/2)]\] \[\times\hat{\mathcal{R}}_{z}^{(1),(ci)\dagger}\hat{\mathcal{R}}_{ z}^{(2),(ci)\dagger}\hat{C}_{\omega_{ci}}, \tag{44}\]
with
\[\hat{C}_{\omega_{ci}} =I_{2\times 2}\otimes I_{2\times 2}\otimes\hat{H}_{y}^{(z)} \tag{45}\]
and operators \(\hat{R}_{z}^{(z)}(\theta_{ci}/2),\hat{\mathcal{R}}_{z}^{(1),(ci)},\hat{ \mathcal{R}}_{z}^{(2),(ci)}\) representing \(z\)-rotation based on the \(3\times 3\)\(\hat{\sigma}_{z}^{(z)}\) matrix and functional \(z\)-rotations respectively,
\[\hat{R}_{z}^{(z)}(\theta_{ci}/2) =e^{-i\frac{\theta_{ci}}{4}\hat{\sigma}_{z}^{(z)}}, \tag{46}\] \[\hat{\mathcal{R}}_{z}^{(1),(ci)\dagger} =e^{i\frac{\theta_{ci}}{4}\hat{\sigma}_{z}\otimes I_{2\times 2} \otimes\hat{\sigma}_{z}^{(z)}},\] (47) \[\hat{\mathcal{R}}_{z}^{(2),(ci)\dagger} =e^{i\frac{\theta_{ci}}{4}\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}^{(z)}}. \tag{48}\]
Finally,
\[e^{-i\delta t\hat{D}_{\omega_{ce}}} =\hat{C}_{\omega_{ce}}[I_{4\times 4}\otimes\hat{R}_{z}^{(z)}( \theta_{ce}/2)][I_{2\times 2}\otimes\hat{R}_{z}^{\dagger}(\hat{\sigma}_{z}^{(z)} \theta_{ce}/2)]\] \[\times\hat{\mathcal{R}}_{z}^{(1),(ce)\dagger}\hat{\mathcal{R}}_{ z}^{(2),(ce)}\hat{C}_{\omega_{ce}}. \tag{49}\]
It is important to note that after we have made the somewhat standard leading-order Trotterized approximation to the total unitary evolution operator in Eq.(15), the evaluations of all the operators in Eqs.(34)-(49) are exact and no further approximations have been made.
Consequently, the fully unitary evolution sequence reads
\[\mathbf{\psi}(\mathbf{r},\delta t)=\hat{C}_{vac}\hat{S}\hat{C}_{vac}\hat{C}_{\omega_{pi}}(\hat{\mathcal{R}}_{z}^{(pi)}\otimes I_{3\times 3})\hat{C}_{\omega_{pi}}\hat{C}_{\omega_{pe}}^{(1)}(\hat{\mathcal{R}}_{z}^{(pe)}\otimes I_{3\times 3})\hat{C}_{\omega_{pe}}^{(1)}\hat{C}_{\omega_{pe}}^{(2)}(\hat{\mathcal{R}}_{z}^{(pe)}\otimes I_{3\times 3})\hat{C}_{\omega_{pe}}^{(2)}\hat{C}_{\omega_{ci}}[I_{4\times 4}\otimes\hat{R}_{z}^{(z)}(\theta_{ci}/2)]\] \[\times[I_{2\times 2}\otimes\hat{R}_{z}(\hat{\sigma}_{z}^{(z)}\theta_{ci}/2)]\hat{\mathcal{R}}_{z}^{(1),(ci)\dagger}\hat{\mathcal{R}}_{z}^{(2),(ci)\dagger}\hat{C}_{\omega_{ci}}\hat{C}_{\omega_{ce}}[I_{4\times 4}\otimes\hat{R}_{z}^{(z)}(\theta_{ce}/2)][I_{2\times 2}\otimes\hat{R}_{z}^{\dagger}(\hat{\sigma}_{z}^{(z)}\theta_{ce}/2)]\hat{\mathcal{R}}_{z}^{(1),(ce)\dagger}\hat{\mathcal{R}}_{z}^{(2),(ce)}\hat{C}_{\omega_{ce}}\mathbf{\psi}_{0}. \tag{50}\]
### Quantum encoding and complexity analysis
Implementation of the Trotterized unitary product formula Eq.(50) in a digital quantum computer requires spatial discretization. We pursue a qubit lattice algorithm (QLA) discretization where the evolution (50) is transformed into an interleaved sequence of non-commuting QLA collision \(\hat{\mathcal{C}}\) and streaming \(\hat{\mathcal{S}}\) operators that recover the Schrodinger-Maxwell equation (15) to second order in a diffusion ordering, \(\delta t\sim\delta^{2}\), \(\delta x\sim\delta\). The advantage of this description stems from treating the advection operator \(\hat{S}\) in Eq.(36) through the QLA streaming operators \(\hat{\mathcal{S}}\), enabling an efficient quantum implementation [15; 16; 17; 27]. The rest of the participating operators in Eq.(50) comprise the QLA collision operators \(\hat{\mathcal{C}}\).
Ultimately, to implement the QLA evolution derived from Eq.(50) onto a quantum computer we must express the participating operators into elementary quantum gates acting on a set of qubits. We will use two qubit registers. The first encodes the amplitude dimensionality of the state vector \(\mathbf{\psi}\) in Eq.(15), hence containing \(n_{i}=4\) qubits with \(\{|i\rangle\}\) basis. The second register labels
the spatial discretization. For a one-dimensional lattice with \(N\) nodes and a discretization step \(\delta\), we will need \(n_{p}=\log_{2}N\) qubits with basis \(\{\left|p\right\rangle\}\). Therefore, a total number of \(n_{total}=n_{p}+4\) qubits are required for the complete description of the state \(\mathbf{\psi}\).
Then, the qubit encoding of the state vector \(\mathbf{\psi}\) reads,
\[\left|\mathbf{\psi}\right\rangle=\sum_{p=0}^{N-1}\sum_{i=0}^{11}\psi_{0ip}\left|i \right\rangle\left|p\right\rangle, \tag{51}\]
with amplitudes \(\psi_{0ip}\) characterizing the \(i\)-th component of the state vector \(\mathbf{\psi}\) at lattice site \(p\). The quantum state \(\left|\mathbf{\psi}\right\rangle\) is normalized to the square root of the initial (constant) electromagnetic energy so that \(\sum_{i,p}\left|\psi_{0ip}\right|^{2}=1\).
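A minimal sketch of this two-register encoding is shown below: the 12 field and current components are padded to 16 amplitudes for the 4-qubit \(\{\left|i\right\rangle\}\) register and combined with an \(n_{p}\)-qubit position register. The random array stands in for a physical \(\mathbf{\psi}_{0}\) and the lattice size is illustrative.

```python
import numpy as np

N = 64                                     # lattice sites (power of two)
n_p = int(np.log2(N))                      # position-register qubits
n_i = 4                                    # amplitude-register qubits (12 -> 16 components)

psi0 = np.random.randn(12, N) + 1j * np.random.randn(12, N)   # placeholder for psi_0
padded = np.zeros((2**n_i, N), dtype=complex)
padded[:12, :] = psi0
state = padded.reshape(-1)                 # basis ordering |i>|p>
state /= np.linalg.norm(state)             # normalization of Eq. (51)

print("total qubits:", n_p + n_i)          # n_total = n_p + 4
print("norm:", np.vdot(state, state).real) # 1.0
```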
Having established the required circuit width (total number of qubits) for the quantum encoding of our state, we proceed to analyze the decomposition scaling (circuit depth) of the operators in Eq.(50) into simple one-qubit and CNOT gates with respect to \(n_{total}\). All the unitary collision operators \(\hat{C}\) are tensor products of elementary single-qubit gates such as the Hadamard gate \(\hat{H}\) and the rotation gate \(\hat{H}_{y}=\hat{\sigma}_{z}\hat{R}_{x}(\pi/2)\), whereas the \(\hat{H}_{y}^{(z)},\hat{H}_{y}^{(x)}\) two-level gates can be easily implemented with simple, one-qubit gates. In addition, those operators act solely in the 4-qubit amplitude register \(\{\left|i\right\rangle\}\), resulting in constant scaling, and can be implemented in the worst case scenario as \(\mathit{O}(k\cdot 4^{2}),\ k\in\mathcal{N}\). The integer \(k\) accounts for the total number of collision operators \(\hat{C}\) in Eq.(50). As far as the unitary rotation operators which contain the plasma inhomogeneity are concerned, they are all diagonal and can be decomposed into simpler two-level \(z\)-rotations or directly implemented within \(\mathit{O}(m\cdot 2^{n_{total}+1})\) CNOTs and single-qubit gates [34]. As before, the natural number \(m\) now accounts for the total number of those diagonal inhomogeneous operators in Eq.(50). Finally, the QLA streaming \(\hat{\mathcal{S}}\) operators offer the advantage of implementing the associated advective operator \(\hat{S}\) as a quantum walk [16]. The explicit circuit implementation of this quantum walk on a quantum computer is presented in Refs.[15], [17]. The QLA streaming operators act only in the spatial discretization register \(\{\left|p\right\rangle\}\), controlled by the \(\{\left|i\right\rangle\}\) qubits, so based on the results of Refs.[15] and [17] they are expected to scale as \(\mathit{O}(l\cdot n_{p}^{2}),\ l\in\mathcal{N}\).
Consequently, the total quantum implementation cost of the QLA discretization of the unitary evolution (50) is expected to scale as \(\mathit{O}(32m\cdot 2^{n_{p}}+l\cdot n_{p}^{2}+16k)\). For fusion-relevant applications the plasma inhomogeneity profile is localized, which allows us to reduce the encoding cost of the inhomogeneous diagonal rotation operators to \(\mathit{O}[poly(n_{p})]\); in turn, the total implementation cost of our algorithm scales polynomially, \(\mathit{O}[poly(n_{p})]\), with the number of qubits in the \(p\)-register. This polynomial scaling promotes QLA as a prominent candidate for implementation on real quantum hardware in the near future.
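For a quick sanity check of this estimate, the short snippet below (ours; the operator counts \(k\), \(l\), \(m\) and the register size are illustrative placeholders) evaluates the quoted worst-case gate-count scaling.

```
def qla_gate_estimate(n_p, k, l, m):
    # Illustrative elementary-gate count per Trotter step, following the worst-case
    # scaling O(32*m*2**n_p + l*n_p**2 + 16*k) quoted above.
    collisions = 16 * k            # O(k * 4^2) on the 4-qubit amplitude register
    rotations = 32 * m * 2**n_p    # diagonal inhomogeneity rotations, O(m * 2^{n_total+1})
    streaming = l * n_p**2         # quantum-walk streaming on the p-register
    return collisions + rotations + streaming

# Example: 2^10 lattice sites with a handful of operators of each type.
print(qla_gate_estimate(n_p=10, k=8, l=8, m=4))
```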
### Discussion
Comparing the Schrodinger representation of Maxwell equations for inhomogeneous non-dispersive media, Eq.(1), with Eq.(15) for the magnetized plasma, the latter appears to carry more complexity because of the dimensionality of the state vector \(\mathbf{\psi}\). However, in contrast with the optical case, where the spatial displacement operator interferes with the inhomogeneity of the refractive index (see Eq.(2)), the exponential of the operator \(\hat{D}_{vac}\) in Eq.(22) is explicitly decomposed without any implicit dependence on the plasma inhomogeneity profile, which enters only through the plasma frequencies. As a consequence, the resulting QLA is free of the non-unitary potential operators \(\hat{V}\) such as those introduced in Refs.[18; 19; 20], yielding a fully unitary product sequence similar to that of a homogeneous medium [21].
Subsequently, a vacuum QLA sequence denoted as \(\hat{U}_{X}^{vac}\) can be immediately employed to calculate the term \(e^{-i\delta t\hat{D}_{vac}}\) in the Trotterized evolution approximation of \(e^{-i\delta t\hat{D}}\),
\[\begin{split} e^{-i\delta t\hat{D}}&=e^{-i\delta t \hat{D}_{disp}}e^{-i\delta t\hat{D}_{vac}}+\mathit{O}(\delta t^{2})\\ &=e^{-i\delta t\hat{D}_{disp}}\hat{U}_{X}^{vac}+\mathit{O}( \delta t^{2}).\end{split} \tag{52}\]
Implementation of the dispersive part \(e^{-i\delta t\hat{D}_{disp}}\), where \(\hat{D}_{disp}=\sum_{j=i,e}\hat{D}_{\omega_{pj}}+\hat{D}_{\omega_{ij}}\), can be carried out in parallel with the QLA. The main advantage of this approximation is the flexibility it offers: we can classically compute \(\hat{U}_{X}^{vac}\mathbf{\psi}_{0}\), store the result, and follow up with a quantum computation of the \(e^{-i\delta t\hat{D}_{disp}}\) term, resulting in a hybrid computation; or we can compute the whole sequence on a quantum device based on the quantum encoding of QLA described in Sec.II.5.
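The quality of this first-order splitting can be checked numerically on small matrices. The sketch below (our illustration, using random Hermitian stand-ins for \(\hat{D}_{vac}\) and \(\hat{D}_{disp}\); the matrix size and time steps are placeholders) confirms the \(\mathit{O}(\delta t^{2})\) remainder of Eq.(52).

```
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2.0

# Random Hermitian stand-ins for D_vac and D_disp.
D_vac, D_disp = random_hermitian(8), random_hermitian(8)
for dt in (0.1, 0.05, 0.025):
    exact = expm(-1j * dt * (D_vac + D_disp))
    split = expm(-1j * dt * D_disp) @ expm(-1j * dt * D_vac)   # splitting of Eq. (52)
    print(f"dt = {dt:5.3f}   splitting error = {np.linalg.norm(exact - split, 2):.2e}")
    # the error decreases roughly as dt^2, consistent with the O(dt^2) remainder
```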
In addition, the unitary QLA derived from the evolution sequence (50) conserves the extended electromagnetic energy Eq.(16) and the divergence conditions. Thus, our full-wave scheme can be extended beyond the usual plane-wave or monochromatic wave approximations. This is very important in the case of fusion plasmas, where the RF waves used for plasma heating and current drive are wave-packets that are localized in space and of finite duration in time. The interaction of the inhomogeneity plasma profile with the envelope of the carrier wave, as well as with the individual components that a spatially confined beam consists of, will lead to complex electromagnetic structures that will affect the current densities in the dispersive plasma. More importantly, those transport effects correspond to energy transfer from the initial electromagnetic fields to the current density fields and can be explicitly measured thanks to Eq.(16), which describes the extended electromagnetic energy. Hence, examination of wave-packet propagation in plasmas is relevant to realistic fusion experiments. For instance, for an initial X-wave polarization profile \(\mathbf{E}_{0}=E_{y}(k_{x}x)\hat{\mathbf{y}}\), the scattering from a two-dimensional \(x-y\) plasma inhomogeneity will generate the electromagnetic fields \(\mathbf{E}=E_{x}(k_{x}x,k_{y}y,\omega_{X}t)\hat{\mathbf{x}}+E_{y}(k_{x}x,k_{y}y,\omega_{X}t)\hat{\mathbf{y}}\) and \(\mathbf{B}=B_{z}(k_{x}x,k_{y}y,\omega_{X}t)\hat{\mathbf{z}}\), but most importantly will produce the conductivity current density \(\mathbf{J}_{cj}=J_{xcj}(k_{x}x,k_{y}y,\omega_{X}t)\hat{\mathbf{x}}+J_{ycj}(k_{x}x,k_{y}y,\omega_{X}t)\hat{\mathbf{y}}\) so as to satisfy \(\mathbf{\nabla}\mathbf{\cdot}\mathbf{E}=\mathbf{\nabla}\mathbf{\cdot}\mathbf{B}=\mathbf{\nabla}\mathbf{\cdot}\mathbf{J}_{cj}=0\).
Given that the QLA scales linearly with the number of processors and that its quantum variant is expected to scale as \(O[poly(n_{p})]\), our considerations pose a strong alternative to the cost-inefficient FDTD methods, particularly in 2D and 3D.
On the other hand, it may be necessary to further manipulate the evolution sequence (50) to produce an optimized QLA [28; 23]. Therefore, considerable research is required before applying the QLA to simulate wave propagation in a plasma characterized by fusion-reactor parameters.
## III Example: QLA for scattering from 2D scalar non-dispersive dielectric objects
Although the analytical and algorithmic developments in Sec.II should result in an efficient quantum computer code for electromagnetic wave propagation in cold inhomogeneous magnetized plasmas, much work remains to be done in optimizing the qubit representation of a QLA code for Eq.(50) before tackling the propagation of such fusion-relevant RF wave-packets in plasma.
It is thus instructive to first investigate the capabilities and behavior of our Maxwell QLA code for the scattering of an electromagnetic pulse from a non-dispersive 2D inhomogeneous dielectric object; some interesting physics already arises from these initial value simulations.
### The algorithm
To showcase what a QLA sequence looks like and what we expect to obtain from the "QLAzation" of Eq.(50), we briefly present the algorithmic scheme for 2D \(x-y\) scattering of a wave-packet from scalar, non-dispersive localized inhomogeneities with refractive index \(n=n(x,y)\), as displayed in Fig.1. The shape of the inhomogeneities can be related to cylindrical filaments or to smooth localized concentrations of plasma density.
In our reduced case of a non-dispersive dielectric, QLA is a discrete representation of the unitary formulation of Maxwell equations (1) which, at a mesoscopic level, uses an appropriately chosen interleaved sequence of three non-commuting operators. Two of them are the unitary collision and streaming operators - the collision operator entangles the on-site qubits and the streaming operator propagates the entangled state through the lattice. The gradients in the medium constitutive properties are included via a third operator, referred to as a potential operator.
For 2D \(x-y\) scattering of electromagnetic fields in a scalar dielectric, the state vector that evolves unitarily is
\[\mathbf{q}=\begin{bmatrix}nE_{x}\\ nE_{y}\\ nE_{z}\\ \mu_{0}^{1/2}H_{x}\\ \mu_{0}^{1/2}H_{y}\\ \mu_{0}^{1/2}H_{z}\end{bmatrix}=\begin{bmatrix}q_{0}\\ q_{1}\\ q_{2}\\ q_{3}\\ q_{4}\\ q_{5}\end{bmatrix}. \tag{53}\]
In (diagonal) tensor dielectric media one would simply have \(q_{0}\to n_{x}E_{x}\), \(q_{1}\to n_{y}E_{y}\), \(q_{2}\to n_{z}E_{z}\).
Figure 1: Two different inhomogeneity refractive index profiles \(1\leq n(x,y)\leq 2\) and the electric field \(E_{z0}(x)\) of the incident wave-packet. The cylinder dielectric has a strong spatial gradient near the vacuum-dielectric interface, while the conic dielectric has very weak spatial gradients. In Fig.1a these two profiles are shown superimposed. In Fig.1b the conic dielectric is shown together with the incident wave-packet (arbitrary normalization).

The decomposition of the electromagnetic Schrodinger equation (1) in Cartesian components is
\[\frac{\partial q_{0}}{\partial t}=\frac{1}{n}\frac{\partial q_{5}}{\partial y},\quad\frac{\partial q_{1}}{\partial t}=\frac{1}{n}\frac{\partial q_{5}}{\partial x},\quad\frac{\partial q_{2}}{\partial t}=\frac{1}{n}\Big{[}\frac{\partial q_{4}}{\partial x}-\frac{\partial q_{3}}{\partial y}\Big{]}, \tag{54}\] \[\frac{\partial q_{3}}{\partial t}=\frac{\partial(q_{2}/n)}{\partial y},\quad\frac{\partial q_{4}}{\partial t}=\frac{\partial(q_{2}/n)}{\partial x},\] \[\frac{\partial q_{5}}{\partial t}=-\frac{\partial(q_{1}/n)}{\partial x}+\frac{\partial(q_{0}/n)}{\partial y}.\]
For the discrete QLA, using the Alternating Directions Implicit (ADI) integration, the unitary collision operators in the x and y directions are
\[\hat{C}_{X}=\begin{bmatrix}1&0&0&0&0&0\\ 0&\cos\theta_{0}&0&0&0&-\sin\theta_{0}\\ 0&0&\cos\theta_{0}&0&-\sin\theta_{0}&0\\ 0&0&0&1&0&0\\ 0&0&\sin\theta_{0}&0&\cos\theta_{0}&0\\ 0&\sin\theta_{0}&0&0&0&\cos\theta_{0}\end{bmatrix}, \tag{55}\]
\[\hat{C}_{Y}=\begin{bmatrix}\cos\theta_{0}&0&0&0&0&\sin\theta_{0}\\ 0&1&0&0&0&0\\ 0&0&\cos\theta_{0}&\sin\theta_{0}&0&0\\ 0&0&-\sin\theta_{0}&\cos\theta_{0}&0&0\\ 0&0&0&0&1&0\\ -\sin\theta_{0}&0&0&0&0&\cos\theta_{0}\end{bmatrix}. \tag{56}\]
with collision angle \(\theta_{0}=\delta/4n\). The form of \(\hat{C}_{X}\) can be readily discerned from the coupling of the \(\frac{\partial}{\partial t}\) with \(\frac{\partial}{\partial x}\) derivatives in (54): \(q_{1}-q_{5}\), and \(q_{2}-q_{4}\), as well as the respective collision angle. Similarly for the unitary matrix \(\hat{C}_{Y}\).
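For concreteness, a small numpy sketch of the two collision matrices (our own illustration, not production code) makes the pair couplings and the unitarity explicit; the value of \(\delta\) used for the collision angle is a placeholder.

```
import numpy as np

def collision_x(theta):
    # Unitary collision matrix C_X of Eq. (55): rotates the pairs (q1, q5) and (q2, q4).
    C = np.eye(6)
    c, s = np.cos(theta), np.sin(theta)
    C[1, 1], C[1, 5] = c, -s
    C[5, 1], C[5, 5] = s, c
    C[2, 2], C[2, 4] = c, -s
    C[4, 2], C[4, 4] = s, c
    return C

def collision_y(theta):
    # Unitary collision matrix C_Y of Eq. (56): rotates the pairs (q0, q5) and (q2, q3).
    C = np.eye(6)
    c, s = np.cos(theta), np.sin(theta)
    C[0, 0], C[0, 5] = c, s
    C[5, 0], C[5, 5] = -s, c
    C[2, 2], C[2, 3] = c, s
    C[3, 2], C[3, 3] = -s, c
    return C

theta0 = 0.1 / 4.0                          # delta / (4 n) with delta = 0.1, n = 1
for C in (collision_x(theta0), collision_y(theta0)):
    assert np.allclose(C @ C.T, np.eye(6))  # unitarity (real orthogonal rotations)
```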
We now define the unitary streaming operator \(\hat{S}_{ij}\) which shifts the amplitudes \(\{q_{i},q_{j}\}\) one lattice unit, either in the \(x\) or in the y direction, while leaving all the other amplitudes unaffected. Then the collide-stream sequence along each direction is,
\[\begin{split}\hat{U}_{X}&=\hat{S}_{25}^{+x}\hat{C}_{X}^{ \dagger}\hat{S}_{25}^{-x}\hat{C}_{X}\hat{S}_{14}^{-x}\hat{C}_{X}^{\dagger} \hat{S}_{14}^{+x}\hat{C}_{X}\hat{S}_{25}^{-x}\hat{C}_{X}\hat{S}_{25}^{+x}\hat {C}_{X}^{\dagger}\hat{S}_{14}^{+x}\hat{C}_{X}\hat{S}_{14}^{-x}\hat{C}_{X}^{ \dagger}\\ \hat{U}_{Y}&=\hat{S}_{25}^{+y}\hat{C}_{Y}^{\dagger}\hat{S}_{25}^{-y} \hat{C}_{Y}\hat{S}_{03}^{-y}\hat{C}_{Y}^{\dagger}\hat{S}_{03}^{+y}\hat{C}_{Y} \hat{S}_{25}^{-y}\hat{C}_{Y}\hat{S}_{25}^{+y}\hat{C}_{Y}^{\dagger}\hat{S}_{03} ^{+y}\hat{C}_{Y}\hat{S}_{03}^{-y}\hat{C}_{Y}^{\dagger}.\end{split} \tag{57}\]
It should be noted that the first set of four collide-stream operators in \(\hat{U}_{X}\) and \(\hat{U}_{Y}\) would yield (54) to first order in \(\delta\). An in-depth analysis on derivation of the QLA sequences Eq.(57) can be found in Refs.[18; 19; 20; 23] and in references therein.
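The collide-stream structure of \(\hat{U}_{X}\) can be prototyped classically in a few lines. In the sketch below (our illustration, not the production code) the streaming operators are periodic shifts implemented with np.roll, and the collision matrix \(C\) is assumed to be the 6\(\times\)6 matrix \(\hat{C}_{X}(\theta_{0})\) of Eq.(55); the lattice size and test field are placeholders.

```
import numpy as np

def stream(q, comps, shift, axis):
    # QLA streaming operator S_{ij}^{+/-}: shift the listed field components by one
    # lattice unit along the chosen spatial axis (0 -> x, 1 -> y), leaving the rest fixed.
    q = q.copy()
    for c in comps:
        q[c] = np.roll(q[c], shift, axis=axis)
    return q

def U_X(q, C):
    # Collide-stream sweep U_X of Eq. (57) along x; q has shape (6, Nx, Ny) and C is
    # the 6x6 collision matrix C_X(theta0). The operator string is written left to
    # right as in the text, so it is applied to q from right to left.
    Cd = C.T                                     # C is real orthogonal, so C^dagger = C^T
    Sx = lambda comps, shift: ("S", comps, shift)
    ops = [Sx((2, 5), +1), ("C", Cd), Sx((2, 5), -1), ("C", C),
           Sx((1, 4), -1), ("C", Cd), Sx((1, 4), +1), ("C", C),
           Sx((2, 5), -1), ("C", C),  Sx((2, 5), +1), ("C", Cd),
           Sx((1, 4), +1), ("C", C),  Sx((1, 4), -1), ("C", Cd)]
    for op in reversed(ops):
        if op[0] == "C":
            q = np.einsum("ab,bxy->axy", op[1], q)   # collision acts on the component index
        else:
            q = stream(q, op[1], op[2], axis=0)      # streaming along x
    return q

# Smoke test: with C equal to the identity (theta0 = 0) the shifts cancel pairwise,
# so the sweep reduces to the identity map.
q0 = np.random.default_rng(2).normal(size=(6, 16, 16))
assert np.allclose(U_X(q0, np.eye(6)), q0)
```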
The terms in (54), containing the derivatives of the refractive index, are recovered through the following potential operators
\[\hat{V}_{X}=\begin{bmatrix}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&-\sin\beta_{0}&0&\cos\beta_{0}&0\\ 0&\sin\beta_{0}&0&0&0&\cos\beta_{0}\end{bmatrix} \tag{58}\]
and
\[\hat{V}_{Y}=\begin{bmatrix}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&\sin\beta_{1}&\cos\beta_{1}&0&0\\ 0&0&0&0&1&0\\ -\sin\beta_{1}&0&0&0&0&\cos\beta_{1}\end{bmatrix}. \tag{59}\]
The angles \(\theta_{0}=\delta/4n\), \(\beta_{0}=\delta^{2}\frac{\partial n/\partial x}{n^{2}}\), and \(\beta_{1}=\delta^{2}\frac{\partial n/\partial y}{n^{2}}\) appearing in matrices (55), (56), (58), and (59) are chosen so that the discretized system reproduces (54) to order \(\mathit{O}(\delta^{2})\).
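As a small illustration of how the potential angles enter (our own sketch; the refractive-index profile, grid, and evaluation point are placeholders), \(\hat{V}_{X}\) can be assembled directly from \(\beta_{0}\) computed on a slice of \(n(x,y)\); the check at the end simply confirms that \(\hat{V}_{X}\) is not unitary.

```
import numpy as np

def potential_x(beta):
    # External potential operator V_X of Eq. (58); beta = delta^2 (dn/dx) / n^2.
    V = np.eye(6)
    c, s = np.cos(beta), np.sin(beta)
    V[4, 2], V[4, 4] = -s, c
    V[5, 1], V[5, 5] = s, c
    return V

# Illustrative angle beta_0 computed from a smooth refractive-index slice n(x).
delta = 0.1
x = np.linspace(-1.0, 1.0, 201)
n = 1.0 + 0.5 * np.exp(-x**2 / 0.1)
beta0 = delta**2 * np.gradient(n, x) / n**2
V = potential_x(beta0[60])                      # a point where dn/dx is non-zero
print(np.abs(V @ V.T - np.eye(6)).max())        # non-zero: V_X is not unitary
```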
The evolution of the state vector \(\mathbf{q}\) from time \(t\) to \(t+\Delta t\) is given by,
\[\mathbf{q}(t+\Delta t)=\hat{V}_{Y}\hat{V}_{X}\hat{U}_{Y}\hat{U}_{X}\mathbf{q}(t). \tag{60}\]
Note that the external potential operators \(\hat{V}_{X},\hat{V}_{Y}\), as given above, are not unitary. Quantum implementation of the non-unitary potential operators \(\hat{V}_{X},\hat{V}_{Y}\) can be handled using the Linear Combination of Unitaries (LCU) method.[35] We direct the reader to Ref. [15] for a detailed discussion on the quantum circuit implementation of these QLA non-unitary operators.
A detailed analysis of the QLA for the more general case of a bi-axial medium along with simulation results for scattering of Gaussian pulses can be found in Ref. [23].
### QLA simulation results
In all simulations, the total energy is conserved to the seventh significant digit. A numerical study of errors with respect to spatial resolution was performed in Ref.[27]; it verified that the QLA achieves better than second-order accuracy. This scaling was further verified in Ref.[36] for spinor BECs. In addition, from current discrete 2D QLA simulation runs [21; 23], it appears that divergence cleaning is not required, as QLA divergence errors are spatially localized and do not accumulate. We also reiterate that in applications of QLA to nonlinear spinor
Bose-Einstein condensates, the QLA produced an algorithm that was ideally parallelized to all available cores on a classical supercomputer (over \(750,000\) cores on the now-retired IBM Blue Gene/\(Mira\) supercomputer at Argonne).
The initial electromagnetic wave-packet \(\mathbf{u}_{0}=(E_{z0}(x),-B_{y0}(x))^{T}\) is a Gaussian envelope with internal oscillations, Fig.1b. The wave-packet propagates in the \(x\)-direction, from vacuum \(n=1\) towards a localized inhomogeneous dielectric object with \(n_{max}(x,y)=2\). This polarization satisfies the initial divergence conditions. As the 1D vacuum wave-packet interacts with the 2D refractive index of the dielectric, the \(B_{y}\) field becomes 2D, \(B_{y}(x,y,t)\). This self-consistently generates a \(B_{x}(x,y,t)\) so that \(\nabla\cdot\mathbf{B}=0\), as well as a 2D \(E_{z}(x,y,t)\). Throughout the QLA scattering simulation, \(\nabla\cdot\mathbf{B}\) is monitored and is non-zero only in very small isolated spatial regions, with some time instants in which \(max_{x,y}|\nabla\cdot\mathbf{B}/\mathbf{B}_{0}|\leq 0.006\). \(\nabla\cdot\mathbf{D}\) is identically zero throughout the simulation. [For an initial \(E_{y0}(x)\)-polarization, 2D QLA simulations retain \(\nabla\cdot\mathbf{B}=0\) identically for all time.]
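The divergence monitoring mentioned above can be reproduced with a simple centered-difference diagnostic. The following sketch (our illustration, not the production QLA diagnostic; the manufactured test field and grid are placeholders) evaluates \(max_{x,y}|\nabla\cdot\mathbf{B}/\mathbf{B}_{0}|\) on a periodic grid.

```
import numpy as np

def max_div_B(Bx, By, dx, dy, B0):
    # Diagnostic used in the text: max_{x,y} |div B / B0| from centered differences
    # on a periodic grid (a minimal illustration, not the production QLA diagnostic).
    dBx_dx = (np.roll(Bx, -1, axis=0) - np.roll(Bx, 1, axis=0)) / (2 * dx)
    dBy_dy = (np.roll(By, -1, axis=1) - np.roll(By, 1, axis=1)) / (2 * dy)
    return np.abs(dBx_dx + dBy_dy).max() / B0

# Manufactured divergence-free field from a stream function A = sin(x) cos(y).
x = y = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
Bx = -np.sin(X) * np.sin(Y)        # Bx =  dA/dy
By = -np.cos(X) * np.cos(Y)        # By = -dA/dx
print(max_div_B(Bx, By, x[1] - x[0], y[1] - y[0], B0=1.0))   # ~ 0 up to round-off
```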
In Fig.2, the wave-packet has interacted with the dielectric object. The viewpoint is looking down from the \(z\)-axis onto the \(x-y\) plane. The apex of the cone is seen as a white dot, while the interior of the dielectric cylinder appears in a somewhat darker color than the surrounding vacuum. In the case of the dielectric cone, Fig.2a, there is a mild slowing down of the part of the packet that is around the apex of the cone, since the phase velocity is reduced to \(c/n(x,y)\). More importantly, one does not see any part of the packet reflected from the slowly varying boundary region between vacuum and dielectric: the propagation is essentially WKB-like. On the other hand, immediate reflection fronts are emitted back into the vacuum from the interaction of the wave-packet's oscillation peaks with the steep refractive index gradients in the boundary region between vacuum and the cylindrical dielectric, Fig.2b. There is also considerable retardation of the oscillation peaks within the dielectric cylinder, as the refractive index away from the boundaries is \(n=2\).
As mentioned earlier, the transmitted component of the initial wave-packet propagates into the respective dielectrics with phase velocity
\[v_{ph}=\frac{c}{n(x,y)} \tag{61}\]
because there is no dispersion in the media. However, the wave crests and the envelope along the \(y\)-direction possess different phase velocities during their propagation in the dielectric, resulting in a lag between the interior and outer wave components. Ultimately, both dielectrics exhibit complex diffraction patterns outside the dielectric as well as bounded eigenmodes within it. This behavior is clearly depicted in Fig.3.
As the bounded modes within the dielectric approach the vacuum boundary, the rapid change in the cylindrical dielectric object produces a secondary internal reflection that propagates back inside the cylinder. For the cone case, the slowly varying transition between the different regions contributes a negligible secondary reflection. Those secondary reflections, along with the secondary propagating wave-fronts in the vacuum region are presented in Fig.4.
The back and forth succession from Fig.4 to Fig.2 through higher order internal reflections in the cylindrical dielectric results in a radiating temporal pattern. It should be remembered that QLA is an initial value solver giving the temporal (and transient) evolution of the scattered field without the introduction of any internal boundary conditions to handle vacuum-dielectric effects. Even though the simulations are for non-dispersive dielectrics, they reveal that the QLA accurately captures the interconnection of the transient behavior of waves with the inhomogeneity profile. Extending these considerations to an inhomogeneous fusion plasma will provide insights into the temporal evolution of the electromagnetic fields and the species current densities (see the state vector \(\mathbf{\psi}\) in Eq.(15)) that could potentially affect the heating efficiency and the energy transfer.

Figure 2: QLA scattering simulation of the \(z\)-component of an electromagnetic pulse, \(E_{z0}\), off a dielectric inhomogeneity in the shape of a cone (Fig.2a), versus a cylindrical dielectric (Fig.2b). The perspective is looking down the z-axis onto the x-y plane. The full-wave simulation for the wave-cylinder encounter reveals strong initial reflection phenomena whereas the reflection is very weak in the cone case. This differentiation in the wave behavior is directly related to the steepness of the inhomogeneity gradient. The weak reflected wave from the cone corresponds to an asymptotic WKB type of solution.
## IV Conclusions
The contributions of this paper are: (1) the analytical formulation of Maxwell equations in a magnetized plasma, Eq.(15), as a Schrodinger equation, and (2) a fully unitary QLA representation of this augmented Schrodinger equation, indicating polynomial scaling for implementation on a quantum computer, which can also be tested on present-day classical computers.
The augmented Schrodinger representation has advantages over the standard Helmholtz formulation [37; 38] both in the regularity of the spatial derivative of the fields as well as in the construction of formal solutions. The Hermitian structure of the full operator \(\hat{D}\) permits a normal mode decomposition of the solution in terms of the eigenfunctions \(\mathbf{\phi}(\mathbf{r},\lambda)\) of the \(\hat{D}\) operator, with \(\lambda\) being the respective eigenvalues. This is very important in cases where the inhomogeneous plasma profile does not possess a simple symmetry. In addition, the unitary evolution of Eq.(15) explicitly preserves an extended electromagnetic energy integral (16) beyond the usual Landau and Brillouin approximations [39].

Figure 4: The absence of internal reflections from the conical dielectric (Fig.4a) versus the internal reflections from the cylindrical dielectric (Fig.4b). Similar to the behavior of the primary reflections in Fig.2, the inhomogeneity gradient of the dielectrics plays a pivotal role in the strength of the internal reflection.

Figure 3: The propagation of the transmitted wave within the conical and cylindrical dielectrics. The wave propagation is now distorted because the initial wave crests along the \(y\)-axis diffract on the dielectric boundary. In both cases, Figs.3a and 3b, transmitted bounded modes are observed towards the exit point to vacuum.
While various quantum simulation schemes can be devised for the solution of the augmented Schrodinger equation (15) for wave propagation in a cold magnetized plasma, we are currently pursuing a QLA scheme by expressing the energy-preserving evolution as the unitary product formula (50). This decomposition is deemed suitable for the construction of a fully unitary QLA, which no longer requires the introduction of potential operators and their subsequent quantum encoding. Our findings support that the produced QLA sequence of unitary collision-streaming operators could be implemented on a quantum computer with polynomial scaling with respect to the number of qubits \(n_{p}=\log_{2}N\) required to describe the \(N\) lattice sites.
To benchmark the capabilities of QLA we present here the two-dimensional scattering of a wave-packet from either a cylindrical or a conical scalar, inhomogeneous, non-dispersive dielectric. For the conic dielectric there are weak spatial gradients in the layer connecting the vacuum to the dielectric. As a result, there is negligible reflection at the first encounter of the wave packet with the dielectric, and following the interaction with the steep cone apex there are no internal reflections within the dielectric. This results in a simple scattered field from the cone. However, for the cylindrical dielectric, the sharp (but continuous) gradient in the layer connecting the dielectric to the vacuum yields an immediate reflected wave front from the first interaction of the wave packet with the dielectric, followed by subsequent reflection/transmission of the wave packet at the dielectric-vacuum layer. This leads to quite complex interference in the scattered fields.
We are now exploring QLA simulations of the wave propagation in a cold magnetized (dispersive) plasma, exploiting the QLA operator splitting approach. While only the \(x\)-dependent fully unitary QLA is presented here, the use of the Alternating Direction Implicit (ADI) integration scheme will permit extensions to fully 3D simulations. Moreover, the fact that QLA is ideally parallelized on classical supercomputers together with the polynomial scaling of its quantum implementation yields a pathway for high fidelity simulation results and possibly a hybrid classical-quantum computation model.
###### Acknowledgements.
This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This research was partially supported by Department of Energy grants DE-SC0021647, DE-FG0291ER-54109, DE-SC0021651, DE-SC0021857, and DE-SC0021653. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award FES-ERCAP0020430.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
**Efstratios Koukoutsis**: Conceptualization (lead); Formal analysis (lead); Methodology (equal); Investigation (equal); Writing - original draft (lead); Writing - review & editing (equal). **Kyriakos Hizanidis**: Methodology (equal); Supervision (supporting); Investigation (supporting); Writing - review & editing (equal); Funding acquisition (equal). **George Vahala**: Conceptualization - QLA (lead); Methodology (equal); Investigation - QLA (lead); Visualization (equal); Writing - review & editing (equal); Funding acquisition (equal). **Min Soe**: Software - QLA MPI & Graphics routines (lead); Visualization (equal); Funding acquisition (equal). **Linda Vahala**: Data curation - data analysis (lead), Writing - review & editing (equal); Funding acquisition (equal). **Abhay K. Ram**: Methodology (equal); Investigation - physics (equal); Writing - review & editing (equal); Funding acquisition (equal).
## Data Availability
The data that support the findings of this research are available from the corresponding author upon reasonable request.
|
2307.16358 | Moreau-Yoshida Variational Transport: A General Framework For Solving Regularized Distributional Optimization Problems | We consider a general optimization problem of minimizing a composite objective functional defined over a class of probability distributions. The objective is composed of two functionals: one is assumed to possess the variational representation and the other is expressed in terms of the expectation operator of a possibly nonsmooth convex regularizer function. Such a regularized distributional optimization problem widely appears in machine learning and statistics, such as proximal Monte-Carlo sampling, Bayesian inference and generative modeling, for regularized estimation and generation. We propose a novel method, dubbed as Moreau-Yoshida Variational Transport (MYVT), for solving the regularized distributional optimization problem. First, as the name suggests, our method employs the Moreau-Yoshida envelope for a smooth approximation of the nonsmooth function in the objective. Second, we reformulate the approximate problem as a concave-convex saddle point problem by leveraging the variational representation, and then develop an efficient primal-dual algorithm to approximate the saddle point. Furthermore, we provide theoretical analyses and report experimental results to demonstrate the effectiveness of the proposed method. | Dai Hai Nguyen, Tetsuya Sakurai | 2023-07-31T01:14:42Z | http://arxiv.org/abs/2307.16358v2 | # Moreau-Yoshida Variational Transport: A General Framework For Solving Regularized Distributional Optimization Problems
###### Abstract
We consider a general optimization problem of minimizing a composite objective functional defined over a class of probability distributions. The objective is composed of two functionals: one is assumed to possess the variational representation and the other is expressed in terms of the expectation operator of a possibly nonsmooth convex regularizer function. Such a regularized distributional optimization problem widely appears in machine learning and statistics, such as proximal Monte-Carlo sampling, Bayesian inference and generative modeling, for regularized estimation and generation.
We propose a novel method, dubbed as **M**oreau-**Y**oshida **V**ariational **T**ransport (**MYVT**), for solving the regularized distributional optimization problem. First, as the name suggests, our method employs the Moreau-Yoshida envelope for a smooth approximation of the nonsmooth function in the objective. Second, we reformulate the approximate problem as a concave-convex saddle point problem by leveraging the variational representation, and then develop an efficient primal-dual algorithm to approximate the saddle point. Furthermore, we provide theoretical analyses and report experimental results to demonstrate the effectiveness of the proposed method.
## 1 Introduction
Many tasks in machine learning and computational statistics are posed as distributional optimization problems, where the goal is to optimize a functional \(F:\mathcal{P}_{2}(\mathcal{X})\rightarrow\mathbb{R}\) of probability distributions: \(\min_{q\in\mathcal{P}_{2}(\mathcal{X})}F(q)\), where \(\mathcal{P}_{2}(\mathcal{X})\) denotes the set of probability distributions defined on the domain \(\mathcal{X}\) (\(\subset\mathbb{R}^{d}\)) with finite second-order moment. Examples of this formulation include many well-known problems such as Bayesian inference (e.g. variational autoencoder [6]) and synthetic sample generation (e.g. generative adversarial networks [5]). Basically, these models aim to approximate a target distribution by generating samples (also called particles) in a manner that minimizes the dissimilarity between the empirical probability distribution obtained from the samples and the target distribution. The dissimilarity function typically involves measures such as the Kullback-Leibler (KL) divergence, the Jensen-Shannon (JS) divergence or the Wasserstein distance from optimal transport [18].
In this paper, we consider a general class of distributional optimization problems with a composite objective functional of the following form:
\[\min_{q\in\mathcal{P}_{2}(\mathcal{X})}\left\{G(q)\coloneqq F(q)+\alpha \mathbb{E}_{\mathbf{x}\sim q}\left[g(\mathbf{x})\right]\right\} \tag{1}\]
where \(\mathbb{E}\) denotes the expectation operator, \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}\) denotes a possibly nonsmooth convex regularizer function for \(\mathbf{x}\in\mathcal{X}\), and \(\alpha\) is a constant. Two choices of the function \(g\) are the \(l_{1}\)-norm \(g(\mathbf{x})=\|\mathbf{x}\|_{1}\), which encourages sparse samples (with many elements equal to 0), and the one-dimensional total variation semi-norm \(g(\mathbf{x})=\sum_{i=2}^{d}|\mathbf{x}_{i}-\mathbf{x}_{i-1}|\), which encourages sparsity of the differences between nearby elements (i.e. local constancy of elements). Solving problem (1) could be challenging due to the non-smoothness of the function \(g\).
An example of the above formulation is the proximal Markov Chain Monte Carlo (MCMC) sampling [12, 3], which exploits convex analysis to obtain samples efficiently from log-concave density of the following form: \(\text{exp}(-U(\textbf{x}))\), where \(U(\textbf{x})=f(\textbf{x})+g(\textbf{x})\), \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a smooth convex function while \(g\) is the nonsmooth convex function. By employing KL divergence to quantify the difference between the empirical probability distribution \(q\) obtained from the current samples and the target distribution, the proximal MCMC sampling can be regarded as a specific instance of problem (1).
We are particularly focused on devising an efficient algorithm for solving problem (1) when the functional \(F\) has a variational represenation in the following form:
\[F(q)=\sup_{h\in\mathcal{H}}\left\{\mathbb{E}_{\textbf{x}\sim q}\left[h(\textbf {x})\right]-F^{*}(h)\right\} \tag{2}\]
where \(\mathcal{H}\) is a class of square-integrable functions on \(\mathcal{X}\) with respect to the Lebesgue measure and \(F^{*}:\mathcal{H}\rightarrow\mathbb{R}\) is a convex conjugate functional of \(F\). For instance, when \(F\) is the KL divergence, its variational representation is defined as:
\[KL(q,\pi)=\sup_{h\in\mathcal{H}}\left\{\mathbb{E}_{\textbf{x}\sim q}\left[h( \textbf{x})\right]-\log\mathbb{E}_{\textbf{x}\sim\pi}\left[e^{h(\textbf{x}) }\right]\right\} \tag{3}\]
where \(\pi\) denotes the target distribution. The solution to this problem can be estimated using samples from \(q\) and \(\pi\). In general, directly optimizing \(F\) (problem (1) with \(\alpha=0\)) can be achieved through an iterative algorithm called Wasserstein Gradient Descent [16]. This algorithm involves two steps in each iteration: 1) computing Wasserstein gradient of \(F\) based on the current probability distribution and 2) performing an exponential mapping on \(\mathcal{P}_{2}(\mathcal{X})\), the space of probability distributions. Variational Transport (VT) [7] is introduced as a method to minimize \(F\) by approximating a probability distribution using a set of particles. It leverages the variational representation of \(F\) to update particles. Specifically, VT solves problem (2) by utilizing particles to approximate the Wasserstein gradient of \(F\). The obtained solution is then used to push each particle in a specified direction. This process can be considered as a forward discretization of the Wasserstein gradient flow.
However, when \(\mathcal{X}\) is a constrained domain, VT may push the particles outside of the domain when following the direction given by the solution of (2). To address this issue, MirrorVT [8] is introduced. The main idea behind mirrorVT is to map the particles from constrained domain (primal space) to an unconstrained domain (dual space), induced by a mirror map. Then, an approximate Wasserstein gradient descent is performed on the space of probability distributions defined over the dual space to update each particle, similar to VT. At the end of each iteration, the particles are mapped back to the original constrained domain.
**Contributions**. We propose a novel method, named **M**oreau-**Y**oshida **V**ariational **T**ransport (**MYVT**)1, for solving problem (1). Our method tackles the non-smoothness of \(g\) in the regularization term by employing the Moreau-Yoshida envelope [14] to obtain a smooth approximation of \(g\). By leveraging the variational representation of \(F\), we reformulate the original problem as a concave-convex saddle point problem and develop an efficient primal-dual algorithm to approximate the saddle point. In contrast to the particle-based methods [7, 8], MYVT employs a neural network to represent the probability distribution \(q\) and generate the particles. The network parameters are trained to optimize the objective of problem (1). This approach addresses the issue of limited approximation capacity and the need for significant memory resources to store a large number of particles in particle-based methods. Furthermore, we provide theoretical analyses and conduct experiments on synthetic datasets to verify the effectiveness of the proposed MYVT method.
Footnote 1: The code can be found at [https://github.com/haidinguyen0909/MYVT](https://github.com/haidinguyen0909/MYVT) after the acceptance of the paper.
## 2 Related Works
Our work is related to the research on methods for sampling distributions with nonsmooth convex composite potentials, which involve the sum of a continuously differentiable function and a possibly nonsmooth function. In particular, [12, 3] introduced the proximal MCMC algorithm for quantifying uncertainty in Bayesian imaging application. They focused on the total-variation semi-norm [15] and \(l_{1}\)-norm [11] which encourage the generation of parameter estimates with specific structural properties. To handle the non-smoothness of the penalties, they utilized the Moreau-Yoshida envelope to obtain a smooth approximation to the total-variation semi-norm and \(l_{1}\)-norm. The Langevin algorithm was then employed to generate samples from the smoothed posterior distribution.
Furthermore, our method builds on VT [7], which leverages the optimal transport framework and variational representation (2) of \(F\) to solve problem (1) without regularization via particle approximation. In each iteration, VT estimated the
Wasserstein gradient of \(F\) by solving a variational maximization problem associated with \(F\) and the current particles. It then performed Wasserstein gradient descent by moving particles in a direction specified by the estimated Wasserstein gradient. The advantage of VT was its ability to optimize \(F\) beyond commonly targeted KL divergence in MCMC sampling methods.
When \(g\) represents the indicator function of a set, problem (1) transforms into a constrained distributional optimization problem, where the probability distributions are defined over a constrained domain. Applying VT to this problem may lead to particles moving outside the constrained domain. MirrorVT [8] addressed this issue by mapping particles from the constrained domain to the unconstrained one through a mirror map, which is inspired by the Mirror Descent algorithm originally designed for constrained convex optimization [1].
## 3 Preliminaries
We first review notions from convex analysis essential for our proposed method, specifically Moreau-Yoshida envelopes and proximal operators. Then, we review some concepts in optimal transport and Wasserstein space, and also state some related properties. We lastly summarize VT.
### Moreau-Yoshida Envelopes and Proximal Operators
**Definition 1**.: (Moreau-Yoshida envelope) Given a convex function \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}\) and a positive scaling parameter \(\lambda\), the Moreau-Yoshida envelope of \(g\), denoted by \(g^{\lambda}\), is given by:
\[g^{\lambda}(\mathbf{x})=\inf_{\mathbf{y}\in\mathbb{R}^{d}}\left\{g(\mathbf{y })+\frac{1}{2\lambda}\|\mathbf{x}-\mathbf{y}\|^{2}\right\} \tag{4}\]
The infimum of (4) is always uniquely attained and the minimizer defines the proximal map of \(g\):
**Definition 2**.: (Proximal Map) The proximal map of \(g\), denoted by \(\texttt{prox}_{g}^{\lambda}\), is given by:
\[\texttt{prox}_{g}^{\lambda}(\mathbf{x})=\operatorname*{arg\,min}_{\mathbf{y} \in\mathbb{R}^{d}}\left\{g(\mathbf{y})+\frac{1}{2\lambda}\|\mathbf{x}- \mathbf{y}\|^{2}\right\} \tag{5}\]
The approximation \(g^{\lambda}\) inherits the convexity of \(g\) and is always continuously differentiable. In particular, \(g^{\lambda}\) is gradient Lipschitz: for \(x,y\in\mathbb{R}^{d}\),
\[\|\nabla g^{\lambda}(\mathbf{x})-\nabla g^{\lambda}(\mathbf{y})\|\leq\frac{1 }{\lambda}\|\mathbf{x}-\mathbf{y}\|\]
where the gradient \(\nabla g^{\lambda}(\mathbf{x})\) is given by:
\[\nabla g^{\lambda}(\mathbf{x})=\frac{1}{\lambda}\left(\mathbf{x}-\texttt{ prox}_{g}^{\lambda}(\mathbf{x})\right) \tag{6}\]
Most importantly, \(g^{\lambda}(\mathbf{x})\) converges pointwise to \(g(\mathbf{x})\) as \(\lambda\) tends to zero [14].
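As a concrete example (our own sketch, not the authors' code), for \(g(\mathbf{x})=\|\mathbf{x}\|_{1}\) the proximal map is the soft-thresholding operator, so Eqs.(4)-(6) can be evaluated in closed form; the printout illustrates the pointwise convergence \(g^{\lambda}\to g\) as \(\lambda\to 0\). The test vector is a placeholder.

```
import numpy as np

def prox_l1(x, lam):
    # Proximal map of g(x) = ||x||_1 with parameter lam: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def envelope_l1(x, lam):
    # Moreau-Yoshida envelope of the l1-norm, Eq. (4), evaluated via the prox.
    p = prox_l1(x, lam)
    return np.abs(p).sum() + np.linalg.norm(x - p) ** 2 / (2.0 * lam)

def grad_envelope_l1(x, lam):
    # Gradient of the envelope, Eq. (6): (x - prox(x)) / lam.
    return (x - prox_l1(x, lam)) / lam

x = np.array([1.5, -0.2, 0.0, 3.0])
for lam in (1.0, 0.1, 0.01):
    print(lam, envelope_l1(x, lam), np.abs(x).sum())   # envelope -> g(x) as lam -> 0
```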
### Optimal transport and Wasserstein space
Optimal Transport [18] has received much attention in the machine learning community and has been shown to be an effective tool for comparing probability distributions in many applications [13, 10, 9]. Formally, given a measurable map \(T:\mathcal{X}\rightarrow\mathcal{X}\) and \(p\in\mathcal{P}_{2}(\mathcal{X})\), we say that \(q\) is the _push-forward measure_ of \(p\) under \(T\), denoted by \(q=T_{\sharp}p\), if for every Borel set \(E\subseteq\mathcal{X}\), \(q(E)=p(T^{-1}(E))\). For any \(p,q\in\mathcal{P}_{2}(\mathcal{X})\), the \(2\)-Wasserstein distance \(\mathcal{W}_{2}(p,q)\) is defined as:
\[\mathcal{W}_{2}^{2}(p,q)=\inf_{\pi\in\Pi(p,q)}\int_{\mathcal{X} \times\mathcal{X}}\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}^{2}d\pi(\mathbf{x}, \mathbf{x}^{\prime})\]
where \(\Pi(p,q)\) is all probability measures on \(\mathcal{X}\times\mathcal{X}\) whose two marginals are equal to \(p\) and \(q\), \(\|\cdot\|_{2}\) denotes the Euclidean norm. It is known that the metric space \((\mathcal{P}_{2}(\mathcal{X}),\mathcal{W}_{2})\), also known as Wasserstein space, is an infinite-dimensional geodesic space [18].
**Definition 3**.: (The first variation of functional) the first variation of \(F\) evaluated at \(p\), denoted by \(\partial F(p)/\partial p:\mathcal{X}\rightarrow\mathbb{R}\), is given as follows:
\[\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(F(p+\epsilon\chi)-F(p) \right)=\int_{\mathcal{X}}\frac{\partial F(p)}{\partial p}(\mathbf{x})\chi( \mathbf{x})\mathrm{d}\mathbf{x}\]
for all \(\chi=q-p\), where \(q\in\mathcal{P}_{2}(\mathcal{X})\).
With mild regularity assumptions, the Wasserstein gradient of \(F\), denoted by \(\mathtt{grad}F\), relates to the gradient of the first variation of \(F\) via the following continuity equation:
\[\mathtt{grad}F(p)(\mathbf{x})=-\mathtt{div}\left(p(\mathbf{x})\nabla\frac{ \partial F(p)}{\partial p}(\mathbf{x})\right) \tag{7}\]
for all \(\mathbf{x}\in\mathcal{X}\), where \(\mathtt{div}\) denotes the divergence operator.
**Definition 4**.: (Geodesically strong convexity) If \(F\) is geodesically \(\mu\)-strongly convex with respect to the \(2\)-Wasserstein distance, then for \(\forall p\), \(p^{\prime}\in\mathcal{P}_{2}(\mathcal{X})\), we have:
\[F(p^{\prime})\geq F(p)+\langle\mathtt{grad}F(p),\mathtt{Exp}_{p}^{-1}(p^{ \prime})\rangle_{p}+\frac{\mu}{2}\cdot\mathcal{W}_{2}^{2}(p^{\prime},p)\]
where \(\mathtt{Exp}_{p}\) denotes the exponential mapping, which specifies how to move \(p\) along a tangent vector on \(\mathcal{P}_{2}(\mathcal{X})\) and \(\mathtt{Exp}_{p}^{-1}\) denotes its inversion mapping, which maps a point on \(\mathcal{P}_{2}(\mathcal{X})\) to a tangent vector. We refer the readers to [16] for more details.
### Variational Transport
To optimize \(F\) defined on the unconstrained domain, we can utilize functional gradient descent with respect to the geodesic distance. This approach involves constructing a sequence of probability distributions \(\left\{q_{t}\right\}_{t\geq 1}\) in \(\mathcal{P}_{2}(\mathcal{X})\) as follows:
\[q_{t+1}\leftarrow\mathtt{Exp}_{q_{t}}[-\eta_{t}\cdot\mathtt{grad}F(q_{t})] \tag{8}\]
where \(\eta_{t}\) is the step size. The VT algorithm [7] is introduced to solve the distributional optimization problem by approximating \(q_{t}\) with an empirical measure \(\tilde{q}_{t}\) obtained from \(N\) particles \(\left\{\mathbf{x}_{t,i}\right\}_{i\in[N]}\). VT assumes that \(F\) can be expressed in the variational representation (2). One advantage of this formulation is that the Wasserstein gradient can be computed based on the solution \(h_{t}^{*}\) of problem (2), which can be estimated using samples drawn from \(q_{t}\). Specifically, it is shown that \(h_{t}^{*}=\partial F/\partial q_{t}\), representing the first variation of \(F\) (refer to Proposition 3.1 in [7]). Furthermore, under the assumption that \(\nabla h_{t}^{*}\) is \(h\)-Lipschitz continuous, it is demonstrated that for any \(\eta_{t}\in[0,1/h)\), the exponential mapping in (8) is equivalent to the push-forward mapping defined by \(h_{t}^{*}\): for \(\mathbf{x}_{t,i}\sim q_{t}\):
\[\mathbf{x}_{t+1,i}\leftarrow\mathtt{Exp}_{\mathbf{x}_{t,i}}[-\eta_{t}\cdot \nabla h_{t}^{*}(\mathbf{x}_{t,i})] \tag{9}\]
where \(\mathbf{x}_{t+1,i}\) is the updated particle drawn from \(q_{t+1}\), and \(\mathtt{Exp}_{\mathbf{x}}[\eta\cdot\nabla u]\) denotes the transportation map which sends \(\mathbf{x}\in\mathcal{X}\) to the point \(\mathbf{x}+\eta\cdot\nabla u\in\mathcal{X}\) (see Proposition 3.2 in [7]). In addition, VT estimates the solution \(h_{t}^{*}\) by solving problem (2) using finite samples drawn from \(q_{t}\). This is achieved through stochastic gradient descent on the domain \(\mathcal{X}\):
\[\tilde{h_{t}^{*}}=\operatorname*{arg\,max}_{h\in\tilde{\mathcal{H}}}\left\{ \frac{1}{N}\sum_{i=1}^{N}h(\mathbf{x}_{t,i})-F^{*}(h)\right\} \tag{10}\]
where \(\tilde{\mathcal{H}}\) is a function class, which can be specified to be the following class of deep neural networks:
\[\tilde{\mathcal{H}}=\left\{\tilde{h}|\tilde{h}(\mathbf{x})=\frac{1}{\sqrt{n_{ w}}}\sum_{i=1}^{n_{w}}b_{i}\sigma([\mathbf{w}]_{i}^{T}\mathbf{x})\right\} \tag{11}\]
where \(n_{w}\) is the width of the neural networks, \([\mathbf{w}]_{i}\in\mathbb{R}^{d}\), \(\mathbf{w}=([\mathbf{w}]_{1},...,[\mathbf{w}]_{n_{w}})^{T}\in\mathbb{R}^{n_{w }\times d}\) is the input weight, \(\sigma\) denotes a smooth activation function, and \(b_{i}\in\{-1,1\}\). In each iteration, the weights \(\mathbf{w}\) is guaranteed to lie in the \(l_{2}\)-ball centered at the initial weights \(\mathbf{w}(0)\) with radius \(r_{h}\) defined as \(\mathcal{B}^{0}(r_{h})=\{\mathbf{w}:\|\mathbf{w}-\mathbf{w}(0)\|_{2}\leq r_{h}\}\). This choice of neural network class facilitates the analysis of the gradient error induced by the difference between \(h_{t}^{*}\) and \(\tilde{h_{t}^{*}}\)[7].
## 4 Moreau-Yoshida Variational Transport
In this section, we present our method, MYVT, for solving problem (1).
### Moreau-Yoshida approximation of problem (1)
To address the non-smoothness of function \(g\), our approach is to replace \(g\) with its envelope \(g^{\lambda}\), which leads to the following smooth approximate distributional optimization problem:
\[\min_{q\in P_{2}(\mathcal{X})}\left\{G^{\lambda}(q)\coloneqq F(q)+\alpha \mathbb{E}_{\mathbf{x}\sim q}[g^{\lambda}(\mathbf{x})]\right\} \tag{12}\]
We denote \(\pi\) and \(\pi^{\lambda}\) as the optimal solutions of problems (1) and (12), respectively. The following theorem establishes a connection between the two solutions.
**Theorem 1**.: _Given that \(F(q)\) is geodesically \(\mu\)-strongly convex (\(\mu\)>0), the solution \(\pi^{\lambda}\) converges to \(\pi\) as \(\lambda\) goes to 0 with respect to the 2-Wasserstein distance, i.e._
\[\lim_{\lambda\to 0}\mathcal{W}_{2}^{2}(\pi^{\lambda},\pi)=0 \tag{13}\]
The proof of Theorem 1 is given in Appendix A.1.
### Primal-Dual Approach to problem (12)
The objective in problem (12) still poses a challenge as it covers the entire space of probability distributions, making it generally computationally intractable. To tackle this problem, a common approach is to employ specific parameterization forms for the distribution \(q\) and optimize its parameters, or to approximate it using a set of particles, as discussed in [7, 8]. However, these approaches often have limitations in terms of approximation ability and the memory resources they require. Taking inspiration from [20], we propose an alternative method of implicitly representing \(q\) using a neural network. In this approach, we generate \(\mathbf{x}_{\epsilon}\sim q\) by passing \(\epsilon\) drawn from a simple distribution \(p_{\epsilon}\) through a network, i.e. \(\mathbf{x}_{\epsilon}=V(\epsilon,\theta)\), where \(\theta\) denotes the network parameters, which are iteratively adjusted to minimize the objective in problem (12). In the following theorem, we present an equivalent primal-dual view of problem (12) by utilizing the variational representation of \(F\):
**Theorem 2**.: _We can formulate problem (12) equivalently as:_
\[\max_{h\in\mathcal{H}}\mathbb{E}_{\epsilon\sim p_{\epsilon}}\left[\min_{ \mathbf{x}_{\epsilon}\in\mathcal{X}}\left\{h(\mathbf{x}_{\epsilon})+\alpha g ^{\lambda}(\mathbf{x}_{\epsilon})\right\}\right]-F^{*}(h) \tag{14}\]
Figure 1: Comparison of MYVT(\(\alpha=0.1\)) and VT in terms of MSE and sparsity (average \(l_{1}\)-norm of generated samples). (a) MSE of MYVT and VT over 2000 iterations, (b) average \(l_{1}\)-norm over 2000 iterations, (c) three example samples generated by VT, (d) three example samples generated by MYVT.
The primal-dual formulation (14) is derived by applying the variational representation of \(F\) and interchangeability principle introduced in [2]. The detailed proof of Theorem 2 can be found in Appendix A.2.
Based on the findings of Theorem 2, we can transition to handling the distribution \(q\) through each local variable. Specifically, given \(\epsilon\sim p_{\epsilon}\) and a fixed \(h\), we can solve the following local optimization problem for \(\textbf{x}_{\epsilon}\):
\[x_{\epsilon}^{*}=\operatorname*{arg\,min}_{x_{\epsilon}\in\mathcal{X}}\left\{ h(x_{\epsilon})+\alpha g^{\lambda}(x_{\epsilon})\right\} \tag{15}\]
To efficiently solve problem (15), we can take advantage of the advancements in optimization literature. In this work, we will focus on utilizing gradient descent for its simplicity and effectiveness. Specifically, in each iteration of our method, we draw a batch of random inputs \(\{\epsilon_{i}\}_{i=1}^{m}\), where \(m\) is mini-batch size. We then calculate the corresponding outputs for these inputs, which are subsequently used as the initial particles:
\[\textbf{x}_{i}^{(0)}=V(\epsilon_{i},\theta)\]
We perform \(T\) steps of gradient updates to optimize problem (15) for each particle, i.e., for \(t=0,...,T-1\):
\[\Delta_{i}^{(t)}=\nabla_{\textbf{x}}h(\textbf{x}_{i}^{(t)})+ \frac{\alpha}{\lambda}(\textbf{x}_{i}^{(t)}-\texttt{prox}_{g}^{\lambda}( \textbf{x}_{i}^{(t)})) \tag{16}\] \[\textbf{x}_{i}^{(t+1)}=\textbf{x}_{i}^{(t)}-\eta\Delta_{i}^{(t)}\]
The particles obtained from the last update, denoted as \(\textbf{x}_{i}^{(T)}\) (\(i=1,...,m\)), approximate the solutions of the local optimization problems and are utilized to estimate \(h\) in problem (14). Furthermore, the particles undergo updates over multiple steps to converge towards the minimizers of local optimization problems. Therefore, the parameters \(\theta\) of \(V\) need to be updated such that it outputs \(\{\textbf{x}_{i}^{(t+1)}\}_{i=1}^{m}\) instead of \(\{\textbf{x}_{i}^{(t)}\}_{i=1}^{m}\). In other words, we aim to update \(\theta\) as follows:
\[\theta^{(t+1)}\leftarrow\operatorname*{arg\,min}_{\theta}\sum_{i=1}^{m}\lVert V (\epsilon_{i},\theta)-\textbf{x}_{i}^{(t+1)}\rVert^{2} \tag{17}\]
As suggested in [4], we can perform only one step of gradient descent as follows:
\[\theta^{(t+1)}\leftarrow\theta^{(t)}-\eta\sum_{i=1}^{m}\nabla_{\theta}V( \epsilon_{i},\theta^{(t)})\Delta_{i}^{(t)} \tag{18}\]
While the update (18) is an approximation of (17), it offers computational efficiency and has shown promising performance in our experiments. Additionally, the following theorem establishes a connection between the optimization of the local variables and the gradient flow for optimizing the regularized functional in (12). This connection holds in the limit case when utilizing a specific form of \(h(\textbf{x}_{\epsilon})\):
**Theorem 3**.: _For a continuous time \(t=\eta T\) and step size \(\eta\to 0\), the distribution of particles \(\textbf{x}_{\epsilon}^{(t)}\), denoted as \(q_{t}\), follows the following Fokker-Planck equation:_
\[\frac{\partial q_{t}}{\partial t}=-\texttt{div}\left(q_{t}v_{t}\right) \tag{19}\]
_where \(v_{t}(\textbf{x})\coloneqq\nabla h_{t}^{*}(\textbf{x})+\frac{\alpha}{\lambda }\left(\textbf{x}-\texttt{prox}_{g}^{\lambda}(\textbf{x})\right)\), for all \(\textbf{x}\in\mathcal{X}\), and \(h_{t}^{*}=\operatorname*{arg\,max}_{h\in\mathcal{H}}\left\{\mathbb{E}_{ \textbf{x}\sim q_{t}}\left[h(\textbf{x})\right]-F^{*}(h)\right\}=\frac{\partial F }{\partial q}\left(q_{t}\right)\)._
_This is the Wasserstein gradient flow of \(G^{\lambda}(q)\) in the space of probability distributions with 2-Wasserstein metric. Suppose \(F\) is geodesically \(\mu\)-strongly convex, the convergence of \(G^{\lambda}(q_{t})\) is as follows, for \(t\geq 0\):_
\[G^{\lambda}(q_{t})-G^{\lambda}(\pi^{\lambda})\leq\exp(-2\mu t)(G^{\lambda}(q_ {0})-G^{\lambda}(\pi^{\lambda})) \tag{20}\]
The proof of Theorem 3 is given in Appendix A.3. We observe that, in the limit case, when \(F\) is geodesically strongly convex, \(q_{t}\) converges exponentially to the minimizer of \(G^{\lambda}(q)\) as \(t\rightarrow\infty\).
### Practical Implementation
We are now ready to present our algorithm for solving problem (12). The components of our method can be easily parameterized by deep neural networks, and their parameters can be estimated by stochastic gradient descent. For instance, our method requires the initialization \(\textbf{x}_{i}^{(0)}\), which is the output of the network \(V(\epsilon_{i},\theta)\), where \(\theta\) denotes the parameters. The function \(h\) is parameterized by another neural network with parameters \(W\). Taking these parameters into account, we obtain the proposed algorithm illustrated in Algorithm 1. We perform \(K\) iterations. For each
iteration, initial particles are obtained by drawing a mini-batch of \(m\) random inputs and calculating their corresponding outputs through \(V\), then we perform \(T\) steps of updates for each particle. The particles obtained from the last steps are used for estimating \(h\) by performing \(T^{\prime}\) steps of updates to optimize its parameters \(W\).
```
Input: Functional \(F\), mini-batch size \(m\), number of iterations \(K\), number of steps \(T\) for the particle updates (16), number of steps \(T^{\prime}\) to optimize \(h\) in (14), step size \(\eta\), and \(\lambda\), \(\alpha\).
Output: Networks \(V(\cdot,\theta)\), \(h_{W}(\cdot)\)

Randomly initialize \(\theta\), \(W\) (parameters of \(V\) and \(h\))
\(k\gets 0\)
while \(k<K\) do
    sample mini-batch \(\left\{\epsilon_{i}\right\}_{i=1}^{m}\sim p_{\epsilon}\)
    compute \(\mathbf{x}_{i}^{(0)}=V(\epsilon_{i},\theta)\), for \(i=1,...,m\)
    \(t\gets 0\)
    while \(t<T\) do
        \(\Delta_{i}^{(t)}=\nabla_{x}h(\mathbf{x}_{i}^{(t)})+\frac{\alpha}{\lambda}(\mathbf{x}_{i}^{(t)}-\texttt{prox}_{g}^{\lambda}(\mathbf{x}_{i}^{(t)}))\)
        update \(\mathbf{x}_{i}^{(t+1)}=\mathbf{x}_{i}^{(t)}-\eta\Delta_{i}^{(t)}\), for \(i=1,...,m\)
        update \(\theta\leftarrow\theta-\eta\sum_{i=1}^{m}\nabla_{\theta}V(\epsilon_{i},\theta)\Delta_{i}^{(t)}\)
        \(t\gets t+1\)
    end while
    \(\mathbf{x}_{i}^{*}\leftarrow\mathbf{x}_{i}^{(T)}\), for \(i=1,...,m\)
    \(t^{\prime}\gets 0\)
    while \(t^{\prime}<T^{\prime}\) do
        update \(W\gets W+\eta\left(\frac{1}{m}\sum_{i=1}^{m}\nabla_{W}h(\mathbf{x}_{i}^{*})-\nabla_{W}F^{*}(h_{W})\right)\)
        \(t^{\prime}\gets t^{\prime}+1\)
    end while
    \(k\gets k+1\)
end while
```
**Algorithm 1** Moreau-Yoshida variational transport (MYVT)
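To make Algorithm 1 concrete, the following PyTorch sketch (our own illustration, not the authors' released implementation) runs one outer iteration with \(g\) taken to be the \(l_{1}\)-norm and \(F\) the KL divergence in its variational form (3), estimated with empirical samples from the target; the network sizes, the stand-in target samples, and all hyper-parameters are placeholders.

```
import torch
import torch.nn as nn

torch.manual_seed(0)
d, m, T, Tprime = 2, 64, 5, 2            # dimension, mini-batch size, inner steps
eta, lam, alpha = 1e-3, 1e-4, 0.1        # step size, envelope parameter, regularization weight

V = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))   # generator network
h = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))   # dual network h_W
opt_V = torch.optim.SGD(V.parameters(), lr=eta)
opt_h = torch.optim.SGD(h.parameters(), lr=eta)

pi_samples = 0.5 + 0.1 * torch.randn(512, d)                       # stand-in target samples
prox_l1 = lambda x, t: torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)

# Particle and generator updates (Eqs. (16) and (18)).
eps = torch.randn(m, d)
x = V(eps).detach()
for _ in range(T):
    xr = x.clone().requires_grad_(True)
    grad_h = torch.autograd.grad(h(xr).sum(), xr)[0]
    delta = grad_h + (alpha / lam) * (x - prox_l1(x, lam))
    x = x - eta * delta
    opt_V.zero_grad()
    (V(eps) * delta.detach()).sum().backward()     # gradient of sum_i V(eps_i, theta) . Delta_i
    opt_V.step()

# Dual updates for h, ascending the variational KL objective (3) with empirical pi samples.
for _ in range(Tprime):
    opt_h.zero_grad()
    log_n = torch.log(torch.tensor(float(len(pi_samples))))
    obj = h(x.detach()).mean() - (torch.logsumexp(h(pi_samples), dim=0) - log_n).mean()
    (-obj).backward()
    opt_h.step()
```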
## 5 Numerical Experiments
In this section, we report numerical experiments on synthetic data sets to demonstrate the effectiveness of MYVT.
### Synthetic Experiments
**Experimental settings**. We consider two case studies corresponding to two common choices of the nonsmooth function \(g\): (a) the \(l_{1}\)-norm and (b) the total variation (TV) semi-norm, which promote sparsity of samples (i.e. few non-zero elements) and local constancy of elements (i.e. sparsity of the differences between nearby elements), respectively. For the \(l_{1}\)-norm, the proximal map of \(g(\mathbf{x})=\|\mathbf{x}\|_{1}\) is the well-known soft-thresholding operator \(S_{\alpha}(\mathbf{x})\)[17]. For the TV semi-norm, the proximal map does not have a closed-form solution, but it can be efficiently estimated using the alternating direction method of multipliers (ADMM) [19]. In our experiments, we perform 20 iterations of ADMM to estimate the proximal map. For the first case study, we design the truth \(\mathbf{z}\in\mathbb{R}^{100}\) to be a sparse vector with only a few non-zero elements. For the second case study, we design \(\mathbf{z}\) to be a locally smooth vector. For each case study, we generate 500 examples \(\left\{\mathbf{y}_{i}\right\}_{i=1}^{500}\) by adding Gaussian noise with mean 0 and variance \(0.2^{2}\) to the truth. These settings allow us to evaluate the performance of the methods in recovering the underlying structure of the data in the presence of noise.
The generated examples are used to represent the target empirical distributions \(\pi\) we aim to approximate. We set \(F(q)=D(q,\pi)\), where \(D\) represents a dissimilarity measure (KL or JS divergences in our experiments) between two probability distributions. In the following, we report results when utilizing the KL divergence for the dissimilarity \(D\). For experiments with the JS divergence, refer to Appendix A.4.
**Comparing methods**. We compare MYVT and VT in the experiments. For VT, we represent \(q\) using a set of particles and update the particles directly. For MYVT, we set \(\alpha=0.1\) for both case studies. We evaluate and compare the quality of particles generated by the two methods using the following measures: (a) mean squared error (MSE): the average squared difference between the generated samples and the truth \(\mathbf{z}\), (b) the average \(l_{1}\)-norm of generated samples for the first case study, and (c) the average TV semi-norm of generated samples for the second case study. We parameterize \(V\) using a neural network with four layers, each of which consists of a linear layer with 100 neurons followed by an activation function. We parameterize \(h\) using another neural network with two layers, each of which has 100 neurons. The step sizes for VT and MYVT are fine-tuned and set to 0.01 and 0.0001, respectively. We run \(K=2000\) and \(K=4000\) iterations for the first and the second case studies, respectively. For MYVT, we set \(\lambda=0.0001,T=5\) and \(T^{\prime}=2\) for all of the experiments.
**Results**. In the first case study, we compare MYVT and VT in terms of MSE and average \(l_{1}\)-norm. As illustrated in Figure 1, both methods are able to generate samples that are similar to the truth, as indicated by the decreasing MSE values over 2000 iterations. However, MYVT keeps the average \(l_{1}\)-norm much lower than that of VT over the iterations. In particular, MYVT consistently produces samples with a much lower average \(l_{1}\)-norm (around 15.81) compared to VT (around 30.68). This can be attributed to the effect of \(g(\mathbf{x})=\|\mathbf{x}\|_{1}\) in problem (1), which promotes sparsity in the generated samples. Visually, samples generated by MYVT appear considerably sparser than those generated by VT (see Figures 1c and 1d).
Figure 3: (a) the truth image of size \(80\times 80\). (b) a noisy version of the truth by adding a Gaussian noise.
Figure 2: Comparison of MYVT(\(\alpha=0.1\)) and VT in terms of MSE and smoothness (average TV semi-norm of generated samples). (a) MSE of MYVT and VT over 4000 iterations, (b) average TV semi-norm over 4000 iterations, (c) three example samples generated by VT, (d) three example samples generated by MYVT.
In the second case study, we compare MYVT and VT in terms of MSE and average TV semi-norm, as shown in Figure 2. Again, both methods are able to generate samples that closely resemble the truth, as evidenced by the significant decrease in MSE values over 4000 iterations (see Figure 2a), while MYVT keeps the average TV semi-norm much lower than VT over the iterations (see Figure 2b). In particular, the average TV semi-norm of samples generated by MYVT (18.52) is significantly smaller than that of samples generated by VT (31.02). Visually, samples generated by MYVT appear significantly smoother than those generated by VT (see Figures 2c and 2d). These results in both case studies demonstrate the regularization effect of problem (1) on the generated samples and highlight the effectiveness of MYVT.
### Image Denoising
In the third case study, we focus on the task of denoising an image that has been degraded by Gaussian noise, using two-dimensional TV. Given an image \(\mathbf{x}\in\mathbb{R}^{w\times h}\), its two-dimensional TV is defined as follows:
\[TV(\mathbf{x})=\sum_{i=1}^{h}\sum_{j=2}^{w}|\mathbf{x}[i,j]-\mathbf{x}[i,j-1] |+\sum_{j=1}^{w}\sum_{i=2}^{h}|\mathbf{x}[i,j]-\mathbf{x}[i-1,j]|\]
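For reference, a direct NumPy implementation of this two-dimensional TV semi-norm (a small sketch, not the authors' code) is:

```python
import numpy as np

def tv_2d(x):
    """Anisotropic two-dimensional total variation of an h-by-w image x,
    i.e. the sum of absolute horizontal and vertical neighbour differences."""
    horiz = np.abs(x[:, 1:] - x[:, :-1]).sum()   # sum over |x[i, j] - x[i, j-1]|
    vert = np.abs(x[1:, :] - x[:-1, :]).sum()    # sum over |x[i, j] - x[i-1, j]|
    return horiz + vert

img = np.zeros((80, 80))
img[20:60, 20:60] = 1.0
print(tv_2d(img))   # 160 for this 40x40 unit square
```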
We consider a truth image: \(\mathbf{z}\in\mathbb{R}^{80\times 80}\) (see Figure 3a) and its noisy version: \(\mathbf{y}=\mathbf{z}+\mathcal{N}(0,100^{2}\mathbf{I})\) (see Figure 3b), and aim to apply VT and MYVT to generate image samples \(\{\mathbf{x}_{i}\}\) which resemble \(\mathbf{y}\) (i.e., the Frobenius norm \(\|\mathbf{x}_{i}-\mathbf{y}\|_{F}\) is small) and are smooth (i.e., \(TV(\mathbf{x}_{i})\) is small). This task can be formulated as follows:
\[\min_{q\in\mathcal{P}_{2}(\mathcal{X})}KL(q,\pi)+\alpha\mathbb{E}_{\mathbf{x} \sim q}[\text{TV}(\mathbf{x})]\]
where \(\pi(\mathbf{x})\propto e^{-\|\mathbf{x}-\mathbf{y}\|_{F}^{2}}\). In the case where we only have access to the un-normalized density of \(\pi\), it becomes challenging to approximate the variational formulation of the KL divergence using samples from \(\pi\) (see Equation (3)). However, we can overcome this issue by introducing the following change of variable (for the case of using the KL divergence): \(h^{\prime}(\mathbf{x})=e^{h(\mathbf{x})}\pi(\mathbf{x})/p(\mathbf{x})\), where \(p\) is a probability distribution which is easy to sample from, e.g. \(p(\mathbf{x})=\mathcal{N}(0,\mathbf{I})\). Then the variational formulation of the KL divergence can be rewritten as:
\[KL(q,\pi)=\max_{h^{\prime}\in\mathcal{H}^{+}}\left\{\mathbb{E}_{\mathbf{x}\sim q}\left[\log h^{\prime}(\mathbf{x})\right]+\mathbb{E}_{\mathbf{x}\sim q}\left[\log p(\mathbf{x})\right]-\mathbb{E}_{\mathbf{x}\sim q}\left[\log\pi(\mathbf{x})\right]-\log\mathbb{E}_{\mathbf{x}\sim p}\left[h^{\prime}(\mathbf{x})\right]\right\}\]
Figure 4: Evolutions of example images generated by VT and MYVT(\(\alpha=100\)) over 3000 iterations. (a) images generated by VT, (b) images generated by MYVT.
where \(\mathcal{H}^{+}\) is the space of positive functions. As we can use \(\log\) of un-normalized density of \(\pi\) in the above optimization problem, \(h^{\prime}\) can be estimated using samples drawn from \(q\) and \(p\). Therefore, \(h\) can be estimated from \(h^{\prime}\) using: \(h(\mathbf{x})=\log h^{\prime}(\mathbf{x})+\log p(\mathbf{x})-\log\pi(\mathbf{x})\).
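The following PyTorch sketch shows one way this rewritten variational objective could be estimated from mini-batches drawn from \(q\) and \(p\); the network architecture and helper names are illustrative assumptions (for instance, a softplus output is used here to keep \(h^{\prime}\) strictly positive, whereas the experiments below use a ReLU last layer), and the log-densities may be unnormalized up to an additive constant.

```python
import torch
import torch.nn as nn

class PositiveMLP(nn.Module):
    """Small network parameterizing h'; the softplus keeps its output positive."""
    def __init__(self, dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def kl_variational_objective(h_prime, x_q, x_p, log_pi_unnorm, log_p):
    """Monte Carlo estimate of
       E_q[log h'] + E_q[log p] - E_q[log pi] - log E_p[h'],
    where log_pi_unnorm may be an unnormalized log-density."""
    term_q = (torch.log(h_prime(x_q) + 1e-12)
              + log_p(x_q) - log_pi_unnorm(x_q)).mean()
    term_p = torch.log(h_prime(x_p).mean() + 1e-12)
    return term_q - term_p

# toy usage: q and p are standard normals, pi is an unnormalized shifted Gaussian
dim = 2
h_prime = PositiveMLP(dim)
log_p = lambda x: -0.5 * (x ** 2).sum(-1)           # up to an additive constant
log_pi = lambda x: -0.5 * ((x - 1.0) ** 2).sum(-1)  # unnormalized
x_q, x_p = torch.randn(256, dim), torch.randn(256, dim)
loss = -kl_variational_objective(h_prime, x_q, x_p, log_pi, log_p)  # maximize over h'
loss.backward()
```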
**Experiment setting.** For MYVT, we set \(\alpha=100.0\). We parameterize \(V\) using a neural network with five layers, each of which has 200 neurons. We parameterize \(h^{\prime}\) using another neural network with two layers, each of which has 200 neurons. We use the ReLU activation function in the last layer to guarantee the output of \(h^{\prime}\) to be positive. We set the number of iterations \(K\), mini-batch size, step sizes, \(T\) and \(T^{\prime}\) for both VT and MYVT as 3000, 200, 0.01, 5 and 2, respectively.
**Results.** Figures 4(a) and 4(b) display the evolution of example samples over 3000 iterations of VT and MYVT, respectively, for the denoising task applied to the given image. It is evident that the samples generated by MYVT become increasingly smooth over iterations. This behavior is a direct result of the inclusion of the TV semi-norm in problem (1).
## 6 Conclusion
We have addressed the regularized distributional optimization problem with a composite objective composed of two functionals. The first admits a variational representation, while the second is expressed in terms of the expectation operator of a non-smooth convex regularizer function. We have introduced MYVT as a solution to this problem. Its key idea is to approximate the original problem using the Moreau-Yoshida approximation and reformulate it as a concave-convex saddle-point problem by leveraging the variational representation. In future work, we aim to develop more efficient algorithms for estimating the solutions of problem (2). Additionally, we plan to extend MYVT to handle other forms of objective functionals which do not possess a variational representation. By exploring these directions, we aim to enhance the versatility and efficiency of MYVT and further advance the field of regularized distributional optimization.
|
2309.07921 | OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering
Evaluation on Real Objects | We introduce OpenIllumination, a real-world dataset containing over 108K
images of 64 objects with diverse materials, captured under 72 camera views and
a large number of different illuminations. For each image in the dataset, we
provide accurate camera parameters, illumination ground truth, and foreground
segmentation masks. Our dataset enables the quantitative evaluation of most
inverse rendering and material decomposition methods for real objects. We
examine several state-of-the-art inverse rendering methods on our dataset and
compare their performances. The dataset and code can be found on the project
page: https://oppo-us-research.github.io/OpenIllumination. | Isabella Liu, Linghao Chen, Ziyang Fu, Liwen Wu, Haian Jin, Zhong Li, Chin Ming Ryan Wong, Yi Xu, Ravi Ramamoorthi, Zexiang Xu, Hao Su | 2023-09-14T17:59:53Z | http://arxiv.org/abs/2309.07921v2 | # OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects
###### Abstract
We introduce OpenIllumination, a real-world dataset containing over 108K images of 64 objects with diverse materials, captured under 72 camera views and a large number of different illuminations. For each image in the dataset, we provide accurate camera parameters, illumination ground truth, and foreground segmentation masks. Our dataset enables the quantitative evaluation of most inverse rendering and material decomposition methods for real objects. We examine several state-of-the-art inverse rendering methods on our dataset and compare their performances. The dataset and code can be found on the project page: [https://oppo-us-research.github.io/OpenIllumination](https://oppo-us-research.github.io/OpenIllumination).
## 1 Introduction
Recovering object geometry, material, and lighting from images is a crucial task for various applications, such as image relighting and view synthesis. While recent works have shown promising results by using a differentiable renderer to optimize these parameters with a photometric loss [51; 53; 52; 20; 32], they can only perform quantitative evaluation on synthetic datasets, where ground-truth information is easy to obtain. For real scenes, they can only show qualitative results instead of providing quantitative evaluations.
Nevertheless, it is crucial to acknowledge the inherent gap between synthetic and real-world data, for real-world scenes exhibit intricate complexities, such as natural illuminations, diverse materials, and complex geometry, which may present challenges that synthetic data fails to model accurately. Consequently, it becomes imperative to complement synthetic evaluation with real-world data to validate and assess the ability of inverse rendering algorithms in practical settings.
It is highly challenging to capture real objects in practice. A common approach to capturing real-world data is using a handheld camera [20; 53]. Unfortunately, this approach frequently introduces occlusion of the ambient light by the photographer and camera, consequently resulting in different illuminations for each photograph. Such discrepancies are problematic for most methods that assume a single constant illumination. Furthermore, capturing images under multiple illuminations with a handheld camera often produces images with highly different appearances and can result in inaccurate or even failed camera pose estimation, particularly for feature matching-based methods such as COLMAP [37]. Recent efforts have introduced some datasets [33; 43; 21] that incorporate
multiple illuminations in real-world settings. However, as shown in Tab. 1, most of them are limited either in the number of views [33, 21] or the number of illuminations [21]; few of them provide object-level data as well. Consequently, these existing datasets prove unsuitable for evaluating inverse rendering methods on real-world objects.
To address this, we present a new dataset containing objects with a variety of materials, captured under multiple views and illuminations, allowing for reliable evaluation of various inverse rendering tasks with real data. Our dataset was acquired using a setup similar to a traditional light stage [10, 11], where densely distributed cameras and controllable lights are attached to a static frame around a central platform. In contrast to handheld capture, this setup allows us to precisely pre-calibrate all cameras with carefully designed calibration patterns and reuse the same camera parameters for all the target objects, leading to not only high calibration accuracy but also a consistent evaluation process (with the same camera parameters) for all the scenes.
On the other hand, the equipped multiple controllable lights enable us to flexibly illuminate objects with a large number of complex lighting patterns, facilitating the acquisition of illumination ground truth.
With the help of high-speed cameras running at 30 fps, we are able to capture OLAT (One-Light-At-a-Time) images with a very high efficiency, which is critical for capturing data at a large scale. In the end, we have captured over 108K images, each with a well-calibrated camera and illumination
| Dataset | Capturing device | Lighting condition | Number of illuminations | HDR | Scenes/objects | Number of views |
| --- | --- | --- | --- | --- | --- | --- |
| DTU [19] | gantry | pattern | 7 | ✗ | 80 scenes | 49/64 |
| NeRF-OSR [36] | commodity camera | env. | 5\(\sim\)11 | ✗ | 9 scenes | \(\sim\)360 |
| DiLiGenT [39] | commodity camera | OLAT | 96 | ✓ | 10 objects | 1 |
| DiLiGenT-MV [26] | studio/desktop scanner | OLAT | 96 | ✓ | 5 objects | 20 |
| NeROIC [23] | commodity camera | env. | 4\(\sim\)6 | ✗ | 3 objects | 40 |
| MIT-Intrinsic [15] | commodity camera | OLAT | 10 | ✗ | 20 objects | 1 |
| Murmann et al. [33] | light probe | env. | 25 | ✗ | 1000 scenes | 1 |
| LSMI [21] | light probe | env. | 3 | ✗ | 2700 scenes | 1 |
| ReNe [43] | gantry | OLAT | 40 | ✗ | 20 objects | 50 |
| Ours | light stage | pattern+OLAT | 13 pattern + 142 OLAT | ✓ | 64 objects | 72 |

Table 1: **Comparison between representative multi-illumination real-world datasets.** Env. stands for environment lights.
Figure 1: **Some example images in the proposed dataset.** The dataset contains images of various objects with diverse materials, captured under different views and illuminations. The leftmost column visualizes several different illumination patterns, with **red** and yellow indicating activated and deactivated lights. The name and material for each object are listed in the first and second rows. The materials are selected from the OpenSurfaces [3] dataset.
parameters. Moreover, we also provide high-quality object segmentation masks by designing an efficient semi-automatic mask labeling method.
We conduct baseline experiments on several tasks: (1) joint geometry-material-illumination estimation; (2) joint geometry-material estimation under known illumination; (3) photometric stereo reconstruction; and (4) novel view synthesis, to showcase the ability to evaluate real objects on our dataset. To the best of our knowledge, by the time of this paper's submission, there are no other real datasets that can be used to perform a quantitative evaluation of relighting on real data.
In summary, our contributions are as follows:
* We capture over 108K images for real objects with diverse materials under multiple viewpoints and illuminations, which enables a more comprehensive analysis for inverse rendering tasks across various material types.
* The proposed dataset provides precise camera calibrations, lighting ground truth and accurate object segmentation masks.
* We evaluate and compare the performance of multiple state-of-the-art (SOTA) inverse rendering and novel view synthesis methods. We perform quantitative evaluation of relighting real objects under unseen illuminations.
## 2 Related works
Inverse rendering.Inverse rendering has been a long-standing task in the fields of computer vision and graphics, which focuses on reconstructing shapes and materials from multi-view 2D images. A great amount of work [5; 14; 18; 25; 47; 34; 52; 54] has been proposed for this task. Some of them make use of learned domain-specific priors [5; 12; 2; 27]. Some other works rely on controllable capture settings to estimate the geometry and material, such as structure light [48], circular LED lights [55], collocated camera and flashlight [50; 5; 4], and so on.
Recently, a lot of works use neural representations to support inverse rendering reconstruction under unknown natural lighting conditions [20; 6; 52; 54; 7; 32; 51]. By combining popular neural representations such as NeRF [30] or SDF [45; 49] with a physically-based rendering model [8], they can achieve shape and reflectance reconstruction under an image-loss constraint. Although these works can achieve high-quality reconstruction, they can only evaluate relighting performance under novel illumination on synthetic data because of the lack of high-quality real object datasets.
Multi-illumination datasets.Multi-illumination observations intuitively provide more cues for computer vision and graphics tasks like inverse rendering. Some works have utilized the temporal variation of natural illumination, such as sunlight and outdoor lighting. These "in-the-wild" images are typically captured using web cameras [46; 41; 36] or using controlled camera setups [40; 24]. Another line of work focuses on indoor scenes, which generally lack a readily available source of illumination that exhibits significant variation. In this case, a common approach involves using flash and no-flash pairs [35; 13; 1]. Applications like denoising, mixed-lighting white balance, and BRDF capture benefit from these kinds of datasets. However, other applications like photometric stereo and inverse rendering usually require more than two images and more lighting conditions for reliable results, which these datasets often fail to provide.
## 3 Dataset construction
### Dataset overview
The OpenIllumination dataset contains over 108K images of 64 objects with diverse materials. Each object is captured by 48 DSLR cameras under 13 lighting patterns. Additionally, 20 objects are captured by 24 high-speed cameras under a 142-light OLAT setting.
Fig. 1 shows some images captured under different lighting patterns, while the images captured under OLAT illumination can be found in Fig. 5.
Our dataset includes a total of 24 diverse material categories, such as plastic, glass, fabric, ceramic, and more. Note that one object may possess several different materials, thus the number of materials is larger than the number of objects.
### Camera calibration
The accuracy of camera calibration highly affects the performance of most novel view synthesis and inverse rendering methods. Previous works [20; 53] typically capture images with handheld cameras and employ COLMAP [37] to estimate camera parameters. However, this approach heavily relies on the object's textural properties, which is challenging in instances where the object lacks texture or exhibits specular reflections from certain viewpoints. These challenges can obstruct accurate feature matching, consequently reducing the precision of camera parameter estimation. Ultimately, the reliability of inverse rendering outcomes is undermined, and finding out whether inaccuracies are caused by erroneous camera parameters or limitations of the inverse rendering method itself becomes a challenging problem. Leveraging the capabilities of our light stage, wherein camera intrinsics and extrinsics can be fixed when capturing different objects, we employ COLMAP to recover the camera parameters on a textured and low-specularity scene. For each subsequently captured object, we use this set of camera parameters instead of performing recalibration. The results of camera calibration are visualized in Fig. 2(b).
### Light calibration
In this section, we propose a chrome-ball-based lighting calibration method to obtain the ground-truth illumination which plays a critical role in the relighting evaluation.
Our data are captured in a dark room where a set of linearly polarized LEDs placed uniformly on a sphere serves as the only external lighting source. Each light can be approximated by a Spherical Gaussian (SG), defined in the following form [44]:
\[G(\nu;\boldsymbol{\xi},\lambda,\boldsymbol{\mu})=\boldsymbol{\mu}\,e^{\lambda (\nu\cdot\boldsymbol{\xi}-1)}, \tag{1}\]
where \(\nu\in\mathbb{S}^{2}\) is the function input, representing the incident lighting direction to query, \(\boldsymbol{\xi}\in\mathbb{S}^{2}\) is the lobe axis, \(\lambda\in\mathbb{R}_{+}\) is the lobe sharpness, and \(\boldsymbol{\mu}\in\mathbb{R}_{+}^{n}\) is the lobe amplitude.
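As a point of reference, Eq. (1) can be evaluated with a few lines of NumPy; the query directions and parameter values below are illustrative assumptions.

```python
import numpy as np

def spherical_gaussian(nu, xi, lam, mu):
    """Evaluate the Spherical Gaussian of Eq. (1): mu * exp(lam * (nu . xi - 1)).

    nu  : (..., 3) unit query directions
    xi  : (3,)     unit lobe axis
    lam : float    lobe sharpness
    mu  : (3,)     lobe amplitude (e.g. RGB)
    """
    cos = np.clip(np.sum(nu * xi, axis=-1), -1.0, 1.0)
    return np.asarray(mu) * np.exp(lam * (cos - 1.0))[..., None]

# a light along +z, queried along +z (on-axis) and +x (off-axis)
xi = np.array([0.0, 0.0, 1.0])
queries = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
print(spherical_gaussian(queries, xi, lam=50.0, mu=np.array([5.0, 5.0, 5.0])))
```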
We utilize a chrome ball to estimate the 3D position of each light. We assume the chrome ball is highly specular and isotropic, its position and radius are known, and cameras and lights are evenly distributed around the chrome ball. For each individual LED light, at least one camera can capture its specular highlight, i.e., the rays reflected from the point of incidence on the ball. The incident light direction can be computed via:
\[I=-T+2(T\cdot N)N, \tag{2}\]
Figure 2: (a) The capturing system contains 48 DSLR cameras (Canon EOS Rebel SL3), 24 high-speed cameras (HR-12000SC), and 142 controllable linear polarized LED. (b) The calibrated DSLR camera poses. (c) The reconstructed light positions.
where \(I\) is the incident light direction that goes out from the point of incidence, \(N\) is the normal of the intersection point on the surface, and \(T\) is the direction of the reflected light (from the surface point toward the camera).
For each LED light, its point of incidence on the chrome ball can be captured by multiple cameras, and for each camera \(i\), we can compute an incident light direction \(I_{i}\), which should have the least distance from the LED light location \(p\). Therefore, to leverage information from multiple camera viewpoints, we seek to minimize the sum of distances between the light position and incident light directions across different camera views. This optimization is expressed as:
\[L(p)=\sum_{i}d(p,I_{i}),\|p\|=1, \tag{3}\]
where \(p\) represents the light position to be determined, \(d(p,I_{i})\) denotes the L2 distance between the light and the incident light direction corresponding to view \(i\), and the constraint \(\|p\|=1\) ensures that the lights lie on the same spherical surface as the cameras. The reconstructed light distribution, depicted in Fig. 2(c), closely aligns with the real-world distribution.
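A minimal sketch of this light-position estimation is given below: it applies the mirror relation of Eq. (2) to obtain one incident ray per view and then minimizes the objective of Eq. (3) by projected gradient descent on the unit sphere. The numerical-gradient optimizer, the assumption that the highlight's surface point and normal are already known for each view, and all variable names are simplifying assumptions rather than the authors' implementation.

```python
import numpy as np

def incident_direction(t, n):
    """Mirror relation of Eq. (2): direction towards the light, given the unit
    direction t of the reflected ray (surface point -> camera) and the normal n."""
    return -t + 2.0 * np.dot(t, n) * n

def point_ray_distance(p, o, d):
    """Distance from point p to the ray o + s*d with s >= 0 (d is unit length)."""
    v = p - o
    s = max(np.dot(v, d), 0.0)
    return np.linalg.norm(v - s * d)

def estimate_light_position(origins, directions, iters=2000, lr=1e-2):
    """Minimize Eq. (3), sum_i d(p, ray_i) with ||p|| = 1, by projected gradient
    descent using simple central-difference gradients."""
    p = np.array([0.0, 0.0, 1.0])
    loss = lambda q: sum(point_ray_distance(q, o, d)
                         for o, d in zip(origins, directions))
    for _ in range(iters):
        g, eps = np.zeros(3), 1e-5
        for k in range(3):
            e = np.zeros(3)
            e[k] = eps
            g[k] = (loss(p + e) - loss(p - e)) / (2 * eps)
        p = p - lr * g
        p /= np.linalg.norm(p)          # project back onto the unit sphere
    return p

# toy usage: three highlight points on a small ball, all reflecting one true light;
# in practice each ray direction would come from incident_direction(t, n)
true_p = np.array([0.6, 0.0, 0.8])
origins = [np.array([0.05, 0.0, 0.05]),
           np.array([0.0, 0.05, 0.05]),
           np.array([-0.05, 0.0, 0.07])]
directions = [(true_p - o) / np.linalg.norm(true_p - o) for o in origins]
print(estimate_light_position(origins, directions))   # approximately true_p
```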
After estimating the 3D position for each light, we need to determine the lobe size for them. Since the lights in our setup are of the same type, we can estimate a global lobe size for all lights. By taking one OLAT image of the chrome ball as input, we flatten it into an environment map. Subsequently, we optimize the parameters of the Spherical Gaussians (SGs) model to minimize the difference between the computed environment map and the observed environment map.
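A compact PyTorch sketch of this lobe-size fitting is shown below; the equirectangular parameterization, initial values, and optimizer settings are assumptions made for illustration, not the authors' exact procedure.

```python
import math
import torch

def env_dirs(H, W):
    """Unit directions for each pixel of an equirectangular environment map."""
    theta = (torch.arange(H) + 0.5) / H * math.pi          # polar angle
    phi = (torch.arange(W) + 0.5) / W * 2 * math.pi        # azimuth
    th, ph = torch.meshgrid(theta, phi, indexing="ij")
    return torch.stack([torch.sin(th) * torch.cos(ph),
                        torch.sin(th) * torch.sin(ph),
                        torch.cos(th)], dim=-1)            # (H, W, 3)

def render_sg(dirs, light_dir, lam, mu):
    """Render a single Spherical Gaussian (Eq. (1)) into an environment map."""
    cos = (dirs * light_dir).sum(-1)
    return mu * torch.exp(lam * (cos - 1.0))[..., None]

def fit_global_lobe(obs_env, light_dir, iters=500, lr=5e-2):
    """Fit a global lobe sharpness (and amplitude) so that the rendered SG
    matches the observed OLAT environment map in the least-squares sense."""
    dirs = env_dirs(*obs_env.shape[:2])
    log_lam = torch.tensor(3.0, requires_grad=True)        # sharpness in log-space
    mu = torch.ones(3, requires_grad=True)                 # RGB amplitude
    opt = torch.optim.Adam([log_lam, mu], lr=lr)
    for _ in range(iters):
        pred = render_sg(dirs, light_dir, torch.exp(log_lam), mu)
        loss = ((pred - obs_env) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.exp(log_lam).item(), mu.detach()

# toy usage: recover the sharpness of a synthetic light along +z
d = torch.tensor([0.0, 0.0, 1.0])
obs = render_sg(env_dirs(64, 128), d, torch.tensor(60.0), torch.tensor([5.0, 5.0, 5.0]))
print(fit_global_lobe(obs, d))
```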
Since all the lights have identical lighting intensities, and the lighting intensity can be of arbitrary scale because of the scale ambiguity between the material and lighting, we set the lighting intensity to 5 for all lights.
### Semi-automatic high-quality mask labeling
To obtain high-quality segmentation masks, we use Segment-Anything [22] (SAM) to perform instance segmentation. However, we find that the performance is not satisfactory. One reason is that the object categories are often ill-defined; in this case, even combining bounding-box and point prompts cannot produce satisfactory results. To address this problem, we use multiple bounding-box prompts to perform segmentation for each possible part and then take the union of the resulting masks as the final object mask, as sketched below. For objects with very detailed and thin structures, e.g. hair, we use an off-the-shelf background matting method [28] to perform object segmentation.
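A minimal sketch of this union-of-prompts strategy is given below. It assumes the `segment_anything` package and a downloaded checkpoint; the exact predictor API, checkpoint name, and box coordinates are assumptions and may need adjusting.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def union_object_mask(image_rgb, boxes, checkpoint="sam_vit_h_4b8939.pth"):
    """Segment each bounding-box prompt with SAM and return the union as the
    final object mask.

    image_rgb : (H, W, 3) uint8 image
    boxes     : list of [x0, y0, x1, y1] prompts, one per possible object part
    """
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image_rgb)

    full_mask = np.zeros(image_rgb.shape[:2], dtype=bool)
    for box in boxes:
        masks, _, _ = predictor.predict(box=np.asarray(box),
                                        multimask_output=False)
        full_mask |= masks[0].astype(bool)   # union of the per-part masks
    return full_mask
```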
## 4 Baseline experiments
### Inverse rendering evaluation
In this section, we conduct experiments employing various learning-based inverse rendering methods on our dataset. Throughout these experiments, we carefully select 10 objects exhibiting a diverse range of materials, and we partition the images captured by DSLR cameras into training and testing sets, containing 38 and 10 views respectively.
**Baselines.** We validate six recent learning-based inverse rendering approaches assuming single illumination conditions: NeRD [6], Neural-PIL [7], PhySG [51], InvRender [54], nvdiffrec-mc [16], and TensoIR [20]. Moreover, we validate three of them [6; 7; 20] that support multiple illumination optimization.
**Joint geometry-material-illumination estimation.** For experiments under single illumination, we use images captured with all lights activated, while for multi-illumination, we select images taken under three different lighting patterns.
NeRD[6] is observed to exhibit high instability. In many cases, NeRD fails to learn a meaningful environment map. Neural-PIL [7] generates fine environment maps and produces high-quality renderings. However, the generated environment map incorporates the albedo of objects and fails to produce reasonable diffuse results in multi-illumination conditions. Both NeRD and Neural-PIL suffer from map fractures in roughness, normal, and albedo, providing visible circular cracks, which
we attribute to overfitting of the environment map, where certain colors become embedded within it. PhySG [51] applies specular BRDFs, allowing for a better approximate evaluation of light transport. PhySG shows commendable results on metal and coated materials, simulating a few highlights. However, its geometry learning is inaccurate, and it performs poorly on objects with multiple specular parts, failing to reproduce any prominent highlights. InvRender [54] models spatially-varying indirect illumination and the visibility of direct illumination. However, its reconstructed geometry tends to lack detail and be over-smooth on some objects. nvdiffrec-mc [16] incorporates Monte Carlo integration and a denoising module during rendering to achieve a more efficient and stable convergence in optimization. It achieves satisfactory relighting results on most objects, but the quality of geometry detail, as shown in the reconstructed normal map, is affected by the grid resolution of DMTet [38]. TensoIR [20] also exhibits satisfactory performance. However, it still encounters challenges in generating good results for highly specular surfaces, as shown in the fourth row in Fig. 3. Moreover, since TensoIR models materials using a simplified version of the Disney BRDF [8], which fixes \(F_{0}\) in the Fresnel term to 0.04, its representation capabilities are limited, and certain materials such as metal and transparent plastic may not be accurately modeled, as
Figure 3: The object reconstruction on our dataset from three inverse rendering baselines under single illumination. Objects highlighted by **green** color are easier tasks in our dataset, while objects in **red** color are more difficult tasks that involve more complicated materials like metal and clear plastic.
illustrated in the fifth row in Fig. 3 and Tab. 2, where TensoIR only achieves a PSNR of about 22 on the translucent plastic cup.
Overall, all the methods struggle with modeling transparency or complex reflectance because of the relatively simple BRDF used in rendering. For concave objects, such as the metal bucket shown in Fig. 3, NeRF-based methods have difficulty learning the correct geometry. In addition, compared to single illumination, two of our baselines, NeRD and Neural-PIL, show inferior performance under multi-illumination, while TensoIR maintains a high quality of reconstruction.
**Relighting under novel illumination.** Our dataset provides the illumination ground truth represented as a combination of Spherical Gaussian functions. This enables us to evaluate the performance of relighting under novel illumination with the decomposed material and geometry.
Tab. 4 shows the relighting performance of TensoIR [20] on 10 objects. Fig. 4 shows the material decomposition and the relighting visualizations. In general, TensoIR performs better on diffuse objects than on metal and transparent objects.
### Photometric stereo
Photometric stereo (PS) is a well-established technique to reconstruct the 3D surface of an object [18]. The method estimates the shape and recovers surface normals of a scene by utilizing several intensity images obtained under varying illumination conditions from an identical viewpoint [17; 42]. By default, PS assumes Lambertian surface reflectance, in which normal vectors and image intensities are linearly related, as sketched below. During our capture, we place circular polarizers over each light source and a circular polarizer of the same sense in front of the camera to cancel out the specular reflections [29]. Fig. 5 shows the reconstructed albedo and normal map from the OLAT images in our dataset.
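Under the Lambertian assumption, the per-pixel normal and albedo follow from a linear least-squares fit to the OLAT intensities; the sketch below illustrates this classical solve (variable names and the toy example are assumptions, not the dataset toolkit).

```python
import numpy as np

def lambertian_ps(intensities, light_dirs):
    """Classic Lambertian photometric stereo.

    intensities : (n_lights, n_pixels) observed intensities
    light_dirs  : (n_lights, 3) unit directions towards each light

    With I = L @ G and G = albedo * normal, G is recovered per pixel by
    linear least squares, then split into albedo (norm) and normal (direction).
    """
    G, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, n_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return albedo, normals

# toy usage: one pixel with normal (0, 0, 1) and albedo 0.8, seen under 3 lights
L = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
n = np.array([0.0, 0.0, 1.0])
I = 0.8 * L @ n
print(lambertian_ps(I[:, None], L))
```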
### Novel view synthesis
While our dataset was primarily proposed for evaluating inverse rendering approaches, the multi-view images in it can also serve as a valuable resource for evaluating novel view synthesis methods. In this section, we perform experiments utilizing several neural radiance field methods to validate the data quality of our dataset. We conduct experiments employing the vanilla NeRF [30], TensoRF [9], Instant-NGP [31], and NeuS [45]. The quantitative results, as presented in Tab. 5, demonstrate the exceptional quality of our data and the precise camera calibration, as evidenced by the consistently high PSNR scores attained.
| Object | egg | stone | bird | box | pumpkin | hat | cup | sponge | banana | bucket |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Material | paper | stone | painted | coated | wooden | fabric | clear plastic | sponge | food | metal |
| PSNR | 31.99 | 31.07 | 30.16 | 27.57 | 27.16 | 32.38 | 22.96 | 30.86 | 32.13 | 27.13 |

Table 4: Performance of relighting under novel illumination using TensoIR.
Figure 5: Results of photometric stereo using the OLAT images in our dataset.
### Ablation study
As depicted in Fig. 6(a), the utilization of handheld cameras in the capture process frequently gives rise to inconsistent illumination between different viewpoints because of the changing occlusion of light caused by the moving photographer, thereby breaching the static illumination assumption of most inverse rendering methods. Furthermore, handheld capture rarely covers an extensive range of viewpoints, frequently resulting in incomplete reconstructed objects. Conversely, our dataset delivers a superior range of viewpoints and maintains consistency across different objects, thereby producing a more complete reconstruction. This demonstrates the high quality of our dataset and establishes its suitability as an evaluation benchmark for real-world objects.
## 5 Limitation
There are several limitations and future directions to our work. **(1)** Since we use the light stage to capture the images in a dark room, the illumination is controlled strictly. Thus there exists a gap between the images in this dataset and in-the-wild captured images. **(2)** Although we use state-of-the-art methods for segmentation, the mask consistency across different views for smaller objects with fine details, such as hair, is not considered yet. **(3)** Due to the limited space, the sizes of the objects in the dataset are restricted to 10\(\sim\)20 cm, and the cameras are not highly densely distributed.
## 6 Conclusion
In this paper, we introduce OpenIllumination, a multi-illumination dataset for inverse rendering evaluation on real objects. This dataset offers crucial components such as precise camera parameters, ground-truth illumination information, and segmentation masks for all the images. OpenIllumination provides researchers with a valuable resource for quantitatively evaluating inverse rendering and material decomposition techniques applied to real objects. By analyzing various state-of-the-art inverse
| Object | egg | stone | bird | box | pumpkin | hat | cup | sponge | banana | bucket |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Material | paper | stone | painted | coated | wooden | fabric | clear plastic | sponge | food | metal |
| NeRF [30] | 33.53 | 29.32 | 29.64 | 25.38 | 26.95 | 31.29 | **22.52** | 31.36 | 33.65 | 28.54 |
| TensoRF [9] | 32.42 | 29.84 | 28.45 | **25.49** | 27.54 | 31.50 | 20.87 | 31.34 | 34.32 | 29.28 |
| I-NGP [31] | **34.07** | **30.62** | 29.91 | 25.83 | **27.93** | **32.51** | 22.51 | **32.71** | **34.98** | 29.72 |
| NeuS [45] | 33.43 | 29.78 | **30.00** | 25.47 | 27.83 | 31.93 | 22.13 | 32.44 | 34.17 | **29.99** |

Table 5: **Novel-view-synthesis PSNR on NeRF, TensoRF, Instant-NGP, and NeuS.**
Figure 6: **(a)** Capturing using a handheld camera often introduces inconsistent illuminations. **(b)** Geometry reconstruction using data in our dataset delivers higher completion than using data captured by handheld cameras.
rendering pipelines using our dataset, we have been able to assess and compare their performance effectively. The release of both the dataset and accompanying code will be made available, encouraging further exploration and advancement in this field.
## References
* [1] Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed Elgharib, Marc Pollefeys, and Wojciech Matusik. A dataset of flash and ambient illumination pairs from the crowd. In _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 634-649, 2018.
* [2] Jonathan T Barron and Jitendra Malik. Shape, illumination, and reflectance from shading. _IEEE transactions on pattern analysis and machine intelligence_, 37(8):1670-1687, 2015.
* [3] Sean Bell, Paul Upchurch, Noah Snavely, and Kavita Bala. Opensurfaces: A richly annotated catalog of surface appearance. _ACM Transactions on graphics (TOG)_, 32(4):1-17, 2013.
* [4] Sai Bi, Zexiang Xu, Pratul P. Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David J. Kriegman, and Ravi Ramamoorthi. Neural reflectance fields for appearance acquisition. _ArXiv_, abs/2008.03824, 2020.
* [5] Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, and Ravi Ramamoorthi. Deep 3d capture: Geometry and reflectance from sparse multi-view images. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5960-5969, 2020.
* [6] Mark Boss, Raphael Braun, Varun Jampani, Jonathan T Barron, Ce Liu, and Hendrik Lensch. Nerd: Neural reflectance decomposition from image collections. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 12684-12694, 2021.
* [7] Mark Boss, Varun Jampani, Raphael Braun, Ce Liu, Jonathan Barron, and Hendrik Lensch. Neural-pil: Neural pre-integrated lighting for reflectance decomposition. _Advances in Neural Information Processing Systems_, 34:10691-10704, 2021.
* [8] Brent Burley and Walt Disney Animation Studios. Physically-based shading at disney. In _ACM SIGGRAPH_, volume 2012, pages 1-7. vol. 2012, 2012.
* [9] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIII_, pages 333-350. Springer, 2022.
* [10] Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duker, Westley Sarokin, and Mark Sagar. Acquiring the reflectance field of a human face. In _Proceedings of the 27th annual conference on Computer graphics and interactive techniques_, pages 145-156, 2000.
* [11] Paul Debevec, Andreas Wenger, Chris Tchou, Andrew Gardner, Jamie Waese, and Tim Hawkins. A lighting reproduction approach to live-action compositing. _ACM Transactions on Graphics (TOG)_, 21(3):547-556, 2002.
* [12] Yue Dong, Guojun Chen, Pieter Peers, Jiawan Zhang, and Xin Tong. Appearance-from-motion: Recovering spatially varying surface reflectance under unknown lighting. _ACM Transactions on Graphics_, 33(6):193, 2014.
* [13] Elmar Eisemann and Fredo Durand. Flash photography enhancement via intrinsic relighting. _ACM transactions on graphics (TOG)_, 23(3):673-678, 2004.
* [14] Dan B Goldman, Brian Curless, Aaron Hertzmann, and Steven M Seitz. Shape and spatially-varying brdfs from photometric stereo. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 32(6):1060-1071, 2009.
* [15] Roger Grosse, Micah K Johnson, Edward H Adelson, and William T Freeman. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In _2009 IEEE 12th International Conference on Computer Vision_, pages 2335-2342. IEEE, 2009.
* [16] Jon Hasselgren, Nikolai Hofmann, and Jacob Munkberg. Shape, light & material decomposition from images using monte carlo rendering and denoising. _arXiv preprint arXiv:2206.03380_, 2022.
* [17] Hideki Hayakawa. Photometric stereo under a light source with arbitrary motion. _JOSA A_, 11(11):3079-3089, 1994.
* [18] Carlos Hernandez, George Vogiatzis, and Roberto Cipolla. Multiview photometric stereo. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 30(3):548-554, 2008.
* [19] Rasmus Jensen, Anders Dahl, George Vogiatzis, Engin Tola, and Henrik Aanaes. Large scale multi-view stereopsis evaluation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 406-413, 2014.
* [20] Haian Jin, Isabella Liu, Peijia Xu, Xiaoshuai Zhang, Songfang Han, Sai Bi, Xiaowei Zhou, Zexiang Xu, and Hao Su. Tensoir: Tensorial inverse rendering. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 165-174, 2023.
* [21] Dongyoung Kim, Jinwoo Kim, Seonghyeon Nam, Dongwoo Lee, Yeonkyung Lee, Nahyup Kang, Hyong-Euk Lee, ByungIn Yoo, Jae-Joon Han, and Seon Joo Kim. Large scale multi-illuminant (lsmi) dataset for developing white balance algorithm under mixed illumination. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 2410-2419, 2021.
* [22] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. _arXiv preprint arXiv:2304.02643_, 2023.
* [23] Zhengfei Kuang, Kyle Olszewski, Menglei Chai, Zeng Huang, Panos Achlioptas, and Sergey Tulyakov. Neroic: Neural rendering of objects from online image collections. _ACM Transactions on Graphics (TOG)_, 41(4):1-12, 2022.
* [24] Jean-Francois Lalonde and Iain Matthews. Lighting estimation in outdoor image collections. In _2014 2nd international conference on 3D vision_, volume 1, pages 131-138. IEEE, 2014.
* [25] Jason Lawrence, Szymon Rusinkiewicz, and Ravi Ramamoorthi. Efficient brdf importance sampling using a factored representation. _ACM Transactions on Graphics (ToG)_, 23(3):496-505, 2004.
* [26] Min Li, Zhenglong Zhou, Zhe Wu, Boxin Shi, Changyu Diao, and Ping Tan. Multi-view photometric stereo: A robust solution and benchmark dataset for spatially varying isotropic materials. _IEEE Transactions on Image Processing_, 29:4159-4173, 2020.
* [27] Zhengqin Li, Zexiang Xu, Ravi Ramamoorthi, Kalyan Sunkavalli, and Manmohan Chandraker. Learning to reconstruct shape and spatially-varying reflectance from a single image. In _SIGGRAPH Asia 2018_, page 269. ACM, 2018.
* [28] Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian L Curless, Steven M Seitz, and Ira Kemelmacher-Shlizerman. Real-time high-resolution background matting. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8762-8771, 2021.
* [29] Wan-Chun Ma, Tim Hawkins, Pieter Peers, Charles-Felix Chabert, Malte Weiss, Paul E Debevec, et al. Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination. _Rendering Techniques_, 2007(9):10, 2007.
* [30] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. _Communications of the ACM_, 65(1):99-106, 2021.
* [31] Thomas Muller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. _ACM Transactions on Graphics (ToG)_, 41(4):1-15, 2022.
* [32] Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Mueller, and Sanja Fidler. Extracting Triangular 3D Models, Materials, and Lighting From Images. _arXiv:2111.12503_, 2021.
* [33] Lukas Murmann, Michael Gharbi, Miika Aittala, and Fredo Durand. A dataset of multi-illumination images in the wild. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 4080-4089, 2019.
* [34] Giljoo Nam, Joo Ho Lee, Diego Gutierrez, and Min H Kim. Practical SVBRDF acquisition of 3D objects with unstructured flash photography. In _SIGGRAPH Asia 2018_, page 267. ACM, 2018.
* [35] Georg Petschnigg, Richard Szeliski, Maneesh Agrawala, Michael Cohen, Hugues Hoppe, and Kentaro Toyama. Digital photography with flash and no-flash image pairs. _ACM transactions on graphics (TOG)_, 23(3):664-672, 2004.
* [36] Viktor Rudnev, Mohamed Elgharib, William Smith, Lingjie Liu, Vladislav Golyanik, and Christian Theobalt. Nerf for outdoor scene relighting. In _European Conference on Computer Vision (ECCV)_, 2022.
* [37] Johannes L Schonberger, Enliang Zheng, Jan-Michael Frahm, and Marc Pollefeys. Pixelwise view selection for unstructured multi-view stereo. In _Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III 14_, pages 501-518. Springer, 2016.
* [38] Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, and Sanja Fidler. Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [39] Boxin Shi, Zhe Wu, Zhipeng Mo, Dinglong Duan, Sai-Kit Yeung, and Ping Tan. A benchmark dataset and evaluation for non-lambertian and uncalibrated photometric stereo. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 3707-3716, 2016.
* [40] Jessi Stumpfel, Andrew Jones, Andreas Wenger, Chris Tchou, Tim Hawkins, and Paul Debevec. Direct hdr capture of the sun and sky. In _ACM SIGGRAPH 2006 Courses_, pages 5-es. 2006.
* [41] Kalyan Sunkavalli, Fabiano Romeiro, Wojciech Matusik, Todd Zickler, and Hanspeter Pfister. What do color changes reveal about an outdoor scene? In _2008 IEEE Conference on Computer Vision and Pattern Recognition_, pages 1-8. IEEE, 2008.
* [42] Ariel Tankus and Nahum Kiryati. Photometric stereo under perspective projection. In _Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1_, volume 1, pages 611-616. IEEE, 2005.
* [43] Marco Toschi, Riccardo De Matteo, Riccardo Spezialetti, Daniele De Gregorio, Luigi Di Stefano, and Samuele Salti. Relight my rerf: A dataset for novel view synthesis and relighting of real world objects. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 20762-20772, 2023.
* [44] Jiaping Wang, Peiran Ren, Minmin Gong, John Snyder, and Baining Guo. All-frequency rendering of dynamic, spatially-varying reflectance. In _ACM SIGGRAPH Asia 2009 papers_, pages 1-10. 2009.
* [45] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. _arXiv preprint arXiv:2106.10689_, 2021.
* [46] Yair Weiss. Deriving intrinsic images from image sequences. In _Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001_, volume 2, pages 68-75. IEEE, 2001.
* [47] Rui Xia, Yue Dong, Pieter Peers, and Xin Tong. Recovering shape and spatially-varying surface reflectance under unknown illumination. _ACM Transactions on Graphics_, 35(6):187, 2016.
* [48] Xianmin Xu, Yuxin Lin, Haoyang Zhou, Chong Zeng, Yaxin Yu, Kun Zhou, and Hongzhi Wu. A unified spatial-angular structured light for single-view acquisition of shape and reflectance. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 206-215, 2023.
* [49] Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Basri Ronen, and Yaron Lipman. Multiview neural surface reconstruction by disentangling geometry and appearance. _Advances in Neural Information Processing Systems_, 33:2492-2502, 2020.
* [50] Kai Zhang, Fujun Luan, Zhengqi Li, and Noah Snavely. Iron: Inverse rendering by optimizing neural sdfs and materials from photometric images. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5565-5574, 2022.
* [51] Kai Zhang, Fujun Luan, Qianqian Wang, Kavita Bala, and Noah Snavely. Physg: Inverse rendering with spherical gaussians for physics-based material editing and relighting. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5453-5462, 2021.
* [52] Xiuming Zhang, Pratul P Srinivasan, Boyang Deng, Paul Debevec, William T Freeman, and Jonathan T Barron. Nerfactor: Neural factorization of shape and reflectance under an unknown illumination. _ACM Transactions on Graphics (TOG)_, 40(6):1-18, 2021.
* [53] Yuanqing Zhang, Jiaming Sun, Xingyi He, Huan Fu, Rongfei Jia, and Xiaowei Zhou. Modeling indirect illumination for inverse rendering. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 18643-18652, 2022.
* [54] Yuanqing Zhang, Jiaming Sun, Xingyi He, Huan Fu, Rongfei Jia, and Xiaowei Zhou. Modeling indirect illumination for inverse rendering. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 18643-18652, 2022.
* [55] Zhenglong Zhou, Zhe Wu, and Ping Tan. Multi-view photometric stereo with spatially varying isotropic materials. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 1482-1489, 2013.
Supplementary Material for "OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects "
###### Abstract
**URL and data cards**. The dataset can be viewed at [https://oppo-us-research.github.io/OpenIllumination](https://oppo-us-research.github.io/OpenIllumination) and downloaded from [https://huggingface.co/datasets/OpenIllumination/OpenIllumination](https://huggingface.co/datasets/OpenIllumination/OpenIllumination).
**Author statement**. We bear all responsibility in case of violation of rights. We confirm the CC BY (Attribution) 4.0 license for this dataset.
**Hosting, licensing, and maintenance plan.** We host the dataset on HuggingFace [2], and we confirm that we will provide the necessary maintenance for this dataset.
**DOI.** 10.57967/hf/1102.
**Structured metadata.** The metadata is at [https://huggingface.co/datasets/OpenIllumination/OpenIllumination](https://huggingface.co/datasets/OpenIllumination/OpenIllumination).
## 2 Capturing details
### Object masks
As mentioned in the main paper, our capturing process involves using a device similar to a light stage, which has a diameter of approximately 2 meters. The device consists of cameras and LED lights evenly distributed on the surface of a sphere, all oriented toward the center. To position the object roughly at the center, we utilize two types of supports, as illustrated in Fig. 1(a). However, due to the presence of camera angles that capture views from the bottom to the top, as depicted in Fig. 1(b), certain areas of the surface may be occluded by the supporting device. Consequently, these areas become invisible in these specific views while remaining visible in other views after applying the masking process. This introduces ambiguity to the density field network and leads to inferior performance.
To address this issue and eliminate density ambiguity, we incorporate certain parts of the supporting device in the training images. During the evaluation, we evaluate the PSNR using a separate set of masks that only contain the object. In the dataset, we utilize the _com_mask_, which combines the supporting device and object masks, during the training phase. For inference and evaluation, we employ the _obj_mask_, which represents only the object mask.
### Light pattern design
In addition to the One-Light-At-Time (OLAT) pattern, we have carefully designed 13 different light patterns for our dataset. These patterns involve lighting multiple LED lights either randomly or in a regular manner.
For the first 6 light patterns (001 to 006), we divide the 142 lights into 6 groups based on their spatial location. Each light pattern corresponds to activating one of these groups.
As for the remaining 7 light patterns (007 to 013), the lights are randomly illuminated, with the total number of chosen lights gradually increasing.
Fig. 2 illustrates the 13 light patterns present in our dataset.
### Chrome Ball
In order to perform light calibration, we need to determine the radius and center of the chrome ball in the world coordinate system. This information is crucial for calculating the surface normals at each point on the ball's surface. To ensure accurate intersection point computation, it is important to obtain the radius and position of the chrome ball on the same scale as the camera poses.
To achieve this, we propose using NeuS [3] to extract a mesh with a scale matching the camera poses. We provide multi-view images of the mirror ball as input to NeuS. However, since the mirror ball is highly reflective and difficult to reconstruct accurately using NeuS, we fill the foreground pixels of the mirror ball with black.
Finally, we fit a sphere to the extracted mesh to determine the location and radius of the mirror ball, which allows us to obtain the necessary information for light calibration.
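The sphere fit itself reduces to a linear least-squares problem; a small NumPy sketch (with synthetic points standing in for the NeuS mesh vertices) is:

```python
import numpy as np

def fit_sphere(vertices):
    """Least-squares sphere fit to points of shape (n, 3); returns (center, radius).

    Uses the linear parameterization ||x||^2 = 2 c.x + (r^2 - ||c||^2)."""
    A = np.hstack([2.0 * vertices, np.ones((len(vertices), 1))])
    b = (vertices ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

# toy usage: noisy samples from a sphere of radius 0.1 centered at (0.2, 0, 0.3)
rng = np.random.default_rng(0)
d = rng.standard_normal((2000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([0.2, 0.0, 0.3]) + 0.1 * d + 1e-4 * rng.standard_normal((2000, 3))
print(fit_sphere(pts))
```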
### Camera parameters
During capturing, we set the camera ISO to 100, aperture to F16, and shutter speed to 1/5. We use Daylight mode for its white balance.
We did not perform extra color calibration for the same type of cameras. While it's acknowledged that certain inherent camera intrinsic differences and uncontrollable variables may result in occasional
Figure 1: **(a)** Two types of supporting devices used in our dataset. **(b)** We use the combined masks for training to eliminate density ambiguity.
Figure 2: 13 kinds of light patterns in our dataset, shown as an environment map.
color differences across cameras, we find that the effect on both the captured data and the experimental results is negligible.
To further quantify the differences between different cameras, we designed a small experiment. We captured a 3D-printed cylinder, covered with a type of diffuse green paper. The visualization is in Fig. 3. The basic idea is to compute the difference in object surface colors across different cameras. This calculation serves as a rough measurement of the intrinsic differences among different cameras.
To reduce the impact of specular reflections, we use polarizers on the camera systems. In addition, we selected adjacent cameras to reduce the influence of view-dependent color variations. Our findings indicate that the differences between different cameras amount to approximately 1%.
As a result, we observe that cameras of the same type, once set to the same capture parameters, already exhibit a high level of consistency without supplementary post-processing calibration procedures.
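As a rough sketch of how such an inter-camera difference can be quantified, the snippet below compares the mean object color seen by two cameras; the per-view masks and the roughly 1% figure reported above come from the actual captures, while the function and toy data here are purely illustrative.

```python
import numpy as np

def mean_color_difference(img_a, mask_a, img_b, mask_b):
    """Relative per-channel difference between the mean object colors of two views.

    img_*  : (H, W, 3) float images of the diffuse cylinder
    mask_* : (H, W) boolean foreground masks for the respective views
    """
    color_a = img_a[mask_a].mean(axis=0)
    color_b = img_b[mask_b].mean(axis=0)
    return np.abs(color_a - color_b) / np.maximum((color_a + color_b) / 2.0, 1e-8)

# toy usage with two nearly identical green patches (~1% difference per channel)
a = np.tile(np.array([0.2, 0.6, 0.2]), (50, 50, 1))
b = a * 1.01
m = np.ones((50, 50), dtype=bool)
print(mean_color_difference(a, m, b, m))
```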
## 3 More details of evaluation results
### Code to reproduce the results in the paper
We use the open-source code repositories for the baselines in the paper.
* **NeRD**: [https://github.com/cgtuebingen/NeRD-Neural-Reflectance-Decomposition](https://github.com/cgtuebingen/NeRD-Neural-Reflectance-Decomposition)
* **Neural-PIL**: [https://github.com/cgtuebingen/Neural-PIL](https://github.com/cgtuebingen/Neural-PIL)
* **PhySG**: [https://github.com/Kai-46/PhySG](https://github.com/Kai-46/PhySG)
* **InvRender**: [https://github.com/zju3dv/InvRender](https://github.com/zju3dv/InvRender)
* **Nvdiffrec-mc**: [https://github.com/NVlabs/nvdiffrecmc](https://github.com/NVlabs/nvdiffrecmc)
* **TensoIR**: [https://github.com/Haian-Jin/TensoIR](https://github.com/Haian-Jin/TensoIR)
* **NeRF**: [https://github.com/KAIR-BAIR/nerfacc](https://github.com/KAIR-BAIR/nerfacc)
* **TensoRF**: [https://github.com/apchenstu/TensoRF](https://github.com/apchenstu/TensoRF)
* **instant-NGP**: [https://github.com/bennyguo/instant-nsr-pl](https://github.com/bennyguo/instant-nsr-pl)
* **NeuS**: [https://github.com/bennyguo/instant-nsr-pl](https://github.com/bennyguo/instant-nsr-pl)
Figure 3: **Example images of the cylinder.**
### Computational resources
We use a single RTX 2080 GPU for each object to run the baseline experiments.
### Relighting evaluation
We conducted an evaluation of all 64 objects in our dataset using TensoIR [1], which is one of the most recent state-of-the-art (SOTA) inverse rendering methods capable of multi-illumination optimization. For each object, we evaluated the performance of TensoIR under single illumination, multi-illumination, and relighting using novel illuminations. The evaluation results can be found in Tab. 1. Additionally, we include visualizations of the results for a selected number of objects in Fig. 4. As mentioned in the main paper, our dataset provides ground-truth information for the 142 linear polarized LED lights. This allows for the quantitative evaluation of the relighting quality. However, comparing the relighting results directly with the captures without aligning the albedo or light intensity between the two is impractical due to the ambiguity between them in the rendering equation. In practice, we train TensoIR under three different light patterns given their corresponding ground-truth illumination. During the evaluation, we used a different set of ground-truth illumination, along with the learned object's geometry and BRDF, to relight the object. We then compared the relit images with the captures under the new illumination to obtain our relighting evaluation metrics.
Tab. 1 presents the quantitative results of TensoIR's relighting performance on all 64 objects with various materials in our dataset. We used light patterns _009_, _011_, and _013_ for training, and the remaining light patterns for evaluation.
|
2309.11825 | Unambiguous measurement in an unshielded microscale magnetometer with
sensitivity below 1 pT/rHz | Cold atom magnetometers exploit a dense ensemble of quanta with long
coherence times to realise leading sensitivity on the micrometer scale.
Configured as a Ramsey interferometer, a cold atom sensor can approach atom
shot-noise limited precision but suffers from fringe ambiguity, producing gross
errors when the field falls outside a narrow predefined range. We describe how
Hilbert-demodulated optical magnetometry can be realised on cold atom sensors
to provide field measurements both precise and unambiguous. Continuous
reconstruction of the Larmor phase allows us to determine the dc magnetic field
unambiguously in an unshielded environment, as well as measure ac variation of
the field, in a single shot. The ac measurement allows us to characterize, and
then neutralise, line-synchronous magnetic interference, extending
reconstruction times. Using $1.6 \times 10^6$ $^{87}$Rb atoms in a volume of
$(68 \,\mathrm{\mu m})^3$, we measure a test field to be $ 86.0121261(4) \;
\mathrm{\mu T}$ in a single shot, achieving dc sensitivity of 380 fT in a
duration of 1000 ms. Our results demonstrate that Hilbert-demodulated optical
readout yields metrologically-significant sensitivity without the fringe
ambiguity inherent to Ramsey interferometry. | Hamish A. M. Taylor, Christopher C. Bounds, Alex Tritt, L. D. Turner | 2023-09-21T06:56:18Z | http://arxiv.org/abs/2309.11825v2 | # Unambiguous measurement in an unshielded microscale magnetometer with sensitivity below 1 pT/rHz
###### Abstract
Cold atom magnetometers exploit a dense ensemble of quanta with long coherence times to realise leading sensitivity on the micrometer scale. Configured as a Ramsey interferometer, a cold atom sensor can approach atom shot-noise limited precision but suffers from fringe ambiguity, producing gross errors when the field falls outside a narrow pre-defined range. We describe how Hilbert-demodulated optical magnetometry can be realised on cold atom sensors to provide field measurements both precise and unambiguous. Continuous reconstruction of the Larmor phase allows us to determine the dc magnetic field unambiguously in an unshielded environment, as well as measure ac variation of the field, in a single shot. The ac measurement allows us to characterize, and then neutralise, line-synchronous magnetic interference, extending reconstruction times. Using \(1.6\times 10^{6}\)\({}^{87}\)Rb atoms in a volume of \((68\,\mu\mathrm{m})^{3}\), we measure a test field to be \(86.3031807(2)\,\mu\mathrm{T}\) in a single shot, achieving dc sensitivity of \(235\,\mathrm{fT}\). Our results demonstrate that Hilbert-demodulated optical readout yields metrologically-significant sensitivity without the fringe ambiguity inherent to Ramsey interferometry.
A full treatment of the relevant couplings is given in Appendix A, but we will lay out an illustrative two-level system here. Choosing the quantization axis to be along the magnetic field \(B(t)\hat{z}\), the Zeeman Hamiltonian is \(\hat{H}(t)=\gamma B(t)\frac{\hbar}{2}\hat{\sigma}_{z}\), where \(\hat{\sigma}_{z}\) is the Pauli-Z operator. Considering the time-dependent Schrödinger equation, \(i\hbar\partial_{t}\psi=\hat{H}(t)\psi\), we may in general find the solution \(\psi(t)\) through the Magnus expansion [21], \(\psi(t)=\exp(\hat{\Omega}(t))\psi(0)\), where \(\hat{\Omega}(t)\) is given by
\[\hat{\Omega}(t)=\frac{1}{i\hbar}\int_{0}^{t}\hat{H}(t_{1})dt_{1}\\ +\frac{1}{2(i\hbar)^{2}}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\left[\hat{H}(t_{1}),\hat{H}(t_{2})\right]+... \tag{1}\]
Of note is the fact that all terms but the first are zero for any Hamiltonian which commutes with itself at all t. For a spin-half system subject to the previously established Hamiltonian, the spin thus evolves as
\[\psi(t)=\begin{bmatrix}\exp\left(\frac{-i}{2}\int_{0}^{t}\omega(\tau)d\tau \right)&0\\ 0&\exp\left(\frac{i}{2}\int_{0}^{t}\omega(\tau)d\tau\right)\end{bmatrix}\psi(0), \tag{2}\]
where we define the Larmor frequency \(\omega(t)=\gamma B(t)\). The result of this evolution is the creation of a phase difference between the spin populations, the Larmor phase, given by
\[\phi(t)=\int_{0}^{t}\omega(\tau)d\tau. \tag{3}\]
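For concreteness, the short sketch below (Python; the \({}^{87}\)Rb gyromagnetic ratio and noise level are assumed, illustrative values) accumulates the Larmor phase of Eq. (3) for a noisy field sampled at a finite rate and recovers the mean field from the phase slope. It illustrates the phase-field relation only, not the experimental pipeline described later.

```python
import numpy as np

# Minimal sketch: accumulate the Larmor phase of Eq. (3) for a noisy field and
# recover the mean field from the phase slope. All values are illustrative.
rng = np.random.default_rng(0)
gamma = 2 * np.pi * 7.02369e9          # assumed 87Rb gyromagnetic ratio, rad s^-1 T^-1
fs, tau = 5e6, 0.1                     # sample rate (Sa/s) and interrogation time (s)
t = np.arange(0, tau, 1 / fs)

s_B = 100e-12                          # field noise ASD, T/sqrt(Hz) (assumed)
B = 86.3e-6 + rng.normal(0, s_B * np.sqrt(fs / 2), t.size)   # band-limited white noise

phi = np.cumsum(gamma * B) / fs        # Larmor phase, Eq. (3), by discrete integration
B_est = np.polyfit(t, phi, 1)[0] / gamma   # mean field from the phase slope
print(f"recovered mean field: {B_est * 1e6:.6f} uT")
```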
In a Ramsey measurement, after some duration \(T\) of free evolution, the final phase is inferred by a rotation and projection of the spin onto the quantization axis, yielding \(\phi(T)=\arcsin(\langle\hat{\sigma}_{z}(T)\rangle)\). In the case of a linearly sensitive Ramsey measurement, in which the readout rotation is in quadrature with the initial rotation to maximize sensitivity to small field changes [22], the Ramsey phase exhibits wrapping for \(\left|\phi\right|>\frac{\pi}{2}\), and as such the measured \(\langle\hat{\sigma}_{z}\rangle\) no longer specifies the correct \(\phi\). Any such measurement where this phase cannot be constrained necessarily produces an ambiguous estimate [23].
In this work, we consider a stationary field \(B(t)=B_{0}+\epsilon_{B}(t)\), where \(\epsilon_{B}(t)\) is an additive white Gaussian noise process with variance \(\sigma_{B}^{2}\) and power spectral density \(S_{\mathrm{BB}}\), and \(B_{0}\) is the static field magnitude we seek to measure. For any finite measurement time, the average \(\bar{B}\) of the noise is almost never zero. Considering a measurement over an interrogation time \(\tau\) as a finite sample from this distribution, this average \(\bar{B}\) will itself take on a Gaussian distribution centred on zero, with standard deviation
\[\sigma_{\bar{B}}=\sqrt{\frac{S_{\mathrm{BB}}}{2\tau}}. \tag{4}\]
Thus for a Ramsey measurement of duration \(\tau\), the phase shift as a result of the magnetic noise \(\epsilon_{B}(t)\) will take on a Gaussian distribution centered on zero with standard deviation
\[\sigma_{\phi}=\gamma\sqrt{\frac{S_{\mathrm{BB}}\tau}{2}}. \tag{5}\]
Consequently, the noise spectral density of the field imposes a fundamental limit on the Ramsey interrogation time for unambiguous magnetometry, when measuring with an unshielded magnetometer.
From Eq. (5), we establish a critical time for \(n\)-sigma confidence of not exceeding \(\phi=\pi/2\),
\[\tau_{c}(n)=\frac{\pi^{2}}{2n^{2}\gamma^{2}S_{\mathrm{BB}}}. \tag{6}\]
In a typical laboratory noise environment, the noise amplitude spectral density \(s_{B}=\sqrt{S_{\mathrm{BB}}}\) is approximately \(100\) pT\(/\sqrt{\mathrm{Hz}}\)[24], and higher in close proximity to high-current power supplies. A \({}^{87}\)Rb magnetometer in such an environment has 2-sigma critical time \(\tau_{c}(2)=64\) ms, after which more than 5% of Ramsey measurements will fringe hop, leading to gross error of order \(1/\gamma\tau\) in inferring the static field \(B_{0}\). This is not merely the inherent non-linearity of a sinusoid: it quickly invalidates central-value estimation even from an arbitrarily large number of measurements. Considering that the coherence time of ultracold atomic magnetometers can exceed several seconds [25], this makes Ramsey measurement fundamentally unsuited to unambiguous long-period measurements on such magnetometers in the absence of shielding from external noise. Shielding ultracold atomic magnetometers has proven formidably challenging given optical access requirements and is incompatible with applications of these magnetometers to mapping fields on macroscopic samples with inherent magnetic noise sources. It is this objective of unshielded magnetometry that motivates the development of our continuous phase reconstruction protocol.
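As a quick numeric check of Eq. (6), the snippet below reproduces the quoted critical time from the stated noise density; the gyromagnetic ratio is the value given in Appendix A, and the small difference from the quoted 64 ms is rounding.

```python
import numpy as np

# Eq. (6) evaluated for s_B = 100 pT/sqrt(Hz) and n = 2.
gamma = 2 * np.pi * 7.02369e9          # 87Rb gyromagnetic ratio (Appendix A), rad s^-1 T^-1
S_BB = (100e-12) ** 2                  # T^2/Hz
n = 2
tau_c = np.pi ** 2 / (2 * n ** 2 * gamma ** 2 * S_BB)
print(f"tau_c(2) = {tau_c * 1e3:.0f} ms")   # ~63 ms, consistent with the ~64 ms quoted
```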
## III Apparatus
To realize the protocol outlined in this work, we cool \({}^{87}\)Rb atoms to \(1\,\mu\)K, selectively catching atoms in the \(\left|F,m_{F}\right\rangle=\left|1,-1\right\rangle\) state to produce an optically trapped cloud with a radius of \(68\,\mu\)m containing \(1.6\times 10^{6}\) atoms. Three orthogonal pairs of coils provide control over the local magnetic field. The \(m_{F}\) states are energetically split by a strong axial bias field in an arbitrarily chosen axis, with transitions between these states controlled by resonant rf radiation. Stern-Gerlach measurement of the atomic spin is achieved by releasing the trap, applying a strong magnetic field gradient, and performing time-of-flight absorption imaging of the constituent \(m_{F}\) populations.
The atomic spin may alternatively be continuously measured by means of an off-resonant Faraday probe beam at the "magic wavelength" of \(790.03\) nm [26], focused to a \(150\)\(\mu\)m waist to provide approximately constant illumination intensity across the atomic cloud. Due
to the optical Faraday effect, this probe beam undergoes a polarization rotation proportional to the total spin projection onto the propagation direction, and this rotation is then measured by a balanced polarimeter as shown in Fig. 1. The amplitude of this polarimeter signal depends on probe power, atom count, and beam alignment, and is typically normalised by the maximum recorded value. Typically, we use a bias field of order \(100\,\mu\)T perpendicular to the probe wavevector, maximizing the measurement of the transverse spin, oscillating at the Larmor frequency. The quadratic Zeeman shift is suppressed by the ac Zeeman shift of an off-resonant microwave field detuned from the \(|1,0\rangle\leftrightarrow|2,0\rangle\) clock transition, preventing the periodic decay and revival of the Larmor signal induced by quadratic Zeeman shifts [26]. The resultant polarimeter signal decays exponentially with lifetime \(530\,\)ms and remains detectable well past one second.
In order to perform dc and ac measurement of the local magnetic field, we produce a long-duration Faraday polarimeter recording of the free induction decay (FID) of Larmor precession. The initial spin eigenstate does not evolve, and no polarimeter signal is recorded. The photodetector and probe beam are switched on sequentially to characterize the electronic and optical noise spectra. The spin is then tipped into the transverse plane by a \(13\,\mu\)s duration resonant \(\pi/2\) pulse, initiating precession at Larmor frequency \(\omega(t)\approx 2\pi\times 604\,\)kHz. Any variation of the bias field appears as modulation of the Larmor frequency. While it is possible to estimate the instantaneous Larmor frequency as the numerical derivative of the instantaneous phase of the polarimeter signal, such a time-varying frequency estimate can only be averaged incoherently. In this work, we show that by reconstructing and working with the phase directly, we preserve the coherence of our measurement and exploit the favourable time scaling to achieve superior sensitivity using the long coherence time of our quantum sensor.
## IV Larmor phase reconstruction
We now consider the specific problem of estimating a stationary frequency \(\omega\) in the presence of noise by linear regression to \(\phi(t)=\omega t\)[27]. The reconstruction process begins with the polarimeter voltage signal
\[V(t)=A(t)\sin\left(\phi(t)+\phi_{0}\right)+\epsilon(t), \tag{7}\]
where \(A(t)\) is the instantaneous amplitude, \(\phi(t)\) is the instantaneous Larmor phase, \(\epsilon(t)\) is an additive white gaussian noise process with variance \(\sigma^{2}\) band-limited at the Nyquist frequency corresponding to a sampling rate \(f_{s}\). The reference phase \(\phi_{0}\) accounts for rf, atomic, and electrooptical delays between the local oscillator and digitisation of the polarimeter signal. The SNR at any given time is defined as \(\text{SNR}=A_{\text{rms}}^{2}/\sigma^{2}=A^{2}/2\sigma^{2}\)[27], not to be confused with the SNR defined in terms of minimum detectable spin projection specified in other literature on Faraday polarimetry [28; 26]. The polarimeter signal is then bandpass filtered around the Larmor frequency, giving
\[V_{pb}(t)=A(t)\sin\left(\phi(t)+\phi_{0}\right)+\epsilon_{pb}(t), \tag{8}\]
where \(A(t)\) and \(\phi(t)\) are unchanged under the assumption that the signal power is entirely contained within the passband. As this filtering is applied post-experiment, we can make use of a non-causal zero-phase filter which does not significantly alter the signal phase near the centre of the band. The purpose of this pre-filtering is to reduce the threshold SNR required for achieving the Cramer-Rao Lower Bound (CRLB) for phase-based frequency estimation by least-squares regression [27; 29], as will be described in Sec VII. This has a typical threshold \(\text{SNR}_{\text{thr}}\) of 6 dB. Other estimators have been considered [30; 31; 32], but their computational complexity makes them ill-suited for applications with high sample counts.
Considering a passband with equivalent noise bandwidth \(\Delta f_{\text{pb}}\), the threshold SNR is reduced by \(\Delta\text{SNR}_{\text{thr}}=10\log_{10}(2\Delta f_{\text{pb}}/f_{s})\) dB, and the maximum allowable equivalent noise bandwidth to maintain the CRLB is given by
\[\Delta f_{\text{pb}}=\frac{f_{s}}{2}\cdot\frac{\text{SNR}}{10^{3/5}}. \tag{9}\]
The passband of the filter transmits only a small fraction of the input power, which is overwhelmingly dominated by photon shot noise. The frequency of the Larmor precession can be easily identified in the polarimeter power spectrum by comparison to the pre-tip spectrum outlined in Sec. II, allowing us to correctly center the passband. The band-filtered signal \(V_{\text{pb}}\) is then used to produce an analytic signal representation of the polarimeter measurement,
\[V_{a}(t)=V_{pb}(t)+i\mathcal{H}\left[V_{pb}\right](t), \tag{10}\]
where \(\mathcal{H}\left[\cdot\right]\) represents the Hilbert transform. The details of the Hilbert transform and analytic signal representation are well-presented elsewhere [33]; its utility in this application is the conversion of a real signal with conjugate positive and negative frequency components to a complex-valued signal with only positive frequency components and a well-defined phase. Importantly, so long as the Larmor signal power is entirely captured within the passband, no signal information is lost or distorted in the conversion to the filtered analytic signal representation \(V_{a}(t)\).
The analytic representation is then recast in polar form as \(V_{a}(t)=V_{m}(t)e^{i\phi_{m}(t)}\), where \(V_{m}(t)=|V_{a}(t)|\) is the instantaneous amplitude envelope and \(\phi_{m}(t)=\arg(V_{a}(t))\) is the (wrapped) instantaneous phase of the signal. As our Nyquist frequency well exceeds our Larmor frequency, this phase may be trivially unwrapped using NumPy's unwrap routine [34]. This unwrapped instantaneous phase is a measurement of the relative Larmor phase, and it is the evolution of this phase as governed by Eq. (3) that allows us to perform dc and ac
magnetometry with the system. The measurement and reconstruction workflow of our magnetometer is shown in Fig.1.
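A minimal sketch of this reconstruction chain on a synthetic polarimeter record is given below: zero-phase band-pass filtering, the analytic signal of Eq. (10) via the Hilbert transform, and phase unwrapping. The carrier, modulation, noise level and filter order are illustrative assumptions, not the experimental settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

rng = np.random.default_rng(1)
fs = 5e6                                     # sample rate, Sa/s
t = np.arange(0, 0.05, 1 / fs)               # 50 ms synthetic record
f_L, f_mod, df = 604e3, 50.0, 300.0          # carrier, FM rate, FM deviation (assumed)
phi_true = 2 * np.pi * f_L * t + (df / f_mod) * np.sin(2 * np.pi * f_mod * t)
V = 0.2 * np.sin(phi_true) + rng.normal(0, 1.0, t.size)   # noisy polarimeter trace, Eq. (7)

# zero-phase band-pass around the carrier (5 kHz passband), giving V_pb of Eq. (8)
sos = butter(4, [f_L - 2.5e3, f_L + 2.5e3], btype="bandpass", fs=fs, output="sos")
V_pb = sosfiltfilt(sos, V)

V_a = hilbert(V_pb)                          # analytic signal, Eq. (10)
phi_m = np.unwrap(np.angle(V_a))             # reconstructed (unwrapped) Larmor phase

def detrend(p):
    return p - np.polyval(np.polyfit(t, p, 1), t)

err = (detrend(phi_m) - detrend(phi_true))[5000:-5000]   # drop filter edge effects
print(f"RMS phase-reconstruction error: {np.std(err):.3f} rad")
```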
## V Ac Magnetometry
While our field model thus far has considered only a constant magnetic field with additive white noise, there are additional field contributions that must be considered in real environments. As an unshielded sensor, our apparatus is sensitive to magnetic fields from adjacent electronics. In our laboratory, low-frequency magnetic interference is dominated by line-synchronous oscillations at the power line fundamental of \(50\,\mathrm{Hz}\), and its odd harmonics. In a typical environment, the fundamental component has amplitude of order \(100\,\mathrm{nT}\)[24], with the \(150\,\mathrm{Hz}\) harmonic of order \(30\,\mathrm{nT}\), and higher harmonics correspondingly weaker. A common technique to mitigate this interference is to synchronize the measurement to the line cycle and limit interrogation times to be much shorter than the line period \(\tau\ll 20\,\mathrm{ms}\). This greatly limits the sensitivity of the magnetometer in cases such as ours where the coherence time exceeds one second. When interrogating for \(\tau\gg 20\,\mathrm{ms}\), we resolve this interference as frequency modulation of the Larmor carrier frequency, allowing us to extract interference amplitudes from the phase modulation. The reconstruction of the Larmor phase over a duration of \(320\,\mathrm{ms}\), or \(16\) line cycles, in the unshielded laboratory environment is shown in Fig. 2.
From our \(320\,\mathrm{ms}\) reconstruction, we extract the RMS amplitudes of the \(50\,\mathrm{Hz}\), \(150\,\mathrm{Hz}\), and \(250\,\mathrm{Hz}\) field harmonic components of \(41.92(3)\,\mathrm{nT}\), \(10.88(9)\,\mathrm{nT}\), and \(2.0(1)\,\mathrm{nT}\) respectively by least-squares regression of a harmonic interference model to the phase function. The ac sensitivity is quantified and discussed in Sec. VII. The phase reconstruction duration is limited by the declining polarimeter SNR as scattering of probe light removes atoms from the cloud. In theory, the reconstruction SNR can be improved by a reduction in sensor bandwidth through tightening the passband, initially chosen to be \(5\,\mathrm{kHz}\); however, there exists a minimum bandwidth below which we impinge on the signal band. The amplitude of the harmonic components determines the frequency deviation of the Larmor signal, and sets this minimum bandwidth, estimated at \(2.5\,\mathrm{kHz}\) peak-to-peak on previous work with this apparatus [35]. Strong field fluctuations lead to large variations in Larmor frequency, requiring a wide pass bandwidth admitting more photon shot noise, and thus bringing forward the time by which the polarimeter SNR is insufficient for accurate phase reconstruction.
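The snippet below sketches this harmonic regression on a synthetic phase record: a linear-plus-harmonics design matrix is fitted by least squares and the phase-modulation amplitudes are converted back to RMS field amplitudes. The sample rate and harmonic amplitudes are illustrative assumptions.

```python
import numpy as np

gamma = 2 * np.pi * 7.02369e9                  # assumed 87Rb gyromagnetic ratio
fs = 100e3                                     # reduced sample rate for a quick illustration
t = np.arange(0, 0.32, 1 / fs)
harmonics = [50.0, 150.0, 250.0]
rms_nT = [41.92, 10.88, 2.0]                   # field harmonics to recover, nT rms

# synthetic reconstructed phase: linear Larmor term plus line-harmonic modulation
phi_m = gamma * 86.3e-6 * t
for f, b in zip(harmonics, rms_nT):
    phi_m += gamma * b * np.sqrt(2) * 1e-9 / (2 * np.pi * f) * np.sin(2 * np.pi * f * t)

# linear-plus-harmonics design matrix, fitted by least squares
cols = [t, np.ones_like(t)]
for f in harmonics:
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
coef, *_ = np.linalg.lstsq(np.column_stack(cols), phi_m, rcond=None)

for i, f in enumerate(harmonics):
    a_phi = np.hypot(coef[2 + 2 * i], coef[3 + 2 * i])   # phase-modulation amplitude, rad
    # a phase modulation of amplitude a at frequency f maps to a field harmonic
    # of RMS amplitude 2*pi*f*a / (gamma*sqrt(2))
    print(f"{f:5.0f} Hz: {2 * np.pi * f * a_phi / (gamma * np.sqrt(2)) * 1e9:.2f} nT rms")
```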
Carson's rule for the spectral support of wideband frequency-modulated signals [36] establishes a minimum bandwidth of \(2.5\,\mathrm{kHz}\) given our measured amplitudes of field harmonics. The corresponding minimum SNR for least-squares phase estimation as defined by (9) is \(-22\,\mathrm{dB}\). This limits the reconstruction duration in the case of the measurement shown in Fig. 2 to approximately
Figure 1: Larmor precession measurement (top) and Larmor phase reconstruction (bottom): The Faraday spin-light interface (top) couples Larmor precession in field \(B_{0}\) to polarization rotation of a far-detuned probe beam focused onto the atoms. A balanced polarimeter (half-wave plate, Wollaston prism and differential photodetector) transduces the Larmor signal to a voltage \(V(t)\) digitized at \(5\,\mathrm{MSa/s}\) and \(16\) bits (bottom). The analytic signal representation \(V_{a}(t)\) is formed using the Hilbert transform \(\mathcal{H}\), the unwrapped argument of which is the phase \(\phi_{m}(t)\), our estimate of the Larmor phase \(\phi_{L}(t)\).
Figure 2: Continuous spin measurement: the polarimeter signal, a frequency-modulated sinusoid as seen in its spectrogram (a) is transformed into an analytic signal representation via the Hilbert transform. The phase function (b) extracted from this is shown with the linear term removed to make plain the frequency modulation. The full-bandwidth SNR of the polarimeter signal (c) decays exponentially over the reconstruction time of \(320\,\mathrm{ms}\), due to off-resonant scattering and spin dephasing.
\(500\,\mathrm{ms}\) before we can no longer achieve the CRLB through least-squares estimation. Additionally, once the SNR has fallen below this threshold, phase reconstruction may fail entirely due to phase-unwrapping errors, leading to discontinuities in the reconstruction. For this reason, it is imperative to reduce frequency modulation from magnetic interference to allow for the narrowest possible passband, and hence the longest possible measurement time.
## VI Feed-forward noise cancellation
As magnetic interference from electrical equipment is a common problem in metrology experiments, there already exists a body of literature discussing methods of suppressing it. These range from simple feed-forward control cancelling periodic noise using prior field recordings [37, 6] to complex feedback mechanisms combining measurements from several secondary sensors in the vicinity of the primary sensor [38, 39]. While feedback control is effective against periodic _and_ non-periodic noise, the aforementioned secondary sensor methods necessarily measure the field at some distance from the atoms, and struggle to neutralise the spatially-varying interference arising from multiple sources. On the other hand, using the atomic sensor as the input for feedback control [40] necessarily sacrifices limited quantum resources. While capable of providing heavy suppression of ac magnetic interference in stable environments, feed-forward noise cancellation in ultracold atoms to date has relied on performing a long series of calibration measurements [41, 6], making the system vulnerable to short-term drifts in interference amplitude and phase.
Acknowledging that the dominant contributions to the modulation are line-synchronous, we may use our measurement of the local magnetic environment to perform feed-forward noise cancellation by producing a complementary ac field with external control coils. This ac field is provided by small, single-turn shim coils placed coaxially with the bias coils. The amplitudes and phases of the modulation terms, extracted from the reconstructed phase evolution in our unshielded atomic cloud (Fig. 2(b)) calibrate the noise cancellation. Figure 3 shows the polarimeter signal, reconstructed phase, and SNR but now with feed-forward noise cancellation enabled, over almost one second of interrogation time. In comparison with Fig. 2, the line harmonics are now imperceptible in (a), and indeed there is no visible harmonic fluctuation in the retreived phase (b). This suppression of line harmonic modulation allows the pass bandwidth to be tightened from \(5\,\mathrm{kHz}\) to \(500\,\mathrm{Hz}\), permitting the reconstruction time to be extended out to the full \(984\,\mathrm{ms}\) while maintaining sufficient SNR to achieve the CRLB, as shown in (c). As a result, the phase retrieved from the narrower filter (black trace in (b)) shows no sign of phase unwrapping errors across the full reconstruction time, whereas the phase retrieved from the original filter (gray trace) manifests a series of such errors once the SNR falls below threshold beyond approximately \(700\,\mathrm{ms}\).
The efficacy of feed-forward noise cancellation depends almost entirely on the stability of the local magnetic environment. Changes in the power draw of high-current devices (such as motors, amplifiers, and HVAC equipment) cause shifts in the phase and amplitude of the line harmonics, leading to reduced noise cancellation until a new ac magnetometry calibration measurement is performed. The measurement and reconstruction process, from the beginning of trap loading to completion of post-processing, takes less than one minute, allowing rapid recalibration of noise cancellation in the event of a change in the magnetic environment.
We quantify the field stability by analysing the magnetic power spectral density \(S_{\mathrm{BB}}(f)\) in the sub-\(300\,\mathrm{Hz}\) band, estimated by periodogram of the derivative of the Larmor phase as per Eq. (3). Additionally, we define the equivalent RMS magnetic noise amplitude
\[\delta B_{\mathrm{rms}}=\sqrt{\int_{0}^{f_{\mathrm{max}}}S_{\mathrm{BB}}\,df}, \tag{11}\]
which for the laboratory field is measured to be \(44.4\,\mathrm{nT}\) for an \(f_{\mathrm{max}}\) of \(300\,\mathrm{Hz}\). Feed-forward noise cancellation reduces this by a full order of magnitude to \(\delta B_{\mathrm{rms}}=4.4\,\mathrm{nT}\), representing a \(20.1\,\mathrm{dB}\) reduction in interference in the
Figure 3: Continuous spin measurement with feed-forward noise cancellation: the spectrogram (a) now exhibits a stable carrier frequency without visible frequency modulation. The phase function residuals (middle) demonstrate random walk behaviour with discontinuities appearing at later times in the reconstruction due to phase-wrapping errors when using the wider bandwidth of \(5\,\mathrm{kHz}\) (grey), but successful reconstruction at the tighter \(500\,\mathrm{Hz}\) bandwidth (black). Frequency measurement by phase estimation achieves the Cramer-Rao bound so long as the SNR remains above the threshold for the respective bandwidths (bottom).
sub-300 Hz band; this greatly narrows the bandwidth occupied by the Larmor signal. Figure 4 shows the magnetic noise power spectral density \(S_{\rm BB}\), confirming that the laboratory environment is dominated by the 50, 150, and 250 Hz line interference harmonics.
Making use of feed-forward noise cancellation, the line-synchronous interference is suppressed to become indistinguishable from the noise background of \(250\,\mathrm{pT}/\sqrt{\mathrm{Hz}}\). This magnetic noise spectral density is marginally higher than values reported as typical [24]. The \(2\sigma\) critical time predicted by Eq. (6) for this magnetic noise spectral density is only \(10\,\mathrm{ms}\), limiting Ramsey measurement duration to a small fraction of the Zeeman coherence time of our ultracold atomic sensing platform.
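A minimal sketch of Eqs. (3) and (11) on synthetic data is shown below: the instantaneous field is the scaled derivative of the Larmor phase, its one-sided PSD is estimated by a periodogram, and the equivalent RMS noise below \(300\,\mathrm{Hz}\) follows by integrating the PSD. The white-noise floor and residual line amplitude are assumed values.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
gamma = 2 * np.pi * 7.02369e9                  # assumed 87Rb gyromagnetic ratio
fs, dur = 100e3, 1.0                           # reduced sample rate for a quick illustration
t = np.arange(0, dur, 1 / fs)

# synthetic field: white noise floor plus a 50 Hz line (assumed values)
B = (86.3e-6
     + 250e-12 * np.sqrt(fs / 2) * rng.normal(size=t.size)
     + 42e-9 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t))

phi = np.cumsum(gamma * B) / fs                # Larmor phase, Eq. (3)
B_rec = np.gradient(phi, 1 / fs) / gamma       # recovered instantaneous field
f, S_BB = periodogram(B_rec, fs=fs)            # one-sided PSD (mean removed by default)

mask = f <= 300.0
dB_rms = np.sqrt(np.sum(S_BB[mask]) * (f[1] - f[0]))   # Eq. (11), rectangle rule
print(f"equivalent RMS field noise below 300 Hz: {dB_rms * 1e9:.1f} nT")
```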
## VII Phase retrieval dc magnetometry
We realise dc magnetometry by performing a Faraday polarimetry measurement of the FID under feed-forward noise cancellation. The Larmor phase is then reconstructed using the same method as described in Sec. III. With the nearly-complete removal of the line-synchronous modulation, it is expected that the Larmor phase will increase linearly, and as such we now perform a least-squares regression of the reconstructed phase function \(\phi_{m}(t)\) to the linear model \(\phi_{m}(t)=\gamma B_{\rm est}t+\phi_{\rm est}\). In the case of a truly static magnetic field, the phase residual is simply the white Gaussian photon and atom shot noise \(\epsilon_{pb}(t)\) of the polarimeter voltage signal \(V_{pb}(t)\) imputed as phase noise. However, in the case of an unshielded apparatus with appreciable fluctuations in the magnetic field, the phase noise has a colored spectrum, as shown in Fig. 5. This phase power spectral density can be decomposed into two terms, a colored noise term reflecting underlying noise in the measured magnetic field, and a white noise term as a result of imputing photon and atom shot noise as phase fluctuations, with the crossover frequency found at 600 Hz.
As shown in Fig. 4, the magnetic frequency noise spectrum is approximately white under feed-forward noise cancellation, corresponding to the red noise in the phase power spectrum. Considering this, we define the model
\[S_{\phi\phi}(f)=\frac{\gamma^{2}S_{\rm BB}}{4\pi^{2}f^{2}}+S_{\rm shot}, \tag{12}\]
where \(S_{\rm BB}\) is the magnetic power spectral density as defined previously, and \(S_{\rm shot}=1/(f_{s}\mathrm{SNR})\) as defined in Appendix B. Thus, the phase noise power asymptotically increases as the Fourier limit frequency tends to zero, and would be unbounded in the limit \(\tau\rightarrow\infty\). As interrogation time increases, commensurately more low-frequency noise contributes to the phase noise, becoming the dominant contribution beyond \(\tau\gg 1\,\mathrm{second}\). The sensitivity of the field estimate produced by this reconstruction is fundamentally limited by the regression error, resulting in a sensitivity
\[\delta B\sqrt{T}=\frac{2\,\delta\phi}{\gamma\tau}\sqrt{\frac{3}{f_{s}}}, \tag{13}\]
where \(\delta\phi\) is the RMS amplitude of the phase residuals; the derivation of this expression is shown in Appendix B. Considering the noise spectra shown in Fig. 5 with a measurement duration of 1 second and associated Fourier limit frequency of \(1\,\mathrm{Hz}\), the majority of the total phase
Figure 5: Phase power spectral density: the power spectrum of the Larmor phase residuals under feed-forward noise cancellation. Above the corner frequency of 600 Hz, the noise spectrum is white as a result of photon shot noise. Below this frequency, the noise floor is dominated by red phase noise due to approximately white environmental magnetic field noise.
Figure 4: Feed-forward noise cancellation eliminates line harmonics from the magnetic noise spectrum: the magnetic power spectral density is reconstructed from single-shot Faraday polarimeter signals with (black) and without (red) feed-forward noise cancellation. The Fourier-limited resolution is \(5\,\mathrm{Hz}\).
noise power is a result of detector noise, and as such we consider the detector noise limited regime identified in Appendix B. In this limit, in terms of SNR,
\[\delta B\sqrt{T}=\frac{1}{\gamma\tau}\sqrt{\frac{6}{f_{s}\,\mathrm{SNR}}}. \tag{14}\]
With the performance of our magnetometer characterised, we now perform precise dc magnetometry by continuous phase reconstruction under feed-forward noise cancellation. For our longest phase function reconstruction, spanning a sensing duration of \(\tau_{s}=984\) ms, we measure a field of \(86.3031807(2)\)\(\mu\)T, with an estimation-limited sensitivity of \(\delta B\sqrt{T}=235\)\(\mathrm{fT}/\sqrt{\mathrm{Hz}}\), and a detector-limited sensitivity of \(\delta B\sqrt{T}=180\)\(\mathrm{fT}/\sqrt{\mathrm{Hz}}\) calculated from Eq. (14) using the average SNR of -17.1 dB, in a sensing volume of 310 \(\mathrm{pL}\).
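As a consistency check of Eq. (14), the snippet below evaluates the detector-limited sensitivity from the sensing duration and average SNR quoted above; the gyromagnetic ratio is the assumed value from Appendix A.

```python
import numpy as np

gamma = 2 * np.pi * 7.02369e9      # assumed 87Rb gyromagnetic ratio, rad s^-1 T^-1
fs, tau = 5e6, 0.984               # sample rate and sensing duration from the text
snr = 10 ** (-17.1 / 10)           # average SNR of -17.1 dB
dB_sqrtT = np.sqrt(6 / (fs * snr)) / (gamma * tau)
print(f"{dB_sqrtT * 1e15:.0f} fT/sqrt(Hz)")   # ~180 fT/sqrt(Hz), Eq. (14)
```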
With regards to ac sensing, the ac sensitivity can also be characterised by imputing an equivalent magnetic noise for a given amplitude of phase noise [20]. Considering this, we find the expression
\[\delta B_{ac}\sqrt{T}=\frac{2\pi f}{\gamma\sqrt{f_{s}\,\mathrm{SNR}}}. \tag{15}\]
Using the peak SNR of \(-11.1\,\mathrm{dB}\) achieved for measurement time \(\tau<10\,\mathrm{ms}\), this corresponds to a peak ac sensitivity of \(\delta B_{ac}\sqrt{T}=230\ \times\ f\,\mathrm{fT}/\sqrt{\mathrm{Hz}}\). Naturally, the pass bandwidth must exceed \(2f\), and as the probability of phase reconstruction errors increases with pass bandwidth, this establishes a maximum baseband sensing bandwidth, for our parameters approximately 25 kHz. Importantly, unlike dc sensitivity, the ac sensitivity of this sensor does not scale with increased measurement time.
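For the quoted peak SNR, Eq. (9) fixes the widest passband that still attains the CRLB, and hence the maximum baseband sensing bandwidth; a quick check with the values quoted above:

```python
# Eq. (9): widest CRLB-attaining passband at the peak SNR; baseband bandwidth is half of it.
fs = 5e6
snr_peak = 10 ** (-11.1 / 10)                  # peak SNR of -11.1 dB
df_pb = (fs / 2) * snr_peak / 10 ** (3 / 5)
print(f"max baseband bandwidth ~ {df_pb / 2e3:.0f} kHz")   # ~24-25 kHz
```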
## VIII Conclusion
We have developed a continuous phase reconstruction protocol on a microscale ultracold atomic sensor allowing precise, unambiguous dc measurement and ac measurement of magnetic field in a noisy environment. Measuring unshielded in Earth's field, the magnetometer resolves nine significant figures of the dc field magnitude in a single shot, undeterred by field drifts of up to several microtesla between shots. Additionally, we have demonstrated suppression of local magnetic interference by over 20 dB by feed-forward control. This sensor fulfills the requirements of rapid, calibration-free, unambiguous and unshielded magnetometry needed to realise magnetic micro-imaging of electrophysiological function, surface condensed-matter physics, and chemical structure on the cellular scale.
## IX Acknowledgements
This work was supported by an Australian Government Research Training Program (RTP) scholarship, and funded by the Australian Research Council under Linkage Project number LP200100082.
## Appendix A Spin-1 dynamics with microwave coupling
As outlined in Sec. II, our alkali spinor is a three-level spin-1 system with free evolution governed by the diagonal Hamiltonian \(\hat{H}(t)\). Owing to the precision of the measurement performed here, we consider this system in full generality using the Breit-Rabi equation for Zeeman energy eigenvalues in \(J=1/2\) states [42], given by
\[E_{F,m}(B)=\frac{-E_{\mathrm{hfs}}}{2(2I+1)}+g_{I}\mu_{B}mB\\ \pm\frac{E_{\mathrm{hfs}}}{2}\sqrt{1+\frac{4mx}{2I+1}+x^{2}}, \tag{10}\]
where \(x=((g_{J}-g_{I})\mu_{B}B)/E_{\mathrm{hfs}}\) is the dimensionless magnetic field magnitude, and \(F=I\pm J\) defines the signs, so that the minus is adopted for our \(F=1\) manifold, for \(I=3/2\) in \({}^{87}\)Rb. In the above equations, \(\mu_{B}\) is the Bohr magneton, \(E_{\mathrm{hfs}}\) is the ground-state hyperfine splitting, \(g_{J}\) and \(g_{I}\) are the fine-structure and nuclear Landé factors, respectively, and \(m\) is the magnetic quantum number. In the absence of any other couplings, this results in an \(F=1\) manifold Hamiltonian given by
\[\hat{H}=\begin{bmatrix}E_{1,+1}(B)&0&0\\ 0&E_{1,0}(B)&0\\ 0&0&E_{1,-1}(B)\end{bmatrix}, \tag{11}\]
where henceforth we suppress explicit time dependence. As the Hamiltonian is diagonal and thus commutes with itself at all times, we may define a Larmor frequency from the Magnus expansion as done in Sec. II, giving
\[\omega(t)=\frac{E_{1,+1}(B)-E_{1,-1}(B)}{2\hbar}, \tag{12}\]
and additionally define a quadratic Zeeman shift
\[q(t)=\frac{E_{1,+1}(B)+E_{1,-1}(B)-2E_{1,0}(B)}{2\hbar}. \tag{13}\]
Typically, the Breit-Rabi equation is only expanded to second order when considering weak-field dynamics, leading to
\[\omega=\frac{(5g_{I}-g_{J})\mu_{B}}{4\hbar}B+\mathcal{O}(B^{3})\approx\gamma_{0}B, \tag{14}\]
and
\[q=\frac{(g_{I}-g_{J})^{2}\pi\mu_{B}^{2}}{8E_{\mathrm{hfs}}\hbar^{2}}B^{2}+\mathcal{O}(B^{4})\approx q_{0}B^{2}, \tag{15}\]
where \(\gamma_{0}=2\pi\times(-7.02369)\,\mathrm{GHz/T}\) and \(q_{0}=2\pi\times 7.189\,\mathrm{GHz/T^{2}}\) are the zero-field gyromagnetic ratio and quadratic Zeeman shift coefficient, respectively, using known values [43]. Outside the low-field limit, or where high precision is required, the cubic term in Eq. (14) can also be included, resulting in a \(c_{0}=44.24\,\mathrm{GHz/T^{3}}\) shift to the Larmor frequency. If this term is included, we may no longer decompose the Larmor frequency in terms of a constant gyromagnetic ratio \(\gamma_{0}\). However, as the Larmor frequency remains a monotonic function of field, we may instead define a 'running' gyromagnetic ratio \(\gamma(B)\), defined such that
\[\omega(t)=\gamma(B)\times B. \tag{16}\]
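A minimal sketch of these expressions is given below. The \({}^{87}\)Rb constants (\(E_{\mathrm{hfs}}\), \(g_{J}\), \(g_{I}\)) are assumed standard reference values rather than numbers stated in this work, so the last digits of the printed results should not be read as authoritative.

```python
import numpy as np

h = 6.62607015e-34                      # Planck constant, J s
hbar = h / (2 * np.pi)
mu_B = 9.2740100783e-24                 # Bohr magneton, J/T
E_hfs = h * 6.834682610904e9            # 87Rb ground-state hyperfine splitting (assumed)
g_J, g_I, I = 2.00233113, -0.0009951414, 1.5   # assumed standard reference values

def breit_rabi(F, m, B):
    """Breit-Rabi energy of |F, m>; the minus branch applies to F = I - 1/2 = 1."""
    x = (g_J - g_I) * mu_B * B / E_hfs
    sign = 1.0 if F == I + 0.5 else -1.0
    return (-E_hfs / (2 * (2 * I + 1)) + g_I * mu_B * m * B
            + sign * 0.5 * E_hfs * np.sqrt(1 + 4 * m * x / (2 * I + 1) + x ** 2))

def larmor(B):
    """F = 1 Larmor angular frequency from the m = +1 and m = -1 energy difference."""
    return (breit_rabi(1, +1, B) - breit_rabi(1, -1, B)) / (2 * hbar)

B0 = 86.3031807e-6
w = larmor(B0)
print(f"|omega|/2pi = {abs(w) / (2 * np.pi) / 1e3:.0f} kHz")   # ~0.6 MHz at this field
print(f"running |gamma|/2pi = {abs(w / B0) / (2 * np.pi) / 1e9:.2f} GHz/T")   # ~7.02 GHz/T
```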
During 'free' evolution of the spin, it interacts with two off-resonant radiation fields: the Faraday probe and the microwave driving. The scalar, vector, and tensor light-shifts as a result of the Faraday probe can be made arbitrarily small through use of a'magic-zero wavelength' and precise control of polarisation [26], leaving the off-resonant microwave coupling as the only additional term to consider in the Hamiltonian. A microwave source detuned from the \(|1,0\rangle\) and \(|2,0\rangle\) states cancels the quadratic Zeeman shift by inducing an ac Zeeman shift given by
\[q_{mw,0}=-\frac{\Omega_{\mathrm{mw}}^{2}}{4\Delta_{\mathrm{mw}}}, \tag{17}\]
where \(\Omega_{\mathrm{mw}}\approx 2\pi\times 2\,\mathrm{kHz}\) and \(\Delta_{\mathrm{mw}}\approx 2\pi\times 150\,\mathrm{kHz}\) are the microwave Rabi frequency, and detuning from the clock transition, respectively. In the limit \(\omega\gg\Delta_{mw}\), this shift is only substantial for the \(|1,0\rangle\leftrightarrow|2,0\rangle\) transition, however when we consider fields of order \(100\,\mu\mathrm{T}\), the Zeeman shift is of the same order as the detuning, resulting in appreciable ac Zeeman shifts for the \(m=\pm 1\) states. These shifts are given by
\[q_{mw,\pm 1}=-\frac{\Omega_{\mathrm{mw}}^{2}}{4\left(\Delta_{\mathrm{mw}}-(E_{2,\pm 1}-E_{2,0})/\hbar+(E_{1,\pm 1}-E_{1,0})/\hbar\right)}, \tag{18}\]
with shifts on the \(\pm 1\) states differing in both magnitude and direction. This leads to additional non-linear terms in \(E_{F,m}(B)\), which must be included at the field magnitude and desired measurement precision in this work. As such, we define new eigenenergies including the microwave shifts, by
\[E_{F,m}^{\prime}(B)=E_{F,m}(B)+q_{\mathrm{mw},m}, \tag{19}\]
and recompute the Larmor frequency
\[\omega(t)=\frac{E_{1,+1}^{\prime}(B)-E_{1,-1}^{\prime}(B)}{2\hbar}, \tag{20}\]
and similarly the quadratic Zeeman shift
\[q(t)=\frac{E_{1,+1}^{\prime}(B)+E_{1,-1}^{\prime}(B)-2E_{1,0}^{\prime}(B)}{2 \hbar}. \tag{21}\]
Away from the resonance \(\Delta_{\mathrm{mw}}=\omega\), the Larmor frequency is a monotonic function of B, allowing us to take the decomposition
\[\omega(t)=\gamma_{\Omega,\Delta}(B)\times B, \tag{22}\]
where \(\gamma_{\Omega,\Delta}(B)\) is the new running gyromagnetic ratio, which explicitly depends on the parameters of the microwave coupling as well as the field. The microwave frequency is locked to a precision reference and contributes negligible error; the Rabi frequency at the atoms is determined experimentally by nulling the total quadratic shift at a given \(B\) through a process of iterative optimisation.
Thus, with known \(\Omega_{\text{mw}}\), \(\Delta_{\text{mw}}\), and relevant atomic parameters, frequency estimation can provide accurate field estimation at precisions which require inclusion of the non-linear elements of the Zeeman splitting. The running gyromagnetic ratio depends only weakly on B, and thus the decomposition used in Eq. (16) illustrates the near-linear relation between field and Larmor frequency.
## Appendix B Sensitivity
In order to quantify the dc sensitivity of a regression error-limited field estimate from phase reconstruction, we define the sensitivity as
\[\delta B\sqrt{T}=\frac{\sigma_{\omega}\sqrt{\tau}}{\gamma}, \tag{17}\]
where \(\sigma_{\omega}\) is the standard error in the angular frequency estimate retrieved from least-squares regression of a time series \([t_{i},\,\phi_{i}]\) to a linear model over a sensing time \(\tau\). The standard error in such an estimate is a well-established result in regression theory, given by
\[\sigma_{\omega}=\sqrt{\frac{\sigma_{r}^{2}}{\sum_{i=0}^{N}(t_{i}-\bar{t})^{2}}}, \tag{18}\]
with the standard deviation about the regression \(\sigma_{r}\) equal to
\[\sigma_{r}=\sqrt{\frac{\sum_{i=0}^{N}(\phi_{i}-\hat{\phi}_{i})^{2}}{N-2}}, \tag{19}\]
where \(\hat{\phi}_{i}\) is the regression estimate for \(\phi_{i}\). The factor \(N-2\) is present due to the loss of two statistical degrees of freedom. In the limit of large samples,
\[\lim_{N\rightarrow\infty}\sigma_{r}=\delta\phi. \tag{20}\]
Additionally in this limit, we may take the continuum limit of the sum over \(t_{i}\), giving
\[\sum_{i=0}^{N}(t_{i}-\bar{t})^{2}=f_{s}\int_{0}^{\tau}(t-\frac{\tau}{2})^{2}dt =f_{s}\frac{\tau^{3}}{12}. \tag{21}\]
We may now rewrite Eq. (18) as
\[\sigma_{\omega}=\frac{2\,\delta\phi}{\tau^{\frac{3}{2}}}\sqrt{\frac{3}{f_{s}}}, \tag{22}\]
and equivalently rewrite Eq. (17) as
\[\delta B\sqrt{T}=\frac{2\,\delta\phi}{\gamma\tau}\sqrt{\frac{3}{f_{s}}}. \tag{23}\]
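A quick Monte Carlo check of Eqs. (18) and (22) with toy values is shown below: the empirical spread of fitted slopes matches the closed-form standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, tau, dphi, omega = 1e4, 0.5, 0.3, 2 * np.pi * 100.0    # toy values
t = np.arange(0, tau, 1 / fs)

slopes = [np.polyfit(t, omega * t + rng.normal(0, dphi, t.size), 1)[0]
          for _ in range(400)]
print(f"empirical sigma_omega: {np.std(slopes):.4f} rad/s")
print(f"Eq. (22) prediction:   {2 * dphi / tau**1.5 * np.sqrt(3 / fs):.4f} rad/s")
```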
The residual phase variance \(\delta\phi^{2}\) is the sum of variance arising from shot noise in the detector and variance as a result of noise in the actual field,
\[\delta\phi^{2}=\delta\phi_{\text{shot}}^{2}+\delta\phi_{\text{field}}^{2}. \tag{24}\]
As previously discussed, shot noise is imputed as additive white Gaussian noise in our phase reconstruction, \(\delta\phi_{\text{shot}}^{2}=1/(2\text{ SNR})\)[27], while the field phase variance is given by
\[\delta\phi_{\text{field}}^{2}=\int_{\frac{1}{\tau}}^{\infty}\frac{\gamma^{2}S_ {\text{BB}}}{4\pi^{2}f^{2}}df=\frac{\gamma^{2}S_{\text{BB}}\tau}{4\pi^{2}}. \tag{25}\]
Thus,
\[\delta\phi=\sqrt{\frac{1}{2\text{ SNR}}+\frac{\gamma^{2}S_{\text{BB}}\tau}{4 \pi^{2}}}. \tag{26}\]
In the detector noise-limited regime, where \(\delta\phi_{\text{shot}}^{2}\gg\delta\phi_{\text{field}}^{2}\), this reduces to
\[\delta\phi=\sqrt{\frac{1}{2\text{ SNR}}}. \tag{27}\]
Combining this with Eq. (23), we find our governing intrinsic sensitivity equation
\[\delta B\sqrt{T}=\frac{1}{\gamma\tau}\sqrt{\frac{6}{f_{s}\text{ SNR}}}. \tag{28}\]
It can additionally be shown that this achieves the Cramer-Rao bound for phase-based frequency estimation. The Cramer-Rao bound on the variance of the unbiased estimator of angular frequency is [27]
\[\sigma_{\omega}^{2}=\frac{6}{\text{SNR}\,N(N^{2}-1)}\ \left(\frac{\text{ rad}}{\text{sample}}\right)^{2}, \tag{29}\]
which in the large sample limit is
\[\sigma_{\omega}^{2}=\frac{6f_{s}^{2}}{\text{SNR}\ N^{3}}. \tag{30}\]
Putting this in terms of previously defined quantities using \(N=f_{s}\tau\) and \(\omega=\gamma B\), this becomes
\[\sigma_{\hat{B}}^{2}=\frac{6}{\gamma^{2}f_{s}\text{ SNR }\tau^{3}}, \tag{31}\]
or as a sensitivity,
\[\sigma_{\hat{B}}\sqrt{\tau}=\frac{1}{\gamma\tau}\sqrt{\frac{6}{f_{s}\text{ SNR}}}, \tag{32}\]
exactly equivalent to Eq. (28). Such attainment of the Cramer-Rao bound for phase-based frequency estimation, by least-squares regression in the high SNR limit, is an established result in information theory [27], shown here for completeness. |
2309.14785 | Bayesian inference to identify crystalline structures for XRD | Crystalline phase structure is essential for understanding the performance
and properties of a material. Therefore, this study identified and quantified
the crystalline phase structure of a sample based on the diffraction pattern
observed when the crystalline sample was irradiated with electromagnetic waves
such as X-rays. Conventional analysis necessitates experienced and
knowledgeable researchers to shorten the list from many candidate crystalline
phase structures. However, conventional diffraction pattern analysis is
highly analyst-dependent and not objective. Additionally, there is no
established method for discussing the confidence intervals of the analysis
results. Thus, this study aimed to establish a method for automatically
inferring crystalline phase structures from diffraction patterns using Bayesian
inference. Our method successfully identified true crystalline phase structures
with a high probability from 50 candidate crystalline phase structures.
Further, the mixing ratios of selected crystalline phase structures were
estimated with a high degree of accuracy. This study provided reasonable
results for well-crystallized samples that clearly identified the crystalline
phase structures. | Ryo Murakami, Yoshitaka Matsushita, Kenji Nagata, Hayaru Shouno, Hideki Yoshikawa | 2023-09-26T09:33:41Z | http://arxiv.org/abs/2309.14785v1 | # Bayesian inference to identify crystalline structures for XRD
###### Abstract
Crystalline phase structure is essential for understanding the performance and properties of a material. Therefore, this study identified and quantified the crystalline phase structure of a sample based on the diffraction pattern observed when the crystalline sample was irradiated with electromagnetic waves such as X-rays. Conventional analysis necessitates experienced and knowledgeable researchers to shorten the list from many candidate crystalline phase structures. However, conventional diffraction pattern analysis is highly analyst-dependent and not objective. Additionally, there is no established method for discussing the confidence intervals of the analysis results. Thus, this study aimed to establish a method for automatically inferring crystalline phase structures from diffraction patterns using Bayesian inference. Our method successfully identified true crystalline phase structures with a high probability from 50 candidate crystalline phase structures. Further, the mixing ratios of selected crystalline phase structures were estimated with a high degree of accuracy. This study provided reasonable results for well-crystallized samples that clearly identified the crystalline phase structures.
ARTICLE TEMPLATE
X-ray diffraction, Bayesian inference, model selection, automatic spectral analysis, replica exchange Monte Carlo method
## 1 Introduction
Crystalline phase structure is essential for understanding the performance and properties of a material. Therefore, this study identified and quantified the crystalline phase structure of a sample based on the diffraction pattern observed when the crystalline sample was irradiated with electromagnetic waves such as X-rays. The measurement of the diffraction patterns using X-rays as probes is known as X-ray diffraction (XRD). The crystal structure of a material can be understood by analyzing the diffraction peaks in the XRD data.
A typical XRD data analysis method involves a simple comparison of the measured XRD data with a database. This method first detects the diffraction peaks in the measured XRD data by the smoothed derivative[1, 2, 3]. Thereafter, the diffraction angles of the detected peaks are compared with those of the diffraction patterns registered in the database and the similarity to the diffraction patterns in the database is calculated. The diffraction patterns ranked by similarity are then suggested to the analyst. Thus, in a typical analysis, the experience and knowledge of the researcher are crucial to
shorten the list from several candidate crystal structures. However, the typical diffraction pattern analysis is highly analyst-dependent and not objective. Additionally, there is no established method for discussing the confidence intervals of the analysis results. Consequently, the interpretation of the analysis results is highly dependent on the analysts. Diffraction pattern analysis methods have been proposed to solve such analytical problems.
In recent years, methods for diffraction-pattern analysis using Bayesian estimation have been proposed, allowing confidence intervals to be discussed[4]. In addition, black-box optimization methods have been proposed for hyperparameters that are subjectively determined by an analyst[5]. The proposed method is effective for solving several problems in diffraction pattern analyses. However, this has not been sufficiently discussed from the perspective of automatic estimation of the crystal structure contained in a measured sample from the diffraction pattern. Identifying the crystalline phase structures contained in a diffraction pattern is challenging because the number of candidate crystalline phase structures can be in the order of tens or hundreds, leading to combination explosions. Moreover, this problem requires considerable computational time because the crystal structure contains dozens of diffraction peaks. Despite these challenges, it is necessary to establish a method for identifying crystalline phase structures from diffraction patterns with confidence intervals (probability).
This study aimed to establish a method for the automatic estimation of crystalline phase structures from diffraction patterns. The proposed method decomposes the measured diffraction patterns and automatically selects crystalline phase structures, using as basis functions the diffraction patterns associated with the candidate crystal structures, whether measured at each institute or obtained via simulations. The proposed method makes three main contributions to the literature.
* Crystalline phase structures can be selected precisely and automatically.
* Posterior distributions can be estimated (confidence intervals can be discussed).
* A global solution is provided (no initial value dependence).
The proposed method, which extracts material descriptors corresponding to the crystal structure from measured diffraction patterns, is expected to play an important role in promoting the development of data-driven materials. Note that in this paper the term "crystal structure" refers specifically to the crystalline phase structure.
## 2 Concept
Figure 1 shows an observation process of XRD data and a conceptual diagram of the proposed method. We suppose a multitude of candidate crystal phases and structures \(\mathcal{F}\) when preparing the materials. The crystal structures contained in the material are selected by material synthesis, manufacturing processes, etc. This study treats the control variable dealing with crystal structure selection as the indicator variable \(\mathbf{g}\in\{0,1\}\). Ideally, the crystalline materials produced should have diffraction line spectra corresponding to the crystal phases and structures they contain. In practice, we observe diffraction peaks whose shapes are dependent on the profile parameters \(\mathbf{\Theta}\) that correspond to the measurement environment. We considered a situation wherein only the observed diffraction data \(\mathcal{D}\) and candidate crystal structures \(\mathcal{F}\) were provided.
This study aimed to inversely estimate the structural indicator \(\mathbf{g}\) and profile parameter set \(\mathbf{\Theta}\) from the observed diffraction data (XRD data) shown in Figure 1. The proposed method is a Bayesian inverse estimation method used to identify crystal
structures for XRD analysis.
## 3 Model
### Problem setting
The purpose is to estimate the profile parameters and the crystalline phase structures in the measured sample, considering the measured XRD data \(\mathcal{D}=\left\{(x_{i},y_{i})\right\}_{i=1}^{N}\) and the candidate crystal structure \(\mathcal{F}\). Here, \(x_{i}\in(0,180)\) and \(y_{i}\in\mathbb{N}\) denote the diffraction angle \(2\theta\)\([^{\circ}]\) and the diffraction intensity [counts], respectively.
The candidate crystal structure factor set \(\mathcal{F}\) is expressed as:
\[\mathcal{F} = \{\mathcal{F}_{k}\ |\ k\in\{1,2,...,K\}\}, \tag{1}\] \[\text{where}\ \mathcal{F}_{k} = \{(p_{m}^{(k)},I_{m}^{(k)})\ |\ m\in\{1,2,...,M_{k}\}\}\subset\mathcal{F}, \tag{2}\]
where \(K\in\mathbb{N}\) is the number of candidate crystal structures and \(\mathcal{F}_{k}\) is the \(k\)-th crystal structure factor. The elements of the crystal structure factor \(p_{m}^{(k)}\in(0,180)\) and \(I_{m}^{(k)}\in[0,1]\) are the diffraction angle (peak position) \([^{\circ}]\) and relative intensity of the \(m\)-th diffraction peak in \(\mathcal{F}_{k}\) for a crystal structure \(k\). Further, \(M_{k}\in\mathbb{N}\) denotes the number of peaks in \(\mathcal{F}_{k}\). In this study, the candidate crystal structure factor set \(\mathcal{F}\) is provided.
Figure 1: Observation process of XRD data and a conceptual diagram of the proposed method, that is, Bayesian inverse estimation to identify the crystalline phase structure and their known structures for XRD analysis[6].
### Profile function
XRD data can be represented by a profile function \(f_{\mathcal{F}}(x_{i};\Theta):\mathbb{R}\rightarrow\mathbb{R}_{0}^{+}\), which is a linear sum of the signal spectrum \(S_{\mathcal{F}}(x_{i};\Theta_{\mathrm{S}})\) and the background \(B(x_{i};\Theta_{\mathrm{B}})\):
\[y_{i} \approx f_{\mathcal{F}}(x_{i};\Theta), \tag{3}\] \[= S_{\mathcal{F}}(x_{i};\Theta_{\mathrm{S}})+B(x_{i};\Theta_{ \mathrm{B}}), \tag{4}\]
where \((x_{i},y_{i})\) denote the measured data points, the function \(S_{\mathcal{F}}(x_{i};\Theta_{\mathrm{S}})\) denotes the signal spectrum based on the candidate crystal structures \(\mathcal{F}\), and the function \(B(x_{i};\Theta_{\mathrm{B}})\) denotes the background. We set \(\Theta=\{\Theta_{\mathrm{S}},\Theta_{\mathrm{B}}\}\) as the profile parameter set. In addition, the sets \(\Theta_{\mathrm{S}}\) and \(\Theta_{\mathrm{B}}\) are the signal spectrum and background parameter sets, respectively.
The signal spectrum \(S_{\mathcal{F}}(x_{i};\Theta_{\mathrm{S}})\) is expressed as a linear sum of the profile function (peaks) \(C_{\mathcal{F}_{k}}(x_{i};\Theta_{\mathrm{S}}^{(k)}):\mathbb{R}\rightarrow \mathbb{R}_{0}^{+}\) in a crystal structure \(\mathcal{F}_{k}\) among the several candidates[7]:
\[S_{\mathcal{F}}(x_{i};\Theta_{\mathrm{S}}) = \sum_{k=1}^{K}h_{k}C_{\mathcal{F}_{k}}(x_{i};\Theta_{\mathrm{S}}^ {(k)}), \tag{5}\]
where \(h_{k}\in\mathbb{R}^{+}\) denotes the signal intensity of crystal structure factor \(\mathcal{F}_{k}\). The profile function \(C_{\mathcal{F}_{k}}(x_{i};\Theta_{\mathrm{S}}^{(k)})\) of candidate crystal structure \(k\) is defined as follows:
\[C_{\mathcal{F}_{k}}(x_{i};\Theta_{\mathrm{S}}^{(k)}) = \sum_{m=1}^{M_{k}}I_{m}^{(k)}V\left(x_{i};\rho_{mk},\Sigma_{k}, \Omega_{k},r_{k}\right), \tag{6}\] \[= \sum_{m=1}^{M_{k}}I_{m}^{(k)}\{(1-r_{k})G(x_{i};\rho_{mk},\Sigma_ {k})+r_{k}L(x_{i};\rho_{mk},\Omega_{k})\},\] (7) \[\mbox{where }\rho_{mk} = p_{m}^{(k)}+\mu_{k}, \tag{8}\]
where \(\mu_{k}\in\mathbb{R}andr_{k}\in[0,1]\) are the peak shift and Gauss-Lorentz ratio at the peak of crystal structure \(k\), respectively, \(\rho_{mk}\in\mathbb{R}\) is the peak position of the peak function, and the function \(V(x_{i}):\mathbb{R}\rightarrow\mathbb{R}_{0}^{+}\) is a pseudo-Voigt function[8]. In addition, \(G(x_{i}):\mathbb{R}\rightarrow\mathbb{R}_{0}^{+}\) and \(L(x_{i}):\mathbb{R}\rightarrow\mathbb{R}_{0}^{+}\) are Gaussian and Lorentz functions, respectively. \(\Sigma_{k}=\Sigma(x_{i};u_{k},v_{k},w_{k},\alpha_{k}):(0,180)\rightarrow\mathbb{ R}^{+}\) and \(\Omega_{k}=\Omega(x_{i};s_{k},t_{k},\alpha_{k}):(0,180)\rightarrow\mathbb{R}^{+}\) are the Gaussian and Lorentzian widths of the peak, respectively, as a function of the diffraction angle \(x_{i}\in(0,180)\). The width functions \(\Sigma_{k}(x_{i})\) and \(\Omega_{k}(x_{i})\) are expressed as
\[\Sigma(x_{i};u_{k},v_{k},w_{k},\alpha_{k}) = A(x_{i};\alpha_{k})\sqrt{u_{k}\tan^{2}\left(\frac{x_{i}}{2} \right)-v_{k}\tan\left(\frac{x_{i}}{2}\right)+w_{k}}, \tag{9}\] \[\Omega(x_{i};s_{k},t_{k},\alpha_{k}) = A(x_{i};\alpha_{k})\left\{s_{k}\sec\left(\frac{x_{i}}{2}\right)+ t_{k}\tan\left(\frac{x_{i}}{2}\right)\right\},\] (10) \[\mbox{where }A(x_{i};\alpha_{k}) = \left\{\begin{array}{ll}\alpha_{k}&(x_{i}\geq\rho_{k})\\ 1&(x_{i}<\rho_{k}),\end{array}\right.\] (11) \[= \mbox{sign}(x_{i}-\rho_{k})\frac{\alpha_{k}-1}{2}+\frac{\alpha_{k} +1}{2}, \tag{12}\]
where \(\{u_{k},v_{k},w_{k}\}\) and \(\{s_{k},t_{k}\}\) are the Gaussian and Lorentzian width parameter sets, respectively. Function \(A(x_{i};\alpha_{k}):\mathbb{R}\rightarrow\mathbb{R}\) is a function expressing the peak asymmetry, and \(\alpha_{k}\in\mathbb{R}^{+}\) is the asymmetry parameter for the peak function. Further, the function \(\mathrm{sign}(\cdot):\mathbb{R}\rightarrow\{-1,1\}\) is the sign function and the trigonometric function \(\mathrm{sec}(x)\) is \(\mathrm{sec}(x)=1/\cos(x)\).
The optimization parameter set for the signal spectrum \(S_{\mathcal{F}}(x_{i};\Theta_{\mathrm{S}})\) is expressed as:
\[\Theta_{\mathrm{S}} = \{\Theta_{\mathrm{S}}^{(k)}\ |\ k\in\{1,2,...,K\}\},\] \[\mathrm{where}\ \Theta_{\mathrm{S}}^{(k)} = \{(h_{k},\mu_{k},\alpha_{k},r_{k},u_{k},v_{k},w_{k},s_{k},t_{k})\}.\]
Background \(B(x_{i};\Theta_{\mathrm{B}}):\mathbb{R}\rightarrow\mathbb{R}\) is defined as follows:
\[B(x_{i};\Theta_{\mathrm{B}}) = aV(x_{i};0.0,\sigma_{\mathrm{bg}},\sigma_{\mathrm{bg}},r_{ \mathrm{bg}})+b, \tag{13}\]
where the background parameter set is \(\Theta_{\mathrm{B}}=\{a,\sigma_{\mathrm{bg}},r_{\mathrm{bg}},b\}\).
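A minimal sketch of the profile model of Eqs. (4)-(12) for a single candidate structure is given below. The unit-height normalisation of the Gaussian and Lorentzian terms, the interpretation of the widths in degrees of \(2\theta\), the toy peak list and all parameter values are assumptions made for illustration; the flat offset merely stands in for the full background of Eq. (13).

```python
import numpy as np

def asym(x, rho, alpha):
    """Asymmetry factor A(x) of Eq. (12)."""
    return np.where(x >= rho, alpha, 1.0)

def structure_profile(x, peaks, mu, alpha, r, u, v, w, s, t):
    """C_Fk(x) of Eq. (7): asymmetric pseudo-Voigt peaks with the
    angle-dependent widths of Eqs. (9)-(10). peaks is a list of (p_m, I_m)."""
    y = np.zeros_like(x)
    half = np.radians(x / 2.0)              # x is the diffraction angle 2*theta in degrees
    for p, I_rel in peaks:
        rho = p + mu                        # shifted peak position, Eq. (8)
        A = asym(x, rho, alpha)
        # guard against a negative argument of the square root (assumption)
        sig = A * np.sqrt(np.clip(u * np.tan(half) ** 2 - v * np.tan(half) + w, 1e-12, None))
        gam = A * (s / np.cos(half) + t * np.tan(half))
        G = np.exp(-0.5 * ((x - rho) / sig) ** 2)      # Gaussian term (unit height)
        L = 1.0 / (1.0 + ((x - rho) / gam) ** 2)       # Lorentzian term (unit height)
        y += I_rel * ((1 - r) * G + r * L)
    return y

x = np.linspace(10, 80, 3500)
peaks = [(28.4, 1.0), (47.3, 0.55), (56.1, 0.30)]      # toy structure factor (p_m, I_m)
signal = 1.0e4 * structure_profile(x, peaks, mu=0.05, alpha=1.2, r=0.5,
                                   u=0.02, v=0.01, w=0.01, s=0.02, t=0.01)
model = signal + 50.0                                  # flat stand-in for the background
```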
### Generation Model
We assume that the observed data \(\{(Y,X)\}=\{(x_{i},y_{i})\}_{i=0}^{N}\) are stochastically distributed owing to statistical noise in the measurement. Next, we consider the joint distribution \(P(Y,\Theta)\), which can be expanded to \(P(Y,\Theta)=P(\Theta|Y)P(Y)\). Using Bayes' theorem to swap the orders of \(Y\) and \(\Theta\), we can expand \(P(Y,\Theta)=P(Y|\Theta)P(\Theta)\). Hence, the posterior distribution \(P(\Theta|Y)\) is expressed as:
\[P(\Theta|Y)=\frac{P(Y|\Theta)P(\Theta)}{P(Y)}\propto P(Y|\Theta)P(\Theta), \tag{14}\]
where \(P(\Theta|Y)\) and \(P(\Theta)\) are the posterior and prior distributions, respectively, in the Bayesian inference. Further, \(P(Y|\Theta)\) is the conditional probability of \(Y\) given the model parameter set \(\Theta\), which is a probability distribution explained by error theory.
To derive \(P(Y|\Theta)\), we consider the observation process of \(\{(x_{i},y_{i})\}\) at the observation data points. Assuming that the observed data are independent of each other, the conditional probability of the observed data \(\{(Y,X)\}\) can be expressed as:
\[P(Y|\Theta)=\prod_{i=0}^{N}P(y_{i}|\Theta). \tag{15}\]
As XRD spectra are count data, the conditional probability \(P(y_{i}|\Theta)\) of the intensity \(y_{i}\) for the diffraction angle \(x_{i}\) follows a Poisson distribution \(\mathcal{P}(y_{i}|f_{\mathcal{F}}(x_{i};\Theta))\):
\[P(y_{i}|\Theta) = \mathcal{P}(y_{i}|f_{\mathcal{F}}(x_{i};\Theta)) \tag{16}\] \[= \frac{f_{\mathcal{F}}(x_{i};\Theta)^{y_{i}}\exp{(-f_{\mathcal{F}} (x_{i};\Theta))}}{y_{i}!}. \tag{17}\]
The cost function \(E(\Theta)\in\mathbb{R}\) is defined by the negative log-likelihood function
\(-\ln P(Y|\Theta)\) and is expressed as follows:
\[E(\Theta) = -\sum_{i=0}^{N}\ln P(y_{i}|\Theta), \tag{18}\] \[= -\sum_{i=0}^{N}\{y_{i}\ln f_{\mathcal{F}}(x_{i};\Theta)-f_{ \mathcal{F}}(x_{i};\Theta)-\ln y_{i}!\}. \tag{19}\]
Further, \(P(\Theta|Y)\) is expressed using the cost function \(E(\Theta)\) and the prior distribution \(P(\Theta)\) as follows:
\[P(\Theta|Y) \propto P(Y|\Theta)P(\Theta), \tag{20}\] \[= \exp\{\ln P(Y|\Theta)\}P(\Theta),\] (21) \[= \exp\{-E(\Theta)\}P(\Theta). \tag{22}\]
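For reference, the Poisson cost \(E(\Theta)\) of Eq. (19) can be evaluated as in the following sketch; the model intensity \(f_{\mathcal{F}}(x_{i};\Theta)\) is passed in as a precomputed array, and the counts shown are placeholders.

```python
import numpy as np
from scipy.special import gammaln

def poisson_cost(y, model_intensity):
    """Negative log-likelihood E(Theta) of Eq. (19) for count data y
    under a Poisson observation model with mean model_intensity."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(model_intensity, dtype=float)
    # ln(y!) is evaluated via the log-gamma function for numerical stability.
    return -np.sum(y * np.log(f) - f - gammaln(y + 1.0))

# Placeholder counts and model intensities.
y_obs = np.array([5, 12, 30, 11, 4])
f_model = np.array([6.0, 11.0, 28.0, 12.0, 5.0])
print(poisson_cost(y_obs, f_model))
```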
### Identification of crystalline phase structures
In the analysis of XRD spectra, the crystal structure contained in the measured sample is often unknown. Therefore, it is important to accurately estimate the true crystal structure contained in the candidate crystal structures \(\mathcal{F}\). We introduce an indicator vector \(\mathbf{g}=\{g_{k}\in\{0,1\}\ |\ k\in\{1,2,...,K\}\}\), which controls the existence of the crystal structure factors in Equation (5):
\[f_{\mathcal{F}}(x_{i};\mathbf{g},\Theta) = S_{\mathcal{F}}(x_{i};\mathbf{g},\Theta_{\mathrm{S}})+B(x_{i};\Theta _{\mathrm{B}}), \tag{23}\] \[S_{\mathcal{F}}(x_{i};\mathbf{g},\Theta_{\mathrm{S}}) = \sum_{k=1}^{K}g_{k}h_{k}C_{\mathcal{F}_{k}}(x_{i};\Theta_{ \mathrm{S}}^{(k)}), \tag{24}\]
where \(g_{k}=1\) indicates that the crystal structure factor \(\mathcal{F}_{k}\) is present in the sample. Conversely, \(g_{k}=0\) implies that it is absent.
We now consider the joint distribution \(P(\mathbf{g},Y,\Theta)\), which can be expanded to \(P(\mathbf{g},Y,\Theta)=P(Y|\mathbf{g},\Theta)P(\mathbf{g})P(\Theta)\). According to Bayes' theorem, the posterior distribution \(P(\mathbf{g},\Theta|Y)\) is expressed as:
\[P(\mathbf{g},\Theta|Y) \propto P(Y|\mathbf{g},\Theta)P(\mathbf{g})P(\Theta), \tag{25}\] \[= \exp{(-E(\mathbf{g},\Theta))}P(\mathbf{g})P(\Theta). \tag{26}\]
The cost function \(E(\mathbf{g},\Theta)\) that introduces the indicator vector \(\mathbf{g}\) is expressed as:
\[E(\mathbf{g},\Theta) = -\sum_{i=0}^{N}\ln P(y_{i}|\mathbf{g},\Theta), \tag{27}\] \[= -\sum_{i=0}^{N}\{y_{i}\ln f_{\mathcal{F}}(x_{i};\mathbf{g},\Theta)-f _{\mathcal{F}}(x_{i};\mathbf{g},\Theta)-\ln y_{i}!\}. \tag{28}\]
Using the joint distribution presented above, the indicator vector \(\mathbf{g}\) is estimated from
the marginal posterior distribution as follows:
\[P(\mathbf{g}|Y) = \int d\Theta P(\mathbf{g},\Theta|Y), \tag{29}\] \[= P(\mathbf{g})\int d\Theta\exp{(-E(\mathbf{g},\Theta))}P(\Theta), \tag{30}\]
We estimate the profile and background parameters using the posterior distribution \(P(\Theta|Y,\mathbf{g})\) on parameter set \(\Theta\).
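To illustrate how the indicator vector enters the model, the sketch below evaluates \(S_{\mathcal{F}}+B\) of Eqs. (23)–(24) with each candidate's contribution masked by \(g_{k}\). For brevity, it assumes that the unit-height peak profile of each candidate has already been evaluated on the \(2\theta\) grid.

```python
import numpy as np

def spectrum_with_indicator(g, heights, candidate_signals, background):
    """S_F(x; g, Theta_S) + B(x; Theta_B) of Eqs. (23)-(24).
    candidate_signals[k] is the unit-height signal of crystal structure
    factor F_k evaluated on the 2-theta grid (shape: K x N)."""
    g = np.asarray(g, dtype=float)        # g_k in {0, 1}
    h = np.asarray(heights, dtype=float)  # peak heights h_k
    signal = np.einsum('k,k,kn->n', g, h, np.asarray(candidate_signals))
    return signal + background

# Placeholder example with K = 3 candidates on a 5-point grid.
candidate_signals = np.random.rand(3, 5)
model = spectrum_with_indicator(g=[1, 0, 1], heights=[10.0, 7.0, 3.0],
                                candidate_signals=candidate_signals,
                                background=1.5)
```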
## 4 Algorithm
### Replica Exchange Monte Carlo method -- REMC method
We perform posterior visualization and maximum a posteriori (MAP) estimation by sampling from the posterior distribution. A popular sampling method is the Monte Carlo (MC) method, which can become trapped in local solutions when the result is sensitive to the initial values or when the cost-function landscape is complex.
Therefore, the replica exchange Monte Carlo (REMC) method[9, 10] was used to estimate the global solution. For sampling with the REMC method, replicas were prepared by introducing the inverse temperature \(\beta\) as follows:
\[P(\mathbf{g},\Theta|Y;\beta=\beta_{\tau}) = \exp{(-\beta_{\tau}E(\mathbf{g},\Theta))}P(\mathbf{g})P(\Theta), \tag{31}\]
where the inverse temperatures satisfy \(0=\beta_{1}<\beta_{2}<\cdots<\beta_{\tau}<\cdots<\beta_{T}=1\). For each replica, the parameters were sampled using the Monte Carlo method.
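A minimal sketch of the replica-exchange loop is given below; each replica performs a Metropolis update at its own inverse temperature, and neighbouring replicas attempt to exchange states. The Gaussian random-walk proposal, the step size, and the toy cost are illustrative assumptions; in the actual analysis, the cost is \(E(\mathbf{g},\Theta)\) of Eq. (28) and the proposals respect the priors of Section 7.

```python
import numpy as np

def remc(cost, theta0, betas, n_steps, step=0.05, seed=0):
    """Replica-exchange MC for a cost E(theta); betas is the ladder
    0 = beta_1 < ... < beta_T = 1 (beta = 0 accepts every proposal)."""
    rng = np.random.default_rng(seed)
    thetas = [np.array(theta0, dtype=float) for _ in betas]
    energies = [cost(t) for t in thetas]
    for _ in range(n_steps):
        # Metropolis update within each replica.
        for r, beta in enumerate(betas):
            prop = thetas[r] + step * rng.standard_normal(thetas[r].shape)
            e_prop = cost(prop)
            if e_prop <= energies[r] or rng.random() < np.exp(-beta * (e_prop - energies[r])):
                thetas[r], energies[r] = prop, e_prop
        # Exchange attempt between neighbouring replicas.
        for r in range(len(betas) - 1):
            log_acc = (betas[r + 1] - betas[r]) * (energies[r + 1] - energies[r])
            if log_acc >= 0 or rng.random() < np.exp(log_acc):
                thetas[r], thetas[r + 1] = thetas[r + 1], thetas[r]
                energies[r], energies[r + 1] = energies[r + 1], energies[r]
    return thetas, energies

# Toy example: four replicas sampling a quadratic cost centred at 3.
thetas, energies = remc(lambda t: float(np.sum((t - 3.0) ** 2)),
                        theta0=[0.0], betas=[0.0, 0.1, 0.5, 1.0], n_steps=500)
```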
## 5 Technique
### Tricks for high speeds
This subsection describes the techniques used to accelerate Bayesian inference of XRD spectra. In XRD spectral analysis, the number of candidate crystal structure factors \(\{\mathcal{F}_{k}\}_{k=1}^{K}\) and the number of peaks \(M_{k}\) for each crystal structure factor \(\mathcal{F}_{k}\) are enormous. Therefore, calculating the cost function \(E(\cdot)\) for each sample requires the multiple loops \(\sum_{i=1}^{N}\sum_{k=1}^{K}\sum_{m=1}^{M_{k}}E(x_{i})\). \(M_{k}\) cannot be reduced because it is inherently determined by the crystal structure. Although reducing the number of data points \(N\) by downsampling is feasible, the peak structure would be broken or the separation accuracy would be significantly reduced because XRD peaks are sharp.
Herein, we therefore focus on the number of candidate crystal structures \(K\) and screen the candidate crystal structure factors. When calculating the cost function \(E(\cdot)\), the crystal structures with \(g_{k}=0\) need not be evaluated; only the selected crystal structures \(\{\mathcal{F}_{k}|g_{k}=1\}\) need to be considered. In the proposed method, we thus compute \(\sum_{i=1}^{N}\sum_{k\in\{\mathcal{F}_{k}|g_{k}=1\}}\sum_{m=1}^{M_{k}}E(x_{i})\), where \(n(\{\mathcal{F}_{k}|g_{k}=1\})<K\) denotes the number of selected structures.
Figure 2: Supplementary diagram of the similarity calculation procedure.[(a) Observed XRD data \(\mathcal{D}\) and (b) crystal structure factor \(\mathcal{F}_{k}\)].
### Rough pre-screening
In this study, we screened candidate crystallographic structures as described in subsection 5.1. This subsection describes the screening procedure. The similarity between the observed XRD data \(\mathcal{D}\) and the crystal structure factor \(\mathcal{F}_{k}\) was calculated and screening was performed by thresholding the similarity.
Figure 2 presents a supplementary diagram of the similarity calculation procedure, where parts (a) and (b) show the observed XRD data \(\mathcal{D}\) and the crystal structure factor \(\mathcal{F}_{k}\), respectively. We resampled the data points close to \(p_{m}^{(k)}\) of \(\mathcal{F}_{k}\) from the observed data \(\mathcal{D}\). The resampled data points are indicated by the red points in Figure 2(a).
The resampled data points are denoted by the vector \(\boldsymbol{y}^{\prime}\in\mathbb{N}^{M_{K}}\). The intensity vector of the crystal structure is denoted by \(\boldsymbol{I}_{k}=(I_{1}^{(k)},I_{2}^{(k)},\cdots,I_{M_{k}}^{(k)})^{\top}\in \mathbb{R}^{M_{k}}\). Our method computed the similarity between the vectors \(\boldsymbol{y}^{\prime}\) and \(\boldsymbol{I}_{k}\) for each crystal structure \(k\). In this study, we used cosine similarity as the vector similarity.
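The prescreening described above can be sketched as follows; the 0.5 threshold matches the one used in Section 8, while the layout of the candidate list is an assumption made for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def prescreen(x, y, candidates, threshold=0.5):
    """x, y: observed 2-theta grid and measured intensities.
    candidates: list of (peak_positions, peak_intensities) per structure
    factor F_k. Returns the indices of candidates kept for the analysis."""
    keep = []
    for k, (positions, intensities) in enumerate(candidates):
        # Resample the observed intensity at the data point nearest each peak p_m^(k).
        idx = np.abs(np.asarray(x)[None, :] - np.asarray(positions)[:, None]).argmin(axis=1)
        y_resampled = np.asarray(y, dtype=float)[idx]
        if cosine_similarity(y_resampled, np.asarray(intensities, dtype=float)) > threshold:
            keep.append(k)
    return keep
```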
## 6 Scope and Limitations
This section presents the two limitations of the proposed method.
* The proposed method cannot refine the structural parameters because only the crystal-structure selection and the profile parameters are treated as random variables. Therefore, for precise crystal structure analysis, Rietveld analysis[11; 12; 13; 14] must be performed with reference to the posterior distributions of the selected crystal structures and profile parameters.
* The proposed method automatically selects the crystal structure contained in the measurement sample from the candidate crystal structures. Therefore, crystal structures that are not included in the candidates or unknown crystal structures cannot be analyzed.
## 7 Configuration
### Configuration of prior distribution
We set the prior distribution over the parameter set \(\Theta_{\mathrm{S}}\) of the profile function as follows:
\[h_{k} \sim \mathcal{G}\left(k_{G}=4.00,\theta_{G}=\frac{y_{\max}-y_{\min}}{4}\right),\] \[\mu_{k} \sim \mathcal{N}(\mu_{N}=0.00,\sigma_{N}=0.05),\] \[\alpha_{k} \sim \mathcal{G}(k_{G}=5.00,\theta_{G}=0.25),\] \[r_{k} \sim \mathcal{U}(u_{U}=0.00,l_{U}=1.00),\] \[u_{k} \sim \mathcal{G}(k_{G}=1.00,\theta_{G}=0.10),\] \[v_{k} \sim \mathcal{G}(k_{G}=1.00,\theta_{G}=0.10),\] \[w_{k} \sim \mathcal{G}(k_{G}=2.00,\theta_{G}=0.05),\] \[s_{k} \sim \mathcal{G}(k_{G}=2.00,\theta_{G}=0.05),\] \[t_{k} \sim \mathcal{G}(k_{G}=1.00,\theta_{G}=0.10).\]
In addition, we set the prior distribution of the background parameter \(\Theta_{\text{B}}\) as follows:
\[a \sim \mathcal{G}(k_{G}=2.00,\theta_{G}=y_{\text{max}}),\] \[\sigma_{bg} \sim \mathcal{G}(k_{\sigma}=2.00,\theta_{\sigma}=2.50),\] \[r_{bg} \sim \mathcal{U}(u_{U}=0.00,l_{U}=1.00),\] \[b \sim \mathcal{U}\left(u_{U}=y_{\text{min}}-\frac{\sqrt{y_{\text{min}} }}{2},l_{U}=y_{\text{min}}+\frac{\sqrt{y_{\text{min}}}}{2}\right).\]
where the probability distribution \(\mathcal{G}(k_{G},\theta_{G})\) is the gamma distribution and \(k_{G}\in\mathbb{R}^{+}\) and \(\theta_{G}\in\mathbb{R}^{+}\) are the shape and scale parameters, respectively. The probability distribution \(\mathcal{N}(\mu_{N},\sigma_{N})\) is a normal distribution, and \(\mu_{N}\in\mathbb{R}\) and \(\sigma_{N}\in\mathbb{R}^{+}\) are the mean and standard deviation, respectively. Whereas, the probability distribution \(\mathcal{U}(u_{U},l_{U})\) is a uniform distribution, with \(u_{U}\in\mathbb{R}\) and \(l_{U}\in\mathbb{R}\) being the maximum and minimum values, respectively. Further, the values \(y_{\text{min}}\in\mathbb{N},y_{\text{max}}\in\mathbb{N}\) are \(y_{\text{min}}=\min(\mathbf{y})\) and \(y_{\text{max}}=\max(\mathbf{y})\), where \(\mathbf{y}=(y_{1},y_{2},...,y_{N})^{\top}\).
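For concreteness, the priors above can be instantiated with `scipy.stats`, reading \(\mathcal{G}(k_{G},\theta_{G})\) as a gamma distribution with shape \(k_{G}\) and (positive) scale \(\theta_{G}\); the intensity range below is a placeholder, since the height-prior scale depends on the observed data.

```python
from scipy import stats

# Placeholder intensity range of an observed spectrum.
y_min, y_max = 40, 5200

# Gamma prior G(k_G, theta_G): shape k_G, scale theta_G.
height_prior = stats.gamma(a=4.00, scale=(y_max - y_min) / 4)  # h_k
shift_prior = stats.norm(loc=0.00, scale=0.05)                  # mu_k
ratio_prior = stats.uniform(loc=0.00, scale=1.00)               # r_k ~ U(0, 1)

samples = {name: prior.rvs(size=3, random_state=0)
           for name, prior in [("h", height_prior),
                               ("mu", shift_prior),
                               ("r", ratio_prior)]}
```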
### Configuration of the sampling algorithm
For the exchange MC simulation, we performed 1000 calculation steps, with 1000 steps discarded as burn-in. The inverse temperature was set as follows:
\[\beta_{\tau} = \left\{\begin{array}{ll}0&(\tau=0)\\ \eta^{\tau-T}&(\tau\neq 0),\end{array}\right. \tag{32}\] \[\text{where}\ \tau \in \{0,1,2,...,T\}, \tag{33}\]
where the common ratio \(\eta\in\mathbb{R}^{+}\) was set to \(\eta=1.2\), and the number of temperatures \(T\in\mathbb{N}\) was set to \(T=64\). The exchange of parameter sets between replicas was performed at each step.
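The geometric inverse-temperature ladder of Eq. (32) with \(\eta=1.2\) and \(T=64\) can be generated as follows.

```python
import numpy as np

def inverse_temperatures(T=64, eta=1.2):
    """Eq. (32): beta_0 = 0 and beta_tau = eta**(tau - T) for tau = 1, ..., T."""
    taus = np.arange(1, T + 1, dtype=float)
    return np.concatenate(([0.0], eta ** (taus - T)))

betas = inverse_temperatures()
assert betas[0] == 0.0 and np.isclose(betas[-1], 1.0)
```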
### Computer Specifications
The computations were run on an AMD Ryzen Threadripper 3990X (64 cores, 128 threads) with 256 GB of DDR4-3200 (PC4-25600) memory under Ubuntu 18.04.5 LTS. We performed sampling using the REMC method with 32 threads.
### Configuration of candidate crystal structures
We prepared 50 candidate crystal structures from AtomWork[15], an inorganic materials database containing data on the crystal structures, X-ray diffraction, properties, and phase diagrams of inorganic materials extracted from the scientific and technical literature. We selected the 50 candidates on the condition that they contained titanium (Ti) or oxygen (O) in their composition, because this study analyzed XRD data of titanium dioxide (TiO\({}_{2}\)) samples. Table 1 lists the 50 prepared candidate crystal structures.
\begin{table}
\begin{tabular}{c c c|c c} \hline & chemical & crystal & chemical & crystal \\ & composition & structure & composition & structure \\ \hline \hline
01 & TiO\({}_{2}\) & Rutile & TiO\({}_{2}\) & Anatase \\
02 & TiO\({}_{2}\) & Brookite & O\({}_{2}\) & O\({}_{2}\) \\
03 & Ti\({}_{3}\)O\({}_{5}\) & Ta\({}_{3}\)N\({}_{5}\) & Ti & Mg \\
04 & Ti & W & TiO\({}_{2}\) & Fe\({}_{2}\)N\({}_{0.94}\) \\
05 & Ti\({}_{2}\)O\({}_{5}\) & a & TiO\({}_{2}\) & CdI\({}_{2}\) \\
06 & TiO\({}_{2}\) & Al\({}_{2}\)O\({}_{3}\) & TiO\({}_{0.2}\) & Mg \\
07 & Ti\({}_{5}\)O\({}_{9}\) & Ti\({}_{5}\)O\({}_{9}\) & Ti\({}_{7}\)O\({}_{13}\) & Ti\({}_{7}\)O\({}_{13}\) \\
08 & Ti\({}_{9}\)O\({}_{17}\) & Ti\({}_{9}\)O\({}_{17}\) & Ti\({}_{4}\)O\({}_{7}\) & a \\
09 & Ti\({}_{4}\)O\({}_{7}\) & Ti\({}_{4}\)O\({}_{7}\) & Ti\({}_{4}\)O\({}_{7}\) & b \\
10 & TiO\({}_{2}\) & Fe\({}_{2}\)N\({}_{0.94}\) & TiO & NaCl \\
11 & Ti\({}_{4}\)O\({}_{5}\) & Ti\({}_{4}\)O\({}_{5}\) & TiO\({}_{2}\) & CdI\({}_{2}\) \\
12 & Ti\({}_{4}\)O\({}_{7}\) & Ti\({}_{4}\)O\({}_{7}\) & Ti\({}_{4}\)O\({}_{7}\) & a \\
13 & Ti\({}_{4}\)O\({}_{7}\) & b & Ti\({}_{5}\)O\({}_{9}\) & Ti\({}_{5}\)O\({}_{9}\) \\
14 & Ti\({}_{9}\)O\({}_{17}\) & Ti\({}_{9}\)O\({}_{17}\) & Ti\({}_{6}\)O\({}_{11}\) & Ti\({}_{6}\)O\({}_{11}\) \\
15 & Ti\({}_{7}\)O\({}_{13}\) & Ti\({}_{7}\)O\({}_{13}\) & Ti\({}_{8}\)O\({}_{15}\) & Ti\({}_{8}\)O\({}_{15}\) \\
16 & Ti\({}_{6}\)O\({}_{11}\) & Ti\({}_{6}\)O\({}_{11}\) & Ti\({}_{3}\)O & Ti\({}_{3}\)O \\
17 & TiO & TiO & Ti\({}_{0.84}\)O\({}_{0.84}\) & TiO \\
18 & Ti\({}_{4}\)O\({}_{5}\) & Ti\({}_{4}\)O\({}_{5}\) & Ti & Ti \\
19 & Ti\({}_{6}\)O & Ti\({}_{6}\)O & Ti\({}_{6}\)O & Ti\({}_{6}\)O \\
20 & TiO\({}_{2}\) & Mg & TiO\({}_{2}\) & ZrO\({}_{2}\)-b \\
21 & TiO\({}_{2}\) & MnO\({}_{2}\) & Ti\({}_{2}\)O\({}_{5}\) & b \\
22 & Ti\({}_{3}\)O\({}_{5}\) & V\({}_{3}\)O\({}_{5}\) & TiO & WC \\
23 & TiO\({}_{2}\) & VO\({}_{2}\)-b & TiO\({}_{2}\) & MnO\({}_{2}\) \\
24 & Ti\({}_{2}\)O\({}_{5}\) & b & Ti\({}_{2}\)O\({}_{5}\) & a \\
25 & TiO\({}_{2}\) & VO\({}_{2}\)-b & Ti\({}_{3}\)O\({}_{5}\) & V\({}_{3}\)O\({}_{5}\) \\ \hline \end{tabular}
\end{table}
Table 1: Fifty candidate crystal structures prepared from the AtomWork[15], which is the inorganic material database.
## 8 Results and discussion
### Fitting results in actual measurement data
We conducted a computational experiment on measured XRD data. The measurement sample was a mixture of multiple types of TiO\({}_{2}\): Anatase, Brookite, and Rutile. The mixture ratios were equal (1/1/1 wt.%). We prepared the measurement samples such that the crystalline phases were homogeneous. The XRD data were then measured using monochromatic Cu K\({}_{\alpha 1}\) X-rays. A non-reflecting plate cut from a specific orientation of a silicon single crystal was used as the sample plate. The diffraction angles \(2\theta\) were in the range of 10-60[\({}^{\circ}\)], with \(2\theta\) of \(\mathbf{x}=(10.00,10.02,10.04,...,60.00)^{\top}\).
Figure 3 presents the selection results for each temperature obtained using the REMC method. The x- and y-axes denote the candidate crystal structures and inverse temperature index \(\tau\), respectively. This figure visualizes the probability of indicator \(P(\mathbf{g}|Y;\beta=\beta_{\tau})\) [%]. The proposed method estimated the crystal structure of a sample from 50 candidates. Candidate crystal structures were obtained from AtomWork as described in Section 7.4. A high index corresponds to a lower temperature. The color scale indicates the probability of \(g_{k}=1\) calculated from the sampling frequency. The dark red color indicates the presence of a crystal structure in the measured sample. The result for the lowest temperature (\(\tau=64\)) shows that our method could select the true crystal structures, that is, Anatase, Brookite, and Rutile, with 100 [%] probability. A computational time of approximately 3 h was required to obtain this result. Therefore, our method can be used to estimate the crystal structures of a sample by analyzing the full diffraction profile using Bayesian inference. Thus, the contribution of our method is the simultaneous identification of profile parameters and crystal structures and the provision of their posterior distributions.
As shown in Figure 3, the selection probability of Brookite decreases at medium to high temperatures compared with those of Anatase and Rutile. This suggests that Brookite was more difficult to identify than Anatase and Rutile. In crystallography, Brookite is a low-temperature phase and is known to exhibit poorer crystallinity than Rutile, which is stable at high temperature. The difficulty in its determination is believed to originate from this low crystallinity.
An analysis using all 50 candidates would require a considerable amount of time. Therefore, we performed prescreening using the cosine similarity described in Section 5. Figure 4 shows the cosine similarity between the measured XRD data \(\mathcal{D}\) and crystal structure factors \(\mathcal{F}\) during prescreening. In this figure, the red line denotes the threshold value, which was set to 0.5. The y-axis denotes cosine similarity. We performed the analysis using crystal structure factors with a cosine similarity greater than 0.5. This prescreening narrowed the list from 50 to 12 candidates. This is expected to result in significant reduction in the computational costs.
We analyzed the measured XRD data using the 12 candidates that were narrowed down by prescreening. Figure 5 presents the selection results for each temperature obtained using the REMC method. The x- and y-axes denote the candidate crystal structures and the index of the inverse temperature \(\tau\). The proposed method could select the true crystal structures of Anatase, Brookite, and Rutile with 100 [%] probability, consistent with the result obtained by sampling all 50 candidates (shown in Figure 3). The computation required approximately 1 h, so prescreening reduced the computational cost by a factor of three. These results indicate that prescreening can effectively improve the efficiency of the calculations. However,
Figure 3: Selection results from 50 candidates for each temperature in the REMC method. The x- and the y-axes denote the candidate crystal structures and index of inverse temperature \(\tau\), respectively. This figure shows a visualization of the indicator probability \(P(\mathbf{g}|Y;\beta=\beta_{\tau})\) [%]. The large index corresponds to lower temperatures. The color scale denotes the sampling frequency of \(g_{k}=1\) on a log scale. The dark red indicates the presence of crystal structure in the measured sample.
Figure 4: Cosine similarity between the measurement XRD data \(\mathcal{D}\) and the crystal structure factors \(\mathcal{F}\) for prescreening. In this figure, the red line denotes the threshold value set at 0.5.
Figure 5: Selection result from 12 candidates for each temperature in the REMC method. The x- and the y-axes denote the candidate crystal structures and the index of the inverse temperature \(\tau\). This figure shows a visualization of the indicator probability \(P(\mathbf{g}|Y;\beta=\beta_{\tau})\) [%]. The large index corresponds to lower temperatures. The color scale denotes the sampling frequency of \(g_{k}=1\) on a log scale. The dark red color indicates the presence of crystal structure in the measured sample.
prescreening may exclude true crystal structures from the candidates.
Figure 6(a) shows the fitting results via the profile function for the measured XRD data. In Figure 6(a), the black and red lines indicate the measured XRD data and the fitted profile function, respectively. Figure 6(b) shows the peak components of the three crystal structures of TiO\({}_{2}\): Anatase, Brookite, and Rutile. The red, green, and blue lines indicate the peaks of Anatase, Brookite, and Rutile, respectively. As shown in this figure, the estimated profile function provided a good fit to the XRD data. The mean Poisson cost \(E(\hat{\Theta})\) was 5.026.
Figure 7 shows an expanded view of the posterior distribution of the peak area ratio, which we use to examine the shape of the posterior distribution. In Figure 7, the red, green, and blue histograms correspond to the posterior distributions of Anatase, Brookite, and Rutile, respectively. As evident, the posterior distribution of Rutile, which has the best crystallinity, exhibited a sharper shape than those of Anatase and Brookite. On the logarithmic y-axis, the shape of the posterior distribution resembled a quadratic function, implying that the posterior distribution is approximately Gaussian. The MAP estimate of the ratio was Anatase : Brookite : Rutile = \(35.7\) : \(31.8\) : \(32.5\) [%]. Because the structural ratio of the
Figure 6: Result of profile analysis in the measurement XRD data using our method [(a): Fitting result via profile function in the measurement XRD data. In this figure, the black and the red lines indicate the measurement XRD data and the fitting profile functions, respectively. (b): Peak components in three crystal structures of TiO\({}_{2}\); Anatase, Brookite, and Rutile. The red, green, and blue lines indicate the peaks of Anatase, Brookite, and Rutile, respectively.]
preparation is Anatase : Brookite : Rutile = \(33.3\dot{3}:33.3\dot{3}:33.3\dot{3}\) [%], the proposed method provides a reasonable estimate.
Figure 8 shows the posterior distributions of the profile parameters obtained when analyzing the measured XRD data using the proposed method. The red, green, and blue histograms represent the posterior distributions of Anatase, Brookite, and Rutile, respectively. Figure 8(a)-(d) show the peak height \(h\), peak shift \(\mu\), Gaussian-Lorentz ratio \(r\), and asymmetry parameter \(\alpha\), respectively. Figure 8(e) and (f) show the Gaussian width \(\Sigma(x_{i};u_{k},v_{k},w_{k},\alpha_{k})\) and the Lorentzian width \(\Omega(x_{i};s_{k},t_{k},\alpha_{k})\), where \(x_{i}\) is \(2\theta=60\) [\({}^{\circ}\)]. As indicated in part (a) of this figure, the height \(\mathbf{h}\) can be estimated with high precision using the proposed method. The figure also shows that the peak shifts for all three crystal structures were positive (\(\mu=0.04\sim 0.06\)). This may be attributed to minute calibration deviations of the measurement device, such as eccentricity and zero-point errors. As shown in parts (e) and (f) of this figure, the peak width of Rutile was narrow, indicating good crystallinity. Furthermore, we confirmed that Rutile, with its good crystallinity, exhibited a sharp posterior distribution for most of the profile parameters. By contrast, Brookite, with its poor crystallinity, tended to exhibit a broad posterior distribution. This indicates that a structure with good crystallinity can be estimated with high precision.
## 9 Conclusion
Knowledge of the probability that a sample contains a given candidate crystal structure, obtained from full-range XRD data that account for both the diffraction angles of the peaks and the profile functions, is essential. This study aimed at Bayesian estimation of the structures contained in a sample from a large number of candidate crystal structures in the analysis of XRD data. To this end, indicator vectors were introduced into the profile function, and the XRD data were analyzed by sampling the posterior distribution using the REMC method. Consequently, we succeeded in identifying the true crystal struc
Figure 7: Expanded view of the posterior distribution of the peak area ratio when analyzing the measurement XRD data using the proposed method. The units for the axes are percentages [%].
Figure 8: Posterior distributions of the profile parameters for the measured XRD data. The red, green, and blue histograms correspond to the posterior distributions of Anatase, Brookite, and Rutile, respectively. [(a): Peak height \(h\), (b): peak shift \(\mu\), (c): Gauss-Lorentz ratio \(r\), and (d): asymmetry parameter \(\alpha\)]. (e) and (f) are the Gaussian width \(\Sigma(x_{i};u_{k},v_{k},w_{k},\alpha_{k})\) and the Lorentzian width \(\Omega(x_{i};s_{k},t_{k},\alpha_{k})\), where \(x_{i}\) is \(2\theta=60\) [\({}^{\circ}\)]. The black dot-dash line indicates the true parameter of the profile function.
tures from among 50 candidates with high probability. The proposed method also estimated the mixing ratio of the selected crystal structures with high precision. Our results further indicate that structures with higher crystallinity are identified more clearly. The proposed method is thus a sensitive, probabilistic analysis method that can automatically identify crystal structures from full-range XRD data.
## Acknowledgment
This work was supported by MEXT KAKENHI under grant (number 18K05191); and JSPS KAKENHI under grant (number 19K12154).
|
2309.04596 | Learning Task Skills and Goals Simultaneously from Physical Interaction | In real-world human-robot systems, it is essential for a robot to comprehend
human objectives and respond accordingly while performing an extended series of
motor actions. Although human objective alignment has recently emerged as a
promising paradigm in the realm of physical human-robot interaction, its
application is typically confined to generating simple motions due to inherent
theoretical limitations. In this work, our goal is to develop a general
formulation to learn manipulation functional modules and long-term task goals
simultaneously from physical human-robot interaction. We show the feasibility
of our framework in enabling robots to align their behaviors with the long-term
task objectives inferred from human interactions. | Haonan Chen, Ye-Ji Mun, Zhe Huang, Yilong Niu, Yiqing Xie, D. Livingston McPherson, Katherine Driggs-Campbell | 2023-09-08T21:07:08Z | http://arxiv.org/abs/2309.04596v1 | # Learning Task Skills and Goals Simultaneously from Physical Interaction
###### Abstract
In real-world human-robot systems, it is essential for a robot to comprehend human objectives and respond accordingly while performing an extended series of motor actions. Although human objective alignment has recently emerged as a promising paradigm in the realm of physical human-robot interaction, its application is typically confined to generating simple motions due to inherent theoretical limitations. In this work, our goal is to develop a general formulation to learn manipulation functional modules and long-term task goals simultaneously from physical human-robot interaction. We show the feasibility of our framework in enabling robots to align their behaviors with the long-term task objectives inferred from human interactions.
## I Introduction
One of the core challenges in physical human-robot interaction (pHRI) for robotic manipulation is to estimate human goals and adapt the robot's interaction with the environment accordingly [1]. Learning to manipulate objects such as chopping or pouring is relatively easy for a child with parental guidance and feedback, but modeling and planning robot interactions with the environment to do the same can be difficult. Previous works have explored a variety of strategies for handling pHRI, including generating desired impedance, switching to gravity compensation to comply with human-applied force, or updating the objective function based on real-time interaction [2, 3].
These approaches are accompanied by several limitations, such as being restricted to simple motion generation and lacking the capacity to synthesize intricate motions. In contrast, we introduce behavior primitives and propose a framework that allows robots to learn from human interaction while manipulating liquid or granular materials (see Fig. 1). By employing a parameterized action space, the autonomous agent can infer the human's intent to interact with the object and environment. The incorporation of behavior primitives enables the robot to generate complex behaviors, thereby facilitating operation in more general settings.
In this work, we propose a novel framework that identifies task goals and subsequently updates the robot's behavior during interactions with the external environment. We aim to minimize the human effort (i.e., interaction time) needed to teach the robot to complete the task. To this end, we take a hierarchical approach that decomposes the task into high-level task skills (also referred to as behavior primitives in this paper) and low-level parameters for each skill, which allows the robot to learn complex tasks such as pouring. Through hierarchical modeling, we allow robots to reject human disturbances, estimate high-level controller types, and infer parameters for low-level controllers. We employ a Bayesian inference framework to infer both the desired skills (_e.g._, shaking, tapping, and stopping) and the long-term task goal (_e.g._, pouring amount).
## II Methodology
### _Problem Formulation_
We model pHRI as a discrete system with forward dynamics function \(f\) in line with prior works [2, 3, 4]:
\[x_{r}^{t+1}=f(x_{r}^{t},u_{r}^{t}+u_{h}^{t}). \tag{1}\]
where \(x\in\mathcal{R}^{n\times 6}\) denotes the joint positions and velocities of the \(n\)-DOF robot, \(u_{r}\in\mathcal{R}^{6}\) is commanded velocity of the end effector, and \(u_{h}\in\mathcal{R}^{6}\) is the velocity of the end effector resulting from the wrench applied by the human at the time step \(t\). In the presence of human actions, the robot's trajectory is subject to deformation to conform to human corrections.
We assume that there is a task parameter \(\beta\), which captures the desired goal of humans in carrying out the task. The robot, lacking knowledge of the human's true target, relies on human interactions to gain information about this objective. For example, \(\beta\) could denote the desired quantity of liquids or powders to complete the pouring task, or the target size for chopping. The robot estimates \(\beta\) by filtering based on observations \(o^{0:t}=\{x_{r}^{t},x_{e}^{t},u_{r}^{t},u_{h}^{t}\}\), where \(x_{e}^{t}\) represents the state measurement of the environment. In our task setting, \(x_{e}^{t}\) denotes the measured poured amount. The robot updates a belief \(b^{t}(\beta)=P(\beta|o^{0:t})\) from the previous history of observations \(o^{0:t}\). Utilizing Bayes' theorem, we have:
\[P(\beta|o^{0:t})\propto P(o^{t}|\beta,o^{0:t-1})\,P(\beta|o^{0:t-1}) \tag{2}\]
We can expand the likelihood \(P(o^{t}|\beta,o^{0:t-1})\) as:
\[P(o^{t}|\beta,o^{0:t-1}) =P(x_{r}^{t},x_{e}^{t},u_{r}^{t},u_{h}^{t}|\beta,o^{0:t-1}) \tag{3}\] \[=P(u_{r}^{t}|\beta,o^{0:t-1},x_{r}^{t},x_{e}^{t},u_{h}^{t})\] \[P(u_{h}^{t}|\beta,o^{0:t-1},x_{r}^{t},x_{e}^{t})P(x_{r}^{t},x_{e }^{t}|\beta,o^{0:t-1})\]
We make three reasonable assumptions to justify the simplification of the likelihood \(P(o^{t}|\beta,o^{0:t-1})\). The first assumption is that the robot's action complies with the human during the interaction while being a deterministic mapping from \(\beta\) and \(x_{e}^{t}\) when there is no interaction. Thus,
Fig. 1: Experimental Setup. The participant grasps the robot’s end effector to perform the pouring actions, while the robot learns about the desired pouring amount and the pouring skills.
\(P(u_{r}^{t}|\beta,o^{0:t-1},x_{r}^{t},x_{e}^{t},u_{h}^{t})\) can be dropped from Equation 3. The second assumption is that the human's action follows the _Markov property_, so \(u_{h}^{t}\) is _independent_ of \(o^{0:t-1}\) conditioned on the task goal \(\beta\) and the states \(x_{r}^{t},x_{e}^{t}\). The third assumption is that the state transition dynamics are deterministic for both the robot and the environment, so the term \(P(x_{r}^{t},x_{e}^{t}|\beta,o^{0:t-1})\) can also be dropped. We can then express the likelihood as:
\[P(o^{t}|\beta,o^{0:t-1})\propto P(u_{h}^{t}|\beta,x_{r}^{t},x_{e}^{t}) \tag{4}\]
Combining Equation 2 and Equation 4, we can now get the iterative posterior distribution over the task goal belief:
\[b^{t}(\beta)\propto P(u_{h}^{t}|\beta,x_{r}^{t},x_{e}^{t})\,b^{t-1}(\beta) \tag{5}\]
### _Approximate Inference over Task Goals_
**Observation Model.** We model that humans' actions reflect the difference between the current task progress and the desired task goal. For example, in the task of robot pouring, the human operator adjusts the robot's pouring speed based on the difference between the desired and actual amount of liquid. When the gap is large, the operator will adjust the robot to pour more aggressively, while for a smaller gap, the operator will adjust the robot to pour more conservatively. To formalize this relationship, we define a distance function \(\Delta(x_{e}^{t},\beta)\) to measure the discrepancy between the current progress \(x_{e}^{t}\) and the desired task goal \(\beta\). We then use a function \(g\) that maps the distance to a probability distribution over the space of possible actions, giving us:
\[p(u_{h}^{t}|\beta,x_{r}^{t},x_{e}^{t})\propto p(u_{h}^{t}|\beta,x_{e}^{t}) \propto g(\Delta(x_{e}^{t},\beta)). \tag{6}\]
In the case of robot pouring, the function \(g\) can be chosen to be a sigmoid function that maps the discrepancy to a probability of choosing a certain pouring speed, with higher discrepancy values corresponding to higher probabilities of aggressive pouring. In the event that the robot state \(x_{r}^{t}\) aligns with human expectations, it is expected that humans will refrain from taking action, denoted by \(u_{h}^{t}\) being zero.
**Approximating Task Goal Posterior.** Since there is no linear relationship between the observation model and the task goal, and given the cardinality of the task goal space \(|\mathcal{B}|\), we represent the belief as the density of a sampled distribution:
\[b^{t}(\beta)=\sum_{i=1}^{|\mathcal{B}|}w_{i}^{t}\delta(\beta_{i}) \tag{7}\]
where \(\delta(\beta_{i})\) is a delta function centered at \(\beta_{i}\). Using _importance sampling_, we can approximate a target distribution \(b^{t}(\beta)\) by drawing samples from a proposal distribution \(q(\beta^{t}|u_{h}^{0:t})\)[5]. The weight \(w_{i}^{t}\) can be represented as:
\[w_{i}^{t}\propto\frac{p(\beta_{i}|o^{0:t})}{q(b^{t}(\beta_{i})|o^{0:t})}. \tag{8}\]
The weight can be written recursively as:
\[w_{i}^{t}\propto w_{i}^{t-1}\frac{P(u_{h}^{t}|\beta,x_{r}^{t},x_{e}^{t})}{q(b ^{t}(\beta_{i})|b^{t-1}(\beta_{i}),o^{t})}. \tag{9}\]
Taking the proposal distribution \(q(b^{t}(\beta_{i})|b^{t-1}(\beta_{i}),o^{t})\) to be a deterministic value, we have:
\[w_{i}^{t}\propto w_{i}^{t-1}P(u_{h}^{t}|\beta,x_{r}^{t},x_{e}^{t}). \tag{10}\]
In practice, we normalize the weights so that \(\sum_{i=1}^{|\mathcal{B}|}w_{i}^{t}=1\). The full algorithm is summarized in Algorithm 1.
```
Initialize: \(w_{i}^{t=0}\leftarrow\frac{1}{|\mathcal{B}|}\) for \(i=1,\ldots,|\mathcal{B}|\)
for \(t=0\) to \(T\) do
    \(u_{r}^{t}=B_{r}(\hat{q}_{r}^{t}-\hat{q}^{t})+K_{r}(q_{r}^{t}-q^{t})\)
    \(\eta\leftarrow 0\)
    for \(i=1\) to \(|\mathcal{B}|\) do
        if \(u_{h}^{t}=0\) then
            \(P(u_{h}^{t}|\beta_{i},x_{r}^{t},x_{e}^{t})\leftarrow 1\)
        else
            \(P(u_{h}^{t}|\beta_{i},x_{r}^{t},x_{e}^{t})\leftarrow P_{g(\Delta(x_{e}^{t},\beta_{i}))}(u_{h}^{t})\)   \(\triangleright\) (6)
        end if
    end for
    for \(i=1\) to \(|\mathcal{B}|\) do
        \(w_{i}^{t}\leftarrow w_{i}^{t-1}\,p(u_{h}^{t}|\beta_{i},x_{e}^{t})\)
        \(\eta\leftarrow\eta+w_{i}^{t}\)
    end for
    for \(i=1\) to \(|\mathcal{B}|\) do
        \(w_{i}^{t}\leftarrow w_{i}^{t}/\eta\)
    end for
    \(u_{r}^{t}\leftarrow Opt(\Delta(\beta_{cur},b^{t}(\beta)))\)
end for
```
**Algorithm 1** Online Goal Learning from pHRI
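For reference, the weight update of Algorithm 1 can be written compactly in NumPy as below; the `likelihood` callable stands for \(P(u_{h}^{t}|\beta_{i},x_{r}^{t},x_{e}^{t})\) and could, for instance, be the sketch given after Eq. (6).

```python
import numpy as np

def update_belief(weights, goals, u_h, x_e, likelihood):
    """One Bayesian filtering step over a discretized goal set (Eqs. (5), (10))."""
    weights = np.asarray(weights, dtype=float)
    if u_h == 0.0:
        # No human correction: the likelihood is taken as 1 for every goal.
        lik = np.ones_like(weights)
    else:
        lik = np.array([likelihood(u_h, x_e, b) for b in goals])
    new_w = weights * lik
    return new_w / new_w.sum()   # enforce sum_i w_i = 1
```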
## III Conclusion and Future Works
In this work, we introduce a novel framework that employs importance sampling within a Bayesian paradigm to minimize the human effort required in various daily scenarios that involve pHRI. We formalize how the robot can effectively infer task objectives (i.e., the target pouring amount) and optimize the task skills (i.e., shaking, pouring, stopping) during physical interaction. We also analyze how task goals can be inferred in complex daily manipulation tasks. We plan to conduct a user study to demonstrate the applicability of the proposed framework in a series of complex pouring tasks involving shaking and tapping motions. In the future, we also intend to evaluate the proposed approach in terms of its ability to generalize and its practical feasibility with respect to a range of source containers and pouring materials (e.g., rice, beans, candies, cereals, and carrots).
|
2301.13838 | Image Shortcut Squeezing: Countering Perturbative Availability Poisons
with Compression | Perturbative availability poisons (PAPs) add small changes to images to
prevent their use for model training. Current research adopts the belief that
practical and effective approaches to countering PAPs do not exist. In this
paper, we argue that it is time to abandon this belief. We present extensive
experiments showing that 12 state-of-the-art PAP methods are vulnerable to
Image Shortcut Squeezing (ISS), which is based on simple compression. For
example, on average, ISS restores the CIFAR-10 model accuracy to $81.73\%$,
surpassing the previous best preprocessing-based countermeasures by $37.97\%$
absolute. ISS also (slightly) outperforms adversarial training and has higher
generalizability to unseen perturbation norms and also higher efficiency. Our
investigation reveals that the property of PAP perturbations depends on the
type of surrogate model used for poison generation, and it explains why a
specific ISS compression yields the best performance for a specific type of PAP
perturbation. We further test stronger, adaptive poisoning, and show it falls
short of being an ideal defense against ISS. Overall, our results demonstrate
the importance of considering various (simple) countermeasures to ensure the
meaningfulness of analysis carried out during the development of PAP methods. | Zhuoran Liu, Zhengyu Zhao, Martha Larson | 2023-01-31T18:31:20Z | http://arxiv.org/abs/2301.13838v2 | # Image Shortcut Squeezing:
###### Abstract
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use for model training. Current research adopts the belief that practical and effective approaches to countering such poisons do not exist. In this paper, we argue that it is time to abandon this belief. We present extensive experiments showing that 12 state-of-the-art PAP methods are vulnerable to Image Shortcut Squeezing (ISS), which is based on simple compression. For example, on average, ISS restores the CIFAR-10 model accuracy to \(81.73\%\), surpassing the previous best preprocessing-based countermeasures by \(37.97\%\) absolute. ISS also (slightly) outperforms adversarial training and has higher generalizability to unseen perturbation norms and also higher efficiency. Our investigation reveals that the property of PAP perturbations depends on the type of surrogate model used for poison generation, and it explains why a specific ISS compression yields the best performance for a specific type of PAP perturbation. We further test stronger, adaptive poisoning, and show it falls short of being an ideal defense against ISS. Overall, our results demonstrate the importance of considering various (simple) countermeasures to ensure the meaningfulness of analysis carried out during the development of availability poisons. Our code is available at [https://github.com/liuzrcc/ImageShortcutSqueezing](https://github.com/liuzrcc/ImageShortcutSqueezing).
## 1 Introduction
The ever-growing amount of data that is easily available online has driven the tremendous advances of deep neural networks (DNNs) (Schmidhuber, 2015; LeCun et al., 2015; He et al., 2016; Brown et al., 2020). However, online data may be proprietary or contain private information, raising concerns about unauthorized use. Availability poisoning is recognized as a promising approach to data protection and recently a large number of poisoning methods have been proposed that add perturbations to images which block training by acting as shortcuts (Shen et al., 2019; Huang et al., 2021; Fowl et al., 2021, 2021). As illustrated by Figure 1 (a)\(\rightarrow\)(b), the high test accuracy of a DNN model is substantially reduced by perturbative poisons.
Existing research has shown that poisons can be compromised to a limited extent by preprocessing-based-countermeasures, such as data augmentations (Huang et al., 2021; Fowl et al., 2021) and pre-filtering (Fowl et al., 2021, 2021). However, a widely adopted belief is that no approaches exist that are capable of effectively countering poisons. Adversarial training (AT) has been proven to be a strong countermeasure (Tao et al., 2021; Wen et al., 2023). However, it is not considered to be a practical one, since it requires a large amount of computation and also gives rise to a non-negligible trade-off in test accu
Figure 1: An illustration of our Image Shortcut Squeezing (ISS) for countering perturbative availability poisons. The high model accuracy is reduced by poisons but is then restored by our ISS. Results are reported for EM (Huang et al., 2021) poisons on CIFAR-10.
racy of the clean (non-poisoned) model (Madry et al., 2018; Zhang et al., 2019). Further, AT trained with a specific \(L_{p}\) norm is hard to generalize to other norms (Tramer & Boneh, 2019; Laidlaw et al., 2021).
In this paper, we challenge the belief that it is impossible to counter perturbative availability poisons both easily and effectively by demonstrating that they are vulnerable to simple compression. First, we categorize 12 poisoning methods into three categories with respect to the surrogate models they use during poison generation: slightly-trained (Feng et al., 2019; Huang et al., 2021; Yuan & Wu, 2021; Fu et al., 2021; van Vlijmen et al., 2022), fully-trained (Shen et al., 2019; Tao et al., 2021; Fowl et al., 2021; Chen et al., 2023), and surrogate-free (Wu et al., 2023; Yu et al., 2022; Sandoval-Segura et al., 2022). Then, we analyze perturbations/shortcuts that are learned with these methods and demonstrate that they are strongly dependent on features that are learned in different training stages of the model. Specifically, we find that the methods using a slightly-trained surrogate model prefer _low-frequency_ shortcuts, while those using a fully-trained model prefer _high-frequency_ shortcuts.
Building on this new understanding, we propose Image Shortcut Squeezing (ISS), a simple, compression-based approach to countering perturbative availability poisons. As illustrated by Figure 1 (b)\(\rightarrow\)(c), the low test accuracy of the poisoned DNN model is restored by our ISS to be close to the original accuracy. In particular, grayscale compression is used to eliminate low-frequency shortcuts, and JPEG compression is used to eliminate high-frequency shortcuts. We also show that our understanding of high vs. low frequency can also help eliminate surrogate-free poisons (Wu et al., 2023; Yu et al., 2022; Sandoval-Segura et al., 2022). Our ISS substantially outperforms previously studied data augmentation and pre-filtering countermeasures. ISS also achieves comparable results to adversarial training and has three main advantages: 1) generalizability to multiple \(L_{p}\) norms, 2) efficiency, and 3) low trade-off in clean model accuracy (see Section 4.2 for details).
We further test the performance of ISS against potentially stronger poisoning methods that are aware of ISS and can be adapted to it. We show that they are not ideal against our ISS. Overall, we hope our study can inspire more meaningful analyses of poisoning methods and encourage future research to evaluate various (simple) countermeasures when developing new poisoning methods.
In sum, we make the following main contributions:
* We identify the strong dependency of the perturbation frequency patterns on the surrogate model property. Based on this new insight, we show that 12 existing perturbative poisoning methods are indeed very vulnerable to simple image compression.
* We propose Image Shortcut Squeezing (ISS), a simple yet effective approach to countering perturbative poisons. ISS applies image compression operations, such as JPEG and grayscale, to poisoned images for restoring the model accuracy.
* We demonstrate that ISS outperforms existing data augmentation and pre-filtering countermeasures by a large margin and is comparable to adversarial training but is more generalizable to multiple \(L_{p}\) norms and more efficient.
* We explore stronger, adaptive poisons against our ISS and provide interesting insights into understanding poisons, e.g., about the model learning preference of different perturbations.
## 2 Related Work
### Perturbative Availability Poisoning
Perturbative availability poisoning has been extensively studied. TensorClog (TC) (Shen et al., 2019) optimizes the poisons by exploiting parameters of a pre-trained surrogate to cause vanishing gradients. Deep Confuse (DC) (Feng et al., 2019) collects the training trajectories of a surrogate classifier for learning a poison generator, which is computationally intensive. Error-Minimizing (EM) poisoning (Huang et al., 2021) minimizes the classification errors of images on a surrogate classifier with respect to their original labels in order to make them "unlearnable examples". The surrogate is also alternatively updated to mimic the model training dynamics during poison generation. Hypocritical (HYPO) (Tao et al., 2021) follows a similar idea to EM but uses a pre-trained surrogate rather than the above bi-level optimization. Targeted Adversarial Poisoning (TAP) (Fowl et al., 2021) also exploits a pre-trained model but minimizes classification errors of images with respect to incorrect target labels rather than original labels.
Robust Error-Minimizing (REM) (Fu et al., 2021) improves the poisoning effects against adversarial training (with a relatively small norm) by replacing the normally-trained surrogate in EM with an adversarially-trained model. Similar approaches (Wang et al., 2021; Wen et al., 2023) on poisoning against adversarial training are also proposed. The usability of poisoning is also validated in scenarios requiring transferability (Ren et al., 2023) or involving unsupervised learning (He et al., 2022; Zhang et al., 2022).
There are also studies focusing on revising the surrogate, e.g., Self-Ensemble Protection (Chen et al., 2023), which aggregates multiple training model checkpoints, and NTGA (Yuan & Wu, 2021), which adopts the generalized neural tangent kernel to model the surrogate as Gaussian Processes (Jacot et al., 2018). ShortcutGen (SG) (van Vlijmen et al., 2022) learns a poison generator based on a
randomly initialized fixed surrogate and shows its efficiency compared to the earlier generative method, Deep Confuse.
Different from all the above surrogate-based methods, recent studies also explore surrogate-free poisons (Evtimov et al., 2021; Yu et al., 2022; Sandoval-Segura et al., 2022). Intuitively, simple patterns, such as random noise (Huang et al., 2021) and semantics (e.g., MNIST-like digits) (Evtimov et al., 2021), can be used as learning shortcuts when they form different distributions for different classes. Very recent studies also synthesize more complex, linearly separable patterns to boost the poisoning performance by sampling from a high-dimensional Gaussian distribution (Yu et al., 2022) and further refining them through an autoregressive process (Sandoval-Segura et al., 2022). One Pixel Shortcut (OPS) specifically explores the model vulnerability to sparse poisons and shows that perturbing only one pixel is sufficient to generate strong poisons (Wu et al., 2023).
In this paper, we evaluate our Image Shortcut Squeezing (ISS) against 12 representative poisoning methods as presented above. In particular, we consider poisons constrained by different \(L_{p}\) norms.
### Adversarial Perturbations
**Countering adversarial perturbations.** Simple image compressions, such as JPEG, bit depth reduction, and smoothing, are effective for countering adversarial perturbations based on the assumption that they are inherently high-frequency noise (Dziugaite et al., 2016; Das et al., 2017; Xu et al., 2017). Other image transformations commonly used for data augmentation, e.g., resizing, rotating, and shifting, are also shown to be effective (Xie et al., 2018; Tian et al., 2018; Dong et al., 2019). However, such image pre-processing operations may be bypassed when the attacker is aware of them and adapts to them (Carlini et al., 2019). Differently, adversarial training (AT) (Madry et al., 2018; Zhang et al., 2019) remains effective against adaptive attacks and is considered to be the most powerful defense so far. AT has also been proven to be a principled defense against perturbative poisons (Tao et al., 2021).
**Adversarial perturbations for data protection.** Besides (training-time) data poisoning, adversarial examples can also be used for data protection, but at inference time. Related research has explored person-related recognition (Oh et al., 2016, 2017; Sattar et al., 2020; Rajabi et al., 2021) and social media mining (Larson et al., 2018; Li et al., 2019; Liu et al., 2020). An overview of inference-time data protection in images is provided by (Orekondy et al., 2017).
In this paper, our ISS is inspired by compression-based techniques. We carry out a systematic analysis of countermeasures against PAP, including compression-based pre-processing, data augmentations, and adversarial training.
## 3 Analysis of Perturbative Availability Poisons
### Problem Formulation
We formulate the problem of countering availability poisoning-based data protection in the context of image classification. There are two parties involved, the data _protector_ and _exploiter_. The data protector poisons their own images to prevent them from being used by the exploiter for training a well-generalizable classifier. Specifically, here the poisoning is achieved by adding imperceptible perturbations. The data exploiter is aware that their collected images may contain poisons and so apply countermeasures to ensure their trained classifier is still well-generalizable. The success of the countermeasure is measured by the accuracy of the classifier on clean test images, and the higher, the more successful.
Formally stated, the protector aims to make a classifier \(F\) generalize poorly on the clean image distribution \(\mathcal{D}\), from which the clean training set \(\mathcal{S}\) is sampled:
\[\max_{\mathbf{\delta}}\ \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}\bigg{[}\mathcal{L} \left(F(\mathbf{x};\mathbf{\theta}(\mathbf{\delta})),y\right)\bigg{]} \tag{1}\]
\[\text{s.t.}\ \mathbf{\theta}(\mathbf{\delta})=\operatorname*{argmin}_{\mathbf{\theta}} \sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{S}}\mathcal{L}(F(\mathbf{x}_{i}+\mathbf{\delta}_{i };\mathbf{\theta}),y_{i}), \tag{2}\]
where \(\mathbf{\theta}\) represents the parameters of the classifier \(F\), and \(\mathcal{L}(\cdot;\cdot)\) is the cross-entropy loss, which takes as input a pair of model output \(F(\mathbf{x}_{i};\mathbf{\theta})\) and the corresponding label \(y_{i}\). \(\mathbf{\delta}\) denotes the additive perturbations with \(\epsilon\) as the \(L_{p}\) bound.
The exploiter aims to counter the poisons by applying a countermeasure \(C\) to restore the model accuracy even when it is trained on poisoned data \(\mathcal{P}\):
\[\min_{\mathbf{\theta}}\ \sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{P}}\mathcal{L}(F(C(\mathbf{x}_ {i}+\mathbf{\delta}_{i});\mathbf{\theta}),y_{i}). \tag{3}\]
### Categorization of Existing Poisoning Methods
We carried out an extensive survey of existing poisoning methods, which allowed us to identify three categories based on the type of surrogate classifier they use. These three categories are: generating poisons 1) with a slightly-trained surrogate, 2) with a fully-trained surrogate, and 3) in a surrogate-free manner. Table 1 provides an overview of this categorization. In the first category, the surrogate is at an early stage of training. Existing methods in this category either fix (Yuan and Wu, 2021; van Vlijmen et al., 2022) or alternatively update (Feng et al., 2019; Huang et al., 2021; Fu et al., 2021) the surrogate while optimizing the poisons. In the second category, the surrogate has been fully trained. Existing methods in this category
fix the surrogate (Shen et al., 2019; Tao et al., 2021; Fowl et al., 2021; Chen et al., 2023) but in principle, it may also be possible that the model is alternatively updated. In the third category, no surrogate is used but the poisons are synthesized by sampling from Gaussian distributions (Yu et al., 2022; Sandoval-Segura et al., 2022) or optimized with a perceptual loss (Wu et al., 2023).
OPS perturbations only contain one pixel and so can be treated as an extreme case of high-frequency patterns.
### Our Image Shortcut Squeezing
Based on the above new frequency-based interpretation, we propose Image Shortcut Squeezing (ISS), a simple, image compression-based countermeasure to eliminate perturbative poisons. We rely on different compression operations suitable for eliminating different types of perturbations. Overall, a specific compression operation is applied to the \(C(\cdot)\) in Eq. 3.
For low-frequency perturbations, where the differences across color channels are large, we propose to use grayscale transformation. We expect grayscale transformation not to sacrifice too much of the test accuracy of a clean model because color information is known to contribute little to DNNs' performance in differentiating objects (Xie & Richmond, 2018).
For high-frequency perturbations, we follow existing research on eliminating adversarial perturbations and use common image compression operations, such as JPEG and bit depth reduction (BDR) (Dziugaite et al., 2016; Das et al., 2017; Xu et al., 2017). We expect such compression not to sacrifice too much of the test accuracy of a clean model because DNNs are known to be resilient to mild image compression, e.g., JPEG with a quality factor above 10 (Dodge & Karam, 2016).
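As a minimal sketch, ISS can be implemented as the pre-processing \(C(\cdot)\) of Eq. (3) with PIL as below (the experiments in this paper rely on torchvision transforms); the JPEG quality factor and the bit depth shown here are illustrative placeholders rather than the exact settings studied in Section 4.

```python
import io
import numpy as np
from PIL import Image

def grayscale(img: Image.Image) -> Image.Image:
    # Remove cross-channel (color) information, targeting low-frequency shortcuts.
    return img.convert("L").convert("RGB")

def jpeg(img: Image.Image, quality: int = 10) -> Image.Image:
    # Lossy JPEG compression, targeting high-frequency shortcuts.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def bit_depth_reduction(img: Image.Image, bits: int = 2) -> Image.Image:
    # Quantize each channel to 2**bits intensity levels.
    arr = np.asarray(img).astype(np.float32) / 255.0
    levels = 2 ** bits - 1
    arr = np.round(arr * levels) / levels
    return Image.fromarray((arr * 255).astype(np.uint8))

# Example: squeeze a (possibly poisoned) image before it enters the training set.
# img = Image.open("poisoned.png").convert("RGB")
# img = jpeg(grayscale(img), quality=10)
```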
## 4 Experiments
In this section, we evaluate our Image Shortcut Squeezing (ISS) and other existing countermeasures against 12 representative perturbative poisoning methods. We focus our experiments on the basic setting in which the surrogate (if it is used) and target models are the same and the whole training set is poisoned. We also explore more challenging poisoning scenarios with unseen target models or partial poisoning (poisoning a randomly selected proportion or a specific class).
### Experimental Settings
**Datasets and models.** We consider three datasets: CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009), and a 100-class subset of ImageNet (Deng et al., 2009). If not mentioned specifically, on CIFAR-10 and CIFAR-100, we use 50000 images for training and 10000 images for testing. For the ImageNet subset, we select 20% of the images from the first 100 classes of the official ImageNet training set for training and all corresponding images in the official validation set for testing. If not mentioned specifically, ResNet-18 (RN-18) (He et al., 2016) is used as the surrogate model and the target model. To study transferability, we consider target models with diverse architectures: ResNet34 (He et al., 2016), VGG19 (Simonyan & Zisserman, 2015), DenseNet121 (Huang et al., 2017), MobileNetV2 (Sandler et al., 2018), and ViT (Dosovitskiy et al., 2021).
**Training and poisoning settings.** We train the CIFAR-10 and CIFAR-100 models for 60 epochs and the ImageNet models for 100 epochs. We use Stochastic Gradient Descent (SGD) with a momentum of 0.9, a learning rate of 0.025, and cosine weight decay. We adopt a torchvision module1 for implementing Grayscale, JPEG, and bit depth reduction (BDR) in our Image Shortcut Squeezing (ISS). We consider 12 representative existing poisoning methods as listed in Table 1 under various \(L_{p}\) norm bounds. A brief description of the 12 methods can be found in Appendix A. Specifically, we follow existing work and use \(L_{\infty}=8\), \(L_{2}=1.0\), and \(L_{0}=1\).
Footnote 1: [https://pytorch.org/vision/stable/_modules/torchvision/transforms/transforms.html](https://pytorch.org/vision/stable/_modules/torchvision/transforms/transforms.html)
### Evaluation in the Common Scenario
We first evaluate our ISS against 12 representative poisoning methods in the common scenario where the surrogate and target models are the same and the whole training dataset is poisoned. Experimental results on CIFAR-10 shown in Table 2 demonstrate that ISS can substantially restore the clean test accuracy of poisoned models in all cases. Consistent with our new insight in Section 3.3, grayscale yields the best performance in countering methods that rely on low-frequency perturbations with large color differences (see more results by other color compression methods on EM in Appendix C). In contrast, JPEG and BDR are the best against methods that rely on high-frequency perturbations. Additional results for other hyperparameters of JPEG and BDR in Table 11 of Appendix B show that milder settings yield worse results.
Our ISS also outperforms other countermeasures. Specifically, data augmentations applied to clean models increase test accuracy but they are not effective against poisons. Image Smoothing is sometimes effective, e.g., median filtering performs the best against OPS as expected since it is effective against impulsive noise. Adversarial training (AT) achieves comparable performance to our ISS for \(L_{\infty}\) and \(L_{2}\) norms but much worse performance for the \(L_{0}\) norm. This verifies the higher generalizability of our ISS to unseen norms. It is worth noting that the ISS training time is only \(\frac{1}{7}\) of the AT training time on CIFAR-10. The efficiency of our ISS becomes more critical when the dataset is larger and the image resolution is higher. Additional experimental results for \(L_{\infty}=16\) shown in Table 3 confirm the general effectiveness of our ISS.
We further conduct experiments on CIFAR-100 and ImageNet. Note that for CIFAR-100, we only test the poisoning methods that include CIFAR-100 experiments in their original work. For ImageNet, the poison optimization process is very time-consuming, especially for NTGA (Yuan and Wu, 2021) and Deep Confuse (Feng et al., 2019). Therefore, following the original work, these two methods are tested with only two classes. Note that such time-consuming poisoning methods are not good candidates for data protection in practice. Experimental results on CIFAR-100 and ImageNet shown in Table 4 and Table 5 confirm the general effectiveness of our ISS.
### Evaluation in Challenging Scenarios
**Partial poisoning.** In practical scenarios, it is common that only a proportion of the training data can be poisoned. Therefore, we follow existing work (Fowl et al., 2021; Huang et al., 2020) to test such partial poisoning settings. We poison a certain proportion of the training data and mix it with the remaining clean data for training the target model.
Specifically, we test two partial poisoning settings: first, randomly selecting a certain proportion of the images, and, second, selecting a specific class. In the first setting, as shown in Table 6, the poisons are effective only when a very large proportion of the training data is poisoned. For example, on average, even when 80% of the data are poisoned, the model accuracy is only reduced by about 10%. In the second setting, we poison all training samples from the class automobile on CIFAR-10. Table 7 demonstrates that almost all poisoning methods are very effective in this class-wise setting.
In both settings, our ISS is effective against all poisoning methods.
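As an illustration of this protocol, the sketch below builds a partially poisoned training set by replacing either a random fraction of the images or all images of one class with their poisoned counterparts; the tensor names are placeholders, not our actual data-loading code.

```python
# Hedged sketch of the partial-poisoning setup; `clean_images`, `poisoned_images`,
# and `labels` are assumed tensors of equal length, not our actual pipeline objects.
import torch

def mix_partial_poison(clean_images, poisoned_images, labels,
                       fraction=0.4, target_class=None, seed=0):
    images = clean_images.clone()
    if target_class is not None:
        # Class-wise setting: poison every sample of one class (e.g., automobile).
        idx = (labels == target_class).nonzero(as_tuple=True)[0]
    else:
        # Random setting: poison a fixed fraction of the training set.
        g = torch.Generator().manual_seed(seed)
        n = int(fraction * len(labels))
        idx = torch.randperm(len(labels), generator=g)[:n]
    images[idx] = poisoned_images[idx]
    return images, labels
```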
**Transferability to unseen models.** In realistic scenarios, the protector may not know the details of the target model.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline Poisons & w/o & Cutout & CutMix & Mixup & Gray & JPEG \\ \hline Clean & 77.44 & 76.72 & 80.50 & 78.56 & 71.79 & 57.79 \\ \hline EM & 7.25 & 6.70 & 7.03 & 10.68 & **67.46** & 56.01 \\ REM & 9.37 & 12.46 & 10.40 & 15.05 & **57.27** & 55.77 \\ TC & 57.52 & 60.56 & 59.19 & 59.77 & 47.93 & 58.94 \\ TAP & 9.00 & 10.30 & 8.73 & 19.16 & 8.84 & **83.77** \\ SEP & 3.21 & 3.21 & 3.98 & 7.49 & 2.10 & **58.18** \\ \hline LSP & 3.06 & 4.43 & 6.12 & 5.61 & 44.62 & **53.49** \\ AR & 3.01 & 2.85 & 3.49 & 2.19 & 24.99 & **57.87** \\ \hline \hline OPS & 23.78 & **57.98** & 56.03 & 22.71 & 32.62 & 54.92 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Additional results on CIFAR-100.
\begin{table}
\begin{tabular}{l|l c c c c c c c c|c} \hline \hline \multirow{2}{*}{Norm} & Poisons/Countermeasures & w/o & Cutout & CutMix & Mixup & Gaussian & Mean & Median & BDR & Gray & JPEG & AT \\ \cline{2-13} & Clean (no poison) & 94.68 & 95.10 & 95.50 & 95.01 & 94.17 & 45.32 & 85.94 & 88.65 & 92.41 & 85.38 & 84.99 \\ \hline & DC (Feng et al., 2019) & 16.30 & 15.14 & 17.99 & 19.39 & 17.21 & 19.57 & 15.82 & 61.10 & **93.07** & 81.84 & 78.00 \\ & NTG (Yuan and Wu, 2021) & 42.46 & 42.07 & 27.16 & 43.03 & 42.84 & 37.49 & 42.91 & 62.50 & **74.32** & 69.49 & 70.05 \\ & EM (Huang et al., 2021) & 21.05 & 20.63 & 26.19 & 32.83 & 12.41 & 20.60 & 21.70 & 36.46 & **39.01** & 81.50 & 84.80 \\ & REM (Huang et al., 2021) & 25.44 & 26.54 & 29.02 & 34.48 & 27.44 & 25.35 & 31.57 & 40.77 & **92.84** & 82.28 & 82.99 \\ & SG (Yuan and Wu, 2021) & 33.05 & 24.12 & 29.46 & 39.66 & 31.92 & 46.87 & 49.53 & 70.14 & **86.42** & 70.49 & 76.38 \\ & TC (Shine et al., 2019) & 88.70 & 86.70 & 88.43 & 88.19 & 82.58 & 72.25 & 84.27 & 84.85 & 79.73 & 85.29 & 84.53 \\ & HYPO (Tao et al., 2021) & 71.54 & 70.60 & 67.54 & 72.54 & 72.46 & 40.27 & 65.53 & 83.50 & 61.86 & **85.45** & 84.91 \\ & TAP (Fowl et al., 2021) & 8.17 & 10.04 & 10.73 & 19.14 & 9.26 & 12.81 & 32.75 & 45.99 & 9.11 & **83.87** & 83.31 \\ & SEP (Chiri et al., 2023) & 3.85 & 4.47 & 94.14 & 15.59 & 3.96 & 14.43 & 35.65 & 47.43 & 3.57 & **84.37** & 84.12 \\ \hline \hline \multirow{2}{*}{\(L_{2}=1.0\)} & LSP (Yu et al., 2022) & 19.07 & 19.87 & 20.89 & 26.99 & 19.25 & 28.85 & 29.85 & 66.19 & 82.47 & **83.01** & 84.59 \\ & AR (Sandoval-Segura et al., 2022) & 13.28 & 12.07 & 12.39 & 13.25 & 15.45 & 45.15 & 70.96 & 31.54 & 34.04 & **85.15** & 83.17 \\ \hline \hline \(L_{0}=1\) & OPS (Wu et al., 2023) & 36.55 & 67.94 & 76.40 & 45.06 & 19.29 & 23.50 & **85.16** & 53.76 & 42.44 & 82.53 & 14.41 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Clean test accuracy (%) of models trained on CIFAR-10 poisons and with our Image Shortcut Squeezing (Gray and JPEG) vs. other countermeasures. Note that TC is known to not work well under small norms, e.g., our \(L_{\infty}=8\)(Fowl et al., 2021). Hyperparameters for different countermeasures can be found in Appendix B.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Poisons & w/o & Cutout & CutMix & Mixup & Gray & JPEG & AT \\ \hline Clean & 94.68 & 95.10 & 95.50 & 95.01 & 92.41 & 88.65 & 84.99 \\ \hline EM & 16.33 & 14.0 & 13.41 & 20.22 & 60.85 & **63.44** & 61.58 \\ REM & 24.89 & 25.0 & 22.85 & 29.51 & 42.85 & **76.59** & 80.14 \\ HYP & 58.3 & 54.22 & 48.26 & 57.27 & 45.38 & **85.07** & 84.90 \\ TAP & 10.98 & 10.96 & 9.46 & 17.97 & 6.94 & **84.19** & 83.35 \\ SEP & 3.84 & 8.90 & 15.79 & 9.27 & 5.70 & **84.35** & 84.07 \\ \hline LSP & 11.97 & 14.17 & 17.98 & 20.38 & **41.35** & 40.02 & 80.22 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Additional results on CIFAR-10 with larger perturbation norms: \(L_{2}=12.0\) for LSP and \(L_{\infty}=16\) for the rest.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Poisons & w/o & Cutout & CutMix & Mixup & Gray & JPEG \\ \hline Clean & 62.04 & 61.14 & 65.100 & 64.32 & 58.24 & 58.20 \\ \hline EM & 31.52 & 30.42 & 42.98 & 21.44 & 49.78 & **49.88** \\ REM & 11.12 & 11.62 & 12.50 & 17.62 & **44.70** & 18.16 \\ TAP & 24.64 & 23.00 & 18.72 & 28.62 & 24.30 & **44.74** \\ LSP & 26.32 & 27.64 & 17.22 & 2.5 & **31.42** & 30.78 \\ \hline NTGA & 70.79 & 63.42 & 70.53 & 68.42 &
In this case, the transferability of the poisons is desirable. Table 8 demonstrates that all poisoning methods achieve high transferability to diverse model architectures and our ISS is effective against all of them. It is also worth noting that there is no clear correlation between the transferability and the similarity between the surrogate and target models. For example, transferring from ResNet-18 to ViT is not always harder than to other CNN models.
### Adaptive Poisons to ISS
In the adversarial example literature, image compression operations can be bypassed when the attacker is adapted to them (Shin & Song, 2017; Carlini et al., 2019). Similarly, we evaluate strong adaptive poisons against our ISS using two poisoning methods, EM (\(L_{\infty}\)) and LSP (\(L_{2}\)). We assume that the protector can be adapted to grayscale and/or JPEG in our ISS. Specifically, for EM, we add a differentiable JPEG compression module (Shin & Song, 2017) and/or a differentiable grayscale module into its bi-level poison optimization process. For LSP, we increase the patch size to 16\(\times\)16 to decrease high-frequency features so that JPEG will be less effective, and we make sure the pixel values are
\begin{table}
\begin{tabular}{l|c|c c c c c c} \hline \hline \multicolumn{1}{c|}{Poisons} & ISS & 0.1 & 0.2 & 0.4 & 0.6 & 0.8 & 0.9 \\ \hline \multirow{3}{*}{DC} & w/o & 94.29 & 94.26 & 93.20 & 91.66 & 87.19 & 80.14 \\ & Gray & 92.73 & 92.57 & 92.37 & 91.51 & 90.49 & 89.50 \\ & JPEG & 84.89 & 85.26 & 84.43 & 83.61 & 83.02 & 82.69 \\ \hline \multirow{3}{*}{EM} & w/o & 94.37 & 93.63 & 92.62 & 91.07 & 86.63 & 79.57 \\ & Gray & 92.60 & 92.62 & 92.52 & 92.23 & 90.96 & 89.69 \\ & JPEG & 84.61 & 84.79 & 84.96 & 84.86 & 84.93 & 84.40 \\ \hline \multirow{3}{*}{REM} & w/o & 94.39 & 94.56 & 94.37 & 94.43 & 94.19 & 81.39 \\ & Gray & 92.63 & 92.81 & 92.78 & 92.82 & 92.73 & 86.62 \\ & JPEG & 84.64 & 85.53 & 84.82 & 85.37 & 85.38 & 82.44 \\ \hline \multirow{3}{*}{SG} & w/o & 94.47 & 94.40 & 93.46 & 91.21 & 87.75 & 83.40 \\ & Gray & 92.81 & 92.65 & 91.90 & 90.65 & 88.44 & 85.26 \\ & JPEG & 84.94 & 84.61 & 84.11 & 82.66 & 80.76 & 79.38 \\ \hline \multirow{3}{*}{TC} & w/o & 93.81 & 94.09 & 93.70 & 93.59 & 93.02 & 91.47 \\ & Gray & 91.98 & 92.38 & 92.03 & 91.96 & 91.03 & 87.71 \\ & JPEG & 85.24 & 85.01 & 85.23 & 85.28 & 85.23 & 84.37 \\ \hline \multirow{3}{*}{HPO} & w/o & 93.94 & 94.43 & 93.34 & 92.56 & 90.64 & 89.35 \\ & Gray & 92.59 & 92.39 & 91.37 & 90.06 & 88.03 & 86.37 \\ & JPEG & 85.61 & 85.18 & 85.39 & 85.21 & 85.25 & 85.10 \\ \hline \multirow{3}{*}{TAP} & w/o & 94.09 & 93.94 & 92.75 & 91.27 & 88.42 & 85.98 \\ & Gray & 92.62 & 91.94 & 90.73 & 89.26 & 85.93 & 83.18 \\ & JPEG & 85.24 & 84.42 & 84.86 & 84.98 & 84.51 & 84.36 \\ \hline \multirow{3}{*}{SEP} & w/o & 94.12 & 93.45 & 92.76 & 91.22 & 87.82 & 85.01 \\ & Gray & 92.57 & 92.04 & 91.09 & 89.25 & 86.31 & 82.95 \\ & JPEG & 85.27 & 85.27 & 85.25 & 84.71 & 84.07 & 84.80 \\ \hline \multirow{3}{*}{LSP} & w/o & 94.69 & 94.42 & 92.81 & 91.38 & 88.07 & 82.26 \\ & Gray & 93.12 & 92.56 & 92.67 & 92.20 & 90.78 & 89.65 \\ & JPEG & 85.01 & 84.58 & 84.88 & 83.49 & 83.27 & 81.67 \\ \hline \multirow{3}{*}{AR} & w/o & 94.66 & 94.38 & 93.82 & 91.80 & 88.42 & 82.36 \\ & Gray & 92.85 & 92.69 & 92.53 & 91.24 & 89.88 & 85.35 \\ & JPEG & 85.37 & 84.75 & 85.35 & 85.35 & 85.07 & 87.27 \\ \hline \multirow{3}{*}{OPS} & w/o & 94.47 & 94.11 & 92.61 & 91.49 & 87.19 & 82.65 \\ & Gray & 92.65 & 92.27 & 91.36 & 89.34 & 85.24 & 81.37 \\ \cline{1-1} & JPEG & 84.75 & 84.88 & 84.55 & 83.98 & 82.87 & 81.33 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Clean test accuracy (%) of CIFAR-10 target models under different poisoning proportions. TC is tested with \(L_{\infty}=26\).
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multicolumn{1}{c|}{Poisons} & w/o & Gray & JPEG & BDR \\ \hline DC & 1.60 & 69.00 & 88.30 & 52.20 \\ NTGA & 51.70 & 94.20 & 90.40 & 75.30 \\ EM & 0.10 & 48.60 & 94.30 & 9.60 \\ REM & 0.80 & 34.40 & 90.40 & 2.50 \\ SG & 27.75 & 88.39 & 78.59 & 70.05 \\ TC & 0.50 & 0.20 & 92.50 & 37.20 \\ HYPO & 4.00 & 3.00 & 94.90 & 56.80 \\ TAP & 0.00 & 0.10 & 93.90 & 38.10 \\ SEP & 0.00 & 94.70 & 15.50 \\ LSP & 67.30 & 86.90 & 95.10 & 83.20 \\ AR & 97.70 & 97.60 & 94.60 & 95.10 \\ OPS & 28.90 & 28.50 & 93.60 & 72.10 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Partial poisoning for class automobile on CIFAR-10. TC is tested with \(L_{\infty}=26\).
\begin{table}
\begin{tabular}{l|c|c c c c c} \hline \hline \multicolumn{1}{c|}{Poisons} & ISS & \(\rightarrow\) R34 & \(\rightarrow\) V19 & \(\rightarrow\) D121 & \(\rightarrow\) M2 & \(\rightarrow\) V/T \\ \hline \multirow{3}{*}{DC} & w/o & 18.06 & 16.59 & 16.05 & 17.81 & 24.09 \\ & Gray & 83.13 & 80.32 & 83.93 & 78.78 & 44.83 \\ & JPEG & 82.64 & 80.34 & 83.38 & 80.30 & 53.35 \\ \hline \multirow{3}{*}{NTGA} & w/o & 40.19 & 47.13 & 16.67 & 40.75 & 31.82 \\ & Gray & 71.84 & 76.89 & 64.07 & 62.28 & 58.25 \\ & JPEG & 67.00 & 72.17 & 73.76 & 70.18 & 53.00 \\ \hline \multirow{3}{*}{EM} & w/o & 29.96 & 34.70 & 30.61 & 30.10 & 18.84 \\ & Gray & 86.97 & 87.03 & 84.84 & 82.81 & 63.28 \\ & JPEG & 84.21 & 82.46 & 84.86 & 82.20 & 56.33 \\ \hline \multirow{3}{*}{REM} & w/o & 25.88 & 29.04 & 28.31 & 24.08 & 32.22 \\ & Gray & 75.20 & 77.99 &
the same for three channels to bypass grayscale.
Table 9 demonstrates that for EM, the adaptive grayscale poisons are effective against grayscale, but the adaptive JPEG poisons are not effective against JPEG. As hinted by Shin and Song (2017), using an ensemble of JPEG with different quality factors might be necessary for better adaptive poisoning. For LSP, we observe that even though adaptive LSP is more effective against the combination of JPEG and grayscale than against the two individual compressions, it is insufficient to serve as a good adaptive poison. On the other hand, adaptive LSP also fails against the model without ISS, indicating that the additional operations (grayscale and larger patches) largely constrain its poisoning effects.
Given that the protector may have full knowledge of our ISS, we believe that better-designed adaptive poisons can bypass our ISS in the future.
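To make the adaptation concrete, the sketch below shows one simple way to expose a non-differentiable compression to the poison optimizer via a straight-through gradient. This is a simplification of the differentiable JPEG module of Shin and Song (2017), not the exact module we used, and `compress_fn` is an assumed callable (grayscale itself is already differentiable, so no wrapper is needed for it).

```python
# Hedged sketch: a straight-through wrapper so poison gradients can pass "through"
# a non-differentiable compression (e.g., real JPEG). `compress_fn` is an assumed
# callable mapping an image tensor in [0, 1] to its compressed version.
import torch

class StraightThroughCompression(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, compress_fn):
        # Forward pass applies the real (non-differentiable) compression.
        return compress_fn(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Backward pass treats the compression as the identity map.
        return grad_output, None

def adaptive_view(images, delta, compress_fn):
    # Inside the bi-level poison optimization, compute the loss on the compressed
    # view of the poisoned image so that delta is optimized against the squeezing.
    poisoned = (images + delta).clamp(0, 1)
    return StraightThroughCompression.apply(poisoned, compress_fn)
```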
### Further Analyses
**Relative Model Preference of different poisons.** We explore the relative model preference of low-frequency vs. high-frequency poisons. This scenario is practically interesting because the same online data might be poisoned by different methods. Inspired by the experiments on the model preference of MNIST vs. CIFAR data in (Shah et al., 2020), we simply add up the EM and TAP perturbations for each image. The perturbation norm is doubled accordingly. For example, for perturbations with \(L_{\infty}=8\), the composite perturbations range from \(-16\) to \(16\). We train a model (using the original image labels) on the composite perturbations of EM and TAP and test it on either EM or TAP perturbations.
As shown in Figure 4, the model converges fast and reaches a high test accuracy on EM perturbations but not on the TAP perturbations. It indicates that TAP perturbations are less preferred than EM perturbations by the model during training.
**ISS for both training and testing.** Our ISS applies compression only to the training data to remove the poisons. In this case, however, it may cause a distribution shift between the training and test data. Here we explore such a shift by comparing ISS with a variant that applies compression to both the training and test data. Table 10 demonstrates that in most cases, these two versions of ISS do not lead to substantial differences.
## 5 Conclusion
In this paper, we challenge the common belief that there are no practical and effective countermeasures to perturbative availability poisoning (PAP). Specifically, we show that 12 state-of-the-art poisoning methods can be substantially countered by Image Shortcut Squeezing (ISS), which is based on simple compression. ISS outperforms other previously studied countermeasures, such as data augmentations and adversarial training. Our in-depth investigation leads to a new insight that the property of PAP perturbations depends on the type of surrogate model used during poison generation. We also show the ineffectiveness of adaptive poisons to ISS. We hope that further studies could consider various (simple) countermeasures during the development of new poisoning methods.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Poisons & Gray-TT & Gray & JPEG-TT & JPEG \\ \hline Clean & **92.62** & 92.41 & 79.56 & **85.38** \\ \hline DC & 83.79 & **93.07** & 79.41 & **81.84** \\ NTGA & 65.42 & **74.32** & 62.84 & **69.49** \\ EM & 90.75 & **93.01** & 78.96 & **81.50** \\ REM & 73.38 & **92.84** & 79.39 & **82.28** \\ SG & **88.26** & 86.42 & 72.96 & **79.49** \\ TC & **76.41** & 75.88 & 79.42 & **83.69** \\ HYPO & **75.20** & 61.86 & 79.63 & **85.60** \\ TAP & **9.53** & 9.11 & 78.65 & **83.87** \\ SEP & 2.93 & **3.57** & 79.28 & **84.37** \\ LSP & **76.23** & 75.77 & 68.73 & **78.69** \\ AR & 68.95 & **69.37** & 79.26 & **85.38** \\ OPS & **46.53** & 42.44 & 76.87 & **82.53** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Clean test accuracy for ISS (Gray and JPEG), which applies compression only to training data, and another variant that applies compression to both training and test data (denoted with suffix-TT).
Figure 4: Relative model preference of different poisons.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline \hline Poisons & w/o & Gray & JPEG & G\&J & Avg \\ \hline EM & 21.05 & 93.01 & 81.50 & 83.06 & 69.66 \\ EM-GRAY & 17.81 & **16.60** & 76.71 & 74.16 & **46.32** \\ EM-JPEG & **17.11** & 89.18 & 83.11 & 82.85 & 68.06 \\ EM-G\&J & 48.93 & 46.29 & **69.48** & **66.26** & 57.74 \\ \hline LSP & 19.07 & 82.47 & 83.01 & 79.05 & 65.90 \\ LSP-G\&J & 93.01 & 90.34 & 84.38 & **82.13** & 87.47 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Clean test accuracy of four different target models under EM poisoning and its adaptive variants on CIFAR-10. Results are reported for \(L_{\infty}=8\) and Table 12 in Appendix reports results of EM for \(L_{\infty}=16\), which follow the same pattern. |
2309.14975 | AirExo: Low-Cost Exoskeletons for Learning Whole-Arm Manipulation in the Wild | While humans can use parts of their arms other than the hands for manipulations like gathering and supporting, whether robots can effectively learn and perform the same type of operations remains relatively unexplored. As these manipulations require joint-level control to regulate the complete poses of the robots, we develop AirExo, a low-cost, adaptable, and portable dual-arm exoskeleton, for teleoperation and demonstration collection. As collecting teleoperated data is expensive and time-consuming, we further leverage AirExo to collect cheap in-the-wild demonstrations at scale. Under our in-the-wild learning framework, we show that with only 3 minutes of the teleoperated demonstrations, augmented by diverse and extensive in-the-wild data collected by AirExo, robots can learn a policy that is comparable to or even better than one learned from teleoperated demonstrations lasting over 20 minutes. Experiments demonstrate that our approach enables the model to learn a more general and robust policy across the various stages of the task, enhancing the success rates in task completion even with the presence of disturbances. Project website: https://airexo.github.io/ | Hongjie Fang, Hao-Shu Fang, Yiming Wang, Jieji Ren, Jingjing Chen, Ruo Zhang, Weiming Wang, Cewu Lu | 2023-09-26T14:48:29Z | http://arxiv.org/abs/2309.14975v2 | # Low-Cost Exoskeletons for Learning Whole-Arm Manipulation
###### Abstract
While humans can use parts of their arms other than the hands for manipulations like gathering and supporting, whether robots can effectively learn and perform the same type of operations remains relatively unexplored. As these manipulations require joint-level control to regulate the complete poses of the robots, we develop _AirExo_, a low-cost, adaptable, and portable dual-arm exoskeleton, for teleoperation and demonstration collection. As collecting teleoperated data is expensive and time-consuming, we further leverage _AirExo_ to collect cheap in-the-wild demonstrations at scale. Under our in-the-wild learning framework, we show that with only 3 minutes of the teleoperated demonstrations, augmented by diverse and extensive in-the-wild data collected by _AirExo_, robots can learn a policy that is comparable to or even better than one learned from teleoperated demonstrations lasting over 20 minutes. Experiments demonstrate that our approach enables the model to learn a more general and robust policy across the various stages of the task, enhancing the success rates in task completion even with the presence of disturbances. Project website: airexo.github.io.
## I Introduction
Robotic manipulation has emerged as a crucial field within the robot learning community and attracted significant attention from researchers. With the steady advancement of technologies such as deep learning, robotic manipulation has evolved beyond conventional grasping [9, 32] and pick-and-place tasks [31, 42], encompassing a diverse array of complex and intricate operations [2, 3, 6, 10].
Most of the current robotic manipulation research focuses on interacting with the environment solely with the end-effectors of the robots, which correspond to the hands of human beings. However, as humans, we can also use other parts of our arms to accomplish or assist with various tasks in daily life, for example, holding objects with the lower arms or closing a fridge door with an elbow. In this paper, we aim to investigate and explore the ability of robots to effectively execute such tasks. To distinguish them from classical manipulation involving end-effectors, we refer to these actions as **whole-arm manipulation**. Since most whole-arm manipulation tasks require the coordinated collaboration of both limbs, we formalize them into the framework of the bimanual manipulation problem.
While whole-arm manipulation is natural and simple for humans, it can become challenging for robots. First, whole-arm manipulation usually implies extensive contact with the surrounding environment and collision risks during manipulation. Second, whole-arm manipulation necessitates precise movement of the entire robot pose, as opposed to the conventional methods of only reaching the end-effector pose at the destination. An intuitive approach to address these two challenges is to adopt joint-level control for robots. To enable that, we adopt a joint-level imitation learning scheme, wherein joint-level control is needed when collecting the robot demonstrations.
Recently, Zhao _et al._[45] introduced an open-source low-cost ALOHA system which exhibits the capability to perform joint-level imitation learning through real-world teleoperated data. ALOHA system leverages two small, simple and modular bimanual robots ViperX [36] and WidowX [39] that are almost identical to each other, to establish a leader-follower framework for teleoperation. Due to the limited payload of the robots, they focus more on fine-grained manipulation.
Fig. 1: The methodology of our in-the-wild learning framework with low-cost exoskeletons _AirExo_. It empowers the human operator to not only control the dual-arm robots for collecting teleoperated demonstrations but also directly record in-the-wild demonstrations. Besides commonly-used teleoperated demonstrations, our proposed learning framework also leverages the extensive and cheap in-the-wild demonstrations in policy learning, resulting in a more general and robust policy compared to training with even more teleoperated demonstrations.
Besides, their hardware cannot be seamlessly adapted to other robots commonly employed for laboratory research or industrial purposes. Similarly, while several prior works [8, 14, 16, 18, 44] also designed special exoskeletons for certain humanoid robots or robot arms, the cross-robot transferability of their exoskeletons remains a challenge.
To address the above issues, we develop _AirExo_, an _open-source_, _low-cost_, _robust_ and _portable_ dual-arm exoskeleton system that can be quickly modified for different robots. All structural components of _AirExo_ are _universal_ across robots and can be fabricated entirely through 3D printing, enabling easy assembly even for non-experts. After calibration with a dual-arm robot, _AirExo_ can achieve precise joint-level teleoperations of the robot.
Owing to its portability, _AirExo_ enables _in-the-wild data collection for dexterous manipulation without needing a robot_. Humans can wear the dual-arm exoskeleton system, conduct manipulation in the wild, and collect demonstrations at scale. This breakthrough capability not only simplifies data collection but also extends the reach of whole-arm manipulation into unstructured environments, where robots can learn and adapt from human interactions. The one-to-one mapping of joint configurations also lowers the barrier to transferring policies trained on human-collected data to robots. Experiments show that with our in-the-wild learning framework, the policy becomes more sample-efficient with respect to the expensive teleoperated demonstrations and acquires more high-level knowledge for task execution, resulting in a more general and robust strategy. The source code, data and exoskeleton models are released at the project website.
## II Related Works
### _Imitation Learning_
Imitation learning has been widely applied in robot learning to teach robots how to perform various tasks by observing and imitating demonstrations from human experts. One of the simplest methods in imitation learning is behavioral cloning [26], which learns the policy directly in a supervised manner without considering intentions and outcomes. Most approaches parameterize the policy using neural networks [2, 5, 30, 43, 45], while non-parametric VINN [25] leverages the weighted \(k\)-nearest-neighbors algorithm based on the visual representations extracted by BYOL [13] to generate the action from the demonstration database. This simple but effective method can also be extended to other visual representations [21, 22, 24, 28] for robot learning.
In the context of imitation learning for bimanual manipulation, Xie _et al._[40] introduced a paradigm to decouple the high-level planning model into elemental movement primitives. Several works have focused on designing special frameworks to solve specific tasks, such as knot tying [17, 33], banana peeling [16], culinary activities [20], and fabric folding [38]. Addressing the challenge of non-Markovian behavior observed in demonstrations, Zhao _et al._[45] utilized the notion of action chunking as a strategy to enhance overall performance.
### _Teleoperation_
Demonstration data play a significant role in robotic manipulation, particularly in the methods based on imitation learning. For the convenience of subsequent robot learning, these demonstration data are typically collected within the robot domain. A natural approach to gather such demonstrations is human teleoperation [23], where a human operator remotely controls the robot to execute various tasks.
Teleoperation methods can be broadly categorized into two classes based on their control objectives: one aimed at manipulating the end-effectors of the robots [2, 7, 10, 15, 29, 43] and one focused on regulating the complete poses of the entire robots, such as exoskeletons [8, 14, 16, 34, 44] and a pair of leader-follower robots [45]. For whole-arm manipulation tasks, we need to control the full pose of the robots, which makes exoskeletons a relatively favorable option under this circumstance.
### _Learning Manipulation in the Wild_
Although the aforementioned teleoperation methods allow us to collect robotic manipulation data, the robot systems are usually expensive and not portable, which makes it challenging to collect demonstration data at scale. To address this issue, previous research has explored the feasibility of learning from interactive human demonstrations, _i.e._, in-the-wild learning for robotic manipulation [1, 4, 18, 27, 32, 41]. In contrast to the costly robot demonstrations, in-the-wild demonstrations are typically cheap and easy to obtain, allowing us to collect a large volume of such demonstrations conveniently.
Typically, there are two primary domain gaps for learning manipulation in the wild: (1) the gap between human-operated images and robot-operated images, and (2) the gap between human kinematics and robot kinematics. The former gap can be solved through several approaches: by utilizing specialized end-effectors that match the end-effectors of the robots [18, 41]; by initially pre-training with in-the-wild data and subsequently fine-tuning with robot data [32]; or by applying special image processing techniques to generate agent-agnostic images [1]. The latter gap is currently addressed by applying structure-from-motion algorithms [32, 41], adopting a motion tracking system [27], or training a pose detector [1, 37] to extract the desired poses. However, these methods are not suitable for whole-arm dexterous manipulation, since motion tracking usually focuses on the end-effector, and pose detectors are vulnerable to visual occlusions and do not map to the robot kinematics.
Thus, in this paper we develop a low-cost and portable exoskeleton to serve as a bridge between human motion and robot motion. It can be applied not only to the teleoperation of robots but also as a powerful tool for learning manipulation in the wild.
## III AirExo: An Open-Source, Portable, Adaptable, Inexpensive and Robust Exoskeleton
### _Exoskeleton_
From the preceding discussions in Sec. I, we summarize the following 5 key design objectives of an exoskeleton: (1)
affordability; (2) adaptability; (3) portability; (4) robustness and (5) maintenance simplicity. Based on these design objectives, we develop _AirExo_ as follows.
In this paper, we employ two Flexiv Rizon arms [11] for experiments. As a result, the structural design of _AirExo_ is predominantly tailored to their specifications. Meanwhile, to ensure its universality, it can be easily modified for use with other robotic arms like UR5 [35], Franka [12] and Kuka [19], as depicted in Fig. 2.
Based on the morphology of our robot system, _AirExo_ is composed of two symmetrical arms, wherein the first 7 degrees of freedom (DoFs) of each arm correspond to the DoFs of the robotic arm, and the last DoF corresponds to the end-effector of the robotic arm. Here, we design a two-finger gripper with 1 DoF as an optional end-effector for each arm. Overall, _AirExo_ is capable of simulating the kinematics of the robot across its entire workspace, as well as emulating the opening and closing actions of the end-effectors.
According to design objective (3), to improve the wearable experience for operators and concurrently enhance task execution efficiency, we dimension _AirExo_ to be 80% of the robot's size, based on the length of the human arm. In the end-effector of the exoskeleton, we design a handle and a scissor-like opening-closing mechanism to simulate the function of a two-fingered gripper, while also facilitating gripping actions by the operator. The two arms of the exoskeleton are affixed to a base, which is mounted on a vest. This allows the operator to wear it stably and evenly distributes the weight of the exoskeleton across the operator's back, reducing the load on the arms and thereby enabling more flexible arm motions. Additionally, an adjustable camera mount can be installed on the base for image data collection during operations.
The joints of _AirExo_ adopt a dual-layer structure, with the outer case divided into two parts: the portion proximate to the base is referred to as the _pre-joint_, while the other half is called the _post-joint_. As illustrated in Fig. 2(a), these two components are connected via a metal _damping pivot_, and their outer sides are directly linked to the connecting rod. _AirExo_ primarily achieves high-precision and low-latency motion capture through the _angle encoders_ (with a resolution of 0.08 degrees), whose bases are affixed to the _pre-joints_. The pivots of the encoders are connected to the _post-joint_ through a _limiter_, which comprises a dual-layer disc and several steel balls to set the angle limit for each joint. The dual-layer joint structure ensures that the encoders remain unaffected by bending moments during motions, rotating synchronously with the joints, which safeguards the encoders and effectively reduces failures. This aligns with design objectives (4) and (5).
Except for the fasteners, damping pivots, and electronic components, all other components of _AirExo_ are fabricated from PLA plastic through 3D printing. The material has high strength and low density, yielding a lightweight yet robust exoskeleton. The prevalence of 3D-printed components allows the exoskeleton to be easily adapted to different robots. This adaptation entails adjusting the dimensions of certain components based on the target robot's specifications and subsequently reprinting and installing them, without modifying the internal structure. _AirExo_ costs approximately $600 in total (16 encoders of $30 each; 3D printing materials, mechanical parts and wires $120), which is in accordance with design objective (1).
For more details about _AirExo_, including models and the installation guide, please refer to our project website.
### _Calibration and Teleoperation_
Since _AirExo_ shares the same morphology as the dual-arm robot except for the scale, the calibration process can be performed in a straightforward manner. After positioning the robot arms at a specific location, such as a fully extended position, and aligning the exoskeleton to match the robot posture, we can record the joint positions \(\{q_{i}^{(c)}\}_{i=1}^{d}\) and the encoder readings \(\{p_{i}^{(c)}\}_{i=1}^{d}\) of _AirExo_, where \(d\) denotes the number of DoFs. Consequently, during teleoperation, we only need to fetch the encoder readings \(\{p_{i}\}_{i=1}^{d}\), transform them into the corresponding joint positions \(\{q_{i}\}_{i=1}^{d}\) using Eqn. (1), and command the robot to move to the desired joint positions:
\[q_{i}=\min\left(\max\left(q_{i}^{(c)}+k_{i}(p_{i}-p_{i}^{(c)}),q_{i}^{\min} \right),q_{i}^{\max}\right), \tag{1}\]
where \(k_{i}\in\mathbb{R}\) is the coefficient controlling direction and scale, and \(q_{i}^{\min},q_{i}^{\max}\) denote the joint angle limits of the robotic arms. Typically, we set \(k_{i}=\pm 1\), representing the consistency between the encoder direction of the exoskeleton and the joint direction of the robot. For grippers, we can directly map the angle range of the encoders to the opening and closing range of the grippers for teleoperation.
After calibration, the majority of angles within the valid range of the robot arms can be covered by the exoskeleton. Given that the workspaces of most tasks fall within this coverage range, we can teleoperate the robot using the exoskeleton conveniently and intuitively. If a special task \(t\) needs a wider operation range, we can simply scale the
Fig. 2: _AirExo_ models for different types of robots. Notice that the internal structure of the joints is standardized, only the linkages are altered to accommodate different robotic arm configurations.
exoskeleton range using the coefficients \(k_{i}\), and apply a task-specific joint constraint \([q_{i}^{t,\min},q_{i}^{t,\max}]\) instead of the original kinematic constraint in Eqn. (1) for better performance.
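As a simple illustration of this mapping, the sketch below converts exoskeleton encoder readings into clipped robot joint targets following Eqn. (1); the calibration arrays and the robot/exoskeleton interfaces named in the comments are assumptions, not the released driver code.

```python
# Hedged sketch of the calibration mapping in Eqn. (1); all array arguments have
# length d (the number of DoFs). The robot/encoder interfaces in the comment below
# are assumed names, not the released AirExo driver API.
import numpy as np

def encoders_to_joints(p, p_calib, q_calib, k, q_min, q_max):
    # q_i = clip(q_i^(c) + k_i * (p_i - p_i^(c)), q_i^min, q_i^max)
    q = q_calib + k * (p - p_calib)
    return np.clip(q, q_min, q_max)

# Teleoperation loop (pseudocode):
#   while teleoperating:
#       q_target = encoders_to_joints(exo.read_encoders(), p_calib, q_calib, k, q_min, q_max)
#       robot.move_to_joints(q_target)   # stream joint targets to the dual-arm robot
```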
### _In-the-Wild Learning with AirExo_
For in-the-wild whole-arm manipulation learning, we install a camera (or cameras under multi-camera settings) on the camera mount of _AirExo_ in roughly the same position(s) as the camera(s) on the robot. Using this configuration, images from both teleoperated demonstrations and in-the-wild demonstrations exhibit a relatively similar structure, which is advantageous for policy learning.
Our approach to learn whole-arm manipulation in the wild with _AirExo_ is illustrated in Fig. 3. As we discussed in Sec. II-C, _AirExo_ serves as a natural bridge for the kinematic gap between humans and robots. To address the domain gap between images, our approach involves a two-stage training process. In the first stage, we pre-train the policy using in-the-wild human demonstrations and actions recorded by the exoskeleton encoders. During this phase, the policy primarily learns the high-level task execution strategy from the large-scale and diverse in-the-wild human demonstrations. Subsequently, in the second stage, the policy undergoes fine-tuning using teleoperated demonstrations with robot actions to refine the motions based on the previously acquired high-level task execution strategy.
As previously discussed in Section III-A, we resize the exoskeleton to ensure its wearability. Some concerns may arise regarding whether this scaling adjustment could impact the policy learning process. Here, we argue that it has a minimal effect on our learning procedure. Firstly, the core kinematic structure, essential for our learning framework, remains unaffected by the resizing. Thus, human demonstrations preserve the fundamental dynamics of the system. Secondly, our approach does not impose strict alignment requirements between human demonstration images and robot images. We find that similar visual-action pairs collected by our exoskeleton effectively support the pretraining stage, without demanding precise visual matching between human and robot demonstrations.
We use the state-of-the-art bimanual imitation learning method ACT [45] for policy learning. Our experiments demonstrate that it can indeed learn the high-level strategy through the pre-training process and significantly enhance the evaluation performance of the robot and the sample efficiency of the expensive teleoperated demonstrations.
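The two-stage schedule can be summarized by the sketch below: the policy is first pre-trained on in-the-wild (image, exoskeleton-joint) pairs and then fine-tuned on teleoperated (image, robot-joint) pairs. The plain L1 joint-space regression shown here is only a placeholder for the actual ACT objective, and all names are assumptions.

```python
# Hedged sketch of the two-stage schedule (pre-train on in-the-wild data, then
# fine-tune on teleoperated data). The L1 joint-space loss stands in for the ACT
# objective; `policy` and the data loaders are placeholders.
import torch
import torch.nn.functional as F

def run_stage(policy, loader, optimizer, epochs):
    for _ in range(epochs):
        for images, joint_targets in loader:
            pred = policy(images)                     # predicted joint-space actions
            loss = F.l1_loss(pred, joint_targets)     # imitation loss in joint space
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def train_in_the_wild(policy, wild_loader, teleop_loader):
    opt = torch.optim.AdamW(policy.parameters(), lr=1e-4)
    run_stage(policy, wild_loader, opt, epochs=50)    # stage 1: learn the high-level strategy
    run_stage(policy, teleop_loader, opt, epochs=20)  # stage 2: refine robot motions
    return policy
```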
## IV Experiments
In this section, we conduct experiments on 2 whole-arm tasks to evaluate the performance of the proposed learning method. All demonstration data are collected by _AirExo_.
Fig. 4: Definition of _Gather Balls_ task. The goal is to gather the balls into the central triangular area, which is highlighted in light blue. The red dashed arrows denote the motions of the robot arms. We use sponge padding to envelop the external surface of the robot arms to diminish the mechanical failures arising from contacts. Note the action multimodality allows accomplishing the task either along the blue arrow or the orange arrow.
Fig. 3: Overview of learning whole-arm manipulations in the wild with _AirExo_. First, we use in-the-wild demonstrations and exoskeleton actions that are transformed into the robot’s domain to pre-train the policy, which corresponds to learning the high-level strategy of task execution. Then, we use teleoperated demonstrations and robot actions to fine-tune the policy, which corresponds to learning fine-grained motion based on the learned high-level strategy.
### **Gather Balls**: Setup
#### Iv-A1 Task
Two clusters of cotton balls are randomly placed on both sides of the tabletop (40 balls per cluster). The goal is to gather these balls into the designated central triangular area using both arms. The process of this contact-rich task is illustrated in Fig. 4.
#### Iv-A2 Metrics
We consider the percentage of balls gathered within the central triangular area as the task completion rate \(c\) (a ball lying precisely on the boundary line counts as a half), and we also report the completion rates of the left arm and the right arm separately. Task success is defined as the task completion rate exceeding a certain threshold \(\delta\); in this experiment, we set \(\delta=40\%,60\%,80\%\). We also record the collision rate to gauge the precision of the operations.
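For illustration, the snippet below computes this completion rate from 2D ball positions and the triangle vertices, counting a ball exactly on an edge as one half; it is a hedged re-implementation of the metric, not the evaluation script used in our experiments.

```python
# Hedged re-implementation of the completion-rate metric (not the actual
# evaluation script). `balls` is an (N, 2) array of ball positions and `tri`
# contains the three 2D vertices of the central triangular area.
import numpy as np

def _cross2(u, v):
    # z-component of the 2D cross product; its sign tells which side of a line a point lies on.
    return u[0] * v[1] - u[1] * v[0]

def completion_rate(balls, tri, eps=1e-9):
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    score = 0.0
    for p in np.asarray(balls, dtype=float):
        s = [_cross2(b - a, p - a), _cross2(c - b, p - b), _cross2(a - c, p - c)]
        if all(v > eps for v in s) or all(v < -eps for v in s):
            score += 1.0                      # strictly inside the triangle
        elif all(v >= -eps for v in s) or all(v <= eps for v in s):
            score += 0.5                      # on an edge: counted as a half
    return score / len(balls)

def is_success(c, delta=0.8):
    return c >= delta                         # success if the completion rate exceeds the threshold
```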
#### Iv-A3 Methods
We employ VINN [25] and its variants that alter the visual representations [21, 22, 28] as non-parametric methods. Other methods include ConvMLP [43], BeT [30] and ACT [45]. All of them are designed for joint-space control or can be easily adapted for joint-space control. We apply our proposed learning approach to ACT for learning from in-the-wild demonstrations. For all methods, we carefully select the hyper-parameters to ensure better performance.
#### Iv-A4 Protocols
The evaluation is conducted on a workstation equipped with an Intel Core i9-10980XE CPU. The time limit is set as 60 seconds per trial. Given that all methods can operate at approximately 5Hz, resulting in a total of 300 steps for the evaluation, the time constraint proves sufficient for the task. We conduct 50 consecutive trials to ensure stable and accurate results, calculating the aforementioned metrics.
### **Gather Balls**: Results and Analyses
The experimental results on the _Gather Balls_ task are shown in Tab. I. When using 50 teleoperated demonstrations as training data, VINN performs the best among all non-parametric methods, while ACT excels among all parametric methods. Notice that despite BeT performing well in the state-based simulation environments [30], it appears to struggle in real-world environments, causing collisions. This may be due to the absence of an appropriate state extractor to process images and extract states. When using only 10 teleoperated demonstrations for training, the performance of both VINN and ACT degrades inevitably. However, after applying our in-the-wild learning framework, with the assistance of in-the-wild demonstrations, ACT can achieve the same level of performance as 50 teleoperated demonstrations with just 10 teleoperated demonstrations. This demonstrates that our learning framework with in-the-wild demonstrations makes the policy more sample-efficient for teleoperated demonstrations.
We then delve into the experimental results to provide more insights about why and how our learning framework works. When analyzing the failure cases of different methods
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multicolumn{3}{c}{**\# Demos**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Completion Rate \(c\) (\%) \(\uparrow\)**} & \multicolumn{3}{c}{**Success Rate (\%) \(\uparrow\)**} & \multicolumn{2}{c}{**Collision**} \\ \cline{3-10}
**Teleoperated** & & **In-the-Wild** & & **Overall** & **Left** & **Right** & \(c\geq 80\) & \(c\geq 60\) & \(c\geq 40\) & **Rate (\%) \(\downarrow\)** \\ \hline
50 & - & VIP [21] + NN & 27.74 & 0.02 & 55.45 & 0 & 0 & 36 & 0 \\
50 & - & VC-1 [22] + NN & 52.54 & 32.53 & 72.55 & 4 & 42 & 74 & 0 \\
50 & - & MVP [28] + NN & 55.10 & 58.55 & 62.00 & 12 & 62 & 76 & 0 \\
50 & - & VINN [25] & **76.88** & 75.73 & 78.03 & **58** & **84** & 94 & 0 \\ \hline
50 & - & ConvMLP [43] & 15.56 & 2.35 & 28.78 & 0 & 0 & 2 & 4 \\
50 & - & BeT [30] & 24.66 & 7.38 & 41.95 & 0 & 2 & 32 & 22 \\
50 & - & ACT [45] & 75.61 & 94.63 & 56.60 & 54 & 70 & **100** & 0 \\ \hline \hline
10 & - & VINN [25] & 68.68 & 60.28 & 77.08 & 36 & 76 & 88 & 0 \\
10 & - & ACT [45] & 64.31 & 91.95 & 36.68 & 24 & 60 & **96** & 0 \\ \hline
10 & 50 & ACT [45] & 73.76 & 88.83 & 58.70 & **62** & 72 & 88 & 0 \\
10 & 100 & ACT [45] & **75.15** & 75.63 & 74.68 & 56 & **80** & 88 & 0 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Experimental results on the _Gather Balls_ task.
Fig. 5: Analyses of methods on the _Gather Balls_ task. Here we define the overall completion rate over 80% as success. **(a)** We analyze the failure causes of each method in every trial. **(b)** We amortize the inaccuracy (both) rate evenly into the inaccuracy (left) and inaccuracy (right) rates, and draw a comparison plot of failure modes for different methods. \((x,y)\) means the policy is trained with \(y\) in-the-wild demonstrations and then \(x\) teleoperated demonstrations. The dashed lines represent contour lines with the same success rate, and the regions with light blue background imply a more balanced policy between left and right arms. **(c)**\(t\)-SNE visualizations of the ground-truth actions and the policy actions w/wo in-the-wild learning on the validation set.
in the experiments in Fig. 5(a), we find that the ACT policy trained solely on teleoperated demonstrations exhibits an imbalance between the accuracies of the two arms, with better learning outcomes for the left arm. This imbalance becomes more pronounced as the number of teleoperated demonstrations decreases to 10. With the help of the in-the-wild learning stage, the policy becomes more balanced between the two arms even with fewer teleoperated demonstrations, as shown in Fig. 5(b). From Fig. 5(c), we also observe that the policy focuses more on learning the motions of the right arm when combined with in-the-wild learning, as highlighted by the red dashed circles, while keeping accurate action predictions for the left arm. We attribute this to the extensive, diverse, and accurate in-the-wild demonstrations provided by _AirExo_, which enable the policy to acquire high-level strategy knowledge during the pre-training stage. Consequently, in the following fine-tuning stage, it can refine its actions based on this strategy instead of learning actions blindly from scratch.
### **Grasp from the Curtained Shelf**: Setup and Results
#### Iv-C1 Task
A cotton toy is randomly placed in the center of a shelf with curtains. The goal is to grasp the toy and throw it into a bin. To achieve it, the robot needs to use its right arm to push aside the transparent curtain first, and maintain this pose during the following operations. The process of this multi-stage task is illustrated in Fig. 6.
#### Iv-C2 Metrics, Methods, and Protocols
We calculate the average success rate at the end of each stage as metrics. Based on the experimental results on the _Gather Balls_ task, we select VINN [25] and ACT [45] as methods in experiments, as well as ACT equipped with our in-the-wild learning framework. The evaluation protocols are the same as the _Gather Balls_ task, except that the time limit is 120 seconds (about 400 steps) and the number of trials is 25.
#### Iv-C3 Results
The results are given in Tab. II. Similar to the results of the _Gather Balls_ task, as the number of training teleoperated demonstrations is reduced, both VINN and ACT experience a decrease in success rates, especially in the later "throw" stage. However, after training with our in-the-wild learning framework, ACT exhibits a significant improvement in success rates in the "grasp" and "throw" stages. It achieves even higher success rates, surpassing those obtained with the original set of 50 teleoperated demonstrations lasting more than 20 minutes, using only 10 such demonstrations lasting approximately 3 minutes. This highlights that our proposed in-the-wild framework indeed enables the policy to learn a better strategy, effectively enhancing the success rates in the later stages of multi-stage tasks.
#### Iv-C4 Robustness Analysis
We design three kinds of disturbances in the robustness experiments to explore whether in-the-wild learning improves the robustness of the policy. The results shown in Tab. III demonstrate that our in-the-wild learning framework can leverage diverse in-the-wild demonstrations to make the learned policy more robust and generalizable to various environmental disturbances.
## V Conclusion
In this paper, we develop _AirExo_, an open-source, low-cost, universal, portable, and robust exoskeleton, for both joint-level teleoperation of the dual-arm robot and learning whole-arm manipulations in the wild. Our proposed in-the-wild learning framework decreases the demand for the resource-intensive teleoperated demonstrations. Experimental results show that policies learned through this approach gain a high-level understanding of task execution, leading to improved performance in multi-stage whole-arm manipulation tasks. This outperforms policies trained from scratch using even more teleoperated demonstrations. Furthermore, policies trained in this framework exhibit increased robustness in the presence of various disturbances. In the future, we will investigate how to better address the image gap between in-the-wild data in the human domain and teleoperated data in the robot domain, enabling robots to learn solely through in-the-wild demonstrations with _AirExo_, thus further reducing the learning cost.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{2}{c}{**\# Demos**} & \multicolumn{5}{c}{**Success Rate (\%) \(\uparrow\)**} \\ \cline{2-9}
**Teleoperated In-the-Wild** & \multicolumn{5}{c}{Reach in} & Push aside & Approach & Grasp & Throw \\ \hline \hline
50 & - & VINN [25] & **100** & 96 & 92 & 60 & 48 \\
50 & - & ACT [45] & **100** & **100** & **100** & **84** & **84** \\ \hline \hline
10 & - & VINN [25] & **100** & 84 & 84 & 60 & 44 \\
10 & - & ACT [45] & **100** & **100** & 96 & 72 & 44 \\ \hline
10 & 50 & ACT [45] & **100** & **100** & 96 & 76 & 76 \\
10 & 100 & ACT [45] & **100** & **100** & **100** & **92** & **88** \\ \hline \hline \end{tabular}
\end{table} TABLE II: Experimental results on the _Grasp from the Curtained Shelf_ task.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Disturbance** & **w/wo In-the-Wild** & **Success Rate \(\uparrow\)** \\ \multicolumn{2}{c}{**Learning**} & **\# Success / \# Total** \\ \hline \multirow{2}{*}{Novel Object} & ✗ & 4 / 8 \\ & ✔ & **7** / 8 \\ \hline Different & ✗ & 2 / 8 \\ Background & ✔ & **6** / 8 \\ \hline Visual & ✗ & 4 / 8 \\ Distractors & ✔ & **8** / 8 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Results of the robustness experiments on the _Grasp from the Curtained Shelf_ task.
Fig. 6: Definition of the _Grasp from the Curtained Shelf_ task. The robot needs to (a) reach in its right arm to the transparent curtain and (b) push aside the curtain, then (c) approach the object with its left arm, (d) grasp the object, and (e) throw it into the bin.
## VI Acknowledgement
We would like to thank Yuanyuan Jia for her help on duplicating _AirExo_ to different robotic arms, and Chen Wang at Stanford University for insightful discussion.
_Author contributions:_ H. Fang set up the robot platform, implemented the tele-operation, trained the policy, and wrote the paper. H.-S. Fang initiated the project, devised the experiments, partly designed the exoskeleton, and wrote the paper. Y. Wang designed and implemented the exoskeleton. J. Ren designed and implemented the first version of exoskeleton. J. Chen assisted with data collection and network training. R. Zhang implemented the encoder reading program for the exoskeleton. W. Wang and C. Lu supervised the project and provided hardware and resource support.
|
2309.05878 | Reaction coordinate flows for model reduction of molecular kinetics | In this work, we introduce a flow based machine learning approach, called reaction coordinate (RC) flow, for discovery of low-dimensional kinetic models of molecular systems. The RC flow utilizes a normalizing flow to design the coordinate transformation and a Brownian dynamics model to approximate the kinetics of RC, where all model parameters can be estimated in a data-driven manner. In contrast to existing model reduction methods for molecular kinetics, RC flow offers a trainable and tractable model of reduced kinetics in continuous time and space due to the invertibility of the normalizing flow. Furthermore, the Brownian dynamics-based reduced kinetic model investigated in this work yields a readily discernible representation of metastable states within the phase space of the molecular system. Numerical experiments demonstrate how effectively the proposed method discovers interpretable and accurate low-dimensional representations of given full-state kinetics from simulations. | Hao Wu, Frank Noé | 2023-09-11T23:59:18Z | http://arxiv.org/abs/2309.05878v1 | # Reaction coordinate flows for model reduction of molecular kinetics
###### Abstract
In this work, we introduce a flow based machine learning approach, called reaction coordinate (RC) flow, for discovery of low-dimensional kinetic models of molecular systems. The RC flow utilizes a normalizing flow to design the coordinate transformation and a Brownian dynamics model to approximate the kinetics of RC, where all model parameters can be estimated in a data-driven manner. In contrast to existing model reduction methods for molecular kinetics, RC flow offers a trainable and tractable model of reduced kinetics in continuous time and space due to the invertibility of the normalizing flow. Furthermore, the Brownian dynamics-based reduced kinetic model investigated in this work yields a readily discernible representation of metastable states within the phase space of the molecular system. Numerical experiments demonstrate how effectively the proposed method discovers interpretable and accurate low-dimensional representations of given full-state kinetics from simulations.
Corresponding author: Corresponding: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: Corresponding author: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding author: Corresponding: Corresponding: author: Corresponding: Corresponding author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding:: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author:: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author:: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding:: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author:: Corresponding: Corresponding: author: Corresponding:: Corresponding: author: Corresponding:: author: Corresponding: Corresponding: author: Corresponding:: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author:: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author:: Corresponding: author: Corresponding: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: Corresponding: author:: Corresponding:: author: Corresponding:: author:: Corresponding: Corresponding: author:: Corresponding: Corresponding: author:: Corresponding:: author:: Corresponding: author:: 
Corresponding:: author:: Corresponding:: author: Corresponding:: author:: Corresponding: author:: Corresponding:: author:: Corresponding:: author:: Corresponding: Corresponding: author:: Corresponding: author:: Corresponding:: author:: Corresponding: author:: Corresponding: author:: Corresponding: author::: Corresponding: author:: Corresponding: author:: Corresponding:: author:: Corresponding:: author:: Corresponding: author::: Corresponding: author::: Corresponding:: author:: Corresponding:: author:: Corresponding: author::: Corresponding: author:: Corresponding:: author:: Corresponding: author::: Corresponding: author::: Corresponding: author::: Corresponding: author::: Corresponding: author:: Corresponding: author::: Corresponding: author::: Corresponding:: author:: Corresponding:: author:: Corresponding:: author:: Corresponding: author::: Corresponding: author::: Corresponding: author::: author:: Corresponding:: author::: Corresponding:: author::: Corresponding: author::: Corresponding: author:::: author:: Corresponding:: author::: Corresponding: author::: Corresponding:: author::: Corresponding: author:::: Corresponding:: author:::: Corresponding:: author:::: Corresponding: author::: author::: Corresponding:: author:::: Corresponding:: author::::: Corresponding:: author
The parameters of the NF and of the Brownian dynamics can be obtained via maximum likelihood estimation from simulation data. RC flow stands out as an ideal choice for molecular kinetics model reduction, distinguished by several unique traits:
* It introduces a scalable approach to model reduction that extends beyond the traditional framework of identifying slow variables. In contrast to conventional methods, which focus solely on locating slow variables, RC flow enables a joint description of multiple dominant components of molecular kinetics through a small number of more effective RCs.
* RC flow enables simultaneous training of coordinate transformations and reduced kinetics, and the model of reduced kinetics can be specified according to practical requirements.
* The Brownian dynamics-based reduced kinetic model considered in this work can furnish a clear and low-dimensional depiction of metastable states within the phase space of the molecular system.
Several numerical tests are presented to illustrate the effectiveness of the RC flow.
## II Background
### Reaction coordinate
In MD simulations, the time evolution of a molecular system can be considered as a Markov process \(\{\mathbf{x}_{t}\}\) in a high-dimensional configuration space contained in \(\mathbb{R}^{D}\), and the molecular kinetics can be fully described by the transition density \(p_{\tau}^{x}(\mathbf{x},\mathbf{y})=\mathbb{P}(\mathbf{x}_{t+\tau}=\mathbf{y}| \mathbf{x}_{t}=\mathbf{x})\), where \(\mathbf{x}_{t}\) denotes the system configuration at time \(t\). Furthermore, we assume in this paper that the MD simulation is performed without applying any external force and the time-reversibility
\[\mu^{x}(\mathbf{x})p_{\tau}^{x}(\mathbf{x},\mathbf{y})=\mu^{x}(\mathbf{y})p_{ \tau}^{x}(\mathbf{y},\mathbf{x})\]
is fulfilled, where \(\mu^{x}(\mathbf{x})\) represents the equilibrium distribution of the configuration.
If we select the RC as \(\mathbf{z}_{t}=\Phi(\mathbf{x}_{t})\in\mathbb{R}^{d}\) with \(d\ll D\) and model the kinetics embedded in the RC space by the transition density \(p_{\tau}^{z}(\mathbf{z}_{t},\mathbf{z}_{t+\tau})=\mathbb{P}(\mathbf{z}_{t+\tau}|\mathbf{z}_{t})\), the full-state kinetics can be reconstructed from the reduced kinetics as
\[\hat{p}_{\tau}^{x}(\mathbf{x}_{t},\mathbf{x}_{t+\tau})=\int p_{\tau}^{z}( \mathbf{z}_{t},\mathbf{z})\mathbb{P}\left(\mathbf{x}_{t+\tau}|\Phi(\mathbf{x}_ {t+\tau})=\mathbf{z}\right)\mathrm{d}\mathbf{z}, \tag{1}\]
and the relationship between \(\mu^{x}\) and the equilibrium distribution \(\mu^{z}\) of RC is provided by
\[\mu^{x}(\mathbf{x})=\int\mu^{z}(\mathbf{z})\mathbb{P}\left(\mathbf{x}|\Phi( \mathbf{x})=\mathbf{z}\right)\mathrm{d}\mathbf{z}, \tag{2}\]
where \(\mathbb{P}\left(\mathbf{x}|\Phi(\mathbf{x})=\mathbf{z}\right)\) denotes the conditional distribution of the configuration \(\mathbf{x}\) for given RC \(\mathbf{z}\).
The relationship between the variables and transition densities can be summarized as follows: the full-state transition from \(\mathbf{x}_{t}\) to \(\mathbf{x}_{t+\tau}\) is mediated by the reduced transition from \(\mathbf{z}_{t}=\Phi(\mathbf{x}_{t})\) to \(\mathbf{z}_{t+\tau}\) with density \(p_{\tau}^{z}\), followed by the conditional distribution \(\mathbb{P}\left(\mathbf{x}_{t+\tau}|\Phi(\mathbf{x}_{t+\tau})=\mathbf{z}_{t+\tau}\right)\), as expressed in (1).
It can be observed from this relationship that model parameters of the coordinate transformation \(\Phi\) and the reduced transition density \(p_{\tau}^{z}\) can be fitted by maximizing the likelihood of \(\hat{p}_{\tau}^{x}\) for given simulation trajectories. A major difficulty of such a model reduction procedure arises from the density \(\mathbb{P}(\mathbf{x}|\Phi(\mathbf{x})=\mathbf{z})\) in (1,2), which is a probability distribution over the level set \(\{\mathbf{x}|\Phi(\mathbf{x})=\mathbf{z}\}\) of \(\Phi\) and is generally intractable for conventional probabilistic models, especially for nonlinear \(\Phi\). In what follows, we will address this difficulty by using normalizing flows.
### Normalizing flow
Normalizing flows (NFs) are a specific class of neural networks for modeling distributions of high-dimensional data [30; 41]. For a random variable \(\mathbf{x}\), we can train an NF \(F\) so that \(\bar{\mathbf{z}}=F(\mathbf{x})\) is distributed according to a tractable distribution, e.g., standard Gaussian distribution \(\mathcal{N}(\mathbf{0},\mathbf{I})\), where \(F\) is a bijective function consisting of a sequence of invertible transformations. After training, we can approximate the density of \(\mathbf{x}\) as
\[\mathbb{P}(\mathbf{x})=\mathcal{N}\left(\bar{\mathbf{z}}|\mathbf{0},\mathbf{I }\right)\left|\det\left(\frac{\partial F(\mathbf{x})}{\partial\mathbf{x}} \right)\right| \tag{3}\]
according to the change of variables formula and draw samples from the density by
\[\bar{\mathbf{z}} \sim \mathcal{N}(\mathbf{0},\mathbf{I}),\] \[\mathbf{x} = F^{-1}(\bar{\mathbf{z}}), \tag{4}\]
where \(\mathcal{N}\left(\cdot|\mathbf{a},\mathbf{\Sigma}\right)\) denotes the Gaussian density function with mean \(\mathbf{a}\) and covariance matrix \(\mathbf{\Sigma}\). There have been many types of NF models available in literature [42; 43; 44; 45; 46]. For most models, the Jacobian determinant of \(F\) and the inverse \(F^{-1}\) in Eqs. (3, 4) can be efficiently calculated due to the specifically designed network structures.
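As a concrete illustration of Eqs. (3) and (4), the following sketch implements a small affine-coupling flow in PyTorch; the actual model used in this paper is the RealNVP implementation of Bgflow, so all layer sizes, the number of blocks, and the class names here are illustrative assumptions. The forward pass returns \(\bar{\mathbf{z}}=F(\mathbf{x})\) together with \(\log|\det(\partial F/\partial\mathbf{x})|\), which gives the density (3), and inverting the flow gives samples as in (4).

```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(s) + t                  # scale-and-shift of the second half
        return torch.cat([x1, y2], dim=-1), s.sum(dim=-1)

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)

class SimpleFlow(nn.Module):
    def __init__(self, dim, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(AffineCoupling(dim) for _ in range(n_blocks))

    def forward(self, x):                           # x -> (z_bar, log|det dF/dx|)
        log_det = torch.zeros(x.shape[0])
        for block in self.blocks:
            x, ld = block(x)
            x = x.flip(-1)                          # permute so all dimensions get transformed
            log_det = log_det + ld
        return x, log_det

    def inverse(self, z):
        for block in reversed(self.blocks):
            z = z.flip(-1)
            z = block.inverse(z)
        return z

    def log_prob(self, x):                          # Eq. (3)
        z, log_det = self.forward(x)
        log_gauss = -0.5 * (z ** 2).sum(-1) - 0.5 * z.shape[-1] * math.log(2 * math.pi)
        return log_gauss + log_det

    def sample(self, n, dim):                       # Eq. (4)
        return self.inverse(torch.randn(n, dim))
```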
## III Reaction coordinate flow
### Model architecture
In this section, we develop a RC flow for model reduction of MD simulation data, which can identify low-dimensional RC and model reduced kinetics simultaneously. The key idea
is to utilize an NF \(F\) to decompose the configuration \(\mathbf{x}_{t}\) into RC and noise parts as the following:
\[F(\mathbf{x}_{t})=(\mathbf{z}_{t},\mathbf{v}_{t}), \tag{5}\]
where the RC \(\mathbf{z}_{t}\) is governed by a low-dimensional kinetic model and the noise \(\mathbf{v}_{t}\overset{\text{iid}}{\sim}\mathcal{N}(\mathbf{0},\mathbf{I})\) is uninformative in predicting future evolution of configurations. A diagram illustrating the model architecture is shown in Fig. 1.
From the decomposition (5) and the invertibility of \(F\), we can obtain the coordinate transformation
\[\Phi(\mathbf{x})=\left[F(\mathbf{x})\right]_{1,\ldots,d}\]
and an analytical expression of the conditional distribution of \(\mathbf{x}\) over the \(\mathbf{z}\)-level set
\[\mathbb{P}\left(\mathbf{x}|\Phi(\mathbf{x})=\mathbf{z}\right) = \mathbb{P}\left(\mathbf{z},\mathbf{v}|\Phi(\mathbf{x})=\mathbf{z }\right)\left|\det\left(\frac{\partial F(\mathbf{x})}{\partial\mathbf{x}} \right)\right| \tag{6}\] \[= \delta_{\mathbf{z}}\left(\Phi(\mathbf{x})\right)S(\mathbf{x}),\]
where
\[S(\mathbf{x})=\mathcal{N}(\mathbf{v}|\mathbf{0},\mathbf{I})\left|\det\left( \frac{\partial F(\mathbf{x})}{\partial\mathbf{x}}\right)\right|\]
and \(\delta_{\mathbf{z}}\) denotes the Dirac delta distribution centered at \(\mathbf{z}\). By substituting (6) into (1,2), we can recover the full-state thermodynamics and kinetics from the reduced ones as
\[\hat{p}_{\tau}^{x}(\mathbf{x}_{t},\mathbf{x}_{t+\tau}) = p_{\tau}^{z}(\Phi(\mathbf{x}_{t}),\Phi(\mathbf{x}_{t+\tau}))S(\mathbf{x}_{t+\tau}) \tag{7}\] \[\mu^{x}(\mathbf{x}) = \mu^{z}(\Phi(\mathbf{x}))S(\mathbf{x}) \tag{8}\]
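The quantities in (5)-(8) are straightforward to evaluate once the flow exposes its log-Jacobian. The sketch below, building on the flow interface from the previous snippet, computes the RC/noise split, \(\log S(\mathbf{x})\), and the reconstructed log-densities; `log_p_z` and `log_mu_z` are placeholders for the reduced transition density and the reduced stationary density introduced next.

```python
import math
import torch

def split_rc(flow, x, d):
    """RC/noise decomposition of Eq. (5): F(x) = (z, v)."""
    out, log_det = flow(x)
    return out[:, :d], out[:, d:], log_det

def log_S(v, log_det):
    """log S(x) = log N(v | 0, I) + log|det dF/dx|."""
    log_gauss = -0.5 * (v ** 2).sum(-1) - 0.5 * v.shape[-1] * math.log(2 * math.pi)
    return log_gauss + log_det

def log_p_hat_x(flow, x_t, x_next, d, tau, log_p_z):
    """Log of the reconstructed full-state transition density, Eq. (7)."""
    z_t, _, _ = split_rc(flow, x_t, d)
    z_next, v_next, log_det_next = split_rc(flow, x_next, d)
    return log_p_z(z_t, z_next, tau) + log_S(v_next, log_det_next)

def log_mu_hat_x(flow, x, d, log_mu_z):
    """Log of the reconstructed equilibrium density, Eq. (8)."""
    z, v, log_det = split_rc(flow, x, d)
    return log_mu_z(z) + log_S(v, log_det)
```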
In RC flow, the reduced kinetics \(p_{\tau}^{z}\) can be modeled by any kind of kinetic model according to the practical requirements. In this paper, we focus on the Brownian dynamics
\[\mathrm{d}\mathbf{z}_{t}=-\nabla V(\mathbf{z}_{t})\mathrm{d}t+\sqrt{2\beta^{-1 }}\mathrm{d}W_{t}, \tag{9}\]
which is widely used in computational chemistry and physics for characterizing diffusion processes in complicated energy landscapes [47; 48; 49]. Here \(W_{t}\) denotes standard Brownian motion, \(V\) is the potential function, and \(\beta\) is the inverse temperature, which we take to be 1 without loss of generality (see Appendix A); the asymptotic steady-state probability distribution of \(\mathbf{z}_{t}\) is [50]
\[\mu^{z}(\mathbf{z})\propto\exp\left(-V(\mathbf{z})\right). \tag{10}\]
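Trajectories of the reduced dynamics (9) can be generated with a standard Euler-Maruyama scheme, which is also how the RC trajectories used later in the numerical examples are produced. A minimal sketch, where `grad_V` is any callable returning \(\nabla V\) (for instance obtained by automatic differentiation of a learned \(\mu^{z}\)) and the step size is an illustrative choice:

```python
import numpy as np

def simulate_brownian(grad_V, z0, n_steps, dt, beta=1.0, seed=0):
    """Euler-Maruyama discretization of the reduced dynamics (9)."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    traj = np.empty((n_steps + 1, z.size))
    traj[0] = z
    for k in range(n_steps):
        noise = rng.standard_normal(z.size)
        z = z - grad_V(z) * dt + np.sqrt(2.0 * dt / beta) * noise
        traj[k + 1] = z
    return traj
```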
The closed-form of the transition density \(p_{\tau}^{z}\) of the Brownian dynamics (9) is generally unavailable, but there are many numerical methods that can produce numerical approximations effectively (see survey in [51]). In our experiments, we select the importance sampling method proposed in [52], which is outlined in Appendix E.
_Remark 1_.: Conditional NF can also be used to model \(p_{\tau}^{z}\), as demonstrated in [40], avoiding the complex numerical computations associated with SDEs. Nevertheless, this approach sacrifices interpretability in reduced kinetics and presents challenges in obtaining the reduced potential of \(\mathbf{z}_{t}\).
Here, we model \(F\) by RealNVP [44]. For \(\mu^{z}\), which is a low-dimensional and usually multimodal density, we select a Gaussian mixture model (GMM) of the form
\[\hat{\mu}^{z}(\mathbf{z})=\sum_{i=1}^{K^{d}}\frac{w(\mathbf{c}_{i})}{\sum_{j=1 }^{K^{d}}w(\mathbf{c}_{j})}\mathcal{N}\left(\mathbf{z}|\mathbf{c}_{i},\mathrm{ diag}\left(\mathbf{\sigma}(\mathbf{c}_{i})\right)^{2}\right) \tag{11}\]
according to our numerical experience. In (11), centers \(\mathbf{c}_{1},\ldots,\mathbf{c}_{K^{d}}\) are kept as grid points of a regular grid in the \(d\)-dimensional RC space (see Line 9 in Algorithm 1), and \(w(\mathbf{c})\in\mathbb{R},\mathbf{\sigma}(\mathbf{c})\in\mathbb{R}^{d}\) are considered as functions of centers, which are both represented by multilayer perceptrons with exponential output activation functions. The drift term in the reduced kinetics (9) can then be calculated as
\[-\nabla V=\nabla\log\mu^{z}.\]
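A possible PyTorch sketch of the grid-based GMM (11) is given below, with the weights \(w(\mathbf{c})\) and scales \(\boldsymbol{\sigma}(\mathbf{c})\) parameterized by MLPs with exponential output activations and the drift \(-\nabla V=\nabla\log\mu^{z}\) obtained by automatic differentiation; the hidden widths and other details are illustrative assumptions rather than the exact architecture of this paper.

```python
import math
import torch
import torch.nn as nn

class GridGMM(nn.Module):
    def __init__(self, centers, hidden=64):
        super().__init__()
        self.register_buffer("centers", centers)      # (K**d, d) fixed grid points
        d = centers.shape[1]
        def mlp(out_dim):
            return nn.Sequential(nn.Linear(d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))
        self.w_net, self.s_net = mlp(1), mlp(d)

    def log_prob(self, z):
        w = torch.exp(self.w_net(self.centers)).squeeze(-1)   # positive weights w(c_i)
        sig = torch.exp(self.s_net(self.centers))             # positive scales sigma(c_i)
        log_mix = torch.log(w / w.sum())
        diff = (z.unsqueeze(1) - self.centers) / sig           # (B, K**d, d)
        log_comp = (-0.5 * (diff ** 2).sum(-1) - torch.log(sig).sum(-1)
                    - 0.5 * z.shape[-1] * math.log(2 * math.pi))
        return torch.logsumexp(log_mix + log_comp, dim=-1)

    def minus_grad_V(self, z):
        # Drift of the reduced dynamics (9): -grad V(z) = grad log mu^z(z).
        z = z.detach().requires_grad_(True)
        return torch.autograd.grad(self.log_prob(z).sum(), z)[0]
```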
_Remark 2_.: RC flow can also be characterized by parametric models of \(F\) and \(V\). But in this case, the calculation of
\[\hat{\mu}^{x}(\mathbf{x})=\mu^{z}(\Phi(x))S(\mathbf{x})=\frac{\exp\left(-V( \Phi(\mathbf{x}))\right)S(\mathbf{x})}{\int\exp\left(-V(\mathbf{z})\right) \mathrm{d}\mathbf{z}}\]
involves an intractable integral over the RC space.
### Loss functions
In order to identify RC that can characterize thermodynamics and kinetics of the molecular system, we consider the following two loss functions for training of RC flow:
**Kinetic loss:**: To recover full-state kinetics from the RC flow, we use the negative log-likelihood (NLL) loss of the estimated transition density
\[\mathcal{L}_{\text{kin}}=-\frac{1}{T-\tau}\sum_{t=1}^{T-\tau}\log\hat{p}_{\tau} ^{x}(\mathbf{x}_{t},\mathbf{x}_{t+\tau}) \tag{12}\]
for a given simulation trajectory \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{T}\}\), where \(\hat{p}_{\tau}^{x}\) is defined by (7). In the case of multiple trajectories, \(\mathcal{L}_{\text{kin}}\) can be defined as the average NLL over all transition pairs with lag time \(\tau\) within all trajectories.
**Equilibrium loss:**: An ideal RC flow also allows an accurate estimation of the steady-state probability distribution, and this can be encouraged by minimizing the loss
\[\mathcal{L}_{\mathrm{eq}}=-\frac{1}{T^{s}}\sum_{t=1}^{T^{s}}\log\hat{\mu}^{x}( \mathbf{x}_{t}^{s}), \tag{13}\]
where \(\hat{\mu}^{x}\) is given by (8) and \(\{\mathbf{x}_{1}^{s},\ldots,\mathbf{x}_{T^{s}}^{s}\}\) are sampled from the global equilibrium (e.g., from enhanced sampling simulations). In numerical experiments of this paper, MD simulation trajectories are long enough and achieve equilibrium, so we simply set \(\{\mathbf{x}_{1}^{s},\ldots,\mathbf{x}_{T^{s}}^{s}\}=\{\mathbf{x}_{1},\ldots, \mathbf{x}_{T}\}\).
Therefore, leaving aside numerical details, we can solve the following problem with weight \(\alpha>0\) to find optimal parameters of the RC flow:
\[\min_{F,\hat{\mu}^{z}}\mathcal{L}=\mathcal{L}_{\mathrm{kin}}+\alpha\mathcal{L }_{\mathrm{eq}}, \tag{14}\]
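In code, both losses reduce to averaged negative log-likelihoods of the reconstructed densities. A sketch reusing `log_p_hat_x` and `log_mu_hat_x` from the earlier snippet; here `lag` is the lag in frames used to form transition pairs and `tau` the corresponding physical lag time, and the full-batch evaluation is a simplification of the mini-batch training described below.

```python
def total_loss(flow, gmm, log_p_z, traj, lag, tau, d, alpha=0.1):
    x_t, x_next = traj[:-lag], traj[lag:]
    # Kinetic loss (12): average NLL of the reconstructed transition density.
    loss_kin = -log_p_hat_x(flow, x_t, x_next, d, tau, log_p_z).mean()
    # Equilibrium loss (13): NLL of the reconstructed stationary density,
    # evaluated on the same (assumed equilibrated) trajectory.
    loss_eq = -log_mu_hat_x(flow, traj, d, gmm.log_prob).mean()
    # Combined objective (14).
    return loss_kin + alpha * loss_eq
```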
### Pre-processing and training
It is well known that the performance of training neural networks can be significantly improved by normalizing the data scales and decorrelating the features of the data. In a previous study [31], it was also reported that a whitening transformation can enhance the performance of learning Boltzmann distributions by NFs. In this paper, we perform data pre-processing by TICA [6; 7] for large-sized systems and let \(\mathbf{x}_{t}^{\mathrm{TICA}}=\mathbf{W}(\mathbf{x}_{t}-\mathbf{\bar{x}})\), where \(\mathbf{\bar{x}}\) denotes the mean value of the original configuration and \(\mathbf{W}\) denotes the transformation matrix given by TICA. If the features of the original configuration \(\mathbf{x}_{t}\) are linearly independent, then \(\mathbf{W}\) is invertible, and \(F(\mathbf{W}(\mathbf{x}-\mathbf{\bar{x}}))\) is still an invertible function with respect to \(\mathbf{x}\). This data pre-processing approach offers several advantages. First, the features of \(\mathbf{x}_{t}^{\mathrm{TICA}}\) are orthonormal with an identity covariance matrix, making them nearly standard Gaussian distributed. Second, the TICA components are sorted according to their relaxation time scales, with the slowest and most important components arranged in the first dimensions. Therefore, \(\mathbf{x}_{t}^{\mathrm{TICA}}\) is already close to an ideal combination of reaction coordinate and noise after the TICA transformation, which simplifies the learning problem. Further details on TICA can be found in Appendix B.
Based on the above analysis, Algorithm 1 provides a summary of the training process of RC flow. In the algorithm, we first simplify the reduced kinetics as a Brownian motion with \(\nabla V(\mathbf{z})\equiv 0\) and initialize \(F\) by minimizing \(\mathcal{L}_{\mathrm{kin}}\). Next, we obtain a rectangle area \(\prod_{i=1}^{d}[\mathrm{LB}_{i},\mathrm{UB}_{i}]\) that covers all RCs of the training data, locate the \(K^{d}\) centers of the GMM of \(\mu^{z}\) uniformly in the area, and initialize the weights and variance matrices by minimizing \(\mathcal{L}\). Last, all parameters of \(F\) and \(\mu^{z}\) are simultaneously optimized with the objective function \(\mathcal{L}\). In our experiments, all the optimization problems involved in the algorithm are solved by the mini-batch Adam algorithm [53].
Notice that the pre-processing and pre-training steps are included in the algorithm to stabilize the training, and they can be replaced with other heuristic strategies according to practical requirements.
_Remark 3_.: In this paper, the transformation coefficients \(\mathbf{W},\mathbf{\bar{x}}\) obtained by TICA remain unchanged during the training procedure. It is important to note that some recently proposed normalizing flow models (e.g., KRnet [46]) incorporate trainable linear layers with invertible transformation matrices. The application of such NFs to the RC flow framework requires further investigation and evaluation.
```
1if data pre-processing is required then
2 Perform TICA transformation \[\mathbf{x}_{t}:=\mathbf{W}(\mathbf{x}_{t}-\mathbf{\bar{x}})\text{ for all }t.\]
3 end if
4if pre-training is required then
5 Perform pre-training (optional): Let \(\mu^{z}(\mathbf{z})\propto 1\), i.e., \(\nabla V(\mathbf{z})\equiv 0\), and train \(F\) with \(\mathcal{L}_{\mathrm{kin}}\) defined in (12).
6 Calculate \((\mathbf{z}_{t},\mathbf{v}_{t})=F(\mathbf{x}_{t})\) for \(t=1,\ldots,T\).
7 Let \[\mathrm{LB}_{i} = \min\{i\text{th element of }\mathbf{z}_{t}|t=1,\ldots,T\},\] \[\mathrm{UB}_{i} = \max\{i\text{th element of }\mathbf{z}_{t}|t=1,\ldots,T\}.\] for \(i=1,\ldots,d\).
8 Let \[\{\mathbf{c}_{1},\ldots,\mathbf{c}_{K^{d}}\}=\prod_{i=1}^{d}\left\{\mathrm{LB}_{i}+\frac{k(\mathrm{UB}_{i}-\mathrm{LB}_{i})}{K-1}|k=0,\ldots,K-1\right\}\]
9 Train \(w(\mathbf{c}),\boldsymbol{\sigma}(\mathbf{c})\) with \(\mathcal{L}\) defined in (14) while keeping \(F\) fixed.
10 end if
11 Train \(F\) and \(w(\mathbf{c}),\boldsymbol{\sigma}(\mathbf{c})\) simultaneously with \(\mathcal{L}\). Return \(F(\cdot)\) (or \(F(\mathbf{W}(\cdot-\mathbf{\bar{x}}))\) if Line 2 is implemented).
```
**Algorithm 1**Training algorithm of RC flow
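A schematic PyTorch training loop following Algorithm 1 is sketched below; the TICA pre-processing, the construction of the center grid, and the GMM-only initialization step are omitted for brevity, the epoch counts and learning rate follow Appendix F, and full-batch gradients replace the mini-batch Adam updates used in practice.

```python
import math
import torch

def flat_log_p_z(z0, z1, tau, beta=1.0):
    # Transition density of pure Brownian motion, i.e. grad V = 0 (pre-training).
    var = 2.0 * tau / beta
    return (-0.5 * ((z1 - z0) ** 2).sum(-1) / var
            - 0.5 * z0.shape[-1] * math.log(2 * math.pi * var))

def train_rc_flow(flow, gmm, log_p_z, traj, lag, tau, d, alpha=0.1, lr=1e-3):
    # Pre-training: flat reduced potential, train F on the kinetic loss only.
    opt_f = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(5):
        opt_f.zero_grad()
        loss = -log_p_hat_x(flow, traj[:-lag], traj[lag:], d, tau, flat_log_p_z).mean()
        loss.backward()
        opt_f.step()
    # Joint training of F and the GMM parameters with the full loss (14).
    opt = torch.optim.Adam(list(flow.parameters()) + list(gmm.parameters()), lr=lr)
    for _ in range(20):
        opt.zero_grad()
        loss = total_loss(flow, gmm, log_p_z, traj, lag, tau, d, alpha)
        loss.backward()
        opt.step()
    return flow, gmm
```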
## IV Analysis
### Spectral analysis
It is well known that the Brownian dynamics (9) is a time-reversible Markov process and its transition density can be decomposed into a set of relaxation processes as [1; 2]
\[p_{\tau}^{z}(\mathbf{z}_{t},\mathbf{z}_{t+\tau})=\sum_{i=0}^{\infty}r_{i}^{z}(\mathbf{z}_{t})\cdot e^{-\tau/t_{i}}r_{i}^{z}(\mathbf{z}_{t+\tau})\mu^{z}(\mathbf{z}_{t+\tau}), \tag{15}\]
for all \(\tau>0\), where \(t_{0}=\infty>t_{1}\geq t_{2}\geq\ldots\) are implied time scales, \(r_{0}^{z},r_{1}^{z},r_{2}^{z}\ldots\) are eigenfunctions of the transfer operator of \(\{\mathbf{z}_{t}\}\), and \(r_{0}^{z}(\mathbf{z})\equiv 1\). If there is a spectral gap and \(t_{1}\geq\ldots\geq t_{s-1}\gg t_{s}\), we can conclude that the kinetics of \(\{\mathbf{z}_{t}\}\) at large time scales is dominated by the first \(s\) relaxation processes.
In the case where the RC flow is well trained and provides an accurate approximation of \(p_{\tau}^{x}\), it can be proved that
\(t_{0},t_{1},t_{2}\ldots\) are also implied time scales of \(\{\mathbf{x}_{t}\}\), and we can obtain the dominant eigenfunctions and relaxation processes of \(\{\mathbf{x}_{t}\}\) from those of RC with
\[r_{i}^{x}(\mathbf{x})=r_{i}^{z}(\Phi(\mathbf{x})) \tag{16}\]
and
\[p_{\tau}^{x}(\mathbf{x}_{t},\mathbf{x}_{t+\tau})=\sum_{i=0}^{\infty}r_{i}^{x}(\mathbf{x}_{t})\cdot e^{-\tau/t_{i}}r_{i}^{x}(\mathbf{x}_{t+\tau})\mu^{x}(\mathbf{x}_{t+\tau}). \tag{17}\]
The detailed derivations of the above conclusions are given in Appendix C.
### RC flow and information bottleneck
One important metric to evaluate the quality of RC is the mutual information \(\text{MI}(\mathbf{z}_{t},\mathbf{x}_{t+\tau})\), which quantifies the statistical dependence between \(\mathbf{z}_{t}\) and \(\mathbf{x}_{t+\tau}\). According to the principle of past-future information bottleneck [23; 54], if \(\mathbf{z}_{t}\) is an ideal bottleneck variable with large mutual information, it can accurately predict the future evolution of the configuration.
For RC flow, it can be established that, as the size of simulation data tends towards infinity, the following inequality holds:
\[\text{MI}(\mathbf{z}_{t},\mathbf{x}_{t+\tau})\geq-\mathcal{L}_{\text{kin}}+ \text{const.}, \tag{18}\]
where the constant is independent of our model parameters. Detailed proof of this result is provided in Appendix D. Thus, RC flow can also be interpreted as an information bottleneck method, which maximizes a lower bound of the mutual information \(\text{MI}(\mathbf{z}_{t},\mathbf{x}_{t+\tau})\) with a specific kinetic model of \(\mathbf{z}_{t}\).
## V Numerical examples
In this section, we apply RC flow to model reduction of some diffusion processes with multiple metastable states and the alanine dipeptide, where \(F\) is modeled by RealNVP. All model and implementation details are presented in Appendix F.
First, we consider Brownian dynamics driven by the double well potential and the Mueller potential as shown in Fig. 2A, and use RC flow to identify the one-dimensional RC \(z\) based on simulation trajectories in the configuration space of \(\mathbf{x}=(x_{1},x_{2})\). Fig. 2B plots the reduced potential \(V\) of \(z\), which can be divided into several potential wells by barriers. We also present in Fig. 2A full-state configurations belonging to different potential wells through the inverse mapping \(\mathbf{x}=F^{-1}(z,0)\) for \(z\in\mathbb{R}\). It can be seen that RCs identified by the RC flow can preserve the structure of metastable states, and the stationary distributions of RCs are accurately estimated. In addition, we utilize Markov state models (MSMs) [55; 56] to estimate the dominant implied time scales of the processes from the original simulation trajectories of \(\mathbf{x}\) and trajectories of RC generated by (9). The results are given in Fig. 2C, which demonstrate the consistency between the full-state and reduced kinetics.
As a second example, the data are generated by a diffusion process with potential
\[V^{s}(\mathbf{s})=V^{\prime}(s_{1},s_{2})+10s_{3}^{2},\]
where \(\mathbf{s}=(s_{1},s_{2},s_{3})\) is the state, the potential \(V^{\prime}\) of \((s_{1},s_{2})\) plotted in Fig. 3A has seven wells, and \(s_{3}\) evolves as an independent Ornstein-Uhlenbeck process with equilibrium distribution \(\mathcal{N}(s_{3}|0,0.05)\) and the mixing time much smaller than that of \((s_{1},s_{2})\). In this example, the true simulation data of \(\mathbf{s}\) are mapped to \(\mathbf{x}=(x_{1},x_{2},x_{3})\) lying around a "Swiss roll" manifold by a nonlinear transformation as shown in Fig. 3B, and the RC flow is implemented to find a two-dimensional RC. Figs. 3C and 3D reveal that the proposed method successfully "unfolds" the manifold of simulation data and the kinetic properties of the process are preserved under the model reduction.
Last, we use RC flow to analyze simulation data of alanine dipeptide. This molecular system has been extensively studied in the existing literature, and its kinetics on long timescales can be characterized by two dihedral angles \(\phi,\psi\) of the backbone (Fig. 4A). Here we perform model reduction to find a two-dimensional RC with the configuration \(\mathbf{x}\in\mathbb{R}^{30}\)
Figure 2: Model reduction of two-dimensional diffusion processes. **(A)** Potential energy functions in \(\mathbb{R}^{2}\). The solid lines consist of points \(\{F^{-1}(z,0)|z\in\mathbb{R}\}\), and points of the same color belong to the same potential well of the reduced potential \(V\). **(B)** Reduced potentials of RCs. The solid lines represent \(V\) estimated by the GMM in RC flow, where the potential wells are separated by local maxima of \(V\), and the dashed lines represent potentials estimated by the histogram of \(\{z_{t}\}\). **(C)** Dominant implied time scales of the reduced kinetics (9) of \(z_{t}\) (red) and the full-state Brownian dynamics of \(\mathbf{x}_{t}\) (black), which are both calculated from simulation trajectories by the MSM approach at different lag times (see Appendix F.3 for details of the calculation).
being Cartesian coordinates of heavy atoms. From Fig. 4B, we can see that the free energy landscape of dihedral angles \((\phi,\psi)\) is close to that of \(\mathbf{z}\) identified by RC flow. We further grouped molecular configurations into six macro-states based on potential wells of \((\phi,\psi)\), and the macro-states can also be separated in the space of \(\mathbf{z}\) (Fig. 4C). In addition, Fig. 4D shows that the largest three implied time scales of the reduced kinetics provided by RC flow are close to those calculated from MSMs in the space of \((\phi,\psi)\).
## VI Conclusions
Model reduction of molecular kinetics, including discovery of RC and projection of kinetics into RC space, is an important task in analyzing and simulating molecular systems, and RC flow proposed in this paper provides a novel data-driven approach to the task. By harnessing the invertibility of NF, RC flow makes it tractable to compute conditional distributions of configurations based on specified RCs. Consequently, the reconstruction of full-state thermodynamics and kinetics from the reduced model becomes a straightforward process. Remarkably, within the RC flow framework, the optimization of coordinate transformations and the modeling of reduced kinetics can be simultaneously executed. Furthermore, it offers the flexibility to select from various governing equations for RCs in alignment with practical requirements.
In future, we will focus on the applications of RC flows to adaptive sampling and transition path finding methods, and the following open problems require further investigations:
* A rigorous mathematical analysis of modeling errors of RC flows under some proper assumptions (e.g., existence of low-dimensional transition manifold [21]) would be desirable.
* In common model reduction methods, large lag times \(\tau\) are usually selected to achieve effective low-dimensional description of kinetics especially for long time scales. But for RC flows, the accurate and efficient calculation of the transition density \(p_{\tau}^{z}(\mathbf{z}_{t},\mathbf{z}_{t+\tau})\) of the reduced kinetics with a large \(\tau\) is still challenging.
Figure 3: Model reduction of the diffusion process around a “Swiss roll”. **(A)** Circular potential \(V^{\prime}\) of \((s_{1},s_{2})\) (see Appendix F.2). **(B)** Diffusion trajectory of \(\mathbf{x}=(x_{1},x_{2},x_{3})\) obtained by the transformation **(F1)**. **(C)** Reduced potential of \(\mathbf{z}=(z_{1},z_{2})\) obtained by RC flow. **(D)** Dominant implied time scales \(t_{1},\ldots,t_{6}\) of the reduced kinetics (9) of \(\mathbf{z}_{t}\) (red) and the full-state Brownian dynamics of \(\mathbf{x}_{t}\) (black), which are calculated by the MSM approach at different lag times.
Figure 4: Model reduction of alanine dipeptide. **(A)** Structure of alanine dipeptide. The main coordinates are the backbone torsion angles \(\phi\) and \(\psi\). **(B)** Reduced potential of \((\phi,\psi)\) and \(V(\mathbf{z})\) given by the RC flow. **(C)** Macro-states defined in the space of \((\phi,\psi)\) and their projections in the space of \(\mathbf{z}\). **(D)** Implied timescales \(t_{1},t_{2},t_{3}\) of the reduced kinetics provided by the RC flow (red), and those estimated from MD simulation trajectories in the space of \((\phi,\psi)\) (black).
## Appendix A Analysis of the inverse temperature
For a RC flow given by (5) and (9) with \(\beta\neq 1\), we can define a new RC \(\mathbf{z}_{t}^{\prime}\) as
\[F^{\prime}(\mathbf{x}_{t})=(\mathbf{z}_{t}^{\prime},\mathbf{v}_{t})=(\sqrt{ \beta}\mathbf{z}_{t},\mathbf{v}_{t}), \tag{10}\]
where \(F^{\prime}\) is also an invertible function. Substituting (10) into (9) yields
\[\mathrm{d}\mathbf{z}_{t}^{\prime}=-\nabla V^{\prime}(\mathbf{z}_{t}^{\prime} )\mathrm{d}t+\sqrt{2}\mathrm{d}W_{t} \tag{11}\]
with
\[V^{\prime}(\mathbf{z}^{\prime})=\beta V(\sqrt{\beta^{-1}}\mathbf{z}^{\prime}).\]
It can be seen that (10, 11) provide an equivalent RC flow with the inverse temperature of the reduced kinetics being 1.
## Appendix B Data pre-processing
For completeness, we introduce the TICA-based data pre-processing approach. The interested readers can refer to [6; 7] for further discussion and implementation guidance.
Suppose we are given a trajectory \(\mathbf{x}_{1},\ldots,\mathbf{x}_{T}\) of length \(T\) and a lag time \(\Delta t\). For notational simplicity, we denote the mean value of \(\mathbf{x}_{t}\) by \(\bar{\mathbf{x}}\), the centered configuration by \(\delta\mathbf{x}_{t}=\mathbf{x}_{t}-\bar{\mathbf{x}}\), and the projection of \(\mathbf{x}_{t}\) onto the TICA space by \(\mathbf{x}_{t}^{\text{TICA}}\). TICA solves the following optimization problems iteratively for \(i=1,2,\ldots\),
\[\max_{\mathbf{w}_{i}}\mathrm{auto}(\mathbf{w}_{i},\Delta t)=\mathbf{w}_{i}^{ \top}\mathbf{C}(\Delta t)\mathbf{w}_{i}\]
subject to constraints \(\mathbf{w}_{i}^{\top}\mathbf{C}(0)\mathbf{w}_{i}=1\) and \(\mathbf{w}_{i}^{\top}\mathbf{C}(0)\mathbf{w}_{j}=0\) for \(j=1,\ldots,i-1\). Here, \(\mathbf{C}(0)\) and \(\mathbf{C}(\Delta t)\) are the estimated covariance and time-lagged covariance matrices of \(\mathbf{x}_{t}\) given by
\[\mathbf{C}(0) = \frac{1}{T}\sum_{t}\delta\mathbf{x}_{t}\,\delta\mathbf{x}_{t}^{\top},\] \[\mathbf{C}(\Delta t) = \frac{1}{2\left(T-\Delta t\right)}\sum_{t}\left(\delta\mathbf{x}_{t}\delta\mathbf{x}_{t+\Delta t}^{\top}+\delta\mathbf{x}_{t+\Delta t}\delta\mathbf{x}_{t}^{\top}\right).\]
\(\mathrm{auto}(\mathbf{w}_{i},\Delta t)\) denotes the autocorrelation of the component \(\mathbf{w}_{i}^{\top}\mathbf{x}_{t}\) with lag time \(\Delta t\), and the largest autocorrelations yield the slowest components. By combining all optimal \(\mathbf{w}_{i}\), we can obtain the TICA transformation
\[\mathbf{x}_{t}^{\text{TICA}}=\mathbf{W}(\mathbf{x}_{t}-\bar{\mathbf{x}})\]
with \(\mathbf{W}=(\mathbf{w}_{1},\mathbf{w}_{2},\ldots)^{\top}\), which ensures that \(\mathbf{x}_{t}^{\text{TICA}}\) has zero mean and identity covariance. In this paper, we use the function pyemma.coordinates.tica in the Python package PyEMMA[57] to implement the TICA transformation.
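A minimal usage sketch of this pre-processing step is shown below; the lag time of 10 steps follows Appendix F.1, while `kinetic_map=False` is an assumption made here so that the output components have unit variance as described above.

```python
import pyemma

# traj: (T, D) numpy array of configurations
tica = pyemma.coordinates.tica(traj, lag=10, kinetic_map=False)
traj_tica = tica.get_output()[0]   # decorrelated features, slowest components first
```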
In the case where \(\mathbf{C}(0)\) is full-rank, \(\mathbf{W}\in\mathbb{R}^{D\times D}\) is an invertible matrix. Adopting TICA for data pre-processing ensures that the mapping \((\mathbf{z}_{t},\mathbf{v}_{t})=F(\mathbf{x}_{t}^{\text{TICA}})=F(\mathbf{W}( \mathbf{x}_{t}-\bar{\mathbf{x}}))\) given by RC flow is still invertible with respect to \(\mathbf{x}_{t}\). We can then reconstruct the original configuration \(\mathbf{x}_{t}\) from the RC \(\mathbf{z}_{t}\) and noise \(\mathbf{v}_{t}\) as
\[\mathbf{x}_{t} = \mathbf{W}^{-1}\mathbf{x}_{t}^{\text{TICA}}+\bar{\mathbf{x}}\] \[= \mathbf{W}^{-1}F^{-1}\left(\mathbf{z}_{t},\mathbf{v}_{t}\right)+ \bar{\mathbf{x}}.\]
However, if \(\mathbf{x}_{t}\) contains linearly dependent elements and \(\mathbf{C}(0)\) is numerically singular, TICA removes the linear dependence and provides \(\mathbf{W}\) with fewer rows than \(D\). In this case, we can obtain an approximate inverse mapping from \((\mathbf{z}_{t},\mathbf{v}_{t})\) to \(\mathbf{x}_{t}\) by minimizing the error \(\sum_{t}\left\|\delta\mathbf{x}_{t}-\widehat{\mathbf{W}^{-1}}\mathbf{x}_{t}^{\text{TICA}}\right\|^{2}=\sum_{t}\left\|\delta\mathbf{x}_{t}-\widehat{\mathbf{W}^{-1}}\mathbf{W}\delta\mathbf{x}_{t}\right\|^{2}\), which yields
\[\widehat{\mathbf{W}^{-1}}=\delta\mathbf{X}\left(\mathbf{W}\delta\mathbf{X} \right)^{+}.\]
Here \(\delta\mathbf{X}=(\delta\mathbf{x}_{1},\ldots,\delta\mathbf{x}_{T})\) and the superscript \(+\) denotes the Moore-Penrose pseudo inverse. Finally, we can approximately reconstruct configuration \(\mathbf{x}_{t}\) from \((\mathbf{z}_{t},\mathbf{v}_{t})\) by
\[\mathbf{x}_{t}\approx\widehat{\mathbf{W}^{-1}}F^{-1}\left(\mathbf{z}_{t}, \mathbf{v}_{t}\right)+\bar{\mathbf{x}}.\]
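A small numpy sketch of this approximate reconstruction (the variable names are ours):

```python
import numpy as np

def fit_reconstruction(X, W, x_bar):
    """Least-squares reconstruction matrix: W_inv_hat = dX (W dX)^+."""
    dX = (X - x_bar).T                        # (D, T) centered configurations
    return dX @ np.linalg.pinv(W @ dX)        # Moore-Penrose pseudo-inverse

def reconstruct(W_inv_hat, x_bar, x_tica):
    """Approximate configuration from a TICA-space point (e.g., F^{-1}(z, v))."""
    return W_inv_hat @ x_tica + x_bar
```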
## Appendix C Transfer operators of \(\{\mathbf{x}_{t}\}\) and \(\{\mathbf{z}_{t}\}\)
We first briefly introduce properties of the transfer operator \(\mathcal{T}_{\tau}^{z}\) of (9), which is defined by
\[\mathcal{T}_{\tau}^{z}h(\mathbf{z})=\int\frac{\mu^{z}(\mathbf{z}^{\prime})}{\mu^{z}(\mathbf{z})}p_{\tau}^{z}(\mathbf{z}^{\prime},\mathbf{z})h(\mathbf{z}^{\prime})\mathrm{d}\mathbf{z}^{\prime},\text{ for }\langle h,h\rangle_{\mu^{z}}<\infty.\]
For more details, we refer to [4] and references therein. Due to the time-reversibility of the Brownian dynamics, \(\mathcal{T}_{\tau}^{z}\) is a self-adjoint operator and can be written in terms of the eigenfunctions as
\[\mathcal{T}_{\tau}^{z}h=\sum_{i=0}^{\infty}\lambda_{i}^{\tau}\langle h,r_{i}^{z}\rangle_{\mu^{z}}r_{i}^{z}\]
Here \(1=\lambda_{0}^{\tau}>\lambda_{1}^{\tau}\geq\lambda_{2}^{\tau}\geq\ldots\) are eigenvalues, which are associated with implied time scales as \(t_{i}=-\tau/\log(\lambda_{i}^{\tau})\), \(r_{0}^{z}\equiv 1,r_{1}^{z},r_{2}^{z},\ldots\) are normalized eigenfunctions with \(\left\langle r_{i}^{z},r_{j}^{z}\right\rangle_{\mu^{z}}=1_{i=j}\), and the inner product is \(\langle h,h^{\prime}\rangle_{\mu^{z}}=\int h(\mathbf{z})h^{\prime}(\mathbf{z})\mu^{z}(\mathbf{z})\mathrm{d}\mathbf{z}\). Then, we have
\[p_{\tau}^{z}(\mathbf{z}^{\prime},\mathbf{z}) = \frac{\mu^{z}(\mathbf{z})}{\mu^{z}(\mathbf{z}^{\prime})}\mathcal{T}_{\tau}^{z}\delta_{\mathbf{z}^{\prime}}(\mathbf{z}) = \frac{\mu^{z}(\mathbf{z})}{\mu^{z}(\mathbf{z}^{\prime})}\sum_{i=0}^{\infty}\lambda_{i}^{\tau}\mu^{z}(\mathbf{z}^{\prime})r_{i}^{z}(\mathbf{z}^{\prime})r_{i}^{z}(\mathbf{z}) = \sum_{i=0}^{\infty}r_{i}^{z}(\mathbf{z}^{\prime})\cdot\lambda_{i}^{\tau}r_{i}^{z}(\mathbf{z})\mu^{z}(\mathbf{z})\]
Next, we prove that (17) holds for the RC flow. According to the decomposition of \(p_{\tau}^{z}\) and (7),
\[p_{\tau}^{x}(\mathbf{x}^{\prime},\mathbf{x}) = p_{\tau}^{z}(\mathbf{z}^{\prime},\mathbf{z})S(\mathbf{x}) = \sum_{i=0}^{\infty}r_{i}^{z}(\mathbf{z}^{\prime})\cdot\lambda_{i}^{\tau}r_{i}^{z}(\mathbf{z})\mu^{z}(\mathbf{z})S(\mathbf{x}) = \sum_{i=0}^{\infty}r_{i}^{x}(\mathbf{x}^{\prime})\cdot\lambda_{i}^{\tau}r_{i}^{x}(\mathbf{x})\mu^{x}(\mathbf{x}),\]
where \(F(\mathbf{x})=(\mathbf{z},\mathbf{v})\), \(F(\mathbf{x}^{\prime})=(\mathbf{z}^{\prime},\mathbf{v}^{\prime})\) and \(r_{i}^{x}(\mathbf{x})=r_{i}^{z}(\Phi(\mathbf{x}))\).
Moreover, based on the above analysis, the eigendecomposition of the transfer operator of \(\{\mathbf{x}_{t}\}\) can be written as
\[\mathscr{T}_{\tau}^{x}h(\mathbf{x}) \triangleq \int\frac{\mu^{x}(\mathbf{x}^{\prime})}{\mu^{x}(\mathbf{x})}p_{ \tau}^{x}(\mathbf{x}^{\prime},\mathbf{x})h(\mathbf{x}^{\prime})\mathrm{d} \mathbf{x}^{\prime}\] \[= \sum_{i=0}^{\infty}\int r_{i}^{x}(\mathbf{x}^{\prime})h(\mathbf{x }^{\prime})\mu^{x}(\mathbf{x}^{\prime})\mathrm{d}\mathbf{x}^{\prime}\cdot \lambda_{i}^{\tau}r_{i}^{x}(\mathbf{x})\] \[= \sum_{i=0}^{\infty}\lambda_{i}^{\tau}\left\langle h,r_{i}^{x} \right\rangle_{\mu^{x}}r_{i}^{x}(\mathbf{x}).\]
By considering
\[\left\langle r_{i}^{x},r_{j}^{x}\right\rangle_{\mu^{x}} = \int r_{i}^{x}(\mathbf{x})r_{j}^{x}(\mathbf{x})\mu^{x}(\mathbf{x })\mathrm{d}\mathbf{x}\] \[= \iint r_{i}^{z}(\mathbf{z})r_{j}^{z}(\mathbf{z})\mu^{z}(\mathbf{ z})\mathscr{N}(\mathbf{v}|\mathbf{0},\mathbf{I})\mathrm{d}\mathbf{z}\mathrm{d} \mathbf{v}\] \[= \left\langle r_{i}^{z},r_{j}^{z}\right\rangle_{\mu^{z}}\] \[= 1_{i=j},\]
we can conclude that \(\mathscr{T}_{\tau}^{x}r_{i}^{x}=\lambda_{i}^{\tau}r_{i}^{x}\), i.e., \(\lambda_{i}^{\tau}\) and \(r_{i}^{x}(\mathbf{x})=r_{i}^{z}(\Phi(\mathbf{x}))\) are the \(i\)th eigenvalue and eigenfunction of \(\mathscr{T}_{\tau}^{x}\).
## Appendix D Proof of (18)
We show here the derivation of (18) for the self-completeness of the paper. A similar proof can be found in [23].
For convenience of notation, we denote by
\[\hat{\mathbb{P}}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})=p_{\tau}^{z}(\mathbf{z}_ {t},\Phi(\mathbf{x}_{t+\tau}))S(\mathbf{x}_{t+\tau})\]
the approximation of the conditional distribution \(\mathbb{P}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})\) given by RC flow. It can be seen from (7) that \(\hat{\mathbb{P}}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})=\hat{p}_{\tau}^{x}(\mathbf{x}_{t},\mathbf{x}_{t+\tau})\) in RC flow. According to the definition of mutual information, we have
\[\mathrm{MI}(\mathbf{z}_{t},\mathbf{x}_{t+\tau}) = \mathbb{E}\left[\log\mathbb{P}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t })\right]+H(\mathbf{x}_{t+\tau})\] \[= \mathbb{E}\left[\log\frac{\mathbb{P}(\mathbf{x}_{t+\tau}|\mathbf{z }_{t})}{\hat{\mathbb{P}}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})}\right]+\mathbb{ E}\left[\log\hat{\mathbb{P}}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})\right]\] \[+H(\mathbf{x}_{t+\tau})\] \[= \mathbb{E}_{\mathbf{z}_{t}}\left[\mathbb{E}_{\mathbf{x}_{t+ \tau}\sim\mathbb{P}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})}\left[\log\frac{ \mathbb{P}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})}{\hat{\mathbb{P}}(\mathbf{x}_{ t+\tau}|\mathbf{z}_{t})}\right]|\mathbf{z}_{t}\right]\] \[+\mathbb{E}\left[\log\hat{p}_{\tau}^{x}(\mathbf{x}_{t},\mathbf{x} _{t+\tau})\right]+H(\mathbf{x}_{t+\tau})\]
where \(H\) denotes the entropy and \(\mathbb{E}\) denotes the mean value over all transition pairs \((\mathbf{x}_{t},\mathbf{x}_{t+\tau})\) in trajectories. Notice that \(\mathbb{E}_{\mathbf{x}_{t+\tau}\sim\mathbb{P}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})}\left[\log\frac{\mathbb{P}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})}{\hat{\mathbb{P}}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})}\right]\) equals the KL divergence between \(\mathbb{P}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})\) and \(\hat{\mathbb{P}}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})\), which is always non-negative. Therefore, in the limit case of infinite data size, we can obtain
\[\mathbb{E}\left[\log\hat{\mathbb{P}}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t}) \right]=-\mathscr{L}_{\text{kin}}\]
and
\[\mathrm{MI}(\mathbf{z}_{t},\mathbf{x}_{t+\tau})\geq-\mathscr{L}_{\text{kin}}+H( \mathbf{x}_{t+\tau}),\]
where the last term is independent of parameters of RC flow. The equality holds only if \(\hat{\mathbb{P}}(\mathbf{x}_{t+\tau}|\mathbf{z}_{t})=\mathbb{P}(\mathbf{x}_{t +\tau}|\mathbf{z}_{t})\).
## Appendix E Calculation of \(p_{\tau}^{z}\)
In the importance sampling method [52], the time interval \([t,t+\tau]\) is divided into \(M\) sub-intervals of length \(\Delta=\tau/M\). Letting \(\mathbf{u}_{0}=\mathbf{z}_{t},\ \mathbf{u}_{1}=\mathbf{z}_{t+\Delta}\),..., \(\mathbf{u}_{M}=\mathbf{z}_{t+M\Delta}=\mathbf{z}_{t+\tau}\) and applying Euler-Maruyama discretization to each sub-interval, we have
\[p_{\tau}^{z}(\mathbf{z}_{t},\mathbf{z}_{t+\tau}) = \int\mathbb{P}(\mathbf{u}_{1}|\mathbf{u}_{0})\ldots\mathbb{P}( \mathbf{u}_{M}|\mathbf{u}_{M-1})\mathrm{d}\mathbf{u}_{1:M-1} \tag{12}\] \[= \int f(\mathbf{u}_{0},\mathbf{u}_{1})\ldots f(\mathbf{u}_{M-1}, \mathbf{u}_{M})\mathrm{d}\mathbf{u}_{1:M-1},\]
where \(u_{1:M-1}=(u_{1},\ldots,u_{M-1})\) and
\[f(\mathbf{u},\mathbf{u}^{\prime})=\mathscr{N}\left(\mathbf{u}^{\prime}|\mathbf{ u}-\nabla V(\mathbf{u})\cdot\Delta,2\Delta\mathbf{I}\right).\]
According to [52], we can draw \(K_{s}\) samples of \(\mathbf{u}_{1:M-1}\) from the proposal density
\[\mathbf{u}_{1:M-1}^{k}\sim\prod_{m=0}^{M-2}g_{m}(\mathbf{u}_{m}^{k},\mathbf{u}_{ m+1}^{k}),\quad\text{for }k=1,\ldots,K_{s}\]
and calculate the integral (12) by importance sampling
\[p_{\tau}^{z}(\mathbf{z}_{t},\mathbf{z}_{t+\tau})\approx\frac{1}{K_{s}}\sum_{k= 1}^{K_{s}}\frac{\prod_{m=0}^{M-1}f(\mathbf{u}_{m}^{k},\mathbf{u}_{m+1}^{k})}{ \prod_{m=0}^{M-2}g_{m}(\mathbf{u}_{m}^{k},\mathbf{u}_{m+1}^{k})} \tag{13}\]
where \(\mathbf{u}_{0}^{k}\equiv\mathbf{u}_{0}=\mathbf{z}_{t}\), \(\mathbf{u}_{M}^{k}\equiv\mathbf{u}_{M}=\mathbf{z}_{t+\tau}\), and
\[g_{m}(\mathbf{u},\mathbf{u}^{\prime})=\mathscr{N}\left(\mathbf{u}^{\prime}| \mathbf{u}+\frac{\mathbf{u}_{M}-\mathbf{u}}{M-m},\frac{2\Delta\left(M-m-1 \right)}{M-m}\mathbf{I}\right). \tag{14}\]
The whole procedure of the approximation of \(p_{\tau}^{z}\) is summarized in Algorithm 2.
```
1for\(k=1,\ldots,K_{s}\)do
2for\(m=1,\ldots,M-1\)do
3 Draw \(\mathbf{u}_{m}^{k}\sim g_{m}(\mathbf{u}_{m-1}^{k},\cdot)\) with (14).
4 end for
5 end for
6 Calculate \(p_{\tau}^{z}(\mathbf{z}_{t},\mathbf{z}_{t+\tau})\) by (13).
```
**Algorithm 2** Importance sampling approximation of \(p_{\tau}^{z}\)
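A numpy sketch of Algorithm 2 is given below, with \(K_{s}=20\) and \(M=10\) as in Appendix F.1; `grad_V` denotes \(\nabla V\) of the learned reduced potential, and the direct averaging of importance weights is kept deliberately simple (no log-sum-exp stabilization).

```python
import numpy as np

def log_gauss(x, mean, var):
    return (-0.5 * np.sum((x - mean) ** 2, axis=-1) / var
            - 0.5 * x.shape[-1] * np.log(2 * np.pi * var))

def transition_density(z_t, z_next, tau, grad_V, K_s=20, M=10, seed=0):
    """Importance-sampling estimate of p_tau^z(z_t, z_next), Algorithm 2."""
    rng = np.random.default_rng(seed)
    dt = tau / M
    log_w = np.zeros(K_s)
    for k in range(K_s):
        u = np.array(z_t, dtype=float)
        for m in range(M - 1):
            # Bridge-like proposal g_m of Eq. (14).
            mean = u + (z_next - u) / (M - m)
            var = 2.0 * dt * (M - m - 1) / (M - m)
            u_next = mean + np.sqrt(var) * rng.standard_normal(u.shape)
            # Accumulate log f(u_m, u_{m+1}) - log g_m(u_m, u_{m+1}).
            log_w[k] += log_gauss(u_next, u - grad_V(u) * dt, 2.0 * dt)
            log_w[k] -= log_gauss(u_next, mean, var)
            u = u_next
        # Final Euler-Maruyama factor f(u_{M-1}, u_M).
        log_w[k] += log_gauss(z_next, u - grad_V(u) * dt, 2.0 * dt)
    return np.exp(log_w).mean()               # Eq. (13)
```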
## Appendix F Model and implementation details
### Structure and hyperparameters of RC flow
In this work, the RealNVP model of \(F\) is implemented in the Bgflow package [58]. It consists of 12 affine coupling
blocks, where the shift and scale transformations are performed by multilayer perceptrons (MLPs) with 3 hidden layers of width 128. In the GMM (11) of \(\mu^{z}\), \(w(\mathbf{c})\) and \(\sigma(\mathbf{c})\) are modeled by MLPs with 3 hidden layers of width 64, and \(K=40\). The weight \(\alpha\) in (14) is set to 0.1.
In Algorithm 1, Lines 6, 10 and 12 are implemented by Adam algorithm [53]. Models are trained for 5 epochs (Lines 6 and 10) and 20 epochs (Line 12), and the learning rate is initially \(10^{-3}\). Moreover, the learning rate is decayed by a factor of 0.1 for every 5 epochs when solving the optimization problem in Line 12. In Algorithm 2, \(K_{s}=20\) and \(M=10\).
For all our experiments, we implemented pre-training steps. Furthermore, we applied TICA transformations specifically to the examples shown in Figs. 3 and 4 with a lag time of 10 steps.
### Simulations
Potential functions of examples shown in Fig. 2 are
\[V_{\text{double well}}(x_{1},x_{2}) = 5\left(x_{1}^{2}-1\right)^{2}+10\left(x_{1}^{2}+x_{2}-1\right)^{ 2},\] \[V_{\text{Mueller}}(x_{1},x_{2}) = \sum_{i=1}^{4}A_{i}\exp\left(a_{i}\left(x_{1}-\bar{x}_{i}\right)^ {2}+\right.\] \[\left.b_{i}\left(x_{1}-\bar{x}_{i}\right)\left(x_{2}-\bar{y}_{i} \right)+c_{i}\left(x_{2}-\bar{y}_{i}\right)^{2}\right),\]
where \(\left(A_{1},\ldots,A_{4}\right)=\left(-\frac{20}{3},-\frac{10}{3},-\frac{17}{3 },\frac{1}{2}\right)\), \(\left(a_{1},\ldots,a_{4}\right)=\left(-1,-1,-6.5,0.7\right)\), \(\left(b_{1},\ldots,b_{4}\right)=\left(0,0,11,0.6\right)\), \(\left(c_{1},\ldots,c_{4}\right)\)\(=\left(-10,-10,-6.5,0.7\right)\), \(\left(\bar{x}_{1},\ldots,\bar{x}_{4}\right)=\left(1,0,-0.5,-1\right)\) and \(\left(\bar{y}_{1},\ldots,\bar{y}_{4}\right)=\left(0,0.5,1.5,1\right)\). For each potential, we generate a trajectory containing \(1.5\times 10^{5}\) frames by Euler-Maruyama discretization of the Brownian dynamics with the inverse temperature 0.5 (double well potential) or 1 (Mueller potential). The time intervals between frames and step sizes of the discretization are \(0.01,2\times 10^{-4}\) (double well potential) and \(0.025,5\times 10^{-4}\) (Mueller potential).
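For reproducibility, the two benchmark potentials can be coded directly as below and plugged into the earlier Euler-Maruyama sketch; the finite-difference gradient is a convenience choice made here, not part of the original setup.

```python
import numpy as np

def V_double_well(x):
    x1, x2 = x
    return 5.0 * (x1 ** 2 - 1) ** 2 + 10.0 * (x1 ** 2 + x2 - 1) ** 2

A = np.array([-20 / 3, -10 / 3, -17 / 3, 0.5])
a = np.array([-1.0, -1.0, -6.5, 0.7])
b = np.array([0.0, 0.0, 11.0, 0.6])
c = np.array([-10.0, -10.0, -6.5, 0.7])
xb = np.array([1.0, 0.0, -0.5, -1.0])
yb = np.array([0.0, 0.5, 1.5, 1.0])

def V_mueller(x):
    x1, x2 = x
    return np.sum(A * np.exp(a * (x1 - xb) ** 2
                             + b * (x1 - xb) * (x2 - yb)
                             + c * (x2 - yb) ** 2))

def num_grad(V, x, eps=1e-5):
    """Central-difference gradient, sufficient for generating training data."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (V(x + e) - V(x - e)) / (2 * eps)
    return g
```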
For the example illustrated by Fig. 3, \(V^{\prime}(s_{1},s_{2})=\cos(7\cdot\text{atan2}(s_{2},s_{1}))\). The trajectory of \(\mathbf{s}\) containing \(3\times 10^{5}\) frames is also generated by the Euler-Maruyama scheme, where the inverse temperature is 1, the time interval between frames is 0.01 and the discretization step size is \(2\times 10^{-4}\). The simulation data in the space of \(\mathbf{x}\) are obtained via the transformation
\[x_{1} = s_{1}^{\prime}\cos s_{1}^{\prime}+\frac{u_{1}}{\sqrt{u_{1}^{2}+u _{3}^{2}}}s_{3},\] \[x_{2} = s_{2}^{\prime},\] \[x_{3} = s_{1}^{\prime}\sin s_{1}^{\prime}+\frac{u_{3}}{\sqrt{u_{1}^{2}+u _{3}^{2}}}s_{3}, \tag{11}\]
where \(u_{1}=\sin s_{1}^{\prime}+s_{1}^{\prime}\cos s_{1}^{\prime}\), \(u_{3}=-\cos s_{1}^{\prime}+s_{1}^{\prime}\sin s_{1}^{\prime}\), \(s_{1}^{\prime}=3\pi\left(s_{1}+4\right)/4\) and \(s_{2}^{\prime}=3\pi\left(s_{2}+4\right)/4\).
### Calculation of implied time scales
In our examples, implied time scales are all calculated by 50-state Markov state models built by pyEMMA, where the spatial discretization is performed by k-means clustering. Implied time scales of the full-state kinetics are obtained from simulation trajectories in the space of \(\mathbf{x}\) (double well and Mueller potentials), \(\mathbf{s}\) (Swiss roll), and \(\left(\phi,\psi\right)\) (alanine dipeptide). For the reduced kinetics defined by (9), implied time scales are obtained from trajectories of the same sizes as the training data, which are also generated by the Euler-Maruyama discretization of (9).
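A sketch of this computation with PyEMMA, assuming `traj` holds the trajectory in the relevant coordinate space; the list of lag times is illustrative, while the 50-state k-means discretization follows the description above.

```python
import pyemma

# traj: (T, n_features) numpy array
cluster = pyemma.coordinates.cluster_kmeans(traj, k=50, max_iter=100)
its = pyemma.msm.its(cluster.dtrajs, lags=[1, 2, 5, 10, 20, 50])
print(its.timescales)   # implied time scales for each lag time
```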
|
2307.16370 | Inference for Low-rank Completion without Sample Splitting with
Application to Treatment Effect Estimation | This paper studies the inferential theory for estimating low-rank matrices.
It also provides an inference method for the average treatment effect as an
application. We show that the least square estimation of eigenvectors following
the nuclear norm penalization attains the asymptotic normality. The key
contribution of our method is that it does not require sample splitting. In
addition, this paper allows dependent observation patterns and heterogeneous
observation probabilities. Empirically, we apply the proposed procedure to
estimating the impact of the presidential vote on allocating the U.S. federal
budget to the states. | Jungjun Choi, Hyukjun Kwon, Yuan Liao | 2023-07-31T02:26:52Z | http://arxiv.org/abs/2307.16370v1 | Inference for Low-rank Completion without Sample Splitting with Application to Treatment Effect Estimation
###### Abstract
This paper studies the inferential theory for estimating low-rank matrices. It also provides an inference method for the average treatment effect as an application. We show that the least square estimation of eigenvectors following the nuclear norm penalization attains the asymptotic normality. The key contribution of our method is that it does not require sample splitting. In addition, this paper allows dependent observation patterns and heterogeneous observation probabilities. Empirically, we apply the proposed procedure to estimating the impact of the presidential vote on allocating the U.S. federal budget to the states.
_Keywords:_ Matrix completion; Nuclear norm penalization; Two-step least squares estimation; Approximate factor model; Causal inference
_JEL Classification:_ C12, C14, C33, C38, C55
## 1 Introduction
The task of imputing the missing entries of a partially observed matrix, often dubbed _matrix completion_, is widely applicable in various areas. In addition to the well-known application to recommendation systems (e.g., the Netflix problem), this problem arises in a diverse array of science and engineering applications such as collaborative filtering, system identification, social network recovery, and causal inference.
In this paper, we focus on the following approximate low-rank model with a factor structure:
\[Y=M+\mathcal{E}\approx\beta F^{\prime}+\mathcal{E}, \tag{1.1}\]
where \(Y\) is an \(N\times T\) data matrix whose entries may be missing, \(M\) is a latent matrix of interest, and \(\mathcal{E}\) represents a noise contamination. Importantly, \(M\) is assumed to be an approximate low-rank matrix having an approximate factor structure \(M\approx\beta F^{\prime}\), where \(\beta\) collects the factor loadings and \(F\) the latent factors. In addition, we allow some entries of \(Y\) to be unobserved by defining an indicator \(\omega_{it}\), which equals one if the \((i,t)\) element of \(Y\) is observed, and zero otherwise. In this practical setting, we provide the inferential theory for each entry of \(M\), regardless of whether its corresponding entry in \(Y\) is observed or not.
One of the widely used methods for the low-rank matrix completion is the nuclear norm penalization and it has been intensively studied in the last decade. Candes and Recht (2009), Candes and Plan (2010), Koltchinskii et al. (2011), Negahban and Wainwright (2012), and Chen et al. (2020b) provide statistical rates of convergence for the nuclear norm penalized estimator and a branch of studies including Beck and Teboulle (2009), Cai et al. (2010), Mazumder et al. (2010), Ma et al. (2011), and Parikh and Boyd (2014) provide algorithms to compute the nuclear norm penalized estimator. However, research on inference is still limited. This is because the shrinkage bias caused by the penalization, as well as the lack of the closed-form expression of the estimator, hinders the distributional characterization of the estimator.
We contribute to the literature by providing an inferential theory of the low-rank estimation without sample splitting. Our estimation procedure consists of the following main steps:
1. Using the full sample of observed \(Y\), compute the nuclear norm penalized estimator \(\widetilde{M}\) and use the left singular vectors of \(\widetilde{M}\) as the initial estimator for \(\beta\).
2. To estimate \(F\), regress the observed \(Y\) onto the initial estimator for \(\beta\).
3. To re-estimate \(\beta\), regress the observed \(Y\) on the estimator for \(F\).
4. The product of the estimators in Steps 2 and 3 is the final estimator for \(M\).
Note that steps 2-3 are only conducted once without further iterations.
An important contribution is that we do not rely on sample splitting to make inference, but simply use the full (observed) sample in every step of our procedure. There are at least three advantages to avoiding sample splitting. First, the estimator based on sample splitting is unstable and random even conditional on the data. Second, sample splitting requires a relatively large \(T\) in practice, because it effectively works with only \(T/2\) observations; this is demanding in applied micro applications where \(T\) spans just a few decades. In the simulation study, we show that the estimator with sample splitting performs worse than the estimator without it when \(T\) is relatively small. Lastly, sample splitting increases the computational cost of multiple testing because, for each target time \(t\), a different sample split is needed.
Technically, we apply a new approach to showing the negligibility of the potential bias terms, by making use of a hypothetically defined _auxiliary leave-one-out_ (ALOO) estimator. We emphasize the word "auxiliary" because it is introduced only in the technical argument and is _not_ implemented in the estimation. So it is a hypothetical estimator, which we show to be
i) asymptotically equivalent to the initial estimator for \(\beta\) in Step 1 and
ii) independent of the sample used in the least squares estimation, namely the sample in period \(t\). Using the ALOO estimator, we can separate out the part of the initial estimator for \(\beta\) that is correlated with the sample in period \(t\). Once this correlated part is separated out, we enjoy an effect similar to sample splitting, and we show that the separated correlated part is sufficiently small. Importantly, the leave-one-out estimator appears only in the proof, as an auxiliary point of reference for the initial estimator of \(\beta\); we never need to compute it in the estimation procedure, which allows us to remove the sample splitting step without implementing any additional steps.
Empirically, we apply the proposed procedure to making inference about the impact of the presidential vote on allocating the U.S. federal budget to the states. We find that states that supported the incumbent president in past presidential elections tend to receive more federal funds, and this tendency is stronger for loyal states than for swing states. In addition, this tendency is stronger after the 1980s.
### Relation to the literature
Very recently, several studies have proposed ways of achieving unbiased estimation for inference on the nuclear norm penalized estimator. Chernozhukov et al. (2019, 2021) propose a two-step least squares procedure with sample splitting, which estimates the factors and loadings successively via least squares. As we discussed earlier, sample splitting comes with several undesirable costs.
The idea of the ALOO estimator has been employed in other recent works such as Ma et al. (2019); Chen et al. (2019, 2020a, 2020b); Yan et al. (2021) as well. Among them, in particular, Chen et al. (2019) pioneered applying this idea to the convex relaxation approach to low-rank inference. Compared to Chen et al. (2019), this paper makes several important contributions.
1. We consider a general nonparametric panel model which is an approximate low-rank model rather than an exact low-rank model.
2. This paper accommodates more general data-observation patterns: heterogeneous observation probabilities and correlated observation patterns, by assuming a cluster structure and allowing dependence within each cluster.
3. The inferential theory for the average treatment effect estimation is provided as an application.
4. We formally address a technical issue concerning the ALOO estimator. The ALOO estimator is to be (hypothetically) calculated by using the gradient descent iteration from the leave-one-out problem, which rules out, for example, samples in period \(t\). This exclusion is designed to guarantee the independence between the leave-one-out estimator and the period \(t\) sample. However, due to the non-convexity of the loss functions, the gradient descent iteration must stop where the gradient of the loss function is sufficiently "small." If this stopping point depends on the sample in period \(t\), as in Chen et al. (2019) who derive the stopping point from the problem using the full sample, the leave-one-out estimator using this stopping point may not be truly independent of the sample in period \(t\). This dependence frustrates the analysis of the bounds regarding the leave-one-out estimator. We provide two solutions for this potential dependence issue to be detailed in the paper.
5. Our method does not have an explicit debias step, but is based on refitting least squares. While we do not claim that this estimator is advantageous over the explicit debiasing method, we view our estimator as the natural extension of "post model selection methods" to the low rank framework.
Other related works on inference include Xia and Yuan (2021), Xiong and Pelger (2020), and Jin et al. (2021). We compare these methods with ours in simulations.
Lastly, a comparison with other literature that takes advantage of a low-rank model to estimate the treatment effect would be helpful. The close connection between low-rank completion and treatment effect estimation was first made formal by Athey et al. (2021) who showed that the nuclear norm regularization can be useful for causal panel data by presenting the convergence rate of the estimator. Another line of research proposes inferential theories under weaker assumptions on the treatment assignment with other restrictions. Farias et al. (2021) allow the assignment of the treatment that can depend on historical observations while focusing on the estimation of the average treatment effect. Agarwal et al. (2021) and Bai and Ng (2021) consider the case where the assignment is not random but has a certain block structure that often occurs in causal panel data.1 In addition, Arkhangelsky et al. (2021) propose an estimator that is more robust than the conventional difference-in-differences and synthetic control methods by using a low-rank fixed effect model with the homogeneous treatment effect assumption.
Footnote 1: In Agarwal et al. (2021), a certain submatrix for estimation has a block structure.
This paper is organized as follows. Section 2 provides the model and the estimation procedure as well as our strategy for achieving the unbiased estimation. Section 3 gives the asymptotic results of our estimator. Section 4 provides the inferential theory for the average treatment effect estimator as an application. Section 5 presents an empirical study about the impact of the president on allocating the
U.S. federal budget to the states to illustrate the use of our inferential theory. Section 6 includes the simulation studies. Section 7 concludes.
A few words on our notation are in order. For any matrix \(A\), we use \(\left\|A\right\|_{F}\), \(\left\|A\right\|\), and \(\left\|A\right\|_{*}\) to denote the Frobenius norm, operator norm, and nuclear norm, respectively. \(\left\|A\right\|_{2,\infty}\) denotes the largest \(l_{2}\) norm of all rows of the matrix \(A\). \(\mathrm{vec}(A)\) is the vector constructed by stacking the columns of the matrix \(A\) in order. Also, \(\psi_{r}(A)\) is the \(r\)th largest singular value of \(A\). \(\psi_{\max}(A)\) and \(\psi_{\min}(A)\) are the largest and the smallest nonzero singular values of \(A\). For any vector \(B\), \(\mathrm{diag}(B)\) is the diagonal matrix whose diagonal entries are \(B\). \(a\asymp b\) means \(a/b\) and \(b/a\) are \(O_{P}(1)\).
## 2 Model and Estimation
We consider the following nonparametric panel model subject to a missing data problem:
\[y_{it}=h_{t}\left(\zeta_{i}\right)+\varepsilon_{it},\]
where \(y_{it}\) is the scalar outcome for a unit \(i\) in a period \(t\), \(h_{t}(\cdot)\) is a time-varying nonparametric function, \(\zeta_{i}\) is a unit-specific latent state variable, \(\varepsilon_{it}\) is the noise, and \(\omega_{it}=1\{y_{it}\text{ is observed}\}\). Here, \(\{h_{t}(\cdot),\zeta_{i},\varepsilon_{it}\}\) are unobservable. In the model, the (latent) unit states \(\zeta_{i}\) have a time-varying effect on the outcome variable through \(h_{t}(\cdot)\). This model can be written in (1.1) using the sieve representation. Suppose the function \(h_{t}(\cdot)\) has the following sieve approximation:
\[h_{t}(\zeta_{i})=\sum_{r=1}^{K}\kappa_{t,r}\phi_{r}(\zeta_{i})+M_{it}^{R}= \beta_{i}^{\prime}F_{t}+M_{it}^{R}=M_{it}^{\star}+M_{it}^{R},\]
where \(\beta_{i}=(\phi_{1}(\zeta_{i}),\ldots,\phi_{K}(\zeta_{i}))^{\prime}\) and \(F_{t}=(\kappa_{t,1},\ldots,\kappa_{t,K})^{\prime}\). Here, \(M_{it}^{R}\) is the sieve approximation error and, for all \(1\leq r\leq K\), \(\phi_{r}(\zeta_{i})\) is the sieve transformation of \(\zeta_{i}\) using the basis function \(\phi_{r}(\cdot)\) and \(\kappa_{t,r}\) is the sieve coefficient. Then,
\[M=[M_{it}]_{N\times T},\quad M_{it}=h_{t}(\zeta_{i})\]
can be successfully represented as the approximate factor structure.
In matrix form, we can represent the model as
\[Y=M+\mathcal{E}=M^{\star}+M^{R}+\mathcal{E}=\beta F^{\prime}+M^{R}+\mathcal{E}, \tag{2.1}\]
where we denote \(Y=[y_{it}]_{N\times T}\), \(M=[M_{it}]_{N\times T}\), \(M^{\star}=[M_{it}^{\star}]_{N\times T}\), \(M^{R}=[M_{it}^{R}]_{N\times T}\), \(\beta=[\beta_{1},\ldots,\beta_{N}]^{\prime}\), \(F=[F_{1},\ldots,F_{T}]^{\prime}\), and \(\mathcal{E}=[\varepsilon_{it}]_{N\times T}\). Note that \(Y\) is an incomplete matrix that has missing components.
Let \(\mathcal{M}\coloneqq(\beta,F,M^{R})\) be the triplet of random matrices that compose \(M\). In the paper, we allow heterogeneous observation probabilities, i.e., \(P(\omega_{it}=1)=p_{i}\), and denote \(\Pi=\text{diag}(p_{1},\ldots,p_{N})\). Here, we assume the sieve dimension \(K\) is pre-specified by researchers and propose some data-driven ways of choosing \(K\) in Section A of the Appendix.
### Nuclear norm penalized estimation with inverse probability weighting
To accommodate the heterogeneous observation probabilities, this paper uses an inverse probability weighting scheme, referred to as inverse propensity scoring (IPS) or inverse probability weighting in the causal inference literature (e.g., Imbens and Rubin (2015), Little and Rubin (2019), Schnabel et al. (2016)), in addition to the nuclear norm penalization:
\[\widetilde{M}\coloneqq\operatorname*{arg\,min}_{A\in\mathbb{R}^{N\times T}} \frac{1}{2}\|\widehat{\Pi}^{-\frac{1}{2}}\Omega\circ(A-Y)\,\|_{F}^{2}+\lambda \|A\|_{*} \tag{2.2}\]
where \(\widehat{\Pi}=\text{diag}(\widehat{p}_{1},\ldots,\widehat{p}_{N})\), and \(\widehat{p}_{i}=\frac{1}{T}\sum_{t=1}^{T}\omega_{it}\) for each \(i\leq N\), \(\Omega=[\omega_{it}]_{N\times T}\) and \(\circ\) denotes the Hadamard product. As noted in Ma and Chen (2019), this inverse probability weighting debiases the objective function itself. If there is heterogeneity in the observation probability, \(\|\Pi^{-\frac{1}{2}}\Omega\circ(A-Y)\,\|_{F}^{2}\) is an unbiased estimate of \(\|A-Y\|_{F}^{2}\), which we would use if there is no missing entry, in the sense that \(\mathbb{E}_{\Omega}[\|\Pi^{-\frac{1}{2}}\Omega\circ(A-Y)\,\|_{F}^{2}]=\|A-Y\|_ {F}^{2}\), while \(\|\Omega\circ(A-Y)\|_{F}^{2}\) is biased.
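As a concrete illustration, the following is a minimal sketch of how (2.2) could be computed by proximal gradient descent with singular value soft-thresholding. The function name and all implementation choices (step size, iteration count, the clipping of \(\widehat{p}_{i}\)) are our own illustrative assumptions, not the paper's algorithm.

```
import numpy as np

def ipw_nuclear_norm(Y, Omega, lam, n_iter=500):
    """Proximal gradient sketch for the inverse-probability-weighted problem (2.2).

    Y     : (N, T) array; entries with Omega == 0 are ignored
    Omega : (N, T) binary observation indicator
    lam   : nuclear norm penalty level
    """
    N, T = Y.shape
    p_hat = np.clip(Omega.mean(axis=1), 1e-3, None)      # \hat p_i = T^{-1} sum_t omega_it
    W = Omega / p_hat[:, None]                            # inverse probability weights
    step = p_hat.min()                                    # step size <= 1 / Lipschitz constant of the gradient
    A = np.zeros((N, T))
    Y0 = np.where(Omega > 0, Y, 0.0)                      # zero out unobserved entries
    for _ in range(n_iter):
        grad = W * (A - Y0)                               # gradient of the weighted least squares term
        U, s, Vt = np.linalg.svd(A - step * grad, full_matrices=False)
        s = np.maximum(s - step * lam, 0.0)               # singular value soft-thresholding
        A = (U * s) @ Vt
    return A
```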
### Estimation procedure
Although the inverse probability weighting enhances the estimation quality, the weighting alone cannot guarantee the asymptotic normality of the estimator because of the shrinkage bias. To achieve unbiased estimation with asymptotic normality, we run the two-step least squares procedure. As noted previously, our estimation does not involve any sample splitting step. Our estimation algorithm is as follows:
```
Step 1 Compute the initial estimator \(\widetilde{M}\) using the nuclear norm penalization (2.2).
Step 2 Let \(\widetilde{\beta}\) be the \(N\times K\) matrix whose columns are \(\sqrt{N}\) times the top \(K\) left singular vectors of \(\widetilde{M}\).
Step 3 For each \(t\leq T\), run OLS to get \(\widehat{F}_{t}=\left(\sum_{j=1}^{N}\omega_{jt}\widetilde{\beta}_{j}\widetilde{\beta}_{j}^{\prime}\right)^{-1}\sum_{j=1}^{N}\omega_{jt}\widetilde{\beta}_{j}y_{jt}\).
Step 4 For each \(i\leq N\), run OLS to get \(\widehat{\beta}_{i}=\left(\sum_{s=1}^{T}\omega_{is}\widehat{F}_{s}\widehat{F}_{s}^{\prime}\right)^{-1}\sum_{s=1}^{T}\omega_{is}\widehat{F}_{s}y_{is}\).
Step 5 The final estimator \(\widehat{M}_{it}\) is \(\widehat{\beta}_{i}^{\prime}\widehat{F}_{t}\) for all \((i,t)\).
```
**Algorithm 1** Constructing the estimator for \(M\).
After deriving the initial estimator of loadings from the nuclear norm penalized estimator \(\widetilde{M}\), we estimate latent factors and loadings using the two-step least squares procedure. The final estimator of \(M\) is then the product of the estimates for latent factors and loadings.
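For concreteness, a minimal Python sketch of Steps 2-5 is given below. It assumes the initial estimator \(\widetilde{M}\) from Step 1 is already available (for instance, from a routine like the one sketched after (2.2)); the function and variable names are illustrative, not part of the paper.

```
import numpy as np

def two_step_least_squares(Y, Omega, M_tilde, K):
    """Steps 2-5 of Algorithm 1: two-step least squares after nuclear norm penalization."""
    N, T = Y.shape
    U, _, _ = np.linalg.svd(M_tilde, full_matrices=False)
    beta_tilde = np.sqrt(N) * U[:, :K]                     # Step 2: initial loading estimator

    F_hat = np.zeros((T, K))
    for t in range(T):                                     # Step 3: regress observed Y_.t on beta_tilde
        obs = Omega[:, t].astype(bool)
        F_hat[t] = np.linalg.lstsq(beta_tilde[obs], Y[obs, t], rcond=None)[0]

    beta_hat = np.zeros((N, K))
    for i in range(N):                                     # Step 4: regress observed Y_i. on F_hat
        obs = Omega[i, :].astype(bool)
        beta_hat[i] = np.linalg.lstsq(F_hat[obs], Y[i, obs], rcond=None)[0]

    M_hat = beta_hat @ F_hat.T                             # Step 5: final estimator of M
    return M_hat, beta_hat, F_hat
```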
### A general discussion of the main idea
It is well-known that the nuclear-norm penalized estimator \(\widetilde{M}\), like other penalized estimators, is subject to shrinkage bias which complicates statistical inference. To resolve this problem, we use the two-step least squares procedure, i.e., Steps 3 and 4 in Algorithm 1. In showing the asymptotic normality of the resulting estimator \(\widehat{M}\), a key challenge is to show the following term is asymptotically negligible:
\[R_{t}=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}(\widetilde{ \beta}_{j}-H_{1}^{\prime}\beta_{j})\]
where \(H_{1}\) is some rotation matrix.2 This term represents the effect of the bias of the nuclear-norm penalization since \(\widetilde{\beta}_{j}\) is derived from the nuclear-norm penalized estimator. Chernozhukov et al. (2019, 2021) resort to sample splitting to show the asymptotic negligibility of \(R_{t}\).
Footnote 2: Another term \(\frac{1}{\sqrt{N}}\sum_{j=1}^{N}(\omega_{jt}-p_{j})\beta_{j}F_{t}^{\prime}H_{ 1}^{\prime-1}(\widetilde{\beta}_{j}-H_{1}^{\prime}\beta_{j})\) is also to be shown negligible, but the argument is similar to that of \(R_{t}\).
#### 2.3.1 The auxiliary leave-one-out method
Motivated by Chen et al. (2020b), we show the asymptotic negligibility of \(R_{t}\) without sample splitting by using two hypothetical estimators which are asymptotically equivalent to the nuclear norm penalized estimator \(\widetilde{\beta}\). Namely, we consider a hypothetical non-convex iterative procedure for the low-rank regularization, in which the factor and loading matrices are solved for iteratively, and show that this procedure can be formulated through the following two problems:
\[L^{\text{full}}(B,F) =\frac{1}{2}\|\Pi^{-\frac{1}{2}}\Omega\circ\left(BF^{\prime}-Y \right)\|_{F}^{2}+\frac{\lambda}{2}\|B\|_{F}^{2}+\frac{\lambda}{2}\|F\|_{F}^{2}\] \[=\frac{1}{2}\|\Pi^{-\frac{1}{2}}\Omega\circ\left(BF^{\prime}-Y \right)\|_{F,(-t)}^{2}+\frac{1}{2}\|\Pi^{-\frac{1}{2}}\Omega\circ\left(BF^{ \prime}-Y\right)\|_{F,t}^{2}+\frac{\lambda}{2}\|B\|_{F}^{2}+\frac{\lambda}{2} \|F\|_{F}^{2} \tag{2.3}\] \[L^{(-t)}(B,F) =\frac{1}{2}\|\Pi^{-\frac{1}{2}}\Omega\circ\left(BF^{\prime}-Y \right)\|_{F,(-t)}^{2}+\frac{1}{2}\|BF^{\prime}-M^{\star}\|_{F,t}^{2}+\frac{ \lambda}{2}\|B\|_{F}^{2}+\frac{\lambda}{2}\|F\|_{F}^{2}. \tag{2.4}\]
Here, \(\|\cdot\|_{F,(-t)}\) denotes the Frobenius norm computed ignoring the \(t\)-th column and \(\|\cdot\|_{F,t}\) is the Frobenius norm of the \(t\)-th column only. Note that the only difference between (2.3) and (2.4) is that the \(t\)-th column of the goodness-of-fit part in (2.3) is replaced by its conditional expectation in (2.4). So, \(\{\omega_{jt},\varepsilon_{jt}\}_{j\leq N}\) is excluded from the problem (2.4).
We emphasize that (i) both problems defined above are non-convex; (ii) both problems are "auxiliary", meaning that they are introduced only for the proofs and are not actually implemented; and (iii) optimizing \(L^{(-t)}(B,F)\) is an auxiliary leave-one-out (ALOO) problem, leading to the ALOO estimator \(\breve{\beta}^{(-t)}\) to be discussed below.
Because of the non-convexity, both hypothetical problems should be computed iteratively until the
gradients of the non-convex loss functions become "sufficiently small." However, the gradients do not decrease monotonically as the iteration proceeds since the problem is non-convex. So, one cannot iterate until convergence is reached, but has to stop at a point where the gradient is small enough. The choice of this "stopping point" is crucial in the analysis of the residual terms. Chen et al. (2019) define the stopping point using the full sample problem (2.3), which potentially causes a dependence problem for the leave-one-out estimators. We propose two approaches to addressing this issue.
**Approach I**: First, we derive the stopping point from the leave-one-out problem (2.4). Let \(B^{\mathrm{full},\tau}\) and \(B^{(-t),\tau}\) be the \(\tau\)-th iterates of the gradient descent for (2.3) and (2.4), respectively. Fix the \(t\) of interest and suppose we iterate both problems \(\tau_{t}\) times, where \(\tau_{t}\) depends on \(t\). Define the "solutions" at the \(\tau_{t}\)-th iteration:
\[\breve{\beta}^{\mathrm{full},t}=B^{\mathrm{full},\tau_{t}}\quad\text{and} \quad\breve{\beta}^{(-t)}=B^{(-t),\tau_{t}}.\]
Hence, they share the same stopping point \(\tau_{t}\). Notably, although \(\breve{\beta}^{\mathrm{full},t}\) is a solution for the full sample problem (2.3), it depends on \(t\) through \(\tau_{t}\). In this first approach, we derive the stopping point from the ALOO problem (2.4), which ensures that the estimator \(\breve{\beta}^{(-t)}\) using this stopping point is independent of the \(t\)-th period sample, \(\{\omega_{jt},\varepsilon_{jt}\}_{j\leq N}\). This introduces nontrivial technical challenges. Namely, \(\tau_{t}\), being derived from the problem \(L^{(-t)}(B,F)\), depends on \(t\), so the "full-problem" solution \(\breve{\beta}^{\mathrm{full},t}\) also depends on \(t\). We establish the convergence of both \(\breve{\beta}^{\mathrm{full},t}\) and \(\breve{\beta}^{(-t)}\) uniformly in \(t=1,...,T\).
Being equipped with these two auxiliary non-convex estimators, we can bound \(R_{t}\) in the following scheme:
1. First, decompose \(R_{t}\) into two terms: \[R_{t} =\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}( \widetilde{\beta}_{j}-H_{1}^{\prime}\beta_{j})\] \[=\underbrace{\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt} \varepsilon_{jt}(\widetilde{\beta}_{j}-\breve{\beta}_{j}^{(-t)})}_{\coloneqq a} +\underbrace{\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}( \breve{\beta}_{j}^{(-t)}-H_{1}^{\prime}\beta_{j})}_{\coloneqq b}.\] (2.5)
2. \(\max_{t}\|b\|=o_{P}(1)\) can be shown relatively easily due to the genuine independence between \(\breve{\beta}^{(-t)}\) and \(\{\omega_{jt}\varepsilon_{jt}\}_{j\leq N}\), which is along the same lines as sample splitting. Importantly, it is crucial that \(\tau_{t}\) not depend on the observations of period \(t\). So the stopping time should be defined carefully, which is one of the main technical contributions of the paper.
3. In addition, \(\max_{t}\left\|a\right\|=o_{P}(1)\) follows from the decomposition
\[a=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}(\widetilde{\beta}_{j}-\breve{\beta}_{j}^{\text{full},t})+\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}(\breve{\beta}_{j}^{\text{full},t}-\breve{\beta}_{j}^{(-t)})\]
and the following two rationales.
    1. \(\breve{\beta}^{\text{full},t}\approx\breve{\beta}^{(-t)}\): their loss functions (2.3) and (2.4) are very similar and they share the same stopping point \(\tau_{t}\). Therefore, \(\max_{t}\left\|\breve{\beta}^{\text{full},t}-\breve{\beta}^{(-t)}\right\|\) is sufficiently small. Following the guidance of Chen et al. (2020), we apply mathematical induction.
    2. \(\widetilde{\beta}\approx\breve{\beta}^{\text{full},t}\): note that \(\breve{\beta}^{\text{full},t}\) is derived from the non-convex problem (2.3), while \(\widetilde{\beta}\) comes from the nuclear norm penalization (2.2). Although the loss functions (2.2) and (2.3) are seemingly distinct, their penalty terms are closely related in the sense that \[\left\|A\right\|_{*}=\inf_{B\in\mathbb{R}^{N\times K},F\in\mathbb{R}^{T\times K};BF^{\prime}=A}\Big\{\frac{1}{2}\left\|B\right\|_{F}^{2}+\frac{1}{2}\left\|F\right\|_{F}^{2}\Big\}.\] Hence, \(\max_{t}\left\|\widetilde{\beta}-\breve{\beta}^{\text{full},t}\right\|\) is sufficiently small. A technical innovation is that \(\breve{\beta}^{\text{full},t}\) depends on \(t\), so uniformity in \(t\) is crucially relevant.
Hence, we have \(\max_{t}\left\|R_{t}\right\|=o_{P}(1)\).
**Approach II**: Alternatively, we can follow the definition of the stopping point in Chen et al. (2019), which uses the full sample. And then, we correct their proof by showing that, although the leave-one-out estimator is not independent of the sample data in period \(t\), we can still obtain a uniform bound over iterations. Denote the stopping point from Chen et al. (2019) as \(\tau^{*}\). In lieu of \((B^{\text{full},\tau_{t}},B^{(-t),\tau_{t}})\), we use \((B^{\text{full},\tau^{*}},B^{(-t),\tau^{*}})\) as the solutions for (2.3) and (2.4), respectively.
Recall the decomposition (2.5). The analysis of term \(a\) is analogous to the previous case. Regarding term \(b\), we highlight that \(\breve{\beta}^{(-t)}\), which is \(B^{(-t),\tau^{*}}\), is not independent of the sample in period \(t\), i.e., \(\{\omega_{jt},\varepsilon_{jt}\}_{j\leq N}\), since the stopping point \(\tau^{*}\) depends on it. We will provide a uniform bound over iteration \(\tau\) and period \(t\) for term \(b:\)
\[\max_{t}\left\|b\right\| =\max_{t}\left\|\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt} \varepsilon_{jt}(\breve{\beta}_{j}^{(-t)}-H_{1}^{\prime}\beta_{j})\right\|= \max_{t}\left\|\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}(B_{ j}^{(-t),\tau^{*}}-H_{1}^{\prime}\beta_{j})\right\|\] \[\leq\max_{t}\max_{\tau}\left\|\frac{1}{\sqrt{N}}\sum_{j=1}^{N} \omega_{jt}\varepsilon_{jt}(B_{j}^{(-t),\tau}-H_{1}^{\prime}\beta_{j})\right\| =o_{P}(1).\]
Either way, we can successfully show the negligibility of \(R_{t}\) uniformly in \(t\) without resorting to
sample splitting. We highlight that the first approach is more natural in the sense that it automatically ensures the independence that we need for term \(b\). Our first approach, while technically more involved, is potentially more applicable to general machine learning inferences that rely on auxiliary leave-one-out estimators, because of the natural independence. In contrast, it is unclear whether the second approach is still applicable in other cases.
#### 2.3.2 Why is the auxiliary leave-one-out problem defined in this way?
It is natural to ask why we do not define the ALOO estimator more simply as the original estimator \(\widetilde{\beta}\) with the \(t\)-th column dropped from the data matrix in the optimization. The key difference between \(L^{(-t)}(B,F)\) in (2.4) and this "more natural dropping-\(t\)" loss is that the \(t\)-th column in the least squares part of \(L^{(-t)}(B,F)\) is not simply dropped, but is replaced by its expectation:
\[\mathbb{E}\|\Pi^{-\frac{1}{2}}\Omega\circ\left(BF^{\prime}-Y\right)\|_{F,t}^{2 }=\|BF^{\prime}-M^{*}\|_{F,t}^{2}+C\]
where the constant \(C\) does not depend on \((B,F)\). The reason for defining the ALOO loss function in this way is to gain "hypothetical efficiency", so that the ALOO estimator would be closer to the full-sample estimator.
It is easier to understand the issue using a simple example. Consider estimating the mean \(\mathbb{E}Y_{t}\) using iid data \(Y_{t}\). The full-sample estimator \(\widehat{\mu}\) is the solution to
\[\widehat{\mu}=\arg\min_{\mu}L(\mu),\quad\text{where }L(\mu)=\sum_{s=1}^{T}(Y_{s}- \mu)^{2}.\]
Now consider the ALOO version of this problem. Our definition of \(L^{(-t)}(\mu)\) is _not_ dropping \(Y_{t}\), but replacing \((Y_{t}-\mu)^{2}\) with its expectation:
\[\breve{\mu}^{(-t)}=\arg\min_{\mu}L^{(-t)}(\mu),\quad\text{where }L^{(-t)}(\mu)=\sum_{s \neq t}(Y_{s}-\mu)^{2}+\mathbb{E}(Y_{t}-\mu)^{2}.\]
The solution is then \(\breve{\mu}^{(-t)}=\frac{1}{T}(\sum_{s\neq t}Y_{s}+\mathbb{E}Y_{t})\). Straightforward calculations verify that \(\breve{\mu}^{(-t)}\) (although infeasible) is more efficient and "closer" to the full-sample average \(\widehat{\mu}\) than the naive dropping-\(t\) estimator \(\bar{Y}_{-t}:=\frac{1}{T-1}\sum_{s\neq t}Y_{s}\). For instance,
\[\frac{\operatorname{Var}(\breve{\mu}^{(-t)})}{\operatorname{Var}(\bar{Y}_{-t})}=\left(\frac{T-1}{T}\right)^{2}<1,\quad\frac{\mathbb{E}(\breve{\mu}^{(-t)}-\widehat{\mu})^{2}}{\mathbb{E}(\bar{Y}_{-t}-\widehat{\mu})^{2}}=\frac{T-1}{T}<1.\]
The definitions of \(L^{(-t)}(B,F)\) and \(L^{(-t)}(\mu)\) also follow the intuition of the EM algorithm, which imputes the missing data in the loss function with their conditional expectations before optimization,
rather than simply dropping the missing values.
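A quick numerical check of the two ratios above can be run as follows (our own illustration; the sample size, number of replications, and seed are arbitrary choices):

```
import numpy as np

rng = np.random.default_rng(0)
T, mu, reps = 20, 1.0, 200_000
Y = rng.normal(mu, 1.0, size=(reps, T))          # reps independent samples of length T

mu_full = Y.mean(axis=1)                          # full-sample average
mu_aloo = (Y[:, 1:].sum(axis=1) + mu) / T         # ALOO: replace Y_t (here t = 0) by its expectation
mu_drop = Y[:, 1:].mean(axis=1)                   # naive dropping-t average

print(mu_aloo.var() / mu_drop.var())                                          # close to ((T-1)/T)^2
print(((mu_aloo - mu_full) ** 2).mean() / ((mu_drop - mu_full) ** 2).mean())  # close to (T-1)/T
```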
#### 2.3.3 Singular vector estimation is unbiased
From Algorithm 1, we see that there is no explicit debiasing step. In fact, in terms of estimating the singular vector space, the singular vector estimator from the least squares estimation following the nuclear norm penalization, \(\widehat{F}_{t}\), is unbiased (up to a rotation).
To see this, note that the estimation of \(F_{t}\) has the following maximization problem:
\[\widehat{F}_{t}\coloneqq\operatorname*{arg\,max}_{f\in\mathbb{R}^{K}}Q_{t}(f, \widetilde{\beta})\]
where \(Q_{t}(f,B)=-N^{-1}\sum_{j=1}^{N}\omega_{jt}(y_{jt}-f^{\prime}b_{j})^{2}\), \(B=(b_{1},\ldots,b_{N})^{\prime}\) and \(b_{j}\) are \(K\) dimensional vectors. In this step, \(\beta\) is the nuisance parameter and \(F_{t}\) is the parameter of interest. By Taylor expansion, we have, for some invertible matrix \(A\),
\[\sqrt{N}(\widehat{F}_{t}-H_{1}^{-1}F_{t})=-\sqrt{N}A^{-1}\frac{\partial Q_{t}(H_{1}^{-1}F_{t},\beta H_{1})}{\partial f}-\underbrace{\sqrt{N}A^{-1}\frac{\partial^{2}Q_{t}(H_{1}^{-1}F_{t},\beta H_{1})}{\partial f\,\partial\text{vec}(B)^{\prime}}\,\text{vec}(\widetilde{\beta}-\beta H_{1})}_{d}+o_{P}(1). \tag{2.6}\]
The first term is the score, which leads to the asymptotic normality, and the second term represents the effect of the \(\beta\) estimation, which is subject to the shrinkage bias. The second term, while being the "usual bias" in a generic machine learning inference problem, can be shown to take the form:
\[d=\sqrt{N}\varphi H_{1}^{-1}F_{t}+o_{P}(1)\]
for some \(\varphi=o_{P}(1)\). It has the useful feature of lying in the space of \(F_{t}\). Making use of this fact, (2.6) can be rewritten as follows:
\[\sqrt{N}(\widehat{F}_{t}-H_{2}F_{t})=-\underbrace{\sqrt{N}A^{-1}\frac{ \partial Q_{t}(H_{1}^{-1}F_{t},\beta H_{1})}{\partial f}}_{\text{asymptotically normal}}+o_{P}(1)\]
by defining \(H_{2}\coloneqq(I_{K}+\varphi)H_{1}^{-1}\). Note that the non-negligible bias term in \(d\) is absorbed by the rotation matrix \(H_{2}\), and thus \(\widehat{F}_{t}\) unbiasedly estimates \(F_{t}\) up to this new rotation. Then, in Step 4 of Algorithm 1, \(\widehat{\beta}\), the least squares estimator using \(\widehat{F}\) as a regressor, unbiasedly estimates \(\beta_{i}\) up to the rotation since \(\widehat{F}_{t}\) now has only a higher-order bias. As a result, the product of the two estimates \(M_{it}\) unbiasedly:
\[\widehat{M}_{it}=\widehat{\beta}_{i}^{\prime}\widehat{F}_{t}\approx\beta_{i}^ {\prime}H_{2}^{-1}H_{2}F_{t}=M_{it}\]
which allows us to conduct inference successfully. This is how the two-step least squares procedure works.
## 3 Asymptotic Results
### Inferential theory
This section presents the inferential theory. We provide the asymptotic normality of the estimator of the group average of \(M_{it}\). Our assumptions allow the rank \(K\) to grow, but slowly. Recall the following notation:
\[h_{t}(\zeta_{i})=\sum_{r=1}^{K}\kappa_{t,r}\phi_{r}(\zeta_{i})+M_{it}^{R}= \beta_{i}^{\prime}F_{t}+M_{it}^{R},\]
where \(\beta_{i}=(\phi_{1}(\zeta_{i}),\ldots,\phi_{K}(\zeta_{i}))^{\prime}\) and \(F_{t}=(\kappa_{t,1},\ldots,\kappa_{t,K})^{\prime}\). Let \(S_{\beta}=N^{-1}\sum_{i=1}^{N}\beta_{i}\beta_{i}^{\prime},S_{F}=T^{-1}\sum_{s= 1}^{T}F_{s}F_{s}^{\prime},\) and \(Q=S_{\beta}^{1/2}S_{F}^{1/2}.\)
**Assumption 3.1** (Sieve representation).: _(i) \(\{h_{t}(\cdot)\}_{t\leq T}\) belong to ball \(\mathcal{H}\left(\mathcal{Z},\left\|\cdot\right\|_{L_{2}},C\right)\) inside a Hilbert space spanned by the basis \(\{\phi_{r}\}_{r\geq 1}\), with a uniform \(L_{2}\)-bound \(C\): \(\sup_{h\in\mathcal{H}(\mathcal{Z},\left\|\cdot\right\|_{L_{2}})}\left\|h\right\|\leq C,\) where \(\mathcal{Z}\) is the support of \(\zeta_{i}\). (ii) The sieve approximation error satisfies: For some \(\nu>0\), \(\max_{i,t}\left|M_{it}^{R}\right|\leq CK^{-\nu}\). (iii) For some \(C>0\), \(\max_{r\leq K}\sup_{\zeta}\left|\phi_{r}(\zeta)\right|<C\). In addition, there is \(\eta>0\) such that \(\psi_{\min}^{-1}\left(S_{\beta}\right)<\eta\) and \(\psi_{\min}^{-1}\left(S_{F}\right)<\eta\) with probability converging to 1. (iv) \((NT)^{-1}\sum_{i,t}h_{t}^{2}(\zeta_{i})=O_{P}(1)\). (v) There are constants \(\delta,g\geq 0\) such that \(\psi_{1}(Q)/\psi_{K}(Q)=O_{P}(K^{\delta})\), \(\min_{1\leq r\leq K-1}\psi_{r}(Q)-\psi_{r+1}(Q)\geq cK^{-g}\) for some constant \(c>0.\)_
First, we present some assumptions for the sieve representation. Assumption 3.1 (ii) is well satisfied with a large \(\nu\) if the functions \(\{h_{t}\left(\cdot\right)\}\) are sufficiently smooth. For example, consider \(h_{t}\) belonging to a Hölder class: for some \(a,b,C>0\), \(\left\{h:\|D^{b}h(x_{1})-D^{b}h(x_{2})\|\leq C\|x_{1}-x_{2}\|^{a}\right\}.\) In addition, suppose that we take a usual basis like polynomials, trigonometric polynomials, or B-splines. Then, \(\max_{i,t}\left|M_{it}^{R}\right|\leq CK^{-\nu}\), and \(\nu=2(a+b)/\text{dim}(\zeta_{i}).\) So, Assumption 3.1 (ii) is satisfied with a very large \(\nu\) if \(\{h_{t}\left(\cdot\right)\}\) are smooth. In addition, the first part of Assumption 3.1 (iii) can be satisfied if the basis is a bounded basis, like the trigonometric basis, or if \(\zeta_{i}\) has a compact support. Assumption 3.1 (iv) and (v) are not restrictive, and have been verified by Chernozhukov et al. (2021).
**Assumption 3.2** (DGP for \(\varepsilon_{it}\) and \(\omega_{it}\)).: _(i) Conditioning on \(\mathcal{M}\), \(\varepsilon_{it}\) is a zero-mean, sub-gaussian random variable such that \(\mathbb{E}[\varepsilon_{it}|\mathcal{M}]=0\), \(\mathbb{E}[\varepsilon_{it}^{2}|\mathcal{M}]=\sigma_{it}^{2}\leq\sigma^{2}\), \(\mathbb{E}[\exp(s\varepsilon_{it})|\mathcal{M}]\leq\exp(Cs^{2}\sigma^{2})\), \(\forall s\in\mathbb{R}\) for some
constant \(C>0\). We assume that \(\sigma^{2}\) is bounded above and \(\sigma^{2}_{it}\) are bounded away from zero. In addition, \(\varepsilon_{it}\) is independent across \(i\) and \(t\). (ii) \(\Omega\) is independent of \(\mathcal{E}\). Conditioning on \(\mathcal{M}\), \(\omega_{it}\) is independent across \(t\). In addition, \(\mathbb{E}[\omega_{it}|\mathcal{M}]=\mathbb{E}[\omega_{it}]=p_{i}\) where \(0<p_{\min}\leq p_{i}\leq p_{\max}\leq 1\). (iii) Let \(a_{t}\) be a column of either \(\Omega-\Pi\mathbf{1}_{N}\mathbf{1}_{T}^{\prime}\) or \(\Omega\circ\mathcal{E}.\) Then, \(\{a_{t}\}_{t\leq T}\) are independent sub-gaussian random vectors with \(\mathbb{E}[a_{t}]=0\); more specifically, there is \(C>0\) such that
\[\max_{t\leq T}\sup_{\|x\|=1}\mathbb{E}[\exp(sa_{t}^{\prime}x)]\leq\exp(s^{2}C ),\quad\forall s\in\mathbb{R}.\]
We allow heterogeneous observation probabilities across \(i\). This generalizes the homogeneous observation probability assumption that is typical in the matrix completion literature. The sub-gaussian assumption in Assumption 3.2 (iii) helps us bound \(\|\Omega\circ\mathcal{E}\|\) and \(\|\Omega-\Pi\mathbf{1}_{N}\mathbf{1}_{T}^{\prime}\|\).
While serial independence of the missing data indicators \(\omega_{it}\) is assumed, we allow cross-sectional dependence among them. In doing so, we assume a cluster structure in \(\{1,\ldots,N\}\), i.e., there is a family of nonempty disjoint clusters, \(\mathcal{C}_{1},\ldots,\mathcal{C}_{\rho}\), such that \(\cup_{g=1}^{\rho}\mathcal{C}_{g}=\{1,\ldots,N\}\). So we divide the units \(\{1,...,N\}\) into \(\rho\) disjoint clusters. In addition, denote the size of the largest cluster by \(\vartheta\). That is, \(\vartheta=\max_{g}|\mathcal{C}_{g}|_{o}\). We highlight that \(\vartheta\) is allowed to increase as \(N\) and \(T\) increase.
**Assumption 3.3** (Cross-sectional Dependence in \(\omega_{it}\)).: _Cross sectional units \(\omega_{it}\) are independent across clusters. Within the same cluster, arbitrary dependence is allowed, but overall, we require \(\max_{t}\max_{i}\sum_{j=1}^{N}|\mathrm{Cov}(\omega_{it},\omega_{jt}|\mathcal{M })|<C.\)_
Due to the cluster structure in Assumption 3.3 (i), we can construct a "leave-cluster-out" estimator \(\breve{\beta}^{\{-i\}}\) which is independent of the sample of unit \(i\). Similarly to the idea of (2.3) and (2.4), we can rule out the samples of the cluster that includes unit \(i\). The difference from (2.4) is that we identify all the units which are in the same cluster as unit \(i\) and replace their rows of the goodness of fit part by their conditional expectations.3 Together with the leave-one-out estimator \(\breve{\beta}^{(-t)}\), the leave-cluster-out estimator \(\breve{\beta}^{\{-i\}}\) plays a pivotal role in showing the solution of (2.2) is close to that of (2.3).
Footnote 3: For the formal definitions of the estimators, please refer to Section D of Appendix and Remark 1 in the section.
The parameter for the cluster size \(\vartheta\) is bounded by Assumption 3.4. For instance, in the case where \(N\asymp T\) and \(\{h_{t}(\cdot)\}_{t\leq T}\) are smooth enough, if we estimate the cross-sectional average of a certain period, the assumption requires \(\vartheta\approx o(\sqrt{N/\log N})\) since \(K\) is allowed to grow very slowly when \(\{h_{t}(\cdot)\}_{t\leq T}\) are smooth.
We are interested in making inference about group-averaged effects. Let \(\mathcal{G}\) be a particular group;
the object of interest is
\[\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in\mathcal{G}}M_{it}=\frac{1}{|\mathcal{G }|_{o}}\sum_{(i,t)\in\mathcal{G}}h_{t}(\zeta_{i}).\]
Here the group of interest is \(\mathcal{G}=\mathcal{I}\times\mathcal{T}\), where \(\mathcal{I}\subseteq\{1,\ldots,N\}\) and \(\mathcal{T}\subseteq\{1,\ldots,T\}\). We impose the following assumption on the rates of the parameters. Define a sequence \(\psi_{NT}\) as \(\psi_{NT}\asymp\sqrt{K^{-(2\delta+1)}\sum_{i=1}^{N}\sum_{t=1}^{T}h_{t}^{2}(\zeta_{i})}\). It is a lower bound of \(\psi_{\min}(\beta F^{\prime})\) and serves as the signal-strength parameter. Recall that \(K\) denotes the sieve dimension.
**Assumption 3.4** (Parameter size and signal-to-noise ratio).: _Let \(\gamma=\frac{p_{\max}}{p_{\min}}\) and \(\tilde{\vartheta}=\max\{\vartheta,\log N+\log T\}\). Then, we have_
\[\begin{array}{ll}(i)&\min\{|\mathcal{I}|_{o}^{\frac{1}{2}},|\mathcal{T}|_{o }^{\frac{1}{2}}\}\ \tilde{\vartheta}\eta^{3}\gamma^{4}K^{(4+2g+\frac{13}{2}\delta)}\max\{\sqrt{N \log N},\sqrt{T\log T}\}=o(p_{\min}^{\frac{3}{2}}\min\{N,T\}),\\ &\min\{|\mathcal{I}|_{o}^{\frac{1}{2}},|\mathcal{T}|_{o}^{\frac{1}{2}}\}\eta^ {\frac{1}{2}}\gamma^{3}K^{(1+g+\frac{7}{2}\delta)}\max\{N^{\frac{3}{2}},T^{ \frac{3}{2}}\}=o(p_{\min}^{\frac{3}{2}}\psi_{NT}^{2}),\\ (ii)&\min\{|\mathcal{I}|_{o}^{\frac{1}{2}},|\mathcal{T}|_{o}^{\frac{1}{2}}\} \eta^{\frac{3}{2}}\gamma^{2}\max\{\sqrt{N},\sqrt{T}\}=o(p_{\min}^{\frac{1}{2} }K^{(\nu-2\delta-\frac{3}{2})}),\\ &\min\{|\mathcal{I}|_{o}^{\frac{1}{2}},|\mathcal{T}|_{o}^{\frac{1}{2}}\}\eta^ {\frac{1}{2}}\gamma^{\frac{3}{2}}\max\{\sqrt{N},\sqrt{T}\}\sqrt{NT}=o(\psi_{ NT}p_{\min}^{\frac{1}{2}}K^{(\nu-\delta-\frac{1}{2})}).\end{array}\]
Assumption 3.4 (ii) is used to bound the sieve approximation error. For this condition to be satisfied, the smoothness of \(\{h_{t}(\cdot)\}_{t\leq T}\) is crucial. If \(\{h_{t}(\cdot)\}_{t\leq T}\) are smooth enough, \(\nu=2(a+b)/\text{dim}(\zeta_{i})\) can be arbitrarily large. Hence, Assumption 3.4 (ii) can be easily satisfied with a slowly increasing \(K\) as long as \(\{h_{t}(\cdot)\}_{t\leq T}\) is smooth.
Assumption 3.4 (i) collects the conditions on sample complexity and the signal-to-noise ratio. As long as \(K,\eta,\gamma\) are bounded or increase sufficiently slowly, it is satisfied. Note that, in cases like the cross-sectional average of a certain period \(t\) or the time average of a certain unit \(i\), \(\min\{|\mathcal{I}|_{o}^{\frac{1}{2}},|\mathcal{T}|_{o}^{\frac{1}{2}}\}=1\). In many interesting cases, \(\min\{|\mathcal{I}|_{o}^{\frac{1}{2}},|\mathcal{T}|_{o}^{\frac{1}{2}}\}\) is not that large. However, due to Assumption 3.4 (i), we cannot derive the inferential theory when both \(|\mathcal{I}|_{o}\) and \(|\mathcal{T}|_{o}\) are large, e.g., \(|\mathcal{I}|_{o}=N\) and \(|\mathcal{T}|_{o}=T\). In this case, the asymptotically normal part cannot dominate the other residual parts, since its convergence rate is roughly \(\frac{1}{\sqrt{N|\mathcal{T}|_{o}}}+\frac{1}{\sqrt{T|\mathcal{I}|_{o}}}\), while that of the residual term is similar to or greater than \(\frac{1}{\sqrt{NT}}\) regardless of the group size; for inference, at least one part of the asymptotically normal term should dominate the other residual terms. On the other hand, in terms of the convergence rate, large \(|\mathcal{I}|_{o}\) and \(|\mathcal{T}|_{o}\) are beneficial, as noted in Section B of the Appendix. In addition, for comparison with the conditions in the other low-rank literature, it is helpful to refer to Assumption C.2 in the Appendix, where we consider the general low-rank model.
Under the above assumptions, Theorem C.1 shows that the estimator of the group average of \(M_{it}\) attains asymptotic normality:
\[\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in\mathcal{G}}\widehat{M}_{it}-\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in \mathcal{G}}M_{it}\right)\stackrel{{ D}}{{\longrightarrow}} \mathcal{N}(0,1),\]
where the asymptotic variance \(\mathcal{V}_{\mathcal{G}}\) is given in the statement of Theorem C.1, and needs to be estimated. In this result, \(\mathcal{G}\) can consist of either multiple columns with multiple rows or solely a certain \((i,t)\), implying that we can conduct inference for one specific element of the matrix.
To make the estimation of \(\mathcal{V}_{\mathcal{G}}\) feasible, we consider the case of \(\mathbb{E}[\varepsilon_{it}^{2}|\mathcal{M}]=\sigma_{i}^{2}\). Let \(U_{i}^{\prime}\) be the \(i\)-th row of the left singular vector matrix of \(\beta F^{\prime}\) and \(V_{t}^{\prime}\) be the \(t\)-th row of the right singular vector matrix of \(\beta F^{\prime}\). The following theorem gives the feasible asymptotic normality.
**Theorem 3.1** (Feasible CLT).: _Suppose Assumptions 3.1 - 3.4 hold. In addition, suppose that \(\left\|\frac{\sqrt{N}}{|\mathcal{I}|_{o}}\sum_{i\in\mathcal{I}}U_{M^{*},i} \right\|\geq c\) and \(\left\|\frac{\sqrt{T}}{|\mathcal{T}|_{o}}\sum_{t\in\mathcal{T}}V_{M^{*},t} \right\|\geq c\) for some constant \(c>0\). Then we have_
\[\widehat{\mathcal{V}}_{\mathcal{G}}^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}| _{o}}\sum_{(i,t)\in\mathcal{G}}\widehat{M}_{it}-\frac{1}{|\mathcal{G}|_{o}} \sum_{(i,t)\in\mathcal{G}}M_{it}\right)\stackrel{{ D}}{{ \longrightarrow}}\mathcal{N}(0,1),\]
_where_
\[\widehat{\mathcal{V}}_{\mathcal{G}} =\frac{1}{|\mathcal{T}|_{o}^{2}}\sum_{t\in\mathcal{T}}\widehat{ \beta}_{\mathcal{I}}^{\prime}\left(\sum_{j=1}^{N}\omega_{jt}\widehat{\beta}_{ j}\widehat{\beta}_{j}^{\prime}\right)^{-1}\left(\sum_{j=1}^{N}\omega_{jt} \widehat{\sigma}_{j}^{2}\widehat{\beta}_{j}\widehat{\beta}_{j}^{\prime}\right) \left(\sum_{j=1}^{N}\omega_{jt}\widehat{\beta}_{j}\widehat{\beta}_{j}^{\prime }\right)^{-1}\widehat{\widehat{\beta}}_{\mathcal{I}}\] \[\quad+\frac{1}{|\mathcal{I}|_{o}^{2}}\sum_{i\in\mathcal{I}} \widehat{\sigma}_{i}^{2}\widehat{F}_{\mathcal{T}}^{\prime}\left(\sum_{s=1}^{T} \omega_{is}\widehat{F}_{s}\widehat{F}_{s}^{\prime}\right)^{-1}\widehat{\widehat {F}}_{\mathcal{T}},\]
\(\widehat{\widehat{\beta}}_{\mathcal{I}}=\frac{1}{|\mathcal{I}|_{o}}\sum_{a\in \mathcal{I}}\widehat{\beta}_{a}\)_, \(\widehat{\widehat{F}}_{\mathcal{T}}=\frac{1}{|\mathcal{T}|_{o}}\sum_{a\in \mathcal{T}}\widehat{F}_{a}\), \(\widehat{\sigma}_{i}^{2}=\frac{1}{|\mathcal{W}_{i}|_{o}}\sum_{t\in\mathcal{W} _{i}}\widehat{\varepsilon}_{it}^{2}\), \(\mathcal{W}_{i}=\{t:\omega_{it}=1\}\) and \(\widehat{\varepsilon}_{it}=y_{it}-\widehat{\beta}_{i}^{\prime}\widehat{F}_{t}\)._
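A sketch of how the plug-in variance \(\widehat{\mathcal{V}}_{\mathcal{G}}\) and the resulting confidence interval could be computed is given below. The interface is illustrative: `beta_hat` and `F_hat` are assumed to come from Algorithm 1, and `I_idx`, `T_idx` stand for the index sets \(\mathcal{I}\) and \(\mathcal{T}\).

```
import numpy as np

def group_average_ci(Y, Omega, beta_hat, F_hat, I_idx, T_idx, z=1.96):
    """Point estimate and confidence interval for the group average of M over the
    block I_idx x T_idx, using the variance estimator of Theorem 3.1."""
    N, T = Y.shape
    resid = Y - beta_hat @ F_hat.T
    sigma2 = np.array([(resid[i, Omega[i].astype(bool)] ** 2).mean() for i in range(N)])

    beta_bar = beta_hat[I_idx].mean(axis=0)                  # average loading over I
    F_bar = F_hat[T_idx].mean(axis=0)                        # average factor over T

    V1 = 0.0
    for t in T_idx:
        w = Omega[:, t]
        A = (beta_hat * w[:, None]).T @ beta_hat             # sum_j w_jt b_j b_j'
        B = (beta_hat * (w * sigma2)[:, None]).T @ beta_hat  # sum_j w_jt s_j^2 b_j b_j'
        Ainv = np.linalg.inv(A)
        V1 += beta_bar @ Ainv @ B @ Ainv @ beta_bar
    V1 /= len(T_idx) ** 2

    V2 = 0.0
    for i in I_idx:
        w = Omega[i, :]
        C = (F_hat * w[:, None]).T @ F_hat                   # sum_s w_is F_s F_s'
        V2 += sigma2[i] * F_bar @ np.linalg.inv(C) @ F_bar
    V2 /= len(I_idx) ** 2

    est = (beta_hat[I_idx] @ F_hat[T_idx].T).mean()          # group average of M_hat
    se = np.sqrt(V1 + V2)
    return est, (est - z * se, est + z * se)
```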
### Semiparametric efficiency
We now establish the semiparametric efficiency of our estimator, following an approach similar to Jankova and Van De Geer (2018). In order to make the calculations tractable, we suppose that \(\omega_{it}\sim\text{Bernoulli}(p)\) and \(\varepsilon_{it}\sim\mathcal{N}(0,\sigma^{2})\) are independent across \((i,t)\). We focus on the case of a block group, where both \(|\mathcal{I}|_{o}\) and \(|\mathcal{T}|_{o}\) are finite or growing slowly, satisfying \(N|\mathcal{T}|_{o}\ll T^{2}|\mathcal{I}|_{o}^{2}\) and \(T|\mathcal{I}|_{o}\ll N^{2}|\mathcal{T}|_{o}^{2}\). The other cases, such as cross-sectional and serial groups (e.g., \(|\mathcal{I}|_{o}=N\) and \(|\mathcal{T}|_{o}\) finite or slowly growing, or vice versa), can also be handled; the results are very similar to Theorem 4.2 in Chernozhukov et al. (2021), hence we omit them. The novelty of our efficiency theorem is that it covers the estimation of a general block group.
As specified in Theorem C.1, the asymptotic variance in this case is
\[\mathcal{V}_{\mathcal{G}}=\frac{\sigma^{2}}{|\mathcal{T}|_{o}^{2}}\sum_{t\in\mathcal{T}}\bar{\beta}_{\mathcal{I}}^{\prime}\left(\sum_{j=1}^{N}\omega_{jt}\beta_{j}\beta_{j}^{\prime}\right)^{-1}\bar{\beta}_{\mathcal{I}}+\frac{\sigma^{2}}{|\mathcal{I}|_{o}^{2}}\sum_{i\in\mathcal{I}}\bar{F}_{\mathcal{T}}^{\prime}\left(\sum_{s=1}^{T}\omega_{is}F_{s}F_{s}^{\prime}\right)^{-1}\bar{F}_{\mathcal{T}}=s_{*}^{2}(M,p,\sigma)+o(s_{*}^{2}(M,p,\sigma)),\]
\[s_{*}^{2}(M,p,\sigma):=\frac{\sigma^{2}}{p}\frac{1}{|\mathcal{T}|_{o}}\bar{\beta}_{\mathcal{I}}^{\prime}(\beta^{\prime}\beta)^{-1}\bar{\beta}_{\mathcal{I}}+\frac{\sigma^{2}}{p}\frac{1}{|\mathcal{I}|_{o}}\bar{F}_{\mathcal{T}}^{\prime}(F^{\prime}F)^{-1}\bar{F}_{\mathcal{T}}.\]
The following theorem shows that \(s_{*}^{2}(M,p,\sigma)\) is the asymptotic Cramer-Rao bound for asymptotically unbiased estimators.
**Theorem 3.2**.: _Suppose \(\omega_{it}\sim\mathrm{Bernoulli}(p)\) and \(\varepsilon_{it}\sim\mathcal{N}(0,\sigma^{2})\) are independent across \((i,t)\). Suppose also that \(N|\mathcal{T}|_{o}\ll T^{2}|\mathcal{I}|_{o}^{2}\) and \(T|\mathcal{I}|_{o}\ll N^{2}|\mathcal{T}|_{o}^{2}\). Define_
\[\mathcal{A}=\{(M,p,\sigma):M=M^{\star}+M^{R},M^{\star}=\beta F^{ \prime},\mathrm{rank}(M^{\star})\leq K,\text{ and Assumptions~{}\ref{eq:Cramer-Rao bound}-\ref{eq:Cramer-Rao bound} hold}\}.\]
_Let \(U(Y,\Omega)\) be an asymptotically unbiased estimator of \(|\mathcal{G}|^{-1}\sum_{(i,t)\in\mathcal{G}}M_{it}\) in that_
\[\mathbb{E}_{M,p,\sigma}U(Y,\Omega)-|\mathcal{G}|^{-1}\sum_{(i,t) \in\mathcal{G}}M_{it}=o(s_{*}(M,p,\sigma))\]
_where \(\mathbb{E}_{M,p,\sigma}\) denotes the expectation with respect to given \((M,p,\sigma)\). Then for any sequence of \((M,p,\sigma)\in\mathcal{A}\), we have_
\[\liminf_{N,T\rightarrow\infty}\frac{\mathbb{E}_{M,p,\sigma}\left[U (Y,\Omega)-|\mathcal{G}|^{-1}\sum_{(i,t)\in\mathcal{G}}M_{it}\right]^{2}}{s_{ *}^{2}(M,p,\sigma)}\geq 1,\]
_with probability converging to 1._
## 4 Applications to Heterogeneous Treatment Effect Estimation
In this section, we propose the inference procedure for treatment effects by utilizing the asymptotic results in Section 3. Following the causal potential outcome setting (e.g., Rubin (1974), Imbens and Rubin (2015)), we assume that for each of \(N\) units and \(T\) time periods, there exists a pair of potential outcomes, \(y_{it}^{(0)}\) and \(y_{it}^{(1)}\) where \(y_{it}^{(0)}\) denotes the potential outcome of the untreated situation and \(y_{it}^{(1)}\) denotes the potential outcome of the treated situation. Importantly, among potential outcomes \(y_{it}^{(0)}\) and \(y_{it}^{(1)}\), we can observe only one realized outcome \(y_{it}^{(\Upsilon_{it})}\) where \(\Upsilon_{it}=1\{\text{unit $i$ is treated at period $t$}\}\). Hence, we have two incomplete potential outcome matrices, \(Y^{(0)}\) and \(Y^{(1)}\), having missing components, and the problem of estimating the treatment effects can be cast as a matrix completion problem because of the
missing components in the two matrices.
Specifically, we consider the nonparametric model such that for each \(\iota\in\{0,1\}\),
\[y_{it}^{(\iota)}=M_{it}^{(\iota)}+\varepsilon_{it}=h_{t}^{(\iota)}(\zeta_{i})+ \varepsilon_{it},\]
where \(\varepsilon_{it}\) is the noise, \(\zeta_{i}\) is a vector of unit specific latent state variables. We regard \(h_{t}^{(\iota)}(\cdot)\) as a deterministic function while \(\zeta_{i}\) is a random vector. In the model, the treatment effect comes from the difference between the time-varying treatment function \(h_{t}^{(1)}(\cdot)\) and the control function \(h_{t}^{(0)}(\cdot)\). Let \(\omega_{it}^{(\iota)}=1\{y_{it}^{(\iota)}\text{ is observed}\}\). Then, \(\omega_{it}^{(1)}=\Upsilon_{it}\) and \(\omega_{it}^{(0)}=1-\Upsilon_{it}\) because we observe \(y_{it}^{(1)}\) when there is a treatment on \((i,t)\) and observe \(y_{it}^{(0)}\) when there is no treatment on \((i,t)\).
We suppose the following sieve representation for \(h_{t}^{(\iota)}\) :
\[h_{t}^{(\iota)}(\zeta_{i})=\sum_{r=1}^{K}\kappa_{t,r}^{(\iota)}\phi_{r}(\zeta _{i})+M_{it}^{R(\iota)},\qquad\iota\in\{0,1\}\]
where \(\kappa_{t,r}^{(\iota)}\) is the sieve coefficient, \(\phi_{r}(\zeta_{i})\) is the sieve transformation of \(\zeta_{i}\) using the basis function \(\phi_{r}(\cdot)\) and \(M_{it}^{R(\iota)}\) is the sieve approximation error. Then, by representing \(\sum_{r=1}^{K}\kappa_{t,r}^{(\iota)}\phi_{r}(\zeta_{i})\) as \(\beta_{i}^{\prime}F_{t}^{(\iota)}\) where \(\beta_{i}=[\phi_{1}(\zeta_{i}),\dots,\phi_{K}(\zeta_{i})]^{\prime}\) and \(F_{t}^{(\iota)}=[\kappa_{t,1}^{(\iota)},\dots,\kappa_{t,K}^{(\iota)}]^{\prime}\), \(h_{t}^{(\iota)}(\zeta_{i})\) can be successfully represented as the approximate factor structure.
We make inference about the average treatment effect for a particular group of interest \((i,t)\in\mathcal{G}\):
\[\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in\mathcal{G}}\Gamma_{it},\quad\text{ where }\Gamma_{it}=M_{it}^{(1)}-M_{it}^{(0)}.\]
The individual treatment effect \(\Gamma_{it}\) is estimated by \(\widehat{\Gamma}_{it}=\widehat{M}_{it}^{(1)}-\widehat{M}_{it}^{(0)}\) where \(\widehat{M}_{it}^{(0)}\) and \(\widehat{M}_{it}^{(1)}\) are estimators of \(M_{it}^{(0)}\) and \(M_{it}^{(1)}\), respectively. Hence, by implementing the estimation steps in Algorithm 1 for each \(\iota\in\{0,1\}\), we can derive the estimators for the group average of \(M_{it}^{(0)}\) and \(M_{it}^{(1)}\), and construct the average treatment effect estimator.
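A minimal sketch of this two-matrix procedure is given below; `complete_matrix` stands for any routine implementing Algorithm 1 (such as the ones sketched in Section 2), and the interface and names are assumptions of this illustration.

```
import numpy as np

def group_average_treatment_effect(Y, Upsilon, I_idx, T_idx, complete_matrix):
    """Estimate the average treatment effect over the group I_idx x T_idx by
    completing the treated and untreated potential-outcome matrices separately."""
    Omega1 = Upsilon                                   # y^(1) is observed where treated
    Omega0 = 1 - Upsilon                               # y^(0) is observed where untreated
    M1_hat = complete_matrix(Y, Omega1)                # Algorithm 1 on the treated entries
    M0_hat = complete_matrix(Y, Omega0)                # Algorithm 1 on the untreated entries
    Gamma_hat = M1_hat - M0_hat                        # entrywise treatment effect estimates
    return Gamma_hat[np.ix_(I_idx, T_idx)].mean()
```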
The notation is essentially the same as in Section 2; we simply attach the superscript \((\iota)\) to distinguish the pair of potential realizations.
**Theorem 4.1** (Feasible CLT).: _Suppose the assumptions of Theorem 3.1 hold for each \(\iota\in\{0,1\}\). With \(\mathbb{E}[\varepsilon_{it}^{2}|\mathcal{M}]=\sigma_{i}^{2}\), we have_
\[\left(\widehat{\mathcal{V}}_{\mathcal{G}}^{(0)}+\widehat{\mathcal{V}}_{ \mathcal{G}}^{(1)}\right)^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}|_{o}} \sum_{(i,t)\in\mathcal{G}}\widehat{\Gamma}_{it}-\frac{1}{|\mathcal{G}|_{o}} \sum_{(i,t)\in\mathcal{G}}\Gamma_{it}\right)\stackrel{{ D}}{{ \longrightarrow}}\mathcal{N}(0,1),\]
_where for each \(\iota\in\{0,1\}\),_
\[\widehat{\mathcal{V}}_{\mathcal{G}}^{(\iota)}=\frac{1}{|\mathcal{T}|_{o}^{2}}\sum_{t\in\mathcal{T}}\widehat{\beta}_{\mathcal{I}}^{(\iota)\prime}\left(\sum_{j=1}^{N}\omega_{jt}^{(\iota)}\widehat{\beta}_{j}^{(\iota)}\widehat{\beta}_{j}^{(\iota)\prime}\right)^{-1}\left(\sum_{j=1}^{N}\omega_{jt}^{(\iota)}\widehat{\sigma}_{j}^{(\iota)2}\widehat{\beta}_{j}^{(\iota)}\widehat{\beta}_{j}^{(\iota)\prime}\right)\left(\sum_{j=1}^{N}\omega_{jt}^{(\iota)}\widehat{\beta}_{j}^{(\iota)}\widehat{\beta}_{j}^{(\iota)\prime}\right)^{-1}\widehat{\beta}_{\mathcal{I}}^{(\iota)}+\frac{1}{|\mathcal{I}|_{o}^{2}}\sum_{i\in\mathcal{I}}\widehat{\sigma}_{i}^{(\iota)2}\widehat{F}_{\mathcal{T}}^{(\iota)\prime}\left(\sum_{s=1}^{T}\omega_{is}^{(\iota)}\widehat{F}_{s}^{(\iota)}\widehat{F}_{s}^{(\iota)\prime}\right)^{-1}\widehat{F}_{\mathcal{T}}^{(\iota)}.\]
_Here, \(\widehat{\beta}_{\mathcal{I}}^{(\iota)}=\frac{1}{|\mathcal{I}|_{o}}\sum_{a \in\mathcal{I}}\widehat{\beta}_{a}^{(\iota)}\), \(\widehat{F}_{\mathcal{T}}^{(\iota)}=\frac{1}{|\mathcal{T}|_{o}}\sum_{a\in \mathcal{T}}\widehat{F}_{a}^{(\iota)}\), \(\left(\widehat{\sigma}_{i}^{(\iota)}\right)^{2}=\frac{1}{|\mathcal{W}_{i}^{( \iota)}|_{o}}\sum_{t\in\mathcal{W}_{i}^{(\iota)}}\left(\widehat{\varepsilon}_{ it}^{(\iota)}\right)^{2}\), \(\mathcal{W}_{i}^{(\iota)}=\{t:\omega_{it}^{(\iota)}=1\}\) and \(\widehat{\varepsilon}_{it}^{(\iota)}=y_{it}^{(\iota)}-\widehat{\beta}_{i}^{( \iota)\prime}\widehat{F}_{t}^{(\iota)}\)._
## 5 Empirical study: Impact of the president on allocating the U.S. federal budget to the states
To illustrate the use of our inferential theory, we present an empirical study about the impact of the president on allocating the U.S. federal budget to the states. The allocation of the federal budget in the U.S. is the outcome of a complicated process involving diverse institutional participants. However, the president plays a particularly important role among the participants. Ex ante, the president is responsible for composing a proposal, which is submitted to Congress and initiates the actual authorization and appropriations processes. Ex post, once the budget has been approved, the president has a veto power that can be overridden only by a qualified majority equal to two-thirds of Congress. In addition, the president exerts additional control over the agency administrators who distribute federal funds.
There is a vast theoretical and empirical literature about the impact of the president on allocating the federal budget to the states (e.g., Cox and McCubbins (1986), Anderson and Tollison (1991), McCarty (2000), Larcinese et al. (2006), Berry et al. (2010)). In particular, Cox and McCubbins (1986) provide a theoretical model which supports the idea that more funds are allocated where the president has larger support because of the ideological relationship between voters and the president, and Larcinese et al. (2006) have found that states which supported the incumbent president in past presidential elections tend to receive more funds empirically. We contribute by showing the impact using our inferential theory for the heterogeneous treatment effect with a wider set of data.
Here, the hypothesis we want to test is whether federal funds are disproportionately targeted to states where the incumbent president is supported in the past presidential election. We use data on federal outlays for the 50 U.S. states with the District of Columbia from 1953 to 2018. The data are obtained from websites of the U.S. Census Bureau, NASBO (National Association of State Budget Officers), and
SSA (Social Security Administration).
Following Section 4, we set the treatment indicator as \(\Upsilon_{it}=1\) if state \(i\) supported the president of year \(t\) in the presidential election, and \(\Upsilon_{it}=0\) otherwise. That is, if the candidate whom state \(i\) supported in the previous presidential election is the same as the president in year \(t\), we consider it "treated" and otherwise "untreated". While applying our inferential procedure, we adopt the assumption that the treatment (whether state \(i\) supported the president in the election) is exogenously assigned, which is probably not realistic; we take our stand on this assumption in this study and do not claim a causal interpretation of the treatment effect.
In addition, for the outcome variable \(y_{it}\), we use the following ratio: \(y_{it}=(\tilde{y}_{it}/\sum_{i}\tilde{y}_{it})\times 100\), where \(\tilde{y}_{it}\) is the per-capita federal grant in state \(i\) at year \(t\). Note that the outcome variable \(y_{it}\) is a proportion, so that \(\sum_{i}y_{it}=100\) for all \(t\), which treats each period equally.
Our inferential theory allows novel approaches to study the following effects:
1. State Effects: the time average of the treatment effect of each state \(i\), i.e., \(T^{-1}\sum_{t=1}^{T}\Gamma_{it}\).
2. Region Effects: the time average of the treatment effect of each "Region", i.e., \[\frac{1}{|\text{Region}|_{0}}\sum_{i\in\text{Region}}\frac{1}{T}\sum_{t=1}^{T }\Gamma_{it}.\]
Figure 1: State effects and corresponding t-statistics
3. Loyal/Swing Effects: the time average of the treatment effect of "loyal" and "swing" states, e.g., \[\frac{1}{|\text{Loyal States}|_{0}}\sum_{i\in\text{Loyal States}}\frac{1}{T}\sum_{t=1}^{T}\Gamma_{it}.\quad\text{(see Table 1 for the definition of "Loyal States")}\]
4. President Effects: the average treatment effect of each president, i.e., \[\frac{1}{|\mathcal{T}|_{0}}\sum_{t\in\mathcal{T}}\frac{1}{N}\sum_{i=1}^{N} \Gamma_{it}.\quad\text{($\mathcal{T}$ denotes the period of a given President in Office)}\]
5. Party Effects: the average treatment effect of each Party, i.e., \[\frac{1}{|\mathcal{S}|_{0}}\sum_{t\in\mathcal{S}}\frac{1}{N}\sum_{i=1}^{N} \Gamma_{it}.\quad\text{($\mathcal{S}$ denotes the period of a given Party to which the President belonged)}\]
First, Figure 1 presents the State Effects and the corresponding t-statistics. The results suggest significantly positive treatment effects in most states. To investigate the source of these differences, we categorize states according to the number of times a state switched the party it supported in the presidential elections, as in Table 1. Together with Figure 1, it shows that most states with large t-statistics are "Loyal states", while the other states are generally "Swing states" or "Weak swing states". This suggests that the treatment effect is closely related to the loyalty of states to parties.
In addition, the results for the Region Effects in Figure 2 show that, at the 1% significance level, New England, Mid Atlantic, Plains, Rocky Mountain, and Far West have positive treatment effects, while Great Lakes, South East, and South West do not. Note that many states in Great Lakes, South East, and South West are "Swing states" or "Weak swing states." As we can see in Figure 2, "Swing states" do not have statistically significant positive treatment effects while "Loyal states" have significant positive treatment effects. This result is in line with the empirical study of Larcinese et al. (2006), which finds that states with loyal support tend to receive more funds while swing states are not rewarded. In addition, it is aligned with the assertion of Cox and McCubbins (1986) that the targeting of loyal voters can be seen as a safer investment compared to aiming for swing voters, so that risk-averse presidents
\begin{table}
\begin{tabular}{l l l} \hline Group & \# of swing & States \\ \hline Loyal states & 0\(\sim\)2 & DC, AK, ID, KS, NE, ND, OK, SD, UT, WY \\ \hline Weak loyal states & 3\(\sim\)4 & AZ, CA, CT, IL, ME, MA, MN, NJ, OR, SC, VT, VA, WA, IN, MI, MT, TX \\ \hline Weak swing states & 5\(\sim\)6 & AL, CO, DE, HI, MD, NV, NH, NM, NY, NC, RI, IA, MS, MO, PA, TN, WI \\ \hline Swing states & 7\(\sim\) & AR, GA, KY, WV, FL, OH, LA \\ \hline \end{tabular}
\end{table}
Table 1: Number of swings of each state
allocate more funds to loyal states.
Figure 3 shows the President Effects and the Party Effects. Despite some exceptions, there are no statistically significant positive treatment effects before Carter, while there are significant positive treatment effects after Reagan.
Figure 4: Test statistics for the average treatment effect before 1980 and after 1981
Figure 3: Test statistics for the President Effects and the Party Effects
Figure 2: Test statistics for the Region Effects and the Loyal/Swing Effects
Hence, there is a substantial difference between the periods before 1980 and after 1981: the tendency of incumbent presidents to reward states that supported them in presidential elections became significant after Reagan, that is, after the 1980s. This suggests that, since the 1980s, presidents have exerted more influence over the allocation of federal funds to reward their supporters. Consistent with this, starting from the 1980s all presidents have put forward proposals to introduce a presidential line-item veto in an attempt to increase the president's power to control federal spending.
Finally, when testing for the treatment effects of multiple states, the tests may be subject to multiple testing problems, with undesirable false discovery rates (FDR). We address this issue by adopting the procedure of Benjamini and Hochberg (1995) to control the FDR at 5%. We find that the list of states with significant treatment effects is unchanged.
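As a concrete illustration of the correction used here, a minimal NumPy sketch of the Benjamini and Hochberg (1995) step-up procedure is given below; the input array `pvals` (for example, two-sided p-values formed from the state-level t-statistics) is an illustrative assumption, not an object defined in the paper.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q.

    Returns a boolean array indicating which hypotheses are rejected.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                       # indices of p-values in ascending order
    thresholds = q * np.arange(1, m + 1) / m    # k * q / m for k = 1, ..., m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest k with p_(k) <= k * q / m
        reject[order[:k + 1]] = True            # reject the k smallest p-values
    return reject
```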
## 6 Simulation Study
This section examines the finite sample performance of the estimators. We first study the performance of the estimators of \(M_{it}\) and \(|\mathcal{G}|_{o}^{-1}\sum_{(i,t)\in\mathcal{G}}M_{it}\), and then the performance of the average treatment effect estimators. To save space, some results are relegated to the Appendix.
First of all, in order to assess the estimation quality of our estimator, we compare the Frobenius norms of the estimation errors for several existing estimators of \(M\). Our two-step least squares estimator is labelled "TLS". We also consider the debiased nuclear norm penalized estimators from Xia and Yuan (2021), "(Hetero) XY," and Chen et al. (2019), "(Hetero) CFMY," where "(Hetero)" indicates that they are modified to allow for heterogeneous observation probabilities. The comparison also includes the inverse probability weighting based estimator, "IPW," from Xiong and Pelger (2020), and the EM algorithm based estimator, "EM," from Jin et al. (2021). The plain nuclear norm penalized estimator, "Plain Nuclear," and the TLS estimator using sample splitting, "TLS with SS," are also considered. For the data-generating designs, we consider the following three models:
\[\bullet\ \text{Factor model:}\ \ y_{it}=\beta_{1,i}F_{1,t}+\beta_{2,i}F_{2,t}+ \varepsilon_{it},\ \ \ \ \text{where}\ \ \ \beta_{1,i},F_{1,t},\beta_{2,i},F_{2,t}\sim\mathcal{N}\left(\frac{1}{\sqrt{2}}, 1\right),\] \[\bullet\ \text{Nonparametric model 1:}\ \ y_{it}=h_{t}\left(\zeta_{i} \right)+\varepsilon_{it},\ \ \ \ \text{where}\ \ \ h_{t}(\zeta)=h_{t}^{\text{poly}}(\zeta)\coloneqq\sum_{r=1}^{\infty}\frac{ |U_{t,r}|}{r^{3}}\cdot\zeta^{r},\] \[\bullet\ \text{Nonparametric model 2:}\ \ y_{it}=h_{t}\left(\zeta_{i} \right)+\varepsilon_{it},\ \ \ \ \text{where}\ \ \ h_{t}(\zeta)=h_{t}^{\text{sine}}(\zeta)\coloneqq\sum_{r=1}^{\infty}\frac{ |U_{t,r}|}{r^{3}}\sin(r\zeta). \tag{6.1}\]
Here, \(U_{t,r}\) is generated from \(\mathcal{N}(2,1)\) and \(\zeta_{i}\) is generated from \(\text{Uniform}[0,1]\). In addition, \(\varepsilon_{it}\) is generated from the standard normal distribution independently across \(i\) and \(t\). The observation pattern follows a
heterogeneous missing-at-random mechanism where \(\omega_{it}\sim\text{Bernoulli}(p_{i})\) and \(p_{i}\) is generated from Uniform \([0.3,0.7]\).
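A minimal simulation of these three designs might look as follows; the infinite sieve sums are truncated at `R` terms (an implementation choice not specified in the text, harmless here because of the \(1/r^{3}\) decay), and all function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=100, T=60, model="sine", R=50):
    """Generate (Y, Omega, M) for one of the designs in (6.1).

    Only entries with Omega[i, t] == 1 are treated as observed.
    """
    if model == "factor":
        beta = rng.normal(1 / np.sqrt(2), 1, size=(N, 2))
        F = rng.normal(1 / np.sqrt(2), 1, size=(T, 2))
        M = beta @ F.T                                       # beta_1 F_1 + beta_2 F_2
    else:
        zeta = rng.uniform(0, 1, size=N)
        U = np.abs(rng.normal(2, 1, size=(T, R)))            # |U_{t,r}|
        r = np.arange(1, R + 1)
        if model == "poly":
            basis = zeta[:, None] ** r[None, :]              # zeta^r
        else:                                                # "sine"
            basis = np.sin(r[None, :] * zeta[:, None])       # sin(r * zeta)
        M = basis @ (U / r[None, :] ** 3).T                  # M_{it} = h_t(zeta_i)
    eps = rng.standard_normal((N, T))
    Y = M + eps
    p = rng.uniform(0.3, 0.7, size=N)                        # heterogeneous probabilities
    Omega = rng.binomial(1, p[:, None], size=(N, T))         # omega_{it} ~ Bernoulli(p_i)
    return Y, Omega, M
```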
Table 2 reports \(\|\widehat{M}-M\|_{F}/\sqrt{NT}\) averaged over 100 replications. We highlight that the TLS shows the best performance in almost all scenarios. Only the EM is comparable to ours, but it is much slower to compute since it requires multi-step iterations; in contrast, our proposed method does not iterate. Also, our method always outperforms the TLS with SS. The (Hetero) XY and (Hetero) CFMY are slightly worse than ours in this experiment. Lastly, both the IPW and the Plain Nuclear show the worst performances uniformly. The IPW, which is not statistically efficient, is only slightly better than the Plain Nuclear.
Additionally, to show the relative advantage of TLS over TLS with sample splitting, Table 3 reports \((\widehat{M}_{it}-M_{it})^{2}\) in the case where \(T\) is small. Here, we choose \((i,t)\) randomly and fix it across replications. As the table shows, when \(T\) is relatively small, the performance of TLS with sample splitting is much worse than that of TLS without sample splitting. In particular, the difference in performance is quite large in the factor model.
\begin{table}
\begin{tabular}{r r r r r r r r r r} \hline \hline \multicolumn{1}{c}{Model} & \multicolumn{3}{c}{Factor} & \multicolumn{3}{c}{Sine} & \multicolumn{3}{c}{Poly} & \\ \multicolumn{1}{c}{Sample Size} & \multicolumn{1}{c}{TLS} & \multicolumn{1}{c}{TLS w/ SS} & Ratio & TLS & TLS w/ SS & Ratio & TLS & TLS w/ SS & Ratio \\ \hline N=100,T=20 & 0.4665 & 2.8951 & 16.1\% & 0.1401 & 0.1702 & 82.3\% & 0.1272 & 0.1894 & 67.2\% \\ N=100,T=40 & 0.2162 & 0.2685 & 80.5\% & 0.0736 & 0.0819 & 89.9\% & 0.0807 & 0.0865 & 93.3\% \\ N=100,T=60 & 0.1111 & 0.1300 & 85.5\% & 0.0603 & 0.0637 & 94.7\% & 0.0538 & 0.0567 & 94.9\% \\ \hline \hline \end{tabular} NOTE: The values are the averaged \((\widehat{M}_{it}-M_{it})^{2}\) over 1,000 replications. “ Ratio” denotes the ratio between performances of TLS and TLS with SS. Here, we assume \(\omega_{it}\sim\text{Bernoulli}(0.5)\). When \(T=20\), the working sample size for the sample splitting is only 10, which leads to singularity issues in the inverse covariance matrix estimation. As a result, the estimator performs badly in this case.
\end{table}
Table 3: \((\widehat{M}_{it}-M_{it})^{2}\) Comparison between TLS and TLS with SS
Second, we study the finite sample distributions for standardized estimates defined as \((\widetilde{M}_{it}-M_{it})/se(\widetilde{M}_{it})\). For comparison, we report the results of the Plain Nuclear and the TLS with SS, in addition to the TLS. For the Plain Nuclear, we use the sample standard deviation obtained from the simulations for \(se(\widetilde{M}_{it})\) because the theoretical variance of it is unknown. For the TLS with SS, we construct the standard error following Chernozhukov et al. (2019). Here, we consider the nonparametric models in (6.1). Hereinafter, the number of replications is 1,000, and the sample size is \(N=T=200\).
Figure 5 plots the scaled histograms of the standardized estimates together with the standard normal density. As expected from the theory, the standardized TLS and the standardized TLS with SS fit the standard normal distribution well, while the standardized Plain Nuclear is biased. Without sample splitting, the TLS itself provides a good approximation to the standard normal distribution, so it can be used successfully for inference. The coverage probabilities of the confidence intervals in the Appendix show similar results.
Next, we study the finite sample performance of the average treatment effect estimator. Following Section 4, for each \(\iota\in\{0,1\}\), we generate the data from \(y_{it}^{(\iota)}=h_{t}^{(\iota)}(\zeta_{i})+\varepsilon_{it}\), where \(h_{t}^{(0)}(\zeta)=\sum_{r=1}^{\infty}|U_{t,r}|r^{-a}\sin(r\zeta)\), \(h_{t}^{(1)}(\zeta)=\sum_{r=1}^{\infty}(|U_{t,r}|+2)r^{-a}\sin(r\zeta)\). The power parameter \(a>1\) controls the decay speed of the sieve coefficients. The forms of the above functions and the treatment effect \(\Gamma_{it}=h_{t}^{(1)}(\zeta_{i})-h_{t}^{(0)}(\zeta_{i})\) are in Figure 6.
Here, \(\varepsilon_{it}\) and \(U_{t,r}\) are independently generated from the standard normal distribution and \(\zeta_{i}\) is independently generated from Uniform\([0,1]\). The treatment pattern follows \(\Upsilon_{it}\sim\text{Bernoulli}(p_{i}^{(1)})\) and \(p_{i}^{(1)}\sim\text{Uniform}[0.3,0.7]\).
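A short sketch of this treatment-effect design, again truncating the sieve sums at `R` terms and using illustrative names, is given below; the observed outcome combines the two potential outcomes according to the treatment indicator.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_treatment(N=200, T=200, a=3, R=50):
    """Generate potential outcomes, the treatment effect Gamma, and the treatment pattern."""
    zeta = rng.uniform(0, 1, size=N)
    U = rng.standard_normal((T, R))                          # U_{t,r} ~ N(0, 1)
    r = np.arange(1, R + 1)
    basis = np.sin(r[None, :] * zeta[:, None])               # sin(r * zeta_i), shape (N, R)
    h0 = basis @ (np.abs(U) / r ** a).T                      # h_t^{(0)}(zeta_i)
    h1 = basis @ ((np.abs(U) + 2) / r ** a).T                # h_t^{(1)}(zeta_i)
    Gamma = h1 - h0                                          # treatment effect
    Y0 = h0 + rng.standard_normal((N, T))
    Y1 = h1 + rng.standard_normal((N, T))
    p1 = rng.uniform(0.3, 0.7, size=N)
    Upsilon = rng.binomial(1, p1[:, None], size=(N, T))      # treatment indicator
    Y = np.where(Upsilon == 1, Y1, Y0)                       # observed outcome
    return Y, Upsilon, Gamma
```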
Figure 7 presents the scaled histograms of the standardized estimates of the average treatment effect
estimators for the groups \(\mathcal{G}_{1}=\{(i,t)\}\), \(\mathcal{G}_{2}=\{(j,t):1\leq j\leq N\}\), and \(\mathcal{G}_{3}=\{(i,s):1\leq s\leq T\}\). Here, the standard estimates are given as
\[\frac{\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in\mathcal{G}}\widehat{\Gamma}_{it }-\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in\mathcal{G}}\Gamma_{it}}{se\left( \frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in\mathcal{G}}\widehat{\Gamma}_{it} \right)}.\]
As expected in theory, the standardized estimates of the average treatment effect estimators of all groups approximately show the standard normal distribution. In addition, the coverage probabilities of the confidence interval in Appendix also show similar results.
## 7 Conclusion
This paper studies the inferential theory for low-rank matrices and, as an application, provides an inference method for the average treatment effect. Without the aid of sample splitting, our estimation procedure successfully resolves the problem of shrinkage bias, and the resulting estimator attains asymptotic normality. Unlike Chernozhukov et al. (2019, 2021), which exploit sample splitting, our estimation step is simple, and we avoid some undesirable properties of sample splitting. In addition, this paper allows for heterogeneous observation probabilities and uses inverse probability weighting to control their effect.
## 8 Supplement Materials
For the sake of brevity, some of the technical proofs are relegated to the Supplement.
## Appendix A Data-driven ways of choosing \(K\)
**Using a consistent estimator of \(K\)**
To choose the sieve dimension \(K\), we can use the following rank estimator of \(M^{\star}\) in the general approximate factor model: \(\widehat{K}=\sum_{r}1\{\psi_{r}(\widetilde{M})\geq((N+T)/2)^{\frac{11}{20}}\|\widetilde{M}\|^{\frac{1}{4}}\}\), where \(\psi_{r}(\widetilde{M})\) denotes the \(r\)th largest singular value of \(\widetilde{M}\). As noted in Claim F.1 (iii), it works as a consistent rank estimator for \(M^{\star}\) in the general approximate factor model. By the same token as in Footnote 5 of Bai (2003), our inferential theory for the general approximate factor model is not affected even if the rank \(K\) is unknown and estimated using this estimator, since \(P(\widehat{K}=K)\to 1\).
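A direct implementation of this rank estimator is straightforward; in the sketch below the spectral norm \(\|\widetilde{M}\|\) is taken to be the largest singular value, and `M_tilde` denotes the preliminary estimate as a NumPy array.

```python
import numpy as np

def estimate_rank(M_tilde):
    """Consistent rank estimator K_hat described above."""
    N, T = M_tilde.shape
    s = np.linalg.svd(M_tilde, compute_uv=False)               # singular values, descending
    threshold = ((N + T) / 2) ** (11 / 20) * s[0] ** (1 / 4)    # ((N+T)/2)^{11/20} * ||M_tilde||^{1/4}
    return int(np.sum(s >= threshold))
```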
**Cross-validation method**
When the matrix of interest \(M\) is approximated by a low-rank structure via a sieve representation, as in our main model, we can treat the sieve dimension \(K\) as a tuning parameter. Hence, we introduce a data-driven way of selecting \(K\) based on cross-validation, similar to the idea in Athey et al. (2021). From the observed sample \(\{(i,t):\omega_{it}=1\}\), we randomly create a subsample by using a Bernoulli process; namely, the subsample is \(\{(i,t):\omega_{it}X_{it}=1\}\), where \(\{X_{it}\}_{i\leq N,t\leq T}\) are independent Bernoulli random variables with success probability \(\sum_{i,t}\omega_{it}/NT\), independent of \(\{\omega_{it}\}_{i\leq N,t\leq T}\). This guarantees that \(\sum_{i,t}\omega_{it}/NT\approx\sum_{i,t}\omega_{it}X_{it}/\sum_{i,t}\omega_{it}\). We then pre-specify the set of candidates of \(K\) as \(\{K_{1},K_{2},\ldots\}\) and compute the estimates \(\widehat{M}_{K_{1}},\widehat{M}_{K_{2}},\ldots\), respectively, using only the
subsample. To compare their out-of-sample performance, we measure the mean squared error of them on \(\{(i,t):\omega_{it}(1-X_{it})=1\}\). For robustness, we repeat this process five times, creating different independent subsamples each time, to obtain five mean squared errors for each \(K\in\{K_{1},K_{2},\ldots\}\). The sieve dimension which minimizes the sum of five mean squared errors is chosen. In our simulation study, we use this method with \(\{2,4,6,8,10\}\) as the set of candidates of \(K\).
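The selection rule can be sketched as follows; `fit_M(Y, mask, K)` is a placeholder for the user's estimator of \(M\) with sieve dimension \(K\) (for example, the two-step least squares estimator restricted to entries with `mask == 1`), and all names are illustrative.

```python
import numpy as np

def choose_K(Y, Omega, fit_M, candidates=(2, 4, 6, 8, 10), n_splits=5, seed=0):
    """Cross-validated choice of the sieve dimension K."""
    rng = np.random.default_rng(seed)
    p_sub = Omega.mean()                                     # sum_{i,t} omega_{it} / (N T)
    scores = {K: 0.0 for K in candidates}
    for _ in range(n_splits):                                # five independent subsamples
        X = rng.binomial(1, p_sub, size=Omega.shape)         # independent of Omega
        train = Omega * X                                    # {(i,t): omega_{it} X_{it} = 1}
        test = Omega * (1 - X)                               # {(i,t): omega_{it}(1 - X_{it}) = 1}
        for K in candidates:
            M_hat = fit_M(Y, train, K)
            err = (Y - M_hat) * test                         # out-of-sample residuals
            scores[K] += (err ** 2).sum() / max(test.sum(), 1)
    return min(scores, key=scores.get)                       # K minimizing the summed MSE
```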
## Appendix B Finite sample convergence rate
For completeness, this section studies the finite sample convergence rate of our estimator. First, we provide several conditions. Here, \(a\lesssim b\) means \(|a|/|b|\leq C\) for some constant \(C>0\). \(a\ll b\) indicates \(|a|\leq c|b|\) for some sufficiently small constant \(c>0\).
**Assumption B.1** (Sieve representation).: _(i) \(\{h_{t}(\cdot)\}_{t\leq T}\) belong to ball \(\mathcal{H}\left(\mathcal{Z},\left\|\cdot\right\|_{L_{2}},C\right)\) inside a Hilbert space spanned by the basis \(\{\phi_{r}\}_{r\geq 1}\), with a uniform \(L_{2}\)-bound \(C\): \(\sup_{h\in\mathcal{H}(\mathcal{Z},\left\|\cdot\right\|_{L_{2}})}\|h\|\leq C,\) where \(\mathcal{Z}\) is the support of \(\zeta_{i}\). (ii) The sieve approximation error satisfies: For some \(\nu>0\), \(\max_{i,t}|M_{it}^{R}|\leq CK^{-\nu}\). (iii) For some \(C>0\), \(\max_{r\leq K}\sup_{\zeta}|\phi_{r}(\zeta)|<C\). In addition, there is \(\eta>0\) such that \(\psi_{\min}^{-1}\left(S_{\beta}\right)<\eta\) and \(\psi_{\min}^{-1}\left(S_{F}\right)<\eta\). (iv) \(\sum_{i,t}h_{t}^{2}(\zeta_{i})\lesssim NT\). (v) There are constants \(\delta,g\geq 0\) such that \(\psi_{1}(Q)/\psi_{K}(Q)\lesssim K^{\delta}\), \(\min_{1\leq r\leq K-1}\psi_{r}(Q)-\psi_{r+1}(Q)\geq cK^{-g}\) for some constant \(c>0\)._
This condition is basically the same as Assumption 3.1, and we modify some notation to be suitable for finite sample analysis.
**Assumption B.2** (Parameter size and signal-to-noise ratio).: _Let \(\gamma=\frac{p_{\max}}{p_{\min}}\) and \(\tilde{\vartheta}=\max\{\vartheta,\log N+\log T\}\). Then, we have_
\[(i) \tilde{\theta}\eta^{\frac{3}{2}}\gamma^{\frac{5}{2}}K^{(2+2g+ \frac{9}{2}\delta)}\max\{\sqrt{N\log N},\sqrt{T\log T}\}\ll p_{\min}^{\frac{1 }{2}}\min\{N,T\},\] \[\gamma^{\frac{3}{2}}K^{(g+\frac{3}{2}\delta)}\max\{N,T\}\ll p_{ \min}^{\frac{1}{2}}\min\{\sqrt{N\log N},\sqrt{T\log T}\}\psi_{NT},\] \[(ii) \min\{|\mathcal{I}|_{o}^{\frac{1}{2}},|\mathcal{I}|_{o}^{\frac{1} {2}}\}\max\{\sqrt{N},\sqrt{T}\}\ll p_{\min}^{\frac{1}{2}}K^{(\nu-\frac{1}{2}-2 \delta)},\] \[\min\{|\mathcal{I}|_{o}^{\frac{1}{2}},|\mathcal{I}|_{o}^{\frac{1} {2}}\}\max\{\sqrt{N},\sqrt{T}\}\sqrt{NT}\ll\gamma^{\frac{1}{2}}\psi_{NT}K^{v}.\]
The above condition is weaker than the condition for asymptotic normality (Assumption 3.4). For example, unlike Assumption 3.4 (i), Assumption B.2 (i) does not restrict the size of the group of interest, \(\min\{|\mathcal{I}|_{o},|\mathcal{T}|_{o}\}\). Hence, we can deal with the case where \(|\mathcal{I}|_{o}=N\) and \(|\mathcal{T}|_{o}=T\). In addition, it allows for a weaker signal-to-noise ratio than that of Assumption 3.4.
**Proposition B.1**.: _Suppose Assumptions 3.2, 3.3, B.1, and B.2. Then, with probability at least \(1-O(\min\{N^{-3},T^{-3}\})\), we have_
\[\left\|\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in\mathcal{G}} \widehat{M}_{it}-\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t)\in\mathcal{G}}M_{it} \right\| \leq C\left(\frac{\sigma\eta^{\frac{1}{2}}K^{\frac{1}{2}}\max\{ \sqrt{\log N},\sqrt{\log T}\}}{p_{\min}^{\frac{1}{2}}\sqrt{N|\mathcal{T}|_{o}} }+\frac{\sigma\eta^{\frac{1}{2}}K^{\frac{1}{2}}\max\{\sqrt{\log N},\sqrt{\log T }\}}{p_{\min}^{\frac{1}{2}}\sqrt{T|\mathcal{I}|_{o}}}\right.\] \[\left.+\frac{\sigma\tilde{\vartheta}\gamma^{\frac{7}{2}}K^{(4+2g+ \frac{13}{2}\delta)}\eta^{3}\max\{\log N,\log T\}}{p_{\min}^{\frac{3}{2}}\min \{N,T\}}+\frac{\sigma^{3}\gamma^{2}K^{(\frac{7}{2}\delta+g+1)}\eta^{\frac{1} {2}}\max\{N,T\}}{p_{\min}^{2}\psi_{NT}^{2}}\right)\]
_for some constant \(C>0\)._
The first two terms represent the asymptotically normal distribution parts, while the last two terms are the residual parts related to the estimation errors of \(\beta_{i}\) and \(f_{t}\). If we ignore some small parameters and logarithmic terms, the convergence rate of the first two terms is reduced to
\[\frac{1}{\sqrt{N|\mathcal{T}|_{o}}}+\frac{1}{\sqrt{T|\mathcal{I}|_{o}}}.\]
However, if both \(|\mathcal{I}|_{o}\) and \(|\mathcal{T}|_{o}\) are large, as in the case where \(|\mathcal{I}|_{o}=N\) and \(|\mathcal{T}|_{o}=T\), the asymptotically normal parts cannot dominate the residual parts. Thus, we are unable to derive the inferential theory in this case. For inference, at least one part of the asymptotically normal terms should dominate other residual terms. On the other hand, in terms of the convergence rate, the large sizes of \(|\mathcal{I}|_{o}\) and \(|\mathcal{T}|_{o}\) are beneficial.
## Appendix C Inferential theory for the general approximate factor model
This section provides assumptions for the asymptotic normality of the estimator of the group average of \(M_{it}\) for the general approximate factor model of the form \(Y=M+\mathcal{E}\), where \(M=M^{\star}+M^{R}\) and \(rank(M^{\star})=K\). For this, we define some additional notation. The condition number of \(M^{\star}\) is defined as \(q\coloneqq\psi_{\max}(M^{\star})/\psi_{\min}(M^{\star})\). Define \(\bar{c}=\min_{1\leq r\leq K+1}\left|c_{r-1}^{2}-c_{r}^{2}\right|\), where \(c_{r}\coloneqq\psi_{r}(M^{\star})/\psi_{\min}(M^{\star})\), and \(c_{\rm inv}\coloneqq 1/\bar{c}\).4
Footnote 4: We set \(c_{0}\coloneqq\infty\). Note that \(\psi_{r}=0\) for \(r>K\), and that \(c_{1}^{2}=q^{2}\geq c_{r}^{2}\geq c_{K}^{2}=1\) for all \(1\leq r\leq K\). \(\bar{c}\) is always smaller than 1 since \(c_{K}^{2}-c_{K+1}^{2}=1\). Hence, \(c_{\rm inv}\geq 1\). We allow \(c_{\rm inv}\) to increase slowly as \(N\) and \(T\) increase.
**Assumption C.1** (Incoherence).: _The matrix \(M^{\star}\) satisfies \(\mu\)-incoherence condition. That is, \(\left\|U_{M^{\star}}\right\|_{2,\infty}\leq\sqrt{\frac{\mu}{N}}\left\|U_{M^{ \star}}\right\|_{F}=\sqrt{\frac{\mu K}{N}}\) and \(\left\|V_{M^{\star}}\right\|_{2,\infty}\leq\sqrt{\frac{\mu}{T}}\left\|V_{M^{ \star}}\right\|_{F}=\sqrt{\frac{\mu K}{T}}\) with probability converging to 1. Here, \(\mu\) is allowed to increase as \(N,T\) increase._
**Assumption C.2** (Parameters size).: _Let \(\gamma=\frac{p_{\max}}{p_{\min}}\) and \(\tilde{\vartheta}=\max\{\vartheta,\log N+\log T\}\). Then, we have_
1. \(\min\{|\mathcal{I}|_{o}^{1/2},|\mathcal{T}|_{o}^{1/2}\}\tilde{\vartheta}\tilde{c} _{\mathrm{inv}}q^{\frac{15}{2}}\mu^{3}K^{4}\gamma^{\frac{7}{2}}\max\{\sqrt{N \log N},\sqrt{T\log T}\}=o_{P}(p_{\min}\min\{N,T\}),\)__
2. \(\min\{|\mathcal{I}|_{o}^{1/2},|\mathcal{T}|_{o}^{1/2}\}\tilde{\vartheta}\tilde{ c}_{\mathrm{inv}}^{2}q^{7}\mu^{\frac{5}{2}}k^{\frac{7}{2}}\gamma^{4}\max\{N \sqrt{\log N},T\sqrt{\log T}\}=o_{P}(\psi_{\min}p_{\min}^{\frac{3}{2}}\min\{ \sqrt{N},\sqrt{T}\}),\)__
3. \(\min\{|\mathcal{I}|_{o}^{1/2},|\mathcal{T}|_{o}^{1/2}\}\vartheta c_{\mathrm{ inv}}^{2}q^{6}\mu^{2}K^{\frac{7}{2}}\gamma^{\frac{7}{2}}\max\{N^{\frac{3}{2}} \sqrt{\log N},T^{\frac{3}{2}}\sqrt{\log T}\}=o_{P}(\psi_{\min}^{2}p_{\min}),\)__
4. \(\min\{|\mathcal{I}|_{o}^{1/2},|\mathcal{T}|_{o}^{1/2}\}c_{\mathrm{inv}}q^{ \frac{7}{2}}\mu^{\frac{1}{2}}K\gamma^{3}\max\{N^{2},T^{2}\}\min\{\sqrt{N}, \sqrt{T}\}=o_{P}(\psi_{\min}^{3}p_{\min}^{\frac{3}{2}}).\)__
**Assumption C.3** (Low-rank approximation error \(M^{R}\)).: _The low-rank approximation error \(M^{R}\) satisfies the following condition:_
\[\max_{i,t}\left|M^{R}_{it}\right|= o_{P}\left(\frac{p_{\min}^{\frac{5}{2}}}{\min\{|\mathcal{I}|_{o}^{1/ 2},|\mathcal{T}|_{o}^{1/2}\}p_{\max}^{2}q^{2}\mu^{\frac{3}{2}}K^{\frac{3}{2}} \max\{\sqrt{N},\sqrt{T}\}}\right.\] \[\left.+\frac{\psi_{\min}p_{\min}^{2}}{\min\{|\mathcal{I}|_{o}^{1/ 2},|\mathcal{T}|_{o}^{1/2}\}p_{\max}^{\frac{3}{2}}q\mu^{\frac{1}{2}}K^{\frac{ 1}{2}}\max\{\sqrt{N},\sqrt{T}\}\sqrt{NT}}\right)\]
Then, the estimator of the group average of \(M_{it}\) is asymptotically normal, as stated below.
**Theorem C.1**.: _Suppose Assumptions 3.2, 3.3 and C.1-C.3 hold. In addition, suppose that \(\left\|\frac{\sqrt{N}}{|\mathcal{I}|_{o}}\sum_{i\in\mathcal{I}}U_{M^{*},i} \right\|\geq c\) and \(\left\|\frac{\sqrt{T}}{|\mathcal{T}|_{o}}\sum_{t\in\mathcal{T}}V_{M^{*},t} \right\|\geq c\) for some constant \(c>0\). Then,_
\[\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}|_{o}}\sum _{(i,t)\in\mathcal{G}}\widehat{M}_{it}-\frac{1}{|\mathcal{G}|_{o}}\sum_{(i,t) \in\mathcal{G}}M_{it}\right)\overset{D}{\longrightarrow}\mathcal{N}(0,1),\]
_where \(\quad\mathcal{V}_{\mathcal{G}}=\frac{1}{|\mathcal{T}|_{o}^{2}}\sum_{t\in\mathcal{T}}\bar{\beta}_{\mathcal{I}}^{\prime}\left(\sum_{j=1}^{N}\omega_{jt}\beta_{j}\beta_{j}^{\prime}\right)^{-1}\left(\sum_{j=1}^{N}\omega_{jt}\sigma_{jt}^{2}\beta_{j}\beta_{j}^{\prime}\right)\left(\sum_{j=1}^{N}\omega_{jt}\beta_{j}\beta_{j}^{\prime}\right)^{-1}\bar{\beta}_{\mathcal{I}}\)_
\[+\frac{1}{|\mathcal{I}|_{o}^{2}}\sum_{i\in\mathcal{I}}\bar{F}_{\mathcal{T}}^{ \prime}\left(\sum_{s=1}^{T}\omega_{is}F_{s}F_{s}^{\prime}\right)^{-1}\left( \sum_{s=1}^{T}\omega_{is}\sigma_{is}^{2}F_{s}F_{s}^{\prime}\right)\left(\sum_{ s=1}^{T}\omega_{is}F_{s}F_{s}^{\prime}\right)^{-1}\bar{F}_{\mathcal{T}},\]
\(\bar{\beta}_{\mathcal{I}}=\frac{1}{|\mathcal{I}|_{o}}\sum_{i\in\mathcal{I}} \beta_{i}\)_, \(\bar{F}_{\mathcal{T}}=\frac{1}{|\mathcal{T}|_{o}}\sum_{s\in\mathcal{T}}F_{s}\). In addition, Assumptions C.1 - C.3 are satisfied under Assumptions 3.1 - 3.4 by setting \(\mu=C\eta\) for some constant \(C>0\)._
In fact, Assumptions C.1 - C.3 are verified by Lemma F.1.
**Theorem C.2** (Feasible CLT).: _Under the assumptions of Theorem C.1, we have_
\[\widehat{\mathcal{V}}_{\mathcal{G}}^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}|_ {o}}\sum_{(i,t)\in\mathcal{G}}\widehat{M}_{it}-\frac{1}{|\mathcal{G}|_{o}} \sum_{(i,t)\in\mathcal{G}}M_{it}\right)\overset{D}{\longrightarrow}\mathcal{N} (0,1),\]
_where \(\widehat{\mathcal{V}}_{\mathcal{G}}\) is the same as the one in Theorem 3.1._
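For concreteness, the variance \(\mathcal{V}_{\mathcal{G}}\) above can be evaluated by the following sketch once \(\beta\), \(F\), the observation indicators, and the noise variances are in hand; for the feasible statistic \(\widehat{\mathcal{V}}_{\mathcal{G}}\), the estimated counterparts from Theorem 3.1 would be plugged in. The function and variable names are illustrative.

```python
import numpy as np

def variance_group_average(beta, F, Omega, sigma2, I_idx, T_idx):
    """Plug-in evaluation of V_G for the group G = I_idx x T_idx.

    beta: (N, K) loadings, F: (T, K) factors, Omega: (N, T) observation
    indicators, sigma2: (N, T) noise variances.
    """
    beta_bar = beta[I_idx].mean(axis=0)                      # bar beta_I
    F_bar = F[T_idx].mean(axis=0)                            # bar F_T
    term1 = 0.0
    for t in T_idx:
        w = Omega[:, t]
        A = (beta * w[:, None]).T @ beta                     # sum_j w_jt beta_j beta_j'
        S = (beta * (w * sigma2[:, t])[:, None]).T @ beta    # sum_j w_jt sigma_jt^2 beta_j beta_j'
        Ainv = np.linalg.inv(A)
        term1 += beta_bar @ Ainv @ S @ Ainv @ beta_bar
    term1 /= len(T_idx) ** 2
    term2 = 0.0
    for i in I_idx:
        w = Omega[i]
        A = (F * w[:, None]).T @ F                           # sum_s w_is F_s F_s'
        S = (F * (w * sigma2[i])[:, None]).T @ F             # sum_s w_is sigma_is^2 F_s F_s'
        Ainv = np.linalg.inv(A)
        term2 += F_bar @ Ainv @ S @ Ainv @ F_bar
    term2 /= len(I_idx) ** 2
    return term1 + term2
```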
## Appendix D Formal definitions of the non-convex estimator and the leave-one-out estimator
Here, we introduce formal definitions of the non-convex optimization estimator \((\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\) and the leave-one-out estimator \((\breve{W}^{(l)},\breve{Z}^{(l)})\) where \(1\leq l\leq N+T\). We start with defining the following two loss functions:
\[f^{\text{infs}}(w,z)\coloneqq\frac{1}{2}\|\Pi^{-\frac{1}{2}} \mathcal{P}_{\Omega}\left(wz^{\prime}-Y\right)\|_{F}^{2}+\frac{\lambda}{2}\|w \|_{F}^{2}+\frac{\lambda}{2}\|z\|_{F}^{2},\] (D.1) \[f^{\text{infs},(l)}(w,z)\] (D.2) \[\coloneqq\begin{cases}\frac{1}{2}\left\|\Pi^{-1/2}\mathcal{P}_{ \Omega_{-l,\cdot}}(wz^{\prime}-Y)\right\|_{F}^{2}+\frac{1}{2}\left\|\mathcal{P }_{l,\cdot}(wz^{\prime}-M^{\star})\right\|_{F}^{2}+\frac{\lambda}{2}\left\|w \right\|_{F}^{2}+\frac{\lambda}{2}\left\|z\right\|_{F}^{2},\quad\text{if }1 \leq l\leq N,\\ \frac{1}{2}\left\|\Pi^{-1/2}\mathcal{P}_{\Omega_{\cdot,-(l-N)}}(wz^{\prime}- Y)\right\|_{F}^{2}+\frac{1}{2}\left\|\mathcal{P}_{\cdot,(l-N)}(wz^{\prime}-M^{ \star})\right\|_{F}^{2}+\frac{\lambda}{2}\left\|w\right\|_{F}^{2}+\frac{ \lambda}{2}\left\|z\right\|_{F}^{2},\\ \text{if }N+1\leq l\leq N+T,\end{cases}\]
where \(w\) and \(z\) are \(N\times K\) and \(T\times K\) matrices, respectively. The loss function (D.1) is for the non-convex optimization estimator \((\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\) and the loss function (D.2) is for the leave-one-out estimator \((\breve{W}^{(l)},\breve{Z}^{(l)})\). In the loss function (D.2), we use the following definitions. Let \(\mathcal{C}_{g(i)}\) be the cluster to which unit \(i\) belongs. For each \(N\times T\) matrix \(D\), let \(\mathcal{P}_{\Omega}(D)=\Omega\circ D\). Also, for each \(N\times T\) matrix \(D\) and for each \(1\leq l\leq N\), let \(\mathcal{P}_{\Omega_{-l,\cdot}}(D)\coloneqq\Omega_{-l,\cdot}\circ D\) where \(\Omega_{-l,\cdot}\coloneqq[\omega_{js}1\{j\notin\mathcal{C}_{g(l)}\}]_{N\times T}\), and \(\mathcal{P}_{l,\cdot}(D)\coloneqq E_{l,\cdot}\circ D\) where \(E_{l,\cdot}\coloneqq[1\{j\in\mathcal{C}_{g(l)}\}]_{N\times T}\). Roughly speaking, \(f^{\text{infs},(l)}\) replaces \(\{p_{j}^{-1}\omega_{js},y_{js}\}_{j\in\mathcal{C}_{g(l)},s\leq T}\) in \(f^{\text{infs}}\) with its (approximate) population mean \(\{1,M_{js}^{\star}\}_{j\in\mathcal{C}_{g(l)},s\leq T}\). Hence, the leave-one-out estimator constructed from the loss function \(f^{\text{infs},(l)}\) can be independent of \(\{\omega_{ls},\varepsilon_{ls}\}_{s\leq T}\) because \(f^{\text{infs},(l)}\) excludes \(\{\omega_{js},\varepsilon_{js}\}_{j\in\mathcal{C}_{g(l)},s\leq T}\), which belong to the cluster containing unit \(l\).
On the other hand, for each \(N+1\leq l\leq N+T\), we define \(\mathcal{P}_{\Omega_{\cdot,-(l-N)}}(D)\coloneqq\Omega_{\cdot,-(l-N)}\circ D\) where \(\Omega_{\cdot,-(l-N)}\coloneqq[\omega_{js}1\{s\neq l-N\}]_{N\times T}\), and \(\mathcal{P}_{\cdot,(l-N)}(D)\coloneqq E_{\cdot,(l-N)}\circ D\) where \(E_{\cdot,(l-N)}\coloneqq[1\{s=l-N\}]_{N\times T}\). In this case, \(f^{\text{infs},(l)}\) changes \(\{p_{j}^{-1}\omega_{js},y_{js}\}_{j\leq N,s=l-N}\) in \(f^{\text{infs}}\) to \(\{1,M_{js}^{\star}\}_{j\leq N,s=l-N}\). So, the leave-one-out estimator constructed from \(f^{\text{infs},(l)}\) is independent of \(\{\omega_{j,(l-N)},\varepsilon_{j,(l-N)}\}_{j\leq N}\) because \(f^{\text{infs},(l)}\) excludes \(\{\omega_{j,(l-N)},\varepsilon_{j,(l-N)}\}_{j\leq N}\) and \(\omega_{js}\), \(\varepsilon_{js}\) are independent across time.
To define the gradient descent iterates, we denote the singular value decomposition (SVD) of \(M^{\star}\) by \(U_{M^{\star}}D_{M^{\star}}V_{M^{\star}}^{\prime}\) where \(U_{M^{\star}}^{\prime}U_{M^{\star}}=V_{M^{\star}}^{\prime}V_{M^{\star}}=I_{K}\). \(D_{M^{\star}}\) is a \(K\times K\) diagonal matrix with singular values in descending order, i.e., \(D_{M^{\star}}=\text{diag}(\psi_{1},\ldots,\psi_{K})\) where \(\psi_{\max}=\psi_{1}>\cdots>\psi_{K}=\psi_{\min}>0\). Then,
based on (D.1), we define the following gradient descent iterates:
\[\begin{bmatrix}W^{\tau+1}\\ Z^{\tau+1}\end{bmatrix}=\begin{bmatrix}W^{\tau}-\eta\nabla_{W}f^{\text{infs}}(W^{ \tau},Z^{\tau})\\ Z^{\tau}-\eta\nabla_{Z}f^{\text{infs}}(W^{\tau},Z^{\tau})\end{bmatrix}\] (D.3)
where \(W^{0}=W\coloneqq U_{M^{*}}D_{M^{*}}^{\frac{1}{2}}\), \(Z^{0}=Z\coloneqq V_{M^{*}}D_{M^{*}}^{\frac{1}{2}}\), \(\tau=0,1,\ldots,\tau_{0}-1\), and \(\tau_{0}=\max\{N^{18},T^{18}\}\). Here, \(\eta>0\) is the step size. Similarly, for (D.2), we define
\[\begin{bmatrix}W^{\tau+1,(l)}\\ Z^{\tau+1,(l)}\end{bmatrix}=\begin{bmatrix}W^{\tau,(l)}-\eta\nabla_{W}f^{ \text{infs},(l)}(W^{\tau,(l)},Z^{\tau,(l)})\\ Z^{\tau,(l)}-\eta\nabla_{Z}f^{\text{infs},(l)}(W^{\tau,(l)},Z^{\tau,(l)})\end{bmatrix}\] (D.4)
where \(W^{0,(l)}=W\), \(Z^{0,(l)}=Z\). Note that the gradient descent iterates in (D.3) and (D.4) cannot be feasibly computed because the initial values (\(W\), \(Z\)), the missing probability (\(\Pi\)), and the cluster structure are unknown. However, it does not cause any problem in the paper since we do not need to actually compute \(W^{\tau},Z^{\tau},W^{\tau,(l)}\), and \(Z^{\tau,(l)}\) and only use their existence and theoretical properties for the proof. We also define for each \(\tau\) and \(l\),
\[H^{\tau}\coloneqq\operatorname*{arg\,min}_{O\in\mathcal{O}^{K \times K}}\left\|\mathcal{F}^{\tau}O-\mathcal{F}\right\|_{F},\quad H^{\tau,(l )}\coloneqq\operatorname*{arg\,min}_{O\in\mathcal{O}^{K\times K}}\left\| \mathcal{F}^{\tau,(l)}O-\mathcal{F}\right\|_{F},\] \[Q^{\tau,(l)}\coloneqq\operatorname*{arg\,min}_{O\in\mathcal{O}^ {K\times K}}\left\|\mathcal{F}^{\tau,(l)}O-\mathcal{F}^{\tau}H^{\tau}\right\| _{F},\ \ \text{where }\mathcal{F}^{\tau}\coloneqq\begin{bmatrix}W^{\tau}\\ Z^{\tau}\end{bmatrix},\ \ \mathcal{F}^{\tau,(l)}\coloneqq\begin{bmatrix}W^{\tau,(l)}\\ Z^{\tau,(l)}\end{bmatrix},\ \ \mathcal{F}\coloneqq\begin{bmatrix}W \\ Z\end{bmatrix},\]
and \(\mathcal{O}^{K\times K}\) is the set of \(K\times K\) orthogonal matrices. Importantly, by definition, \(H^{\tau,(l)}\) is also independent of the observations in \(l\).
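Although the iterates in (D.3) cannot be computed in practice (the initialization \((W,Z)\) and the probabilities \(\Pi\) are unknown), the update itself is elementary. The sketch below is purely conceptual: it substitutes a spectral initialization and a supplied vector of row observation probabilities for the unknown quantities, and all names are illustrative.

```python
import numpy as np

def gradient_descent_iterates(Y, Omega, p_row, K, lam=1.0, eta=1e-3, n_steps=1000):
    """Gradient descent on the weighted loss f^infs in (D.1) (conceptual sketch)."""
    Pinv = 1.0 / p_row                                        # diagonal of Pi^{-1}
    # Spectral initialization used here in place of W = U D^{1/2}, Z = V D^{1/2}.
    U, s, Vt = np.linalg.svd((Omega * Y) / p_row[:, None], full_matrices=False)
    W = U[:, :K] * np.sqrt(s[:K])
    Z = Vt[:K].T * np.sqrt(s[:K])
    for _ in range(n_steps):
        R = Omega * (W @ Z.T - Y)                             # P_Omega(W Z' - Y)
        grad_W = (Pinv[:, None] * R) @ Z + lam * W            # gradient with respect to W
        grad_Z = (Pinv[:, None] * R).T @ W + lam * Z          # gradient with respect to Z
        W, Z = W - eta * grad_W, Z - eta * grad_Z
    return W, Z
```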
In this paper, as emphasized in the main text, we consider the non-convex optimization estimator \((\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\) and the leave-one-out estimator \((\breve{W}^{(l)},\breve{Z}^{(l)})\) at two different stopping points. Let \(\tau_{l}^{*}\coloneqq\operatorname*{arg\,min}_{0\leq\tau<\tau_{o}}\left\| \nabla f^{\text{infs},(l)}(W^{\tau,(l)},Z^{\tau,(l)})\right\|_{F}\). First, we use the stopping point \(\tau_{l}^{*}\), i.e.,
\[(\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\coloneqq(W^{\tau_{l}^{*}},Z^{\tau_{l}^{*}})\quad\text{from (D.3)},\quad(\breve{W}^{(l)},\breve{Z}^{(l)})\coloneqq(W^{\tau_{l}^{*},(l)},Z^{\tau_{l}^{*},(l)})\quad\text{from (D.4)},\]
and \(\widetilde{H}^{[l]}\coloneqq H^{\tau_{l}^{*}}\), \(\breve{H}^{(l)}\coloneqq H^{\tau_{l}^{*},(l)}\). For each \(l\), we set the same iteration number \(\tau_{l}^{*}\) for the non-convex optimization estimator \((\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\) and the leave-one-out estimator \((\breve{W}^{(l)},\breve{Z}^{(l)})\) to ensure that they are close to each other. Note that, although the loss function (D.1) does not depend on \(l\), due to \(\tau_{l}^{*}\), the non-convex optimization estimator \((\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\) depends on \(l\). Namely, \((\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\) is selected to be close to the leave-one-out estimator \((\breve{W}^{(l)},\breve{Z}^{(l)})\) among the many gradient descent iterates in (D.3). Finally, we choose \(H_{4}^{[l]}\) so that \(\psi_{\min}^{-1/2}\widetilde{W}^{[l]}H_{4}^{[l]}\) is the matrix of left singular vectors of \(\widetilde{W}^{[l]}\widetilde{Z}^{[l]\prime}\).
Secondly, we use the stopping point \(\tau^{*}\coloneqq\operatorname*{arg\,min}_{0\leq\tau<\tau_{o}}\left\|\nabla f ^{\text{infs}}(W^{\tau},Z^{\tau})\right\|_{F}\). For brevity, we will use
the same notations for the estimators. Namely,
\[(\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\coloneqq(W^{\tau^{*}},Z^{\tau^{*}}) \quad\text{from (D.3)},\quad(\breve{W}^{(l)},\breve{Z}^{(l)})\coloneqq(W^{\tau^{*},(l)},Z^{ \tau^{*},(l)})\quad\text{from (D.4)},\]
and \(\widetilde{H}^{[l]}\coloneqq H^{\tau^{*}}\), \(\breve{H}^{(l)}\coloneqq H^{\tau^{*},(l)}\). Also, \(H^{[l]}_{4}\) is defined similarly. Here, we are abusing notation in the sense that \((\widetilde{W}^{[l]},\widetilde{Z}^{[l]})\), \(\widetilde{H}^{[l]}\) and \(H^{[l]}_{4}\) do not actually depend on \(l.\) However, this notational abuse is going to make the proofs more streamlined.
**Remark 1**.: In the main text, to facilitate understanding and save space, we use simpler notations. Specifically, \((\breve{\beta}^{\text{full},t},\breve{\beta}^{(-t)},\breve{\beta}^{\{-i\}})\) in the main text is the same as
\[\left(\breve{\beta}^{\text{full},t},\breve{\beta}^{(-t)},\breve{\beta}^{\{- i\}}\right)\coloneqq\left(\sqrt{N}\widetilde{W}^{[N+t]}\widetilde{H}^{[N+t]}D_{M^{*} }^{-\frac{1}{2}},\sqrt{N}\breve{W}^{(N+t)}\breve{H}^{(N+t)}D_{M^{*}}^{-\frac{ 1}{2}},\sqrt{N}\breve{W}^{(i)}\breve{H}^{(i)}D_{M^{*}}^{-\frac{1}{2}}\right).\]
## Appendix E Key part of proofs
As we mentioned in Section 2.3, the key for having an unbiased estimator for \(M_{it}\) is showing the following proposition:
**Proposition E.1**.: _Suppose assumptions of Theorem C.1 hold.5 Then, there is a \(K\times K\) matrix \(H_{2}\) so that_
Footnote 5: By Lemma F.1, the assumptions of Theorem C.1 are satisfied under the assumptions of Theorem 3.1.
\[\sqrt{N}(\widehat{F}_{t}-H_{2}F_{t})=\sqrt{N}H_{2}\left(\sum_{j=1 }^{N}\omega_{jt}\beta_{j}\beta_{j}^{\prime}\right)^{-1}\left(\sum_{j=1}^{N} \omega_{jt}\beta_{j}\varepsilon_{jt}\right)+\sqrt{N}R_{t}^{F},\] \[\max_{t}\|\sqrt{N}R_{t}^{F}\|\] \[=O_{P}\left(\frac{\sigma p_{\max}^{\frac{3}{2}}\vartheta\circ_{ \text{inv}}q^{\frac{11}{2}}\mu^{\frac{3}{2}}K^{\frac{5}{2}}\sqrt{N}\max\{\sqrt {\log N},\sqrt{\log T}\}}{p_{\min}^{3}\min\{N,T\}}+\frac{\sigma^{2}p_{\max}^{ \frac{5}{2}}\vartheta c_{\text{inv}}^{2}q^{3}\mu K^{2}\sqrt{N}\max\{\sqrt{N\log N },\sqrt{T\log T}\}}{\psi_{\min}p_{\min}^{4}\min\{\sqrt{N},\sqrt{T}\}}\right.\] \[\left.+\frac{\sigma^{3}p_{\max}^{\frac{3}{2}}\sigma_{\text{inv}}q ^{\frac{5}{2}}K^{\frac{1}{2}}\sqrt{N}\max\{N,T\}}{\psi_{\min}^{2}p_{\min}^{3}}+ \frac{p_{\max}^{\frac{1}{2}}\sqrt{N}}{p_{\min}}\max_{it}\left|M_{it}^{R}\right| \right)=o_{P}(1).\]
### Important Lemmas
An important step is to show that uniformly in \(t\), the following two terms are negligible:
\[\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}(\breve{\beta}_{j} -\breve{\beta}_{j}^{\text{full},t}),\quad\frac{1}{\sqrt{N}}\sum_{j=1}^{N} \omega_{jt}\varepsilon_{jt}(\breve{\beta}_{j}^{\text{full},t}-\breve{\beta}_{ j}^{(-t)}).\] (E.1)
The proof follows from Lemma E.2 below.
**Lemma E.2**.: _Suppose assumptions of Theorem C.1 hold. Uniformly in \(t\leq T\), the two terms in (E.1) are both \(o_{P}(1)\). Specifically, their order is_
\[O_{P}\left(\frac{\sigma^{2}p_{\max}^{\frac{3}{2}}\vartheta^{\frac{1}{2}}c_{\rm inv }q^{\frac{9}{2}}\mu^{\frac{1}{2}}K^{\frac{3}{2}}\sqrt{N}\max\{\sqrt{N\log N}, \sqrt{T\log T}\}}{p_{\min}^{2}\min\{\sqrt{N},\sqrt{T}\}\psi_{\min}}+\frac{ \sigma^{3}p_{\max}^{\frac{3}{2}}c_{\rm inv}q^{\frac{5}{2}}K^{\frac{1}{2}}\sqrt {N}\max\{N,T\}}{p_{\min}^{2}\psi_{\min}^{2}}\right).\]
_In addition, we have the following results:_
\[(i) \max_{t}\|\widetilde{W}^{[t+N]}\widetilde{H}^{[t+N]}-\hat{W}^{(t+ N)}\tilde{H}^{(t+N)}\|_{F}=O_{P}\left(\frac{\sigma p_{\max}^{\frac{1}{2}}q^{ \frac{1}{2}}q^{\frac{3}{2}}\mu^{\frac{1}{2}}K^{\frac{1}{2}}\max\{\sqrt{N\log N },\sqrt{T\log T}\}}{p_{\min}\psi_{\min}^{1/2}\min\{\sqrt{N},\sqrt{T}\}} \right),\] \[(ii) \max_{t}\|\widetilde{W}^{[t+N]}\widetilde{H}^{[t+N]}-W\|=O_{P} \left(\frac{\sigma p_{\max}^{\frac{1}{2}}q^{\frac{1}{2}}\max\{\sqrt{N},\sqrt{ T}\}}{p_{\min}\psi_{\min}^{1/2}}\right),\] \[(iii) \max_{t}\|\widetilde{W}^{[t+N]}\widetilde{Z}^{[t+N]}-\widetilde{M }\|_{F}=O_{P}\left(\frac{\sigma p_{\max}^{\frac{1}{2}}q^{\frac{7}{2}}\mu^{ \frac{1}{2}}K\max\{\sqrt{N\log N},\sqrt{T\log T}\}}{p_{\min}^{2}\min\{\sqrt{N},\sqrt{T}\}}\right),\] \[(iv) \|\widetilde{M}-M^{\star}\|=O_{P}\left(\frac{\sigma p_{\max}^{ \frac{1}{2}}q\max\{\sqrt{N},\sqrt{T}\}}{p_{\min}}\right),\] \[(v) \max_{t}\|\tilde{W}^{(t+N)}\tilde{H}^{(t+N)}-W\|_{2,\infty}=O_{P} \left(\frac{\sigma p_{\max}^{\frac{1}{2}}q^{\frac{1}{2}}q^{\frac{3}{2}}\mu^{ \frac{1}{2}}K^{\frac{1}{2}}\max\{\sqrt{N\log N},\sqrt{T\log T}\}}{p_{\min}\psi _{\min}^{1/2}\min\{\sqrt{N},\sqrt{T}\}}\right).\]
**Proof of Lemma E.2**. First of all, by Lemmas G.1 - G.5, we have (G.1), (G.2), (G.3), (G.4) and (G.5). Hence, we have (i)-(v). Next, we prove terms in (E.1) are \(o_{P}(1)\). By Remark 1, the first term is written as
\[\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}( \widetilde{\beta}_{j}-\tilde{\beta}_{j}^{\rm full,}t)=N^{-\frac{1}{2}}( \widetilde{\beta}-\sqrt{N}\widetilde{W}^{[t+N]}\widetilde{H}^{[t+N]}D_{M^{\star }}^{-\frac{1}{2}})^{\prime}\Omega_{t}\mathcal{E}_{t}\] \[=N^{-\frac{1}{2}}(\widetilde{\beta}-\sqrt{N}\psi_{\min}^{-1/2} \widetilde{W}^{[t+N]}H_{4}^{[t+N]})^{\prime}\Omega_{t}\mathcal{E}_{t}+\psi_{ \min}^{-1/2}(H_{4}^{[t+N]}-\widetilde{H}^{[t+N]}D_{M^{\star}}^{-\frac{1}{2}} \psi_{\min}^{-1/2})^{\prime}\widetilde{W}^{[t+N]}\Omega_{t}\mathcal{E}_{t}\] (E.2)
where \(H_{4}^{[N+t]}\) is a \(K\times K\) matrix introduced in Claim F.2, \(\Omega_{t}={\rm diag}\left(\omega_{1t},\ldots,\omega_{Nt}\right)\), and \(\mathcal{E}_{t}=[\varepsilon_{1t},\ldots,\varepsilon_{Nt}]^{\prime}\). As noted in Claim F.2 (iii), we derive from Lemma E.2 (iii) that
\[\max_{1\leq t\leq T}\left\|\widetilde{\beta}-\sqrt{N}\psi_{\min}^{-1/2} \widetilde{W}^{[t+N]}H_{4}^{[t+N]}\right\|_{F}=O_{P}\left(\frac{\sigma p_{\max }\omega^{\frac{1}{2}}c_{\rm inv}q^{\frac{9}{2}}\mu^{\frac{1}{2}}K^{\frac{3}{2}} \sqrt{N}\max\{\sqrt{N\log N},\sqrt{T\log T}\}}{p_{\min}^{2}\min\{\sqrt{N}, \sqrt{T}\}\psi_{\min}}\right).\]
Hence, the first term of (E.2) is \(O_{P}\left(\frac{\sigma^{2}p_{\max}^{\frac{3}{2}}\omega^{\frac{1}{2}}c_{\rm inv }q^{\frac{9}{2}}\mu^{\frac{1}{2}}K^{\frac{3}{2}}\sqrt{N}\max\{\sqrt{N\log N}, \sqrt{T\log T}\}}{p_{\min}^{2}\min\{\sqrt{N},\sqrt{T}\}\psi_{\min}}\right)\). For the second term of (E.2), note that
\[\max_{t}\|H_{4}^{[t+N]}-\widetilde{H}^{[t+N]}D_{M^{\star}}^{-\frac{1}{2}}\psi_ {\min}^{1/2}\|\]
\[\max_{t}\|\widetilde{W}^{[t+N]\prime}\Omega_{t}\mathcal{E}_{t}\|\leq \max_{t}\|(\widetilde{W}^{[t+N]}\widetilde{H}^{[t+N]})^{\prime}\Omega_{t} \mathcal{E}_{t}\|\leq\max_{t}\|\widetilde{W}^{[t+N]}\widetilde{H}^{[t+N]}-W\| \|\Omega_{t}\mathcal{E}_{t}\|+\max_{t}\|W^{\prime}\Omega_{t}\mathcal{E}_{t}\|.\]
From Lemma E.2 (ii), we know \(\max_{t}\|\widetilde{W}^{[t+N]}\widetilde{H}^{[t+N]}-W\|\|\Omega_{t}\mathcal{E }_{t}\|=O_{P}\left(\frac{\sigma^{2}p_{\max}q^{\frac{1}{2}}\sqrt{N}\max\{\sqrt{N },\sqrt{T}\}}{p_{\min}\psi_{\min}^{1/2}}\right)\). In addition, we have \(\max_{t}\|W^{\prime}\Omega_{t}\mathcal{E}_{t}\|=O_{P}(\sigma q^{\frac{1}{2}}K ^{\frac{1}{2}}\sqrt{\log T}\psi_{\min}^{1/2})\) from the matrix Bernstein inequality because \(W=U_{M^{*}}D_{M^{*}}^{\frac{1}{2}}\). Hence, the second term of (E.2) is
\[O_{P}\left(\frac{\sigma^{3}p_{\max}^{\frac{3}{2}}c_{\mathrm{inv}}q^{\frac{5}{2 }}K^{\frac{1}{2}}\sqrt{N}\max\{N,T\}}{p_{\min}^{2}\psi_{\min}^{2}}+\frac{ \sigma^{2}p_{\max}^{\frac{1}{2}}c_{\mathrm{inv}}q^{\frac{5}{2}}K\sqrt{\log T} \max\{\sqrt{N},\sqrt{T}\}}{p_{\min}\psi_{\min}}\right).\]
Moreover, the second term of (E.1) can be written as
\[\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_{jt}(\breve{\beta}_{j} ^{\mathrm{full},t}-\breve{\beta}_{j}^{(-t)})=D_{M^{*}}^{-\frac{1}{2}}\left( \widetilde{W}^{[t+N]}\widetilde{H}^{[t+N]}-\breve{W}^{(t+N)}\breve{H}^{(t+N)} \right)^{\prime}\Omega_{t}\mathcal{E}_{t}.\]
Then, we have from Lemma E.2 (i) that
\[\max_{t}\|D_{M^{*}}^{-\frac{1}{2}}\left(\widetilde{W}^{[t+N]} \widetilde{H}^{[t+N]}-\breve{W}^{(t+N)}\breve{H}^{(t+N)}\right)^{\prime} \Omega_{t}\mathcal{E}_{t}\|=O_{P}\left(\frac{\sigma^{2}p_{\max}\vartheta^{ \frac{1}{2}}q^{\frac{3}{2}}\mu^{\frac{1}{2}}K^{\frac{1}{2}}\sqrt{N}\max\{\sqrt {N}\log N,\sqrt{T\log T}\}}{p_{\min}\psi_{\min}\min\{\sqrt{N},\sqrt{T}\}} \right).\]
This completes the proof. \(\square\)
In addition, the following lemma covers the part of the proof that differs depending on how the stopping point is defined.
**Lemma E.3**.: _Suppose assumptions of Theorem C.1 hold.6 Then, we have_
Footnote 6: By Lemma F.1, it is enough to consider the assumptions of Theorem C.1.
\[\max_{t}\|\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\omega_{jt}\varepsilon_ {jt}(\breve{\beta}_{j}^{(-t)}-H_{1}^{\prime}\beta_{j})\|=O_{P}\left(\frac{ \sigma^{2}p_{\max}^{\frac{1}{2}}q^{\frac{1}{2}}K^{\frac{1}{2}}\sqrt{\log T} \max\{\sqrt{N},\sqrt{T}\}}{p_{\min}\psi_{\min}}\right)=o_{P}(1),\] \[\max_{t}\|\frac{1}{\sqrt{N}}\sum_{j=1}^{N}(\omega_{jt}-p_{j})H_{1 }^{\prime}\beta_{j}(\breve{\beta}_{j}^{(-t)}-H_{1}^{\prime}\beta_{j})\|=O_{P} \left(\frac{\sigma p_{\max}\vartheta q^{\frac{1}{2}}\mu^{\frac{1}{2}}K\sqrt{ \log T}\max\{\sqrt{N},\sqrt{T}\}}{p_{\min}\psi_{\min}}\right)=o_{P}(1).\]
**Proof of Lemma E.3**.: (1)-i. Case of using \(\tau_{l}^{*}\) as a stopping point:
Let \(\xi_{t}\coloneqq\breve{\beta}^{(-t)}-\beta H_{1}=\sqrt{N}\breve{W}^{(t+N)} \breve{H}^{(t+N)}D_{M^{*}}^{-\frac{1}{2}}-\beta H_{1}\). To employ matrix Bernstein inequality, we first
estimate \(\max_{t}\|\xi_{t}\|_{2,\infty}\). Note \(\|\xi_{t}\|_{2,\infty}\leq\sqrt{N}\psi_{\min}^{-1/2}\|\widetilde{W}^{(t+N)} \widetilde{H}^{(t+N)}-W\|_{2,\infty}\). So, by Lemma E.2 (v), we have \(\max_{t}\|\xi_{t}\|_{2,\infty}=O_{P}\left(\frac{\sigma p_{\max}^{\frac{1}{2}} \omega^{\frac{1}{2}}q^{\frac{1}{2}}k^{\frac{1}{2}}\sqrt{N}\max\{\sqrt{N}\log N,\sqrt{T\log T}\}}{p_{\min}\psi_{\min}\min\{\sqrt{N},\sqrt{T}\}}\right)\). Furthermore, we have
\[\max_{t}\|\xi_{t}\|_{F} \leq\sqrt{N}\left(\max_{t}\|\widetilde{W}^{[t+N]}\widetilde{H}^{[ t+N]}-\widetilde{W}^{(t+N)}\widetilde{H}^{(t+N)}\|_{F}+\|W-\widetilde{W}^{[t+N]} \widetilde{H}^{[t+N]}\|_{F}\right)\|D_{M^{\star}}^{-\frac{1}{2}}\|\] \[=O_{P}\left(\frac{\sigma p_{\max}^{\frac{1}{2}}k^{\frac{1}{2}} \sqrt{N}\max\{\sqrt{N},\sqrt{T}\}}{p_{\min}\psi_{\min}}\right).\]
Because \(\xi_{t}\) only depends on \(M^{\star}\) and \(Y\) excluding the \(t\)th column of \(Y\), conditioning on \(\{\mathcal{M},\Omega\}\), \(\{\varepsilon_{jt}\}_{j\leq N}\) are independent of \(\xi_{t}\). Hence, \(\mathbb{E}\left[\varepsilon_{jt}|\mathcal{M},\Omega,\xi_{t}\right]=\mathbb{E} \left[\varepsilon_{jt}|\mathcal{M},\Omega\right]=0\) and, conditioning on \(\{\mathcal{M},\Omega,\xi_{t}\}\), \(\{\varepsilon_{jt}\}_{j\leq N}\) are independent across \(j\). Then, by matrix Bernstein inequality, we have
\[\|\xi_{t}^{\prime}\Omega_{t}\mathcal{E}_{t}\|=\|\sum_{j=1}^{N}\omega_{jt} \varepsilon_{jt}\xi_{t,j}^{\prime}\|\leq C\left(\sigma\log T\log N\max_{t}\| \xi_{t}\|_{2,\infty}+\sigma\sqrt{\log T}\max_{t}\|\xi_{t}\|_{F}\right)\]
with probability exceeding \(1-O(T^{-100})\) and so, \(\max_{t}\|\xi_{t}^{\prime}\Omega_{t}\mathcal{E}_{t}\|=O_{P}\left(\frac{\sigma^ {2}p_{\max}^{\frac{1}{2}}\omega^{\frac{1}{2}}q^{\frac{1}{2}}k^{\frac{1}{2}} \sqrt{N\log T}\max\{\sqrt{N},\sqrt{T}\}}{p_{\min}\psi_{\min}}\right)\).
(1)-ii. Case of using \(\tau^{*}\) as a stopping point:
In this case, we note that \(\xi_{t}\) is no longer independent of \(\{\varepsilon_{jt}\}_{j\leq N}\) conditioning on \(\{\mathcal{M},\Omega\}\), due to the fact that \(\tau^{*}\) does depend on the full sample. Therefore, we cannot directly apply the Bernstein inequality as in the \(\tau_{l}^{*}\) case. Instead, we apply Lemma G.10 and obtain the same bound for \(\max_{t}\|\xi_{t}^{\prime}\Omega_{t}\mathcal{E}_{t}\|\).
(2)-i. Case of using \(\tau_{l}^{*}\) as a stopping point:
The proof is similar to that in (1-i). So, we omit it.
(2)-ii. Case of using \(\tau^{*}\) as a stopping point:
The proof is the same as that in (1-ii) although we use Lemma G.11 instead. \(\square\)
### Proof of Proposition e.1
First of all, by Claim F.1 (i), we know that there is a \(K\times K\) matrix \(H_{1}\) such that \(\frac{1}{\sqrt{N}}\beta H_{1}\) is the matrix of left singular vectors of \(M^{\star}\). That is, \(\frac{1}{\sqrt{N}}\beta H_{1}=U_{M^{\star}}\). Let \(\widetilde{B}_{t}:=\frac{1}{N}\sum_{j=1}^{N}\omega_{jt}\widetilde{\beta}_{j}\widetilde{\beta}_{j}^{\prime}\), \(B_{t}^{*}:=\frac{1}{N}\sum_{j=1}^{N}\omega_{jt}H_{1}^{\prime}\beta_{j}\beta_{j}^{\prime}H_{1}\) and \(B:=\frac{1}{N}\sum_{j=1}^{N}p_{j}H_{1}^{\prime}\beta_{j}\beta_{j}^{\prime}H_{1}\). Then, we define \(H_{2}:=\left(I_{K}+\varphi\right)H_{1}^{-1}\) where \(\varphi:=\frac{1}{N}B^{-1}H_{1}^{\prime}\beta^{\prime}\Pi\left(\beta H_{1}-\widetilde{\beta}\right)\). Note that neither \(B\) nor \(H_{2}\) depends on \(i\) or \(t\). Because \(\widehat{F}_{t}=\left(\sum_{j=1}^{N}\omega_{jt}\widetilde{\beta}_{j}\widetilde{\beta}_{j}^{\prime}\right)^{-1}\sum_{j=1}^{N}\omega_{jt}\widetilde{\beta}_{j}y_{jt}\) by definition, basic algebra shows the following identity:
\[\widehat{F}_{t}-H_{2}F_{t}=H_{2}\left(\sum_{j=1}^{N}\omega_{jt} \beta_{j}\beta_{j}^{\prime}\right)^{-1}\left(\sum_{j=1}^{N}\omega_{jt}\beta_{j} \varepsilon_{jt}\right)+\sum_{d=1}^{6}\Delta_{d,t},\] \[\Delta_{1,t}:=\widetilde{B}_{t}^{-1}\frac{1}{N}\sum_{j=1}^{N} \omega_{jt}\varepsilon_{jt}\left(\widetilde{\beta}_{j}-H_{1}^{\prime}\beta_{j} \right)-B^{-1}H_{1}^{\prime}\frac{1}{N}\sum_{j=1}^{N}\left(\omega_{jt}-p_{j} \right)\beta_{j}F_{t}^{\prime}H_{1}^{\prime-1}\left(\widetilde{\beta}_{j}-H_{1} ^{\prime}\beta_{j}\right),\]
\[\Delta_{2,t} \coloneqq\left(\widetilde{B}_{t}^{-1}-B^{-1}\right)\frac{1}{N}\sum_{ j=1}^{N}\omega_{jt}\widetilde{\beta}_{j}\left(\beta_{j}^{\prime}H_{1}- \widetilde{\beta}_{j}^{\prime}\right)H_{1}^{-1}F_{t},\] \[\Delta_{3,t} \coloneqq B^{-1}\frac{1}{N}\sum_{j=1}^{N}\omega_{jt}\left( \widetilde{\beta}_{j}-H_{1}^{\prime}\beta_{j}\right)\left(\beta_{j}^{\prime}H_ {1}-\widetilde{\beta}_{j}^{\prime}\right)H_{1}^{-1}F_{t},\] \[\Delta_{4,t} \coloneqq\left(\widetilde{B}_{t}^{-1}-B_{t}^{*-1}\right)H_{1}^{ \prime}\frac{1}{N}\sum_{j=1}^{N}\omega_{jt}\beta_{j}\varepsilon_{jt},\ \ \Delta_{5,t}\coloneqq\left(H_{1}^{-1}-H_{2}\right)\left(\sum_{j=1}^{N}\omega_{ jt}\beta_{j}\beta_{j}^{\prime}\right)^{-1}\left(\sum_{j=1}^{N}\omega_{jt} \beta_{j}\varepsilon_{jt}\right),\] \[\Delta_{6,t} \coloneqq\widetilde{B}_{t}^{-1}\frac{1}{N}\sum_{j=1}^{N}\omega_{ jt}\widetilde{\beta}_{j}M_{jt}^{R}.\]
**Step 1.** We start from the first term of \(\Delta_{1,t}\): \(P_{1}\coloneqq\widetilde{B}_{t}^{-1}\frac{1}{N}\sum_{j=1}^{N}\omega_{jt} \varepsilon_{jt}\left(\widetilde{\beta}_{j}-H_{1}^{\prime}\beta_{j}\right)\). We have \(P_{1}=P_{1,1}+P_{1,2}\) where
\[P_{1,1} \coloneqq\widetilde{B}_{t}^{-1}\frac{1}{N}\sum_{j=1}^{N}\omega_{ jt}\varepsilon_{jt}\left(\widetilde{\beta}_{j}-\breve{\beta}_{j}^{(-t)} \right)=\frac{1}{N}\widetilde{B}_{t}^{-1}\left(\widetilde{\beta}-\sqrt{N}\breve {W}^{(N+t)}\breve{H}^{(N+t)}D_{M^{*}}^{-\frac{1}{2}}\right)^{\prime}\Omega_{t }\mathcal{E}_{t},\] \[P_{1,2} \coloneqq\widetilde{B}_{t}^{-1}\frac{1}{N}\sum_{j=1}^{N}\omega_{ jt}\varepsilon_{jt}\left(\breve{\beta}_{j}^{(-t)}-H_{1}^{\prime}\beta_{j}\right)= \frac{1}{N}\widetilde{B}_{t}^{-1}\left(\sqrt{N}\breve{W}^{(N+t)}\breve{H}^{(N +t)}D_{M^{*}}^{-\frac{1}{2}}-\beta H_{1}\right)^{\prime}\Omega_{t}\mathcal{E} _{t}.\]
Note that \(\max_{t}\|\widetilde{B}_{t}^{-1}\|=O_{P}(\frac{1}{p_{\min}})\) by Claim F.4 (iii). Hence, we have by Lemma E.2,
\[\max_{t}\|P_{1,1}\|\leq\max_{t}\|\widetilde{B}_{t}^{-1}\|N^{- \frac{1}{2}}\max_{t}\|N^{-\frac{1}{2}}(\widetilde{\beta}-\sqrt{N}\breve{W}^{(N +t)}\breve{H}^{(N+t)}D_{M^{*}}^{-\frac{1}{2}})^{\prime}\Omega_{t}\mathcal{E}_{t}\|\] \[=O_{P}\left(\frac{\sigma^{2}p_{\max}^{\frac{3}{2}}\mathrm{ov}^{ \frac{1}{2}}c_{\mathrm{inv}}q^{\frac{2}{2}}\mathrm{\mu}^{\frac{1}{2}}K^{\frac{ 3}{2}}\max\{\sqrt{N\log N},\sqrt{T\log T}\}}{p_{\min}^{3}\min\{\sqrt{N},\sqrt{ T}\}\psi_{\min}}+\frac{\sigma^{3}p_{\max}^{\frac{3}{2}}c_{\mathrm{inv}}q^{ \frac{5}{2}}K^{\frac{1}{2}}\max\{N,T\}}{p_{\min}^{3}\psi_{\min}^{2}}\right).\]
Note that \(\max_{t}\|P_{1,2}\|\leq\frac{1}{N}\|\widetilde{B}_{t}^{-1}\|\max_{t}\|\xi_{t}^ {\prime}\Omega_{t}\mathcal{E}_{t}\|\). Then, using Lemma E.3, we have
\[\max_{t}\|P_{1,2}\|=O_{P}\left(\frac{\sigma^{2}p_{\max}^{\frac{1}{2}}\mathrm{ov }^{\frac{1}{2}}q^{\frac{1}{2}}K^{\frac{1}{2}}\sqrt{\log T}\max\{\sqrt{N},\sqrt {T}\}}{p_{\min}^{2}\sqrt{N}\psi_{\min}}\right).\]
**Step 2.** By using the same logic in Step 1, we can bound the second term of \(\Delta_{1,t}\),
\(P_{2}\coloneqq B^{-1}H_{1}^{\prime}\frac{1}{N}\sum_{j=1}^{N}\left(\omega_{jt}- p_{j}\right)\beta_{j}F_{t}^{\prime}H_{1}^{\prime-1}\left(\widetilde{\beta}_{j}-H_{1}^{ \prime}\beta_{j}\right)\) similarly. The only difference is the part using the matrix Bernstein inequality since \(\{\omega_{jt}\}_{j\leq N}\) are dependent across \(j\) while \(\{\varepsilon_{jt}\}_{j\leq N}\) are independent across \(j\). We split \(P_{2}\) like \(P_{2}=P_{2,1}+P_{2,2}\) where
\[P_{2,1} \coloneqq\frac{1}{N}B^{-1}H_{1}^{\prime}\beta^{\prime}\left(\Omega_{t}- \Pi\right)\left(\widetilde{\beta}-\sqrt{N}\breve{W}^{(t+N)}\breve{H}^{(t+N)}D_ {M^{*}}^{-\frac{1}{2}}\right)H_{1}^{-1}F_{t},\] \[P_{2,2} \coloneqq\frac{1}{N}B^{-1}H_{1}^{\prime}\beta^{\prime}\left(\Omega _{t}-\Pi\right)\left(\sqrt{N}\breve{W}^{(t+N)}\breve{H}^{(t+N)}D_{M^{*}}^{- \frac{1}{2}}-\beta H_{1}\right)H_{1}^{-1}F_{t}.\]
By the same token as for the part \(P_{1,1}\) in Step 1, with the aid of Claims F.1 - F.5, we can show that
\[P_{2,1}=O_{P}\left(\frac{\sigma^{2}p_{\max}^{\frac{3}{2}}c_{\mathrm{inv}}q^{\frac {7}{2}}\mu K\max\{\sqrt{N},\sqrt{T}\}}{\psi_{\min}p_{\min}^{3}\min\{\sqrt{N}, \sqrt{T}\}}+\frac{\sigma p_{\max}^{\frac{3}{2}}\partial q^{\frac{11}{2}}\mu^{ \frac{3}{2}}K^{\frac{5}{2}}\max\{\sqrt{\log N},\sqrt{\log T}\}}{p_{\min}^{3} \min\{N,T\}}\right).\]
and so we omit the proof. In addition, using Lemma E.3, the part \(P_{2,2}\) can be bounded as
\[\max_{t}\|P_{2,2}\|\leq\frac{1}{\sqrt{N}}\|B^{-1}\|\max_{t}\|\frac{1}{\sqrt{N} }H_{1}^{\prime}\beta^{\prime}\left(\Omega_{t}-\Pi\right)\xi_{t}\|\max_{t}\|H_ {1}^{-1}F_{t}\|=O_{P}\left(\frac{\sigma p_{\max}\partial q^{\frac{3}{2}}\mu K ^{\frac{3}{2}}\sqrt{\log T}}{p_{\min}^{2}\sqrt{N}\min\{\sqrt{N},\sqrt{T}\}} \right).\]
**Step 3.** We bound \(\max_{t}\|\Delta_{2,t}\|\). By Claim F.1 (iv) and Claim F.3 (ii), we have
\[\max_{t}\|\Delta_{2,t}\|\leq O_{P}(1)\max_{t}\|\widetilde{B}_{t}^{-1}-B^{-1}\|\max_{j}\|H_{1}\beta_{j}\|p_{\max}^{\frac{1}{2}}\frac{1}{\sqrt{N}}\|\beta H_{1}-\widetilde{\beta}\|_{F}\max_{t}\|H_{1}^{-1}F_{t}\|\] \[=O_{P}\left(\frac{\sigma^{2}p_{\max}^{\frac{5}{2}}c_{\mathrm{inv}}^{2}q^{5}\mu K^{2}\max\{\sqrt{N},\sqrt{T}\}}{p_{\min}^{4}\min\{N,T\}\psi_{\min}}+\frac{\sigma p_{\max}^{\frac{3}{2}}c_{\mathrm{inv}}\vartheta q^{3}\mu^{\frac{3}{2}}K^{\frac{5}{2}}\sqrt{\log T}}{\sqrt{N}\min\{\sqrt{N},\sqrt{T}\}}\right).\]
**Step 4.** We now bound \(\max_{t}\|\Delta_{3,t}\|\). By Claim F.1 (iv) and Claim F.3 (ii), we have
\[\max_{t}\|\Delta_{3,t}\| \leq O_{P}(1)\|B^{-1}\|\frac{1}{\sqrt{N}}\|\widetilde{\beta}- \beta H_{1}\|\|\Pi\|\frac{1}{\sqrt{N}}\|\widetilde{\beta}-\beta H_{1}\|\max_{ t}\|H_{1}^{-1}F_{t}\|\] \[=O_{P}\left(\frac{\sigma^{2}p_{\max}^{2}c_{\mathrm{inv}}^{2}q^{5 }\mu^{\frac{1}{2}}K^{\frac{3}{2}}\max\{\sqrt{N},\sqrt{T}\}}{p_{\min}^{3}\min\{ \sqrt{N},\sqrt{T}\}\psi_{\min}}\right).\]
**Step 5.** We estimate \(\max_{t}\|\Delta_{4,t}\|\). By Claims F.4 (iv) and F.6 (i), we have
\[\max_{t}\|\Delta_{4,t}\|\leq\frac{1}{N}\max_{t}\|\widetilde{B}_{t}^{-1}-B_{t}^ {*-1}\|\max_{t}\|\left(\beta H_{1}\right)^{\prime}\Omega_{t}\mathcal{E}_{t}\|= O_{P}\left(\frac{\sigma^{2}p_{\max}^{2}\partial c_{\mathrm{inv}}q^{2}K\sqrt{\log T }\max\{\sqrt{N},\sqrt{T}\}}{p_{\min}^{3}\sqrt{N}\psi_{\min}}\right).\]
**Step 6.** We bound \(\max_{t}\|\Delta_{5,t}\|\). First, note that \(H_{2}-H_{1}^{-1}=\varphi H_{1}^{-1}\) and \(\|\varphi\|=O_{P}\left(\frac{\sigma p_{\max}^{\frac{1}{2}}c_{\mathrm{inv}}q^{2} K^{\frac{1}{2}}\max\{\sqrt{N},\sqrt{T}\}}{p_{\min}\psi_{\min}}\right)\) as noted in the proof of Claim F.3. Moreover, by Claim F.4 (iv), we have \(\max_{t}\|H_{1}^{-1}(\sum_{j=1}^{N}\omega_{jt}\beta_{j}\beta_{j}^{\prime})^{-1 }H_{1}^{\prime-1}\|=\|(NB_{t}^{*})^{-1}\|=O_{P}(\frac{1}{p_{\min}N})\). Hence, by Claim F.6 (i),
\[\max_{t}\|\Delta_{5,t}\|\leq\|\varphi\|\|H_{1}^{-1}(\sum_{j=1}^{N}\omega_{jt} \beta_{j}\beta_{j}^{\prime})^{-1}H_{1}^{\prime-1}\|\max_{t}\|\left(\beta H_{1} \right)^{\prime}\Omega_{t}\mathcal{E}_{t}\|=O_{P}\left(\frac{\sigma^{2}p_{\max }^{\frac{1}{2}}c_{\mathrm{inv}}q^{2}K^{\frac{1}{2}}\max\{\sqrt{N},\sqrt{T}\}}{p_ {\min}^{2}\psi_{\min}}\right).\]
**Step 7.** Lastly, we bound \(\max_{t}\|\Delta_{6,t}\|\). Note that
\[\Delta_{6,t} =\left(\widetilde{B}_{t}^{-1}-B^{-1}\right)\frac{1}{N}\sum_{j=1}^{N }\omega_{jt}H_{1}^{\prime}\beta_{j}M_{jt}^{R}+B^{-1}\frac{1}{N}\sum_{j=1}^{N} \omega_{jt}\left(\widetilde{\beta}_{j}-H_{1}^{\prime}\beta_{j}\right)M_{jt}^{R}\] \[\quad+\left(\widetilde{B}_{t}^{-1}-B^{-1}\right)\frac{1}{N}\sum_{j=1 }^{N}\omega_{jt}\left(\widetilde{\beta}_{j}-H_{1}^{\prime}\beta_{j}\right)M_{jt }^{R}+B^{-1}\frac{1}{N}\sum_{j=1}^{N}\omega_{jt}H_{1}^{\prime}\beta_{j}M_{jt}^{R}.\]
By Claims F.1, F.3 and F.4, the last term dominates the first three terms. The last term is
\[\max_{t}\|B^{-1}\frac{1}{N}\sum_{j=1}^{N}\omega_{jt}H_{1}^{\prime}\beta_{j}M_{jt}^{R}\|\leq\frac{1}{\sqrt{N}}\|B^{-1}\|\|\beta H_{1}\|p_{\max}^{\frac{1}{2}}\max_{it}|M_{it}^{R}|=O_{P}\left(\frac{p_{\max}^{\frac{1}{2}}}{p_{\min}}\right)\max_{it}|M_{it}^{R}|\]
by Claims F.3 and F.4. This completes the proof. \(\Box\)
|
2304.01348 | Methods for Estimating Neural Information | Estimating the Shannon information associated with individual neurons is a
non-trivial problem. Three key methods used to estimate the mutual information
between neuron inputs and outputs are described, and a list of further readings
is provided. | James V Stone | 2022-12-20T12:13:48Z | http://arxiv.org/abs/2304.01348v1 | # Methods for Estimating Neural Information
###### Abstract
Estimating the Shannon information associated with individual neurons is a non-trivial problem. Three key methods used to estimate the mutual information between neuron inputs and outputs are described, and a list of further readings is provided.
## 1 Neural Information Methods
Consider a temporal sequence of stimulus values \(x\) and the resultant neuron outputs \(y\), which can be either a sequence of continuous values or a sequence of spikes. The total Shannon entropy \(H(y)\) in the outputs is essentially a global measure of how much the response sequence varies over time. In contrast, the noise entropy \(H(y|x)\) is a measure of how much variation in the response sequence remains after the stimulus value \(x\) at each point in time has been taken into account. Therefore, the difference between \(H(y)\) and \(H(y|x)\) is the amount of variation in the response sequence that can be attributed to the stimulus sequence. This difference is the mutual information [13, 16] between \(x\) and \(y\),
\[I(x,y) = H(y)-H(y|x)\mbox{ bits}, \tag{1}\]
where all logarithms are base 2, which ensures that information is measured in bits; one bit provides enough information to choose between two equally probable alternatives.
In practice, it will prove useful to know that mutual information can be obtained from two other equations. Somewhat counter-intuitively, \(I(x,y)\) is also given by the difference between \(H(x)\) (the entropy of the stimulus values) and \(H(x|y)\) (the entropy in the stimulus values \(x\) that remains after the responses \(y\) have been taken into account),
\[I(x,y) = H(x)-H(x|y)\mbox{ bits}. \tag{2}\]
Finally, it can be shown that
\[I(x,y) \leq 0.5\log(1+SNR)\mbox{ bits}, \tag{3}\]
where SNR is the signal-to-noise ratio (see Section 3), with equality if each variable is independent and has a Gaussian distribution.
The mutual information can be estimated using three broad strategies [2], which provide:
1. a direct estimate using Equation 1,
2. a lower bound using Equation 2,
3. an upper bound using Equation 3.
For simplicity, stimulus values are represented as \(x\) here, so that \(y\!=\!g(x)+\eta\), where \(g\) is a neuron transfer function and \(\eta\) is a noise term.
## 2 The Direct Method
**Estimating the Entropy of a Spike Train**. In physics, the entropy of a jar of gas is proportional to the volume of the jar. By analogy, we can treat a spike train as if it were a one-dimensional jar, so that spike train entropy is proportional to the amount of time \(T\) over which the spike train is measured: \(H(T,\Delta t)\!\propto\!T\), where \(\Delta t\) defines the temporal resolution used to measure spikes. Dividing \(H(T,\Delta t)\) by \(T\) yields the _entropy rate_, which
[Figure 1 displays the \(N\!=\!10\) spike trains (one row per trial) referred to in the caption below; the same trains are shown twice, with instances of the word [100] marked in bold throughout the sequence in panel (a) and at \(t\!=\!3\) in panel (b).]
Figure 1 The direct method (schematic). The same stimulus sequence is repeated for \(N\!=\!10\) trials and the \(N\) response sequences are recorded; a spike is represented as 1 and no spike as 0.
(a) Total entropy \(H(y)\) is estimated from the probability of particular spike trains within a long unique spike train sequence (which is the concatenation of 10 trials here). The probability \(p(y)\) of a particular \(T\)-element spike train \(y\) is estimated as the number of instances of \(y\) expressed as a proportion of all \(T\)-element spike trains. For example, in the data above, there are 170 places where a three-element spike train could occur, and there are 35 instances of the spike sequence \(y\!=\!\)[100] (marked in bold), so \(p(y)\!=\!\)35/170\(\approx\)0.206.
(b) Noise entropy \(H(y|x)\) is estimated from the conditional probability of particular spike trains. The same stimulus value occurs at the same time in each of \(N\!=\!10\) trials. Therefore, the conditional probability \(p(y|x)\) of the response \(y\) to a stimulus subsequence \(x\) which starts at time \(t\) is the number \(N_{y}\) of trials which contain \(y\) at time \(t\) expressed as a proportion of the number \(N\) of spike trains that begin at time \(t\) (i.e. \(p(y|x)\!=\!p(y|t)\)). For example, there are \(N_{y}\!=\!9\) instances of the spike sequence \(y\!=\!\)[100] at \(t\!=\!3\) (marked in bold), so the conditional probability is \(p(y\!=\!\)[100]\(|t\!=\!\)3)\(=\)9/10\(=\)0.9.
converges to the entropy \(H(y)\) for large values of \(T\); specifically,
\[H(y) = \lim_{T\rightarrow\infty}\frac{H(T,\Delta t)}{T}\ \ \ \ \mbox{bits/s}. \tag{4}\]
Strong et al. (1998)[17] use arguments from statistical mechanics to show that a graph of \(H(T,\Delta t)/T\) versus \(1/T\) should yield a straight line (see also Appendix A.8 in Bialek, 2012[1]). Extrapolating this line to \(1/T\!=\!0\) (i.e. \(T\!=\!\infty\)) gives a \(y\)-intercept equal to \(H(T,\Delta t)/T\) at \(T\!=\!\infty\), which is therefore the entropy \(H(y)\).
The direct method usually involves two types of output sequences: _unique_ and _repeated_. The unique spike train is a response to a long sequence of inputs; this is used to estimate the total spike train entropy. The repeated spike train sequence consists of spike trains obtained in response to \(N\) repeats of a stimulus sequence; these are used to estimate the entropy of the noise in the spike train. However, if the repeated sequence is sufficiently long then the set of \(N\) response sequences can be treated as a unique spike train, as in Figure 1.
Figure 2 The direct method. Entropy and noise entropy rates for a visual neuron (H1 in the fly), responding to a randomly moving visual image. The filled circles in the upper trace show the full spike-train entropy rate for different values of \(1/T\) (with \(\Delta t\!=\!3\,\)ms). The straight line is an extrapolation to \(1/T\!=\!0\) (i.e. \(T\!\rightarrow\!\infty\)) and yields \(H(y)\). The lower trace shows the spike-train noise entropy rate for different values of \(1/T\), and the straight line is again an extrapolation to \(1/T\!=\!0\) and yields \(H(y|x)\). The difference between the ordinate intercepts of the two straight lines is \(H(y)-H(y|x)\) and is therefore the mutual information rate (Equation 1). Reproduced with permission from Strong et al. (1998)[17].
**Estimating Total Entropy**\(H(y)\). The entropy \(H(T,\Delta t)\) for one value of \(T\) is estimated from the probability \(p(y^{i})\) of the \(m_{T}\) different observed sequences \(y^{1},\)...,\(y^{m_{T}}\) of length \(T\):
\[H(T,\Delta t) = \sum_{i=1}^{m_{T}}p(y^{i})\log\frac{1}{p(y^{i})}, \tag{5}\]
where \(p(y^{i})\) is the number of instances of the sequence \(y^{i}\), expressed as a proportion of the total number of sequences of length \(T\) observed anywhere in the unique output sequence (see Figure 1a).
The entropy of the output sequence is found by estimating \(H(T,\Delta t)/T\) for successively larger values of \(T\) and then extrapolating to find the entropy at \(1/T\!=\!0\) (i.e. at \(T\!=\!\infty\)). In the limit \(T\!\rightarrow\!\infty\),
\[H(y) = \lim_{T\rightarrow\infty}\frac{H(T,\Delta t)}{T} \tag{6}\] \[= \lim_{T\rightarrow\infty}\frac{1}{T}\!\sum_{i=1}^{m_{T}}\!\!p(y^ {i})\log\frac{1}{p(y^{i})}, \tag{7}\]
as shown by the upper line in Figure 2.
**Estimating Noise Entropy**\(H(y|x)\). The stimulus sequence \(x\) is repeated \(N\) times, so there are a total of \(N\) similar response sequences. The conditional (i.e. noise) entropy is estimated as
\[H(y|x) \approx {\rm E}_{t}[H(y|x^{t})], \tag{8}\]
where \(x^{t}\) is the stimulus subsequence starting at time \(t\) and \(y\) is the corresponding response. Note that this average is taken over successive time indices between \(t\!=\!1\) and \(t\!=\!n-T\). \(H(y|x^{t})\) is the entropy of the output sequences \(y^{i}\) given \(x^{t}\) (analogous to Equation 7):
\[H(y|x^{t}) = \lim_{T\rightarrow\infty}\frac{1}{T}\sum_{i=1}^{m_{t}}\!\!p(y^{i }|x^{t})\log\frac{1}{p(y^{i}|x^{t})}, \tag{9}\]
where \(p(y^{i}|x^{t})\) is the number of instances of the sequence \(y^{i}\) expressed as a proportion of the number of different sequences of length \(T\) observed at time \(t\) in the output sequences (see Figure 1b). Note that the same stimulus value occurs at the same time in each trial, so \(p(y|x^{t})\!=\!p(y|t)\). As above, \(H(y|x^{t})\) is found by evaluating the right-hand side of Equation 9 for successively larger values of \(T\) and extrapolating to find the entropy at \(1/T\!=\!0\) (i.e. at \(T\!=\!\infty\)), as shown by the lower line in Figure 2. Finally, mutual information is estimated from Equation 1. See also Nemenman, Shafee, and Bialek (2002) [10].
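For illustration, the plug-in estimates in Equations 5-9 can be prototyped in a few lines. The following is a minimal sketch (not from the source text), assuming binary spike trains stored as NumPy arrays; the function names are illustrative, and no finite-sample bias corrections are applied.

```python
import numpy as np
from collections import Counter

def entropy_rate(spikes, T):
    """Plug-in estimate of H(T, dt)/T (bits per bin) from one long binary spike
    train, using the empirical probabilities of all length-T windows (Equation 5)."""
    windows = [tuple(spikes[i:i + T]) for i in range(len(spikes) - T + 1)]
    counts = np.array(list(Counter(windows).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum() / T

def noise_entropy_rate(trials, T):
    """Average over start times t of the entropy of length-T windows across the
    N repeated trials (Equations 8-9); `trials` has shape (N, n_bins)."""
    trials = np.asarray(trials)
    rates = []
    for t in range(trials.shape[1] - T + 1):
        windows = [tuple(row[t:t + T]) for row in trials]
        counts = np.array(list(Counter(windows).values()), dtype=float)
        p = counts / counts.sum()
        rates.append(-(p * np.log2(p)).sum() / T)
    return float(np.mean(rates))

# Usage: evaluate both rates for several T, plot against 1/T, and extrapolate
# (e.g. with np.polyfit) to 1/T = 0 to obtain H(y) and H(y|x); their difference,
# divided by the bin width in seconds, is the information rate of Equation 1.
```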
**Assumptions**. Inputs are repeated many times. Data are spike trains. The estimation process makes no assumptions regarding the distribution of variables and therefore requires large amounts of data.
## 3 The Upper Bound Method
If the noise \(\eta\) in the output \(y\) has an independent Gaussian distribution then the mutual information between \(x\) and \(y\) is maximised provided \(x\) also has an independent Gaussian distribution. Thus, if the input \(x\) is Gaussian and independent then the estimated mutual information provides an upper bound. Additionally, if each variable is Gaussian (but not necessarily independent) with a bandwidth of \(W\,\)Hz then its entropy is the sum of the entropies of its Fourier components [15].
In common with the direct method, input sequences need to be repeated many times, but the number \(N\) of trials (repeats) required here is fewer. This is because a Gaussian distribution is defined in terms of its mean and variance, so, in effect, we only need to estimate a few means and variances from the data.
**Estimating Output Signal Power**
1. Find the average output sequence \(\overline{y}\!=\!1/N\!\sum_{i=1}^{N}\!y^{i}\).
2. Obtain the Fourier coefficients (\(a(f)\),\(b(f)\)) of \(\overline{y}\) at each frequency \(f\).
3. Estimate the power of each frequency \(f\) as \({\cal S}(f)\!=\!a(f)^{2}+b(f)^{2}\).
**Estimating Output Noise Power**
1. Estimate the noise \(\eta^{i}\!=\!y^{i}-\overline{y}\) in each of the \(N\) output sequences.
2. Find the Fourier coefficients (\(a(f)\),\(b(f)\)) of \(\eta^{i}\) at each frequency \(f\).
3. Estimate the power at each frequency \(f\) as \({\cal N}^{i}(f)\!=\!a(f)^{2}+b(f)^{2}\).
4. Find the average power of each Fourier component \[{\cal N}(f) = \frac{1}{N}\sum_{i=1}^{N}{\cal N}^{i}(f).\] (10)
Assuming a Nyquist sampling rate of \(2W\,\)Hz, estimate the mutual information \(I(x\),\(y)\) by summing over frequencies
\[R_{info} = \sum_{f=0}^{W}\!\log\!\left(1+\frac{{\cal S}(f)}{{\cal N}(f)} \right)\ \mbox{bits/s}, \tag{11}\]
where \(R_{info}\geq I(x\),\(y)\), with equality if each variable is iid Gaussian.
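The steps above combine into a short routine. This is a hedged sketch (not from the source): it assumes evenly sampled continuous responses stored as an array of shape \((N,\,n_{bins})\), uses one common discretisation of Equation 11, and the function name and the guard against zero noise power are additions of the sketch.

```python
import numpy as np

def upper_bound_rate(trials, dt):
    """R_info from Equation 11; trials has shape (N, n_bins), dt is the bin width (s)."""
    trials = np.asarray(trials, dtype=float)
    mean_resp = trials.mean(axis=0)                          # step 1: average response
    signal = np.abs(np.fft.rfft(mean_resp)) ** 2             # steps 2-3: signal power S(f)
    noise = np.abs(np.fft.rfft(trials - mean_resp, axis=1)) ** 2
    noise = noise.mean(axis=0)                               # step 4: mean noise power N(f)
    df = 1.0 / (trials.shape[1] * dt)                        # spacing of frequency bins (Hz)
    snr = signal / np.maximum(noise, 1e-12)                  # guard against zero noise power
    return float(np.sum(np.log2(1.0 + snr)) * df)            # bits per second
```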
**Assumptions**. The response sequences to each of \(N\) repeats of the same stimulus sequence are continuous. Each output sequence is Gaussian, but not necessarily independent (iid).
## 4 The Lower Bound Method
Unlike previous methods, this method does not rely on repeated presentations of the same stimulus, and it can be used for spiking or continuous outputs. In both cases, we can use the neuron inputs \(x\) and outputs \(y\) to estimate a linear decoding filter \(\mathbf{w}_{d}\). When the output sequence is convolved with this filter, it provides an estimate \(x_{est}\!\!=\!\!\mathbf{w}_{d}\otimes y\) of the stimulus \(x\), where \(\otimes\) is the convolution operator. We assume that \(x\!=\!x_{est}+\xi_{est}\), so that the estimated noise in the estimated stimulus sequence is \(\xi_{est}\!=\!x-x_{est}\).
Assuming a bandwidth of \(W\,\mathrm{Hz}\) and that values are transmitted at the Nyquist rate of \(2W\,\mathrm{Hz}\), we Fourier transform [15] the stimulus sequence \(x\) to find the signal power \(\mathcal{X}(f)\) at each frequency \(f\) and Fourier transform \(\xi_{est}\) to find the power in the estimated noise \(\mathcal{M}(f)\) at each frequency. The mutual information is estimated by summing over frequencies:
\[R_{min} = H(x)-H(\xi_{est}) \tag{12}\] \[= \sum_{f}\!\log\mathcal{X}(f)-\sum_{f}\!\log\!\mathcal{M}(f)\] (13) \[= \sum_{f=0}^{W}\!\log\frac{\mathcal{X}(f)}{\mathcal{M}(f)}\ \ \mathrm{ bits/s}, \tag{14}\]
where \(R_{min}\!\leq\!I(x,y)\), with equality if each variable is iid Gaussian.
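A corresponding sketch for the lower bound (again illustrative, not from the source) assumes the decoding filter has already been estimated and applied, so that only the stimulus and reconstruction-noise spectra are needed; the small epsilon guard is an addition of the sketch.

```python
import numpy as np

def lower_bound_rate(x, x_est, dt, eps=1e-12):
    """R_min from Equation 14: x is the stimulus, x_est its linear reconstruction
    (e.g. x_est = np.convolve(y, w_d, mode='same')), dt the sample period (s)."""
    x = np.asarray(x, dtype=float)
    noise = x - np.asarray(x_est, dtype=float)       # estimated noise xi_est
    X = np.abs(np.fft.rfft(x)) ** 2                  # stimulus power X(f)
    M = np.abs(np.fft.rfft(noise)) ** 2              # noise power    M(f)
    df = 1.0 / (len(x) * dt)
    return float(np.sum(np.log2((X + eps) / (M + eps))) * df)   # bits per second
```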
**Assumptions**. The stimulus sequence \(x\) is Gaussian, but not necessarily independent (iid). Outputs are spiking or continuous.
**Further Reading**. This is an extract from Principles of Neural Information Theory (2018) [14], and is based on Strong et al. (1998) [17], Rieke et al. (1997) [12], Borst and Theunissen (1999) [2], Dayan and Abbot (2001) [4], and Niven et al. (2007) [11]. Relevant developments can be found in Nemenman, Shafee, and Bialek (2002) [10], Juusola et al. (2003, 2016) [8, 9], Ince et al. (2009) [7], Goldberg et al. (2009) [6], Crumiller et al. (2013) [3], Valiant and Valiant (2013) [18], and Dettner et al. (2016) [5]. A tutorial account of information theory can be found on arxiv [13], and in these books [14, 16].
|
2301.00688 | Active Learning for Neural Machine Translation | The machine translation mechanism translates texts automatically between
different natural languages, and Neural Machine Translation (NMT) has gained
attention for its rational context analysis and fluent translation accuracy.
However, processing low-resource languages that lack relevant training
attributes like supervised data is a current challenge for Natural Language
Processing (NLP). We incorporated a technique known as Active Learning with the
NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions of
low-resource language translation. With active learning, a semi-supervised
machine learning strategy, the training algorithm determines which unlabeled
data would be the most beneficial for obtaining labels using selected query
techniques. We implemented two model-driven acquisition functions for selecting
the samples to be validated. This work uses transformer-based NMT systems;
baseline model (BM), fully trained model (FTM), active learning least
confidence based model (ALLCM), and active learning margin sampling based model
(ALMSM) when translating English to Hindi. The Bilingual Evaluation Understudy
(BLEU) metric has been used to evaluate system results. The BLEU scores of BM,
FTM, ALLCM and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively.
The findings in this paper demonstrate that active learning techniques help
the model to converge early and improve the overall quality of the translation
system. | Neeraj Vashistha, Kriti Singh, Ramakant Shakya | 2022-12-30T17:04:01Z | http://arxiv.org/abs/2301.00688v1 | # Active Learning for Neural Machine Translation
###### Abstract
The machine translation mechanism translates texts automatically between different natural languages, and Neural Machine Translation (NMT) has gained attention for its rational context analysis and fluent translation accuracy. However, processing low-resource languages that lack relevant training attributes like supervised data is a current challenge for Natural Language Processing (NLP). We incorporated a technique known as Active Learning with the NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions of low-resource language translation. With active learning, a semi-supervised machine learning strategy, the training algorithm determines which unlabeled data would be the most beneficial for obtaining labels using selected query techniques. We implemented two model-driven acquisition functions for selecting the samples to be validated. This work uses transformer-based NMT systems: baseline model (BM), fully trained model (FTM), active learning least confidence based model (ALLCM), and active learning margin sampling based model (ALMSM) when translating English to Hindi. The Bilingual Evaluation Understudy (BLEU) metric has been used to evaluate system results. The BLEU scores of the BM, FTM, ALLCM and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively. The findings in this paper demonstrate that active learning techniques help the model to converge early and improve the overall quality of the translation system.
neural machine translation, natural language processing, active learning, semi-supervised machine learning, acquisition functions
## I Introduction
Machine translation, a sub-field of computational linguistics, studies the use of software to translate speech or text between languages. Advances in machine translation began with [14]. Machine translation systems progressed from rule-based to corpus-based approaches. Corpus-based machine translation systems are classified into Example-Based Machine Translation (EBMT), Statistical Machine Translation (SMT) and Neural Machine Translation (NMT). The scope of EBMT is quite limited because it requires a large corpus and not everything can be covered by examples; spoken languages are too vivid, diverse and ambiguous. Hence, SMT came into existence, which relies upon Bayesian inference. SMT predicts translation probabilities of phrase pairs in corresponding source-target languages. By increasing the size of the dataset, the probability of a certain pair of phrases can be estimated more reliably. However, the inability to capture context information, separately trained components and system complexity are the weak points of SMT, which led to the development of the NMT system [16]. The NMT system can handle sequence-to-sequence learning problems with variable-length source and target sentences as well as long-term dependency problems. The NMT system improves translation prediction and has excellent context-analyzing properties.
The superiority of NMT over phrase-based SMT is undeniable, and neural networks are used in most online machine translation engines. However, despite the growth achieved in the domain of machine translation, the data-hungry nature of NMT system development remains a major obstacle to extending this work to low-resource languages. In order to train a high-quality translation model, NMT requires a large bilingual corpus, but creating parallel corpora for most low-resource language pairs is costly and requires human effort. As a result, the language and geographical coverage of NMT has yet to reach new heights due to resource accessibility concerns and a preference for well-established assessment benchmarks. This compels additional research into the present state of NMT using novel techniques for such languages, particularly at a time when an efficient and equal exchange of knowledge across borders and cultures is a pressing need.
In this research, we utilise active learning, an iterative semi-supervised framework, to reduce the cost of data acquisition for machine translation. Active learning provides solutions to two challenging problems of data, namely, its quantity and quality. In active learning, the learner is able to query an oracle for labelling new data points. The quantity of labelling required to understand a concept can be substantially lower than annotating the entire unlabelled dataset because the learner selects the data points for annotation [1]. This approach is helpful in low-resource scenarios where unlabeled data is abundant, but manual labelling is expensive. Numerous NLP domains, including categorization, sequence labelling, spoken language comprehension, and machine translation, have benefited from the application of active learning [15][10][14][15][16][17][18][19][20][21][22]. In machine translation, active learning was first applied to SMT [11], which proposed statistical algorithms and demonstrated the effectiveness of active learning from the perspective of data coverage. When a large monolingual corpus is available, an active learning strategy is employed to select the most informative sentences for human translation.
The main aim of the current work is to extend Joey NMT, a neural machine translation toolkit based on PyTorch, with active learning techniques for low-resource language translation using the English-Hindi language corpus. We implemented two active learning sampling strategies, least confidence and margin sampling, to obtain the most useful samples to be supervised. First, we trained the transformer-based NMT
architecture to obtain the baseline model. We added the active learning technique to generate active learning NMT models. Further, we trained our transformer-based NMT model with all the data we provided to our baseline and active learning models, to evaluate the performance of the active learning models. As a result, we have four models: the baseline model, the fully trained model, the active learning least confidence based model, and the active learning margin sampling based model. We also used Byte Pair Encoding (BPE) to enable open-vocabulary translation, and we analysed different BLEU scores to assess improvements in the quality of the existing NMT output [1][1]. We discuss the literature survey in the following section and briefly describe our baseline and active learning model architecture methodology. We further explain the experimental settings used in this research. Then, we document a comparative analysis of the results acquired from the various models. Finally, we end the document with a conclusion and remarks on the future scope of active learning in machine translation.
## II Literature Survey
### _Machine Translation_
Machine Translation is a branch of computational linguistics that uses a computing device to convert text between languages. Machine translation was first presented by Petr Petrov Troyanskii [13]. Machine translation has been thoroughly researched using various models. Rule-based systems were the subject of earlier studies, which gave way to example-based systems in the 1980s. Starting in the late 1980s, statistical machine translation gained popularity, and various word-based and phrase-based methods requiring little or no linguistic knowledge were implemented. The application of deep neural networks in machine translation became a significant field of research after the introduction of deep neural networks in 2012.
### _Neural Machine Translation_
Neural machine translation (NMT) is a machine translation technique that uses an artificial neural network to estimate the likelihood of a word sequence and often models full sentences in a single integrated model. The neural network-based translation approach has been used to overcome Statistical Machine Translation (SMT) limitations, such as limited accuracy and context-analysing ability [1]. NMT uses only a fraction of the memory compared to conventional SMT models. Additionally, unlike conventional translation systems, the neural translation model is trained end-to-end to maximise translation performance [1][15][16]. NMT is based on a simple encoder-decoder network. The encoder aims to decompose the sentence structure and word pairs into embeddings, which can then be stored. The decoder then uses those embeddings to produce the translated sentence. NMT models require a wide corpus of training data based on translations or annotated data created by language specialists. The data used for training in popular languages like English, Spanish, and French has already been processed in huge amounts. Nevertheless, little or no translated data is available for less popular languages and dialects. Unique architectures and methods are required to support low-resource language NMT. The vanilla encoder-decoder architecture [15] is commonly used in modern NMT models. The encoder and decoder are jointly trained to optimise the conditional log-likelihood. Numerous encoder-decoder architectures have been created, each modelling the probability distribution differently. In this research, we use Joey NMT, which is built on the encoder-decoder architecture.
#### Ii-A1 Joey NMT
Joey NMT [10] is a PyTorch-based, simple neural machine translation toolkit. Joey NMT provides many popular NMT features in a small and simple code base. Despite its focus on simplicity, Joey NMT supports standard network architectures (RNN, transformer, different attention mechanisms, input feeding, configurable encoder/decoder bridge), label smoothing, standard learning techniques (dropout, learning rate scheduling, weight tying, early stopping criteria), beam search decoding, an interactive translation mode, visualization/monitoring of learning progress and attention, checkpoint averaging, and more, and achieves performance comparable to more complex toolkits on standard benchmarks. Table I shows that Joey NMT performs very well compared to other shallow, deep and transformer models. This experiment was conducted using the settings of [10], with the exact same data and pre-processing, and evaluated with WMT17-compatible SacreBLEU scores [20].
| System | de-en |
|---|---|
| [1] | 22.5 |
| [1] | 27.6 |
| **Joey NMT** (RNN, word) | 27.1 |
| **Joey NMT** (RNN, BPE32k) | 27.3 |
| **Joey NMT** (Transformer, BPE32k) | 31.0 |

TABLE II: IWSLT14 test results.
| System | Groundhog RNN (en-de) | Groundhog RNN (lv-en) | layers | Best RNN (en-de) | Best RNN (lv-en) | Transformer (en-de) | Transformer (lv-en) |
|---|---|---|---|---|---|---|---|
| NeuralMonkey | 13.7 | 10.5 | 1/1 | 13.7 | 10.5 | - | - |
| OpenNMT-Py | 18.7 | 10.0 | 4/4 | 22.0 | 13.6 | - | - |
| Nematus | 23.9 | 14.3 | 8/8 | 23.8 | 14.7 | - | - |
| Sockeye | 23.2 | 14.4 | 4/4 | 25.6 | 15.9 | 27.5 | 18.1 |
| Marian | 23.5 | 14.4 | 4/4 | 25.9 | 16.2 | 27.4 | 17.6 |
| Tensor2Tensor | - | - | - | - | - | 26.3 | 17.7 |
| **Joey NMT** | 23.5 | 14.6 | 4/4 | 26.0 | 15.8 | 27.4 | 18.0 |

TABLE I: Results on WMT17 newstest2017.
In another study, the data, pre-processing, and word-based vocabulary of Wiseman and Rush [21] were used, and the results were evaluated with SacreBLEU [17]. Table II shows that Joey NMT performs well here, with both recurrent and Transformer models.
The Joey NMT toolkit has been used to implement both the Transformer and the RNN models [14].
### _Active Learning_
Active learning is a semi-supervised machine learning approach that uses acquisition functions to select the most informative sentences to be labelled, i.e., those with the highest impact on training a supervised model. Acquisition functions fall into two categories: model-driven and data-driven. All of the techniques we employ for the model-driven function are predicated on the concept of uncertainty, while a word frequency-based technique that considers linguistic factors can serve as a data-driven function. It has been found that active NMT training is advantageous for both varieties of acquisition functions [15]. In this way, we can build a non-redundant translated corpus on which NMT can be trained to achieve better performance than models trained on randomly built corpora. The selective sampling approach proposed by [11] is based on the principle of membership queries. From an example-driven perspective, however, the learner asks the teacher about data it is unsure of, i.e., data for which it believes misclassification is likely [18]. In the following subsections, various approaches to active learning are discussed.
#### Ii-C1 Query by uncertainty
Query by uncertainty, such as uncertainty sampling and uncertainty reduction, queries the learner on those instances about which the current hypothesis is least certain. Based on the ideas presented by [11], the learner chooses which instances to ask the oracle about. In query by uncertainty, a single classifier is first learned from labelled data and then used to examine the unlabeled data. Next, a human annotator can classify the instances in the unlabeled data set for which the classifier is least certain. The third ingredient is confidence scores: in this simple process, the base learner must provide a score indicating how sure it is of each prediction it makes [18].
#### Ii-C2 Query by committee
Query by committee is a selective sampling procedure similar to query by uncertainty, with the only difference being that query by committee is a multi-classifier technique. In the original conception of query by committee, several hypotheses are first randomly sampled from the version space [20]. The committee is then used to review the collection of unlabeled data. Finally, the disagreement between the hypotheses about the class of a given instance is used to determine whether that instance should be passed to the human annotator for labelling. In the original setting, query by committee is only applicable with base learners for which access to and sampling from the version space are possible [19][10][11].
#### Ii-C3 Active learning with redundant views
Using redundant views is similar to the query by committee approach described above. However, rather than arbitrarily sampling the version space or otherwise manipulating the original training data to obtain a committee, using redundant views means dividing the feature set into several sub-sets or views, each of which is sufficient, to some degree, to describe the underlying problem [18].
#### Ii-C4 Related work
In terms of NLP, named entity recognition and text classification using active learning have been extensively researched [23]. [15] used acquisition functions based on attention for NMT. Reinforcement learning was introduced to actively train an NMT model by [16]. One study proposes two new and effective sentence selection techniques for active learning: selection based on semantic similarity and on decoder probability. Experiments on Indonesian-English and Chinese-English show that these selection approaches are superior to random selection and two conventional selection methods [15]. Further, a comprehensive evaluation of different active learning algorithms on a publicly available dataset (WMT'13) using a state-of-the-art NMT architecture has been conducted [18]. Information retrieval, named entity recognition, document categorization, part-of-speech tagging, decoding, word sense disambiguation, spoken language understanding, phone sequence recognition, automated transliteration, and sequence segmentation have all been effectively addressed with active learning [18].
## III Methodology
In this section, we outline the NMT and active learning NMT architectures. First, we discuss the RNN and transformer-based NMT models and how we use them in the baseline models. Then, we describe how we incorporate an active learning framework into our baseline NMT models. In an active learning framework, we utilise an oracle for labelling new data points. We propose acquisition functions, which have been most commonly worked on in active learning, where the learner queries the instance about which it has the least certainty [19][16]. These query techniques are based on a model-driven approach. The model, the labelled dataset, and the unlabeled dataset are all used in model-driven techniques to sample sentences. These techniques receive direct input from the model, which may aid in sampling more sentences from weakly modelled areas of the input space. We describe two model-driven approaches, least confidence and margin sampling, which select instances where the model is least certain about its prediction. Further, to understand how our baseline and active learning models perform, we use the evaluation metrics BLEU (Bilingual Evaluation Understudy) and perplexity, described in [19][10], which are the governing metrics for NMT tasks.
### _NMT Architecture_
The purpose of NMT, a specific form of sequence-to-sequence learning, is to produce another sequence of words in the target language from a source language word sequence. This work uses autoregressive recurrent and fully attentional
models from Joey NMT. In this, a source sentence of length \(l_{x}\) is represented by a sequence of one-hot encoded vectors \(x_{1},x_{2},..,x_{l_{x}}\) for each word. Analogously, a target sequence of length \(l_{y}\) is represented by a sequence of one-hot encoded vectors \(y_{1},y_{2},..,y_{l_{y}}\).
#### Ii-A1 Rnn
The baseline NMT architecture implements the RNN encoder-decoder variant from [10]. The embedding matrix \(E_{src}\) and a recurrent computation of states allow the encoder RNN to convert the input sequence \(x_{1},x_{2},..,x_{l_{x}}\) into a sequence of vectors \(h_{1},h_{2},..,h_{l_{x}}\).
\[h_{i}=RNN(E_{src}x_{i},h_{i-1})\]
\[h_{0}=0\]
Either an LSTM or a GRU can be used to build the RNN. Hidden states from both directions are combined to generate \(h_{i}\) for a bidirectional RNN. A vector of zeros makes up the initial encoder hidden state \(h_{0}\). Each resulting output sequence, \(h_{1},h_{2},..,h_{l_{x}}\), can be used as the input to the subsequent RNN layer to create several layers. The decoder employs input feeding, in which an attentional vector \(\tilde{s}\) is concatenated to the representation of the preceding word as input to the RNN. Decoder states are calculated as follows:
\[\mathbf{s}_{t}=\mathrm{RNN}\left(\left[E_{trg}\mathbf{y}_{t-1};\tilde{\mathbf{ s}}_{t-1}\right],\mathbf{s}_{t-1}\right)\]
\[\mathbf{s}_{0}=\begin{cases}\tanh\left(W_{\text{bridge}}\ \mathbf{h}_{l_{x}}+\mathbf{b}_{\text{bridge}}\ \right)&\text{if bridge}\\ \mathbf{h}_{l_{x}}&\text{if last}\\ \mathbf{0}&\text{otherwise}\end{cases}\]
\[\tilde{\mathbf{s}}_{t}=\tanh\left(W_{att}\left[\mathbf{s}_{t};\mathbf{c}_{t} \right]+\mathbf{b}_{att}\right)\]
The starting decoder state can be set to be a vector of zeros, a non-linear transformation of the last encoder state (referred to as "bridge"), or the same as the last encoder state (referred to as "last"). The previous decoder state \(s_{t-1}\) and each encoder state \(h_{i}\) are scored by an attention mechanism, and the scoring function is either a multi-layer perceptron [1] or a bilinear transformation [10]. A vector \(o_{t}=W_{out}\tilde{s}_{t}\), which holds a score for each token in the target language, is created by the output layer. These scores can be understood as a probability distribution over the target vocabulary \(\mathcal{V}\) that defines an index over the target tokens \(v_{j}\) using a softmax transformation [17].
\[p\left(y_{t}=v_{j}\ |\ x,y_{<t}\right)=\frac{\exp\left(\mathbf{o}_{t}[j]\right) }{\sum_{k=1}^{|\mathcal{V}|}\exp\left(\mathbf{o}_{t}[k]\right)}\]
#### Ii-A2 Transformer
Joey NMT uses code from The Annotated Transformer [14] to implement the Transformer from [14]. First, given the \(x_{1},x_{2},..,x_{l_{x}}\) input sequence, create the matrix \(X\in R^{l_{x}\times d}\), where \(l_{x}\) is the length of the sentence and \(d\) is the dimensionality of the embeddings. Next, we use \(E_{src}x_{i}\) to look up the word embedding for each input word, then we apply a position encoding and stack the word embeddings that result. The following learnable parameters are defined:
\[A\in R^{d\times d_{a}}\quad B\in R^{d\times d_{a}}\quad C\in R^{d\times d_{a}}\]
where \(d_{o}\) is the output dimensionality, and \(d_{a}\) is the attention space's dimension. These matrices transform the input matrix into new word representations (\(H\)) by paying attention to all other source words.
\[H=softmax(XAB^{T}X^{T})XC\]
Multi-headed attention is implemented in Joey NMT, where this transformation is computed \(k\) times, once for each head, with separate parameters \(A\), \(B\), and \(C\). We concatenate the results of computing all \(k\) \(H\)s in parallel, apply layer normalisation, and then add a final feed-forward layer.
\[H=[H^{(1)};...;H^{(k)}]\]
\[H^{{}^{\prime}}=layer\text{-}norm(H)+X\]
\[H^{(enc)}=feed\text{-}forward(H^{{}^{\prime}})+H^{{}^{\prime}}\]
To ensure that \(H\in R^{l_{x}\times d}\), we set \(d_{o}=d/k\). By setting \(X=H^{(enc)}\) and rerunning the calculation, several layers can be piled on top of one another. In contrast to the encoder, the transformer decoder receives as input the stacked target embeddings \(Y\in R^{l_{y}\times d}\).
\[H=softmax(YAB^{T}Y^{T})YC\]
Setting the corresponding attention scores to \(-\infty\) before the softmax for each target position prevents the decoder from attending to subsequent target words. We compute multi-headed attention again, but this time between intermediate decoder representations \(H^{{}^{\prime}}\) and final encoder representations \(H^{(enc)}\), after obtaining \(H^{{}^{\prime}}=H+Y\) and before the feed-forward layer.
\[Z=softmax(H^{{}^{\prime}}AB^{T}H^{(enc)T})H^{(enc)}C\]
\[H^{(dec)}=feed\text{-}forward(layer\text{-}norm(H^{{}^{\prime}}+Z))\]
Using \(H^{(dec)}W_{out}\), we predict the target words.
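To make the algebra above concrete, the following is a small NumPy sketch of a single attention head exactly as written (note that, unlike the standard Transformer, the equations above omit the \(1/\sqrt{d_{a}}\) scaling, and the sketch follows them); the function names and the masking convention are illustrative assumptions, not Joey NMT code.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, A, B, C, mask=None):
    """H = softmax(X A B^T X^T) X C.  X: (l_x, d); A, B: (d, d_a); C: (d, d_o).
    `mask` is a boolean (l_x, l_x) array; False entries are set to -inf before
    the softmax, which is how the decoder blocks attention to future positions."""
    scores = (X @ A) @ (X @ B).T                # (l_x, l_x) attention scores
    if mask is not None:
        scores = np.where(mask, scores, -np.inf)
    return softmax(scores, axis=-1) @ (X @ C)   # (l_x, d_o) new word representations
```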
### _Active Learning NMT_
When dealing with a low-resource language, obtaining enough labelled data to train the NMT model becomes prohibitively expensive. Aiming to address this problem, in the AL framework an acquisition function selects a subset of sentences worth labelling for NMT training. Streaming and pooling are the two procedures through which data can be sent to the acquisition function. In a streaming scenario, the acquisition function is presented with training samples one at a time, and it either ignores a sample or sends it to be queried for its label. In a pooling scenario, the acquisition function assesses the log loss probabilities for the unlabeled data and chooses a portion of it for oracle labelling. We select the pooling method, as it is more practical to send batches of data for labelling rather than a single sentence at disjoint intervals of time.
#### Iii-B1 Oracle
The oracle plays a crucial part in a machine learning task. When given a source sentence for NMT, an oracle (typically an expert human translator) can produce the ground truth translation. A parallel corpus is gradually constructed by using the oracle to translate the selected sentences. In our study, we designed the algorithm such that the oracle can either be a human annotator or, if the interaction parameter is set to False, extract the corresponding target sentences from the parallel corpus. In our study, the unlabeled data consists of the source sentences of a parallel corpus whose target sentences we hide; to label new data points from the unlabeled data, we extract the corresponding target sentences. We can set the interaction parameter to True if we want labels from a human annotator.
#### Iii-B2 Acquisition Function
Sentences with higher scores are more likely to be chosen as the training corpus. There are two categories of acquisition functions: model-driven and data-driven. A model-driven acquisition function uses a sentence as the model input and output and assigns a score accordingly. The informativeness of the sentence itself is frequently a concern of a data-driven acquisition function, which can score each sentence before training the model.
#### Iii-B3 Active Learning Framework
Algorithm 1 describes the active learning NMT implementation [15]. It expects a labelled parallel corpus (\(\mathcal{L}\)) for training the baseline NMT system (\(\mathcal{M}\)), an unlabeled monolingual corpus (\(\mathcal{U}\)) for sampling new data points for translation, an acquisition function \(\psi(.)\) for estimating the significance of data points in \(\mathcal{U}\), and a batch size (\(\mathcal{B}\)) for selecting the number of data points to sample in each iteration. The budget is the number of query iterations set during the training process. Our experiment already has reference translations for all the unlabeled data points. We repeat this process until we have used all the data points in \(\mathcal{U}\). In each iteration, we first train an NMT system with \(\mathcal{L}\). Then, using an acquisition function that accounts for \(\mathcal{L}\), \(\mathcal{U}\), and \(\mathcal{M}\), we assign a score to each sentence in \(\mathcal{U}\). The next section goes into detail about the acquisition function and its variations, which are a crucial part of all active learning algorithms. Finally, each sentence in the monolingual source corpus is scored using the acquisition function.
```
1:Given: Parallel data \(\mathcal{L}\), Monolingual source language data \(\mathcal{U}\), Sampling strategy \(\psi(.)\), Sampling batch size \(\mathcal{B}\).
2:while Budget \(\neq\) EMPTY do
3:\(\mathcal{M}=\) Train \(NMT\) system \((\mathcal{L})\);
4:for\(x\in\mathcal{U}\)do
5:\(f(x)=\psi(x,\mathcal{U},\mathcal{L},\mathcal{M})\)
6:endfor
7:\(X_{B}=\) TopScoringSamples \((f(x),\mathcal{B})\);
8:\(Y_{B}=\) HumanTranslation \((X_{B})\)
9:\(\mathcal{U}=\mathcal{U}-X_{B}\);
10:\(\mathcal{L}=\mathcal{L}\cup\ \{X_{B},Y_{B}\}\);
11:endwhile
12:return\(\mathcal{L}\)
```
**Algorithm 1** Batch Active Learning for NMT
Then, the best \(\mathcal{B}\) sentences are picked to be translated. Finally, the reference translations of these sentences are added to \(\mathcal{L}\), and the sentences are removed from \(\mathcal{U}\). In this way, a parallel corpus is gradually constructed by utilising an oracle to translate the sentences with high scores. The process continues for the specified number of iterations. The NMT model is then retrained using the enlarged parallel corpus, so that it is trained on those labelled examples before being evaluated. As a result, new data is gradually added to the NMT system.
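A compact Python rendering of Algorithm 1 is given below. It is a sketch under stated assumptions rather than the actual Joey NMT integration: `train_fn`, `score_fn`, and `oracle` are hypothetical callables standing in for NMT training, an acquisition function, and the human or simulated translator, respectively.

```python
def batch_active_learning(labelled, unlabelled, score_fn, oracle,
                          batch_size, budget, train_fn):
    """labelled: list of (src, trg) pairs; unlabelled: list of src sentences."""
    for _ in range(budget):
        if not unlabelled:
            break
        model = train_fn(labelled)                                 # line 3: retrain NMT system
        scored = sorted(unlabelled, key=lambda x: score_fn(x, model), reverse=True)
        batch = scored[:batch_size]                                # line 7: top-scoring X_B
        translations = oracle(batch)                               # line 8: obtain Y_B
        labelled.extend(zip(batch, translations))                  # line 10: grow L
        chosen = set(batch)
        unlabelled = [x for x in unlabelled if x not in chosen]    # line 9: shrink U
    return labelled
```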
### _Model-driven Query Strategies_
One of the key elements of active learning is to have a meaningful strategy for obtaining the most useful samples to be supervised. For this, we require an evaluation of the informativeness of unlabeled samples. Model-driven approaches estimate the prediction uncertainty of a source sentence given the machine translation model parameters and select sentences with high uncertainty for training the model. The sampling strategies used in this work are based on uncertainty. Settles and Craven [11] tried these methods on sequence labelling tasks. The idea behind the uncertainty sampling method is to select those instances for which the model has the least confidence to be correctly translated. Therefore, all techniques compute, for each sample, an uncertainty score. The selected sentences will be those with the highest scores.
#### Iii-C1 Least Confidence
[12] describe a simple uncertainty-based least confidence approach for sequence models. The least confidence sampling technique chooses the instance for whose most likely labelling the active learner is least certain of its prediction. For sequence-based models, an earlier least confidence sampling strategy is used, where the input sequence is designated by \(x\) and the label sequence is represented by \(y\). Its query strategy formulation \(\phi^{LC}(x)\) can be written as follows:
\[\phi^{LC}(x)=1-P(y^{*}|x;\theta).\]
Here, the most likely label sequence according to the learner is represented by \(y^{*}\), and the posterior probability of \(y^{*}\) given \(x\) is denoted by \(P(y^{*}|x;\theta)\).
#### Iii-C2 Margin Sampling
Another uncertainty technique put forth by [11] involves querying the instance with the smallest margin between the posteriors for its two most likely labellings. This strategy is known as margin \((M)\).
\[\phi^{M}(x)=-\left(P(y^{*}_{1}|x;\theta)-P(y^{*}_{2}|x;\theta)\right).\]
Here, the first and second best label sequences are \(y^{*}_{1}\) and \(y^{*}_{2}\), and their posterior probabilities are denoted by \(P(y^{*}_{1}|x;\theta)\) and \(P(y^{*}_{2}|x;\theta)\), respectively. With a narrow margin, the model cannot clearly distinguish between the best and the second-best translation. This concept is incorporated into active learning to select the samples.
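For completeness, the two scores can be computed from the N-best log-probabilities returned by beam search as in the sketch below. This is illustrative, not the exact Joey NMT code; in practice, sentence-level probabilities obtained by summing token log-probabilities are extremely small, so length-normalised scores are often used instead.

```python
import math

def least_confidence_score(nbest_logprobs):
    """phi_LC(x) = 1 - P(y* | x); nbest_logprobs lists hypothesis log-probs, best first."""
    return 1.0 - math.exp(nbest_logprobs[0])

def margin_score(nbest_logprobs):
    """phi_M(x) = -(P(y*_1 | x) - P(y*_2 | x)); narrow margins give the highest scores."""
    return -(math.exp(nbest_logprobs[0]) - math.exp(nbest_logprobs[1]))
```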
### _Evaluation_
Evaluation is highly challenging and non-trivial in various NLP tasks. In NMT, we evaluate by comparing the model's hypothesis with the actual reference sentence. Scores should only be compared within a language pair and not across different languages. We grade our models using the following two metrics.
#### Iii-C1 Bilingual Evaluation Understudy (BLEU)
Bilingual Evaluation Understudy (BLEU) is the standard evaluation metric in NMT, proposed by [15]. This technique is less expensive, quicker, and more linguistically unrestricted than human evaluation. It is an algorithm that evaluates the quality of machine-translated text. The main idea behind BLEU is that the closer a machine translation is to a professional human translation, the better it is. BLEU calculates a weighted geometric mean of the precisions of the n-grams from the hypothesis that appear in the reference sentence. BLEU employs a modified form of precision to compare output text to various reference phrases. The reference sentences are human-translated text. Additionally, it imposes a brevity penalty on short translations. Usually given as a value between 0 and 1, output values can be quickly converted to percentages if necessary. A larger number of reference sentences will result in higher BLEU scores. A higher BLEU score indicates greater machine translation quality. BLEU is computed from the modified n-gram precisions. Specifically,
\[BLEU=BP\cdot\exp\left(\sum_{n=1}^{N}w_{n}\log p_{n}\right)\]
where \(p_{n}\) is the modified precision for \(n\)-grams, the base of the logarithm is the natural base \(e\), \(w_{n}\) is a weight between 0 and 1 for \(\log p_{n}\) with \(\sum_{n=1}^{N}w_{n}=1\), and BP is the brevity penalty that penalizes short machine translations.
\[BP=\begin{cases}1&\text{if c$>$r}\\ \exp(1-r/c)&\text{c$\leq$r}\end{cases}\]
where \(c\) is the total number of unigrams (i.e., the length) of all the candidate sentences, and \(r\) is the sum of the best match lengths over the candidate sentences in the corpus. Here the best match length is the reference sentence length closest to that of the candidate sentence.
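The two formulas combine as in the short sketch below (an illustrative helper, assuming the modified n-gram precisions have already been computed; the scores reported in this work come from standard BLEU/SacreBLEU tooling).

```python
import math

def bleu_from_precisions(precisions, c, r, weights=None):
    """Combine modified n-gram precisions p_1..p_N with the brevity penalty."""
    weights = weights or [1.0 / len(precisions)] * len(precisions)
    bp = 1.0 if c > r else math.exp(1.0 - r / c)                 # brevity penalty
    log_avg = sum(w * math.log(p) for w, p in zip(weights, precisions))
    return bp * math.exp(log_avg)

# Example: bleu_from_precisions([0.6, 0.4, 0.3, 0.2], c=18, r=20) is roughly 0.31
```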
#### Iii-C2 Perplexity
How well a probability distribution or probability model predicts a sample is measured by perplexity. Perplexity is a metric used in natural language processing to assess language models. A language model is a probability distribution applied to complete texts or sentences. A low perplexity value suggests that the probability distribution may accurately predict the sample. The reciprocal of the (geometric) average probability that the model allocated to each word in the test set \(T\) is the perplexity \(PP_{p}(T)\) of the model p. It is related to cross-entropy by the below equation.
\[PP_{p}(T)=2^{H_{p}(T)}\]
where \(H_{p}\) is cross-entropy. Lower cross-entropies and perplexities are preferable [15].
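In code, the relationship is a one-liner; the sketch below (illustrative only) assumes per-token base-2 log-probabilities assigned by the model over the test set.

```python
import math

def perplexity(token_log2_probs):
    """PP = 2^H, where H is the mean negative log2-probability (cross-entropy in bits)."""
    H = -sum(token_log2_probs) / len(token_log2_probs)
    return 2.0 ** H
```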
## IV Experiments
### _Dataset_
In this research, we use an English-Hindi language pair dataset. For training our NMT models, we use the IIT Bombay English-Hindi Parallel Corpus [10]. The parallel corpus has been compiled from a variety of existing sources (primarily OPUS [16], HindEn [1] and TED [1]) as well as corpora developed at the Center for Indian Language Technology (CFILT), IIT Bombay over the years. The corpus consists of sentences, phrases, and dictionary entries, spanning many applications and domains. This data is publicly available, and we use an open-source platform, Huggingface, as it provides a consistent way of accessing data using an API. This reduces the burden on our end to maintain a data repository. The dataset can also be accessed in raw format from the IIT Bombay online repository1. The provided training, dev and test corpora consist of 1.6 million, 520 and 2507 English-Hindi sentence pairs, respectively. We do not use the provided dev and test corpora because of their small size. Instead, we construct our own dev and test corpora, with 40K sentences each, from the 1.6 million-sentence dataset in the data preprocessing stage (see the data split in Table III).
Footnote 1: [http://www.cfili.iitb.ac.in/iitb_parallel](http://www.cfili.iitb.ac.in/iitb_parallel)
For active learning, we randomly picked 30% of the data from the training set (see Table IV).
### _Data Preprocessing_
The first step after acquiring data from the Huggingface Data API is data pre-processing, in which we prepare and clean the dataset and reduce its noise. This task included converting all sentences to lowercase, removing special and bad characters, removing stop words, removing extra white-spaces and tabs, and removing characters not belonging to the language (for example, we came across Urdu characters in our Hindi corpus). Further, to avoid data leakage, we checked for missing and duplicated sentences and ensured test data was filtered out of the training and dev sets.
We use the IIT Bombay English-Hindi Parallel Corpus consisting of 1.6 M sentence pairs. In order to remove bias,
| Data Split | Size |
|---|---|
| Baseline Training Data | 1086795 |
| Active Learning Training Data | 465768 |
| Dev Data | 40856 |
| Test Data | 40858 |
| Total Data | 1634277 |

TABLE IV: Active learning dataset split.
| Training Data | Dev Data | Test Data | Total Data |
|---|---|---|---|
| 1552563 | 40856 | 40858 | 1634277 |

TABLE III: Full dataset split.
the data was shuffled, and test and validation splits of 40k parallel sentences each were used. The remaining parallel corpus was utilised for training the models. From the training data, we randomly sampled 70% of the whole bilingual training dataset, which we call the Baseline Training Data (see Table IV). We used the Baseline Training Data to train an initial NMT model (the Baseline NMT model), and the remaining data was repurposed as the active learning corpus (\(\sim\)465k) used for simulating the active learning experiments. We performed the random sampling once, initially, and fixed the labelled and unlabelled datasets for all the experiments for a fair comparison.
Since we experiment in a simulated active learning framework, the target sentences in the active learning dataset are hidden while source sentences are scored with the different active learning strategies. Once the active learning algorithm samples a batch from the 465k source sentences of the active learning dataset, the sampled sentences, along with their corresponding "hidden" translations, are added to the labelled dataset. We calculated various data statistics to know our sentences better, which include the vocabulary size and the length of the sentences. The vocabulary sizes for the English and Hindi languages are 21K and 37K, respectively. Table V shows the summary data statistics.
Next, the key preprocessing step is to tokenise the source and target sentences and create a dictionary, which indexes the words used in the training process. The dictionary lists all unique words. We used the Moses [13] toolkit for tokenising and cleaning the English dataset. The Hindi dataset is first normalised with the Indic NLP library2, followed by tokenisation with the same library. Spell normalisation is crucial when processing Hindi text: a single word in Hindi can be written in many forms with the same underlying meaning, and in the normalisation stage all such similar words are mapped to a single word to mitigate lexical redundancy. We employ byte-pair encoding (BPE) [15] to generate a subword vocabulary for each language. It is a tokenisation technique whose definition is based on the number of merges. The algorithm separates the corpus into words by removing white space, counts all adjacent character pairs, combines the characters of the most common pair, and adds the merged symbol to the vocabulary. The resulting BPE tokenisation used 16k merge operations.
Footnote 2: [https://anoopkunchukuttan.github.io/indic_nlp_library/](https://anoopkunchukuttan.github.io/indic_nlp_library/)
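For intuition, the merge loop at the heart of BPE can be sketched in a few lines of plain Python. This is the textbook algorithm of Sennrich et al., not the exact subword-nmt invocation used in this work; the vocabulary format with space-separated symbols and an end-of-word marker is an assumption of the sketch.

```python
import re
from collections import Counter

def get_stats(vocab):
    """Count adjacent symbol pairs in a {space-separated word: frequency} vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Merge the chosen symbol pair everywhere it occurs as adjacent symbols."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# e.g. 16k merge operations, as in this work:
# vocab = {'l o w </w>': 5, 'l o w e r </w>': 2}
# for _ in range(16000):
#     pairs = get_stats(vocab)
#     if not pairs:
#         break
#     vocab = merge_vocab(max(pairs, key=pairs.get), vocab)
```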
### _Baseline NMT Experimental Setup_
For all of our experiments, we used the JoeyNMT toolkit, with the Transformer model. For our transformer model, we used 6 layers in both the encoder and decoder, with 256 hidden units in each layer. The word embedding size was set to 256, with 4 attention heads. We used Byte Pair Encoding (BPE) to learn the vocabulary with 16k merge operations, using subword-nmt for learning the BPE vocabulary. Since the writing systems and vocabularies of English and Hindi are separate, the BPE models are trained separately. Figure 1 shows how our baseline architecture operates.
The model is trained using the Adam optimiser [11] with \(\beta\)1 = 0.9, \(\beta\)2 = 0.98, a learning rate of 0.0003, a warm-up of 1K steps, and a maximum sentence length of 60. We used early stopping based on perplexity (ppl) with patience=5. Cross-entropy loss was used, and the dropout probability was set to 0.3. We used a minimum learning rate of \(1\times 10^{-8}\) and the number of epochs to control the training. Here, we also utilised the plateau scheduler, which lowers the learning rate by a factor of 0.7 whenever the ppl has not improved for patience validations. Every time a new best ppl is attained, a checkpoint of the model parameters is saved. We keep only the three best checkpoints to save memory. Later, we use the best checkpoint parameters to train our active learning NMT model. Xavier weight initialisation has been used. Each layer in our encoder and decoder includes attention sub-layers in addition to a fully connected feed-forward network that is applied to each position separately and identically.
It is composed of two linear transformations separated by a ReLU activation. First, we transform the input and output tokens into vectors of model dimension using learnt embeddings. To translate the output of the decoder into estimated next-token probabilities, we also employ the standard learnt linear transformation and softmax function. Our model's pre-softmax linear transformation and the two embedding layers share the same weight matrix [16]. We used label smoothing with a value of \(\epsilon(ls)\) = 0.1 during training. As a result, the model learns to be more unsure, which hurts perplexity but improves accuracy and BLEU score. All models are trained with a batch size of 4096 sentences for 40 epochs. After 1000 mini-batches, the model is validated on the dev dataset. Using greedy sampling to decode the test dataset, we evaluated our model and ran inference with a beam size of 5 and a batch size of 1024. We use the BLEU score and ppl as our evaluation metrics.
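As a rough guide, the optimiser and plateau scheduler described above map onto standard PyTorch calls roughly as follows. This is a minimal sketch under stated assumptions (the placeholder `model` stands in for the Transformer described above); in practice, JoeyNMT configures all of this through its YAML configuration file, and the learning-rate warm-up is handled separately.

```python
import torch

model = torch.nn.Linear(8, 8)  # placeholder for the Transformer described above
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.98))
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.7, patience=5, min_lr=1e-8)

# after each validation pass:
# scheduler.step(validation_ppl)   # multiply lr by 0.7 when ppl stops improving
```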
### _Active Learning NMT Experimental Setup_
The active learning model is a selective sampling technique. Referring to the architecture presented in the Active Learning Framework section, in our approach we first train a baseline model (\(\mathcal{M}\)). Then, this model is built on top of the NMT
| | Max length | Avg length | Median length |
|---|---|---|---|
| en | 1681.0 | 13.94 | 10.0 |
| hi | 1291.0 | 13.74 | 10.0 |

TABLE V: Sentence length.
Fig. 1: Baseline NMT Architecture
model, which utilises the base parallel corpus (\(\mathcal{L}\)) until it reaches a local minimum. Finally, we train this baseline model and set the stage for the active learning model. Figure 2 shows how our active learning NMT architecture operates.
To build the active learning model, we separate the target language and create a monolingual corpus (\(\mathcal{U}\)) from the active learning corpus. On a subset of this corpus, we do random sampling of 20k samples and find the top N predictions (\(X_{B}\)) in the form of log loss probabilities. Then, an oracle iteratively queries these predictions, applying different acquisition functions. The acquisition function's role is to select the most informative samples based on the algorithm, thresholds and heuristics. The selected samples are then paired up with the actual target translations (\(\mathcal{L}\)). This can be done by the oracle either interactively or using the existing parallel corpus. These samples are then fed into the baseline model to improve its score.
The top N log loss probabilities are selected using the beam search algorithm. The algorithm selects multiple tokens for a position in a given sequence based on conditional probability. It can keep any number of N best alternatives through a hyper-parameter known as the beam width, which in our experiment is set to 5. In addition, we implemented two uncertainty-based acquisition functions, namely least confidence and margin sampling. These sampling techniques were originally formulated for classification problems involving probabilities of different classes; in our work, we transformed and implemented them for the machine translation problem.
Active learning is an iterative process, where in each iteration a few samples are added to the training data. First, the oracle randomly picks 6% of the active learning monolingual corpus (the pool size), which turns out to be about 20K samples. From there, the acquisition function returns 10K samples (the query size), which are added to the training data each time the oracle is queried. This active learning loop continues for 2 epochs over 20 iterations of querying the oracle. In each epoch, part of the active learning data pool is added to the training data until the pool is exhausted.
In order to add active learning to JoeyNMT, we extended its features. JoeyNMT inherits its dataset classes from the underlying framework; we implemented a new class with which we could separate the parallel corpus from the monolingual corpus. The base JoeyNMT architecture had issues with beam search, where it would fail to provide the N top log loss probabilities for sentences. This would essentially be fatal for our acquisition functions; we successfully fixed this obstacle. In addition, the predictions returned by the baseline architecture did not support batch loss reporting. We added this functionality to allow the acquisition function to perform and scale to batches of hundreds of sentences. Just as the baseline model provides different execution modes such as train, test and translate, we created another mode, called active learning mode, which is an addition to the existing pipeline and extends the functionality of the baseline JoeyNMT. We use BLEU scores and perplexity (ppl) on the same dev and test datasets for validation and evaluation of all our models, as we did for the baseline model.
## V Result and Analysis
We evaluate the effectiveness of active learning on two metrics: the BLEU and perplexity scores. In NMT tasks, these are the standard metrics, unlike other classical neural network tasks, which rely on accuracy, precision and recall. Figure 3 shows the model-wise BLEU and perplexity scores. All the models are trained for 20 epochs. The BLEU score for the fully trained model peaks at 22.56, while the baseline model, which trains on 70% of the training set, struggles and plateaus at around 16.26.
The active learning models, margin and least confidence, are built on top of the baseline model. As clearly depicted in Figure 3 and Table VI, they are more performant. The BLEU scores of both models signal that providing sentences on which the model performs poorly gives better performance and helps the model converge quickly.
Table VI describes our final findings on the test dataset and gives a good contrast of how active learning achieves better results without training on the full dataset. We use only \(\sim\)40,000 of the 465,768 total active learning samples and achieve a better-performing model at a lower cost, both in training time and in the amount of data.
Figure 5 gives a better understanding of how the different models perform against each other at different training steps on the validation dataset. It is evident that margin sampling
| | Fully Trained | Baseline | Margin | Least Confidence |
|---|---|---|---|---|
| BLEU | 22.56 | 16.26 | 24.20 | 24.54 |
| Perplexity | 12.71 | 21.00 | 11.71 | 11.70 |

TABLE VI: Test set BLEU and perplexity scores after training.
Fig. 5: Model comparison.
Fig. 2: Active Learning NMT Architecture
and least confidence attain better BLEU scores earlier than their fully trained counterpart. Furthermore, the least confidence model has a better learning capability than the margin sampling model.
To understand the learnability of the models, we take a closer look at the training loss and pick out notable patterns. In Figure 4, the baseline model loss shows an exponential decay and becomes almost constant around 150K steps, indicating that the model stopped learning around 200K steps. This pattern is expected, as we provided only 70% of the training data. The fully trained model and the active learning models appear to keep learning even at 500K steps, which means that model convergence is good and there is still scope for learning.
Next, to understand our models' capabilities and performance on different sentence lengths, we run our models over different test datasets. In Figure 6, we observe that the active learning models (least confidence and margin) fare better than the fully trained and baseline models. The models are trained with a maximum sentence length of 60 tokens. This provides concrete evidence that the baseline and other models perform worse. The margin and least confidence models give similar BLEU scores, but, on deeper inspection, margin sampling outranks least confidence.
## VI Conclusions
In this work, we demonstrate an active learning strategy for incorporating knowledge from monolingual data into the NMT system using an acquisition function. The idea was to select the most useful samples from the monolingual data to be supervised. We developed two uncertainty-based acquisition
Fig. 4: Training Loss comparison.
Fig. 3: Validation set BLEU and perplexity score during training.
Fig. 6: BLEU score for different sentence length.
function algorithms, least confidence and margin sampling, and studied their effect on improving translation quality for the low-resource language Hindi. The research builds upon previous work in the field, using the transformer architecture [22][23] and data for translating English to Hindi. Our experimental results strongly indicate that active learning benefits NMT for low-resource languages. Further, the results improve on the BLEU scores previously obtained for our parallel corpus by a large margin [21]. Moreover, we obtained consistent reductions of approximately 25% in the effort required to reach the desired translation quality.
### _Future Work_
In the future, we plan to investigate several directions. To check whether the findings of this work generalize, we first hope to apply our methodology to other datasets involving linguistically mixed language pairs. We also wish to investigate how bandit learning or reinforcement learning can be included in our architecture; recent studies [23] that are orthogonal to our work have already demonstrated the value of these learning paradigms. Future work would also involve fine-tuning the training of long and rare sentences using smaller data sets. Finally, since the grammatical structures of many Indian languages are similar, we would like to investigate active learning NMT for further low-resource Indian languages such as Bengali or Marathi. The code we implemented to train and evaluate our models is available on Github 3.
Footnote 3: [https://github.com/kritisingh24/active_learning_nmt](https://github.com/kritisingh24/active_learning_nmt)
|
2310.20219 | An esoteric identity with many parameters and other elliptic extensions
of elementary identities | We provide elliptic extensions of elementary identities such as the sum of
the first $n$ odd or even numbers, the geometric sum and the sum of the first
$n$ cubes. Many such identities, and their $q$-analogues, are indefinite sums,
and can be obtained from telescoping. So we used telescoping in our study to
find elliptic extensions of these identities. In the course of our study, we
obtained an identity with many parameters, which appears to be new even in the
$q$-case. In addition, we recover some $q$-identities due to Warnaar. | Gaurav Bhatnagar, Archna Kumari, Michael J. Schlosser | 2023-10-31T06:41:47Z | http://arxiv.org/abs/2310.20219v1 | # An esoteric identity with many parameters and other elliptic extensions of elementary identities
###### Abstract.
We provide elliptic extensions of elementary identities such as the sum of the first \(n\) odd or even numbers, the geometric sum and the sum of the first \(n\) cubes. Many such identities, and their \(q\)-analogues, are indefinite sums, and can be obtained from telescoping. So we used telescoping in our study to find elliptic extensions of these identities. In the course of our study, we obtained an identity with many parameters, which appears to be new even in the \(q\)-case. In addition, we recover some \(q\)-identities due to Warnaar.
Key words and phrases: \(q\)-series, elliptic extensions. 2020 Mathematics Subject Classification: Primary 11B65; Secondary 05A20, 11B83, 33D52.
## 1. Introduction
The geometric sum is a staple of high school algebra. It can be written as
\[\sum_{k=0}^{n-1}q^{k}=\frac{1-q^{n}}{1-q}=:[n]_{q}, \tag{1.1}\]
where \([n]_{q}\) denotes the \(q\)-number of \(n\). This notation is justified because the limit as \(q\to 1\) is
\[\sum_{k=0}^{n-1}1=n.\]
More generally, we can define \([z]_{q}:=(1-q^{z})/(1-q)\) for any complex \(z\) and observe that \(\lim_{q\to 1}[z]_{q}=z\). Thus, we call \([z]_{q}\) the \(q\)-analogue of \(z\).
The objective of this paper is to extend several classical and elementary identities to the so-called elliptic numbers--which are even more general than the \(q\)-numbers--defined in [19]. Rather surprisingly, these lead to new identities even in the \(q\)-case. This work is in the context of the rapidly developing field of elliptic combinatorics. Some recent references are [1, 2, 3, 5, 6, 11, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24].
To be able to define an elliptic number, we need some notation. The **modified Jacobi theta function** of the complex number \(a\) with (fixed) nome \(p\) is defined as
\[\theta(a;p):=\prod_{j=0}^{\infty}(1-ap^{j})(1-p^{j+1}/a)\]
where \(a\neq 0\) and \(|p|<1\). When the nome \(p=0\), the modified theta function \(\theta(a;p)\) reduces to \((1-a)\). We use the shorthand notation
\[\theta(a_{1},a_{2},\ldots,a_{r};p):=\theta(a_{1};p)\,\theta(a_{2};p)\cdots \theta(a_{r};p)\,.\]
The elliptic analogue of a complex number \(z\) is defined by [19] as
\[[z]_{a,b;q,p}:=\frac{\theta(q^{z},aq^{z},bq^{2},a/b;p)}{\theta(q,aq,bq^{z+1}, aq^{z-1}/b;p)}. \tag{1.2a}\]
This has two additional (complex) parameters \(a\) and \(b\), besides the _base_ \(q\) and the nome \(p\). Note that \([0]_{a,b;q,p}=0\) and \([1]_{a,b;q,p}=1\). Let the elliptic weight be defined by
\[W_{a,b;q,p}(k):=\frac{\theta(aq^{2k+1},bq,bq^{2},aq^{-1}/b,a/b;p)}{\theta(aq,bq^ {k+1},bq^{k+2},aq^{k-1}/b,aq^{k}/b;p)}q^{k}, \tag{1.2b}\]
for any \(k\). By the Weierstrass addition formula for theta functions (see (1.6b), below) we have
\[[x+y]_{a,b;q,p}=[x]_{a,b;q,p}+W_{a,b;q,p}(x)[y]_{aq^{2x},bq^{x};q,p}. \tag{1.2c}\]
Note that if we set \(p=0\) and subsequently take \(a=0\) and then \(b=0\), the elliptic weight in (1.2b) reduces to \(q^{k}\). In this case (1.2c) reduces to the recurrence relation
\[[x+y]_{q}=[x]_{q}+q^{x}[y]_{q}.\]
This, along with the initial conditions \([0]_{q}=0\) and \([1]_{q}=1\), is used to define the \(q\)-number for integers. Thus, the elliptic number is indeed an extension of the \(q\)-number \([x]_{q}\) for any complex \(x\).
The analogue of the geometric sum (1.1)--obtained by iterating (1.2c)--is as follows. (Here \(n\) is assumed to be a non-negative integer.)
\[1+W_{a,b;q,p}(1)+W_{a,b;q,p}(2)+\cdots+W_{a,b;q,p}(n-1)=[n]_{a,b;q,p}. \tag{1.3}\]
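Indeed, taking \(x=n-1\) and \(y=1\) in (1.2c), and using the fact that \([1]_{a^{\prime},b^{\prime};q,p}=1\) for any choice of the parameters \(a^{\prime}\) and \(b^{\prime}\), gives the recursion
\[[n]_{a,b;q,p}=[n-1]_{a,b;q,p}+W_{a,b;q,p}(n-1),\]
and iterating this down to \([0]_{a,b;q,p}=0\) yields (1.3).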
Many such elementary results (even in the \(q=1\) case) are examples of indefinite sums, and can be proved by telescoping, as has been shown in [4]. This motivates the study of elliptic extensions using these techniques. In doing so, we naturally came across the following result, which is somewhat esoteric, but appears to be new even in the \(q\)-case. At this point, we would like to emphasize that the parameters in our identities should be chosen such that non-removable singularities and poles are avoided, so that the identities make sense.
**Theorem 1**.: _For any non-negative integer \(n\) and complex numbers \(c\), \(d\), \(g\) and \(h\), we have the following identity:_
\[\sum_{k=0}^{n}\Bigg(\frac{\big[2(gk+c)(hk+d)\big]_{a,b;q,p}\left[2ghk+ch+dg\right]_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}}{\big[2cd\big]_{a,b;q,p}\left[ch+dg\right]_{aq^{2(c-g)d},bq^{(c-g)d};q,p}}\\ \quad\times\prod_{j=0}^{k-1}\frac{\big[(gj+g+c)(hj+d)\big]_{aq^{2(gj-g+c)(hj+d)},bq^{(gj-g+c)(hj+d)};q,p}}{\big[(gj+g+c)(hj+d)\big]_{aq^{2(gj+g+c)(hj+2h+d)},bq^{(gj+g+c)(hj+2h+d)};q,p}}\\ \quad\times\prod_{j=0}^{k-1}W_{aq^{2(gj+c)(hj+h+d)},bq^{(gj+c)(hj+h+d)};q,p}\big(2ghj+2gh+ch+dg\big)^{-1}\Bigg)\\ =\frac{\big[(gn+c)(hn+h+d)\big]_{a,b;q,p}\left[(g+c)d\right]_{aq^{2(c-g)d},bq^{(c-g)d};q,p}}{\big[2cd\big]_{a,b;q,p}\left[ch+dg\right]_{aq^{2(c-g)d},bq^{(c-g)d};q,p}}\\ \quad\times\prod_{j=1}^{n}\frac{\big[(gj+g+c)(hj+d)\big]_{aq^{2(gj-g+c)(hj+d)},bq^{(gj-g+c)(hj+d)};q,p}}{\big[(gj+c)(hj-h+d)\big]_{aq^{2(gj+c)(hj+h+d)},bq^{(gj+c)(hj+h+d)};q,p}}\\ \quad\times\prod_{j=1}^{n}W_{aq^{2(gj-g+c)(hj+d)},bq^{(gj-g+c)(hj+d)};q,p}\big(2ghj+ch+dg\big)^{-1}\\ \quad-\frac{\big[(c-g)d\big]_{a,b;q,p}\left[c(d-h)\right]_{aq^{2c(h+d)},bq^{c(h+d)};q,p}}{\big[2cd\big]_{a,b;q,p}\left[ch+dg\right]_{aq^{2(c-g)d},bq^{(c-g)d};q,p}}\,W_{aq^{2(c-g)d},bq^{(c-g)d};q,p}\big(ch+dg\big). \tag{1.4}\]
The "hypergeometric version" of (1.4) is given by
\[\sum_{k=0}^{n}\frac{(gk+c)(hk+d)(2ghk+ch+dg)}{cd(ch+dg)}=\frac{(gn+c )(hn+h+d)(gn+g+c)(hn+d)}{2cd(ch+dg)}\] \[-\frac{(d-h)(c-g)}{2(ch+dg)}.\]
Note that this extends the well-known formula for the sum of the first \(n\) cubes. Multiply both sides by \(cd(ch+dg)/2\) and then take \(c=d=0\), and \(h=g=1\), to obtain
\[\sum_{k=0}^{n}k^{3}=\bigg{(}\frac{n(n+1)}{2}\bigg{)}^{2}.\]
This is indeed an elementary identity, but its extension given in (1.4) involves some rather unusual factors. Note, for example, the product
\[\prod_{j=0}^{k-1}\big{[}(gj+g+c)(hj+d)\big{]}_{aq^{2(gj-g+c)(hj+d)},bq^{(gj-g+ c)(hj+d)};q,p}\]
appearing with index \(k\) in the sum. The associated \(q\)-product (obtained by first letting \(p\to 0\), followed by \(a\to 0\) and \(b\to 0\))
\[t(k):=\prod_{j=0}^{k-1}\big{[}(gj+g+c)(hj+d)\big{]}_{q}\]
is rather unusual as it is not a \(q\)-hypergeometric term. In particular, the ratio \(t(k+1)/t(k)\) of this product, that is, \([(gk+g+c)(hk+d)]_{q}=(1-q^{(gk+g+c)(hk+d)})/(1-q)\), is not a rational function in \(q^{k}\); it is a rational function in \(q^{k^{2}}\) and \(q^{k}\), i.e., it contains powers of \(q\) whose exponents are quadratic in \(k\).
Nevertheless, (1.4) contains various extensions of well-known elementary identities. The following identities appear as special cases.
\[\sum_{k=1}^{n}q^{n-k}\frac{[2k]_{q}}{[2]_{q}}=\genfrac{[}{]}{0.0 pt}{}{n+1}{2}_{q}\,; \tag{1.5a}\] \[\sum_{k=1}^{n}q^{n^{2}-k^{2}+n-k}\frac{[2k^{2}]_{q}[2k]_{q}}{[2]_ {q}^{2}}=\bigg{(}\frac{[n(n+1)]_{q}}{[2]_{q}}\bigg{)}^{2}. \tag{1.5b}\]
Here we have used the notation
\[\genfrac{[}{]}{0.0pt}{}{n+1}{2}_{q}=\frac{[n]_{q}[n+1]_{q}}{[2]_{q}}.\]
The first of these is a \(q\)-analogue of the sum of the first \(n\) natural numbers; the second is a \(q\)-analogue of the sum of the first \(n\) cubes, which is equivalent to a formula of Cigler [7, Theorem 1, \(q\mapsto q^{2}\)].
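Both \(q\)-analogues are easy to check symbolically for small \(n\). The following short script (our own illustration, written with the SymPy library; the helper names are ours) verifies (1.5a) and (1.5b) for \(n\leq 6\):

```python
from sympy import symbols, cancel

q = symbols('q')

def qnum(z):
    # the q-number [z]_q = (1 - q**z) / (1 - q)
    return (1 - q**z) / (1 - q)

for n in range(1, 7):
    lhs_a = sum(q**(n - k) * qnum(2*k) / qnum(2) for k in range(1, n + 1))
    rhs_a = qnum(n) * qnum(n + 1) / qnum(2)       # the q-binomial [n+1 choose 2]_q
    lhs_b = sum(q**(n**2 - k**2 + n - k) * qnum(2*k**2) * qnum(2*k) / qnum(2)**2
                for k in range(1, n + 1))
    rhs_b = (qnum(n*(n + 1)) / qnum(2))**2
    assert cancel(lhs_a - rhs_a) == 0             # identity (1.5a)
    assert cancel(lhs_b - rhs_b) == 0             # identity (1.5b)
```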
We now provide some background information and list some notation used in this paper.
**Background information.**
1. Two important properties of the modified theta function are [9, Equation (11.2.42)] \[\theta(a;p)=\theta(p/a;p)=-a\theta(1/a;p)\,,\] (1.6a)
and [26, p. 451, Example 5] \[\theta(xy,x/y,uv,u/v;p)-\theta(xv,x/v,uy,u/y;p)=\frac{u}{y}\,\theta(yv,y/v,xu,x/u;p )\,.\] (1.6b) This last formula is called the Weierstrass addition formula. This formula is used extensively in this paper.
2. The following general theorem serves as a justification of referring to \([z]_{a,b;q,p}\), defined in (1.2a), as an "elliptic number". **Proposition 2** ([12, Theorem 1.3.3]).: _Let \(g(x)\) be an elliptic function, that is, a doubly periodic meromorphic function of the complex variable \(x\). Then \(g(x)\) is of the form:_ \[g(x)=\frac{\theta(a_{1}q^{x},a_{2}q^{x},\ldots,a_{r}q^{x};p)}{\theta(b_{1}q^{x },b_{2}q^{x},\ldots,b_{r}q^{x};p)}c,\] _where \(c\) is a constant, and_ \[a_{1}a_{2}\cdots a_{r}=b_{1}b_{2}\cdots b_{r}.\] This last condition is the _elliptic balancing condition_. If we write \(q=e^{2\pi i\sigma}\), \(p=e^{2\pi i\tau}\), with complex \(\sigma\), \(\tau\), then \(g(x)\) is indeed doubly periodic in \(x\) with periods \(\sigma^{-1}\) and \(\tau\sigma^{-1}\).
3. Using Proposition 2, it is easy to see that elliptic number \([z]_{a,b;q,p}\) is elliptic in \(z\), and also elliptic in \(\log_{q}a\) and in \(\log_{q}b\).
4. Similarly, the elliptic weight function \(W_{a,b;q,p}(k)\) is elliptic in \(\log_{q}a\), \(\log_{q}b\) and \(k\) (regarded as a complex variable).
5. The following useful properties readily follow from the definitions. 1. For any \(k\) and \(l\), \(W_{a,b;q,p}(k+l)=W_{a,b;q,p}(k)W_{aq^{2k},bq^{k};q,p}(l)\). 2. \(W_{a,b;q,p}(0)=1\), and for any \(k\), \(W_{a,b;q,p}(-k)=W_{aq^{-2k},bq^{-k};q,p}(k)^{-1}\). 3. For any \(x\), \([-x]_{a,b;q,p}=-W_{a,b;q,p}(-x)[x]_{aq^{-2x},bq^{-x};q,p}=-W_{aq^{-2x},bq^{-x };q,p}(x)^{-1}[x]_{aq^{-2x},bq^{-x};q,p}\). 4. For any \(x\) and \(y\), \([xy]_{a,b;q,p}=[x]_{a,b;q,p}[y]_{a,bq^{1-x};q^{x},p}\). 5. For any \(r\), \(x\) and \(y\), \[[x]_{a,b;q,p}[y]_{aq^{2r+2x-2y},bq^{r+x-y};q,p}-[x+r]_{a,b;q,p}[y -r]_{aq^{2r+2x-2y},bq^{r+x-y};q,p}\] \[=[r+x-y]_{a,b;q,p}[r]_{aq^{2x},bq^{2};q,p}W_{aq^{2r+2x-2y},bq^{r+x -y};q,p}(y-r).\] (1.7) The property (1.7) is a consequence of the Weierstrass addition formula in (1.6b).
6. In SS3, we require the notation of \(q\)-rising factorials and their elliptic analogues. We define the \(q\)_-shifted factorials_, for \(k=0,1,2,\ldots\), as \[\left(a;q\right)_{k}:=\prod_{j=0}^{k-1}\big{(}1-aq^{j}\big{)},\] and for \(|q|<1\), \[\left(a;q\right)_{\infty}:=\prod_{j=0}^{\infty}\big{(}1-aq^{j}\big{)}.\] The parameter \(q\) is called the _base_. With this definition, we can write the modified Jacobi theta function as \[\theta(a;p)=\left(a;p\right)_{\infty}(p/a;p)_{\infty},\]
where \(a\neq 0\) and \(|p|<1\). We define the \(q,p\)_-shifted factorials_ (or _theta shifted factorials_), for \(k\) an integer, as \[\left(a;q,p\right)_{k}:=\prod_{j=0}^{k-1}\theta\big{(}aq^{j};p\big{)}\,.\] When the nome \(p=0\), \(\left(a;q,p\right)_{k}\) reduces to \(\left(a;q\right)_{k}\). We use the shorthand notations \[\left(a_{1},a_{2},\ldots,a_{r};q,p\right)_{k}:=\left(a_{1};q,p\right)_{k}\left(a_{2};q,p\right)_{k}\cdots\left(a_{r};q,p\right)_{k},\] \[\left(a_{1},a_{2},\ldots,a_{r};q\right)_{k}:=\left(a_{1};q\right)_{k}\left(a_{2};q\right)_{k}\cdots\left(a_{r};q\right)_{k}.\]
7. Most of the proofs of the theorems in this paper use the following technique, explained in detail in [4, Theorem 3.3]. **Lemma 3** (Euler's telescoping lemma).: _Let \(u_{k}\), \(v_{k}\) and \(t_{k}\) be three sequences, such that_ \[t_{k}=u_{k}-v_{k}.\] _Then we have:_ \[\sum_{k=0}^{n}\frac{t_{k}}{t_{0}}\frac{u_{0}u_{1}\cdots u_{k-1}}{v_{1}v_{2} \cdots v_{k}}=\frac{u_{0}}{t_{0}}\left(\frac{u_{1}u_{2}\cdots u_{n}}{v_{1}v_{ 2}\cdots v_{n}}-\frac{v_{0}}{u_{0}}\right),\] (1.8) _provided none of the denominators in (_1.8_) are zero._
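For completeness, we record why (1.8) holds. Setting
\[A_{k}:=\frac{u_{0}u_{1}\cdots u_{k}}{t_{0}\,v_{1}v_{2}\cdots v_{k}},\qquad k=0,1,\ldots,n,\]
so that \(A_{0}=u_{0}/t_{0}\), and using \(t_{k}=u_{k}-v_{k}\), the \(k\)-th summand in (1.8) equals \(A_{k}-A_{k-1}\) for \(k\geq 1\), while the \(k=0\) summand equals \((u_{0}-v_{0})/t_{0}=A_{0}-v_{0}/t_{0}\). The sum therefore telescopes to
\[A_{n}-\frac{v_{0}}{t_{0}}=\frac{u_{0}}{t_{0}}\left(\frac{u_{1}u_{2}\cdots u_{n}}{v_{1}v_{2}\cdots v_{n}}-\frac{v_{0}}{u_{0}}\right),\]
which is the right-hand side of (1.8).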
**Some important specializations of the elliptic numbers and elliptic weights.** It is helpful to explicitly write out some important special cases of the elliptic numbers and the elliptic weights. These cases correspond to \(p=0\) (the "\(a,b;q\)-case"); \(p=0\) and \(b\to 0\) (the "\(a;q\)-case"); and, \(p=0\) and \(a\to 0\) (the "\((b;q)\)-case").
The three special cases of the elliptic numbers are
\[[z]_{a,b;q} =\frac{(1-q^{z})(1-aq^{z})(1-bq^{2})(1-a/b)}{(1-q)(1-aq)(1-bq^{z+1})(1-aq^{z-1}/b)}; \tag{1.9a}\] \[[z]_{a;q} =\frac{(1-q^{z})(1-aq^{z})}{(1-q)(1-aq)}q^{1-z}; \tag{1.9b}\] \[[z]_{(b;q)} =\frac{(1-q^{z})(1-bq^{2})}{(1-q)(1-bq^{z+1})}, \tag{1.9c}\]
and called \(a,b;q\)-numbers, \(a;q\)-numbers, and \((b;q)\)-numbers, respectively. (We place parentheses in "\((b;q)\)-numbers" but none in "\(a;q\)-numbers", to avoid confusion between the two special cases. This follows the notation used in [17].)
The corresponding special cases for the elliptic weight \(W_{a,b;q,p}(k)\) are as follows:
\[W_{a,b;q}(k) =\frac{(1-aq^{2k+1})(1-bq)(1-bq^{2})(1-aq^{-1}/b)(1-a/b)}{(1-aq)( 1-bq^{k+1})(1-bq^{k+2})(1-aq^{k-1}/b)(1-aq^{k}/b)}q^{k}; \tag{1.10a}\] \[W_{a;q}(k) =\frac{(1-aq^{2k+1})}{(1-aq)}q^{-k};\] (1.10b) \[W_{(b;q)}(k) =\frac{(1-bq)(1-bq^{2})}{(1-bq^{k+1})(1-bq^{k+2})}q^{k}. \tag{1.10c}\]
This paper is organized as follows. In Section 2, we use Euler's telescoping lemma to find elliptic extensions of three elementary identities and discuss some interesting special cases. In Section 3, we consider elliptic extensions of several elementary identities that are obtained
in an analogous way to the \(q\)-identities previously obtained by one of us in [13]. Finally, in Section 4, we give the proof of Theorem 1 (achieved by combining Lemma 3 with the difference equation (1.7)), and explicitly state a few noteworthy special cases.
## 2. Elementary Examples
The purpose of this section is to extend three elementary identities to corresponding identities containing elliptic numbers. For each of these elliptic identities, we give some special cases for illustration. The three identities are:
\[\sum_{k=0}^{n-1}(2k+1)=n^{2}; \tag{2.1a}\] \[\sum_{k=1}^{n}k(k+1)\cdots(k+m-1)=\frac{1}{m+1}\left(n(n+1)\cdots(n+m)\right);\] (2.1b) \[\sum_{k=1}^{n}\frac{1}{k(k+1)\cdots(k+m)}=\frac{1}{m}\left(\frac{1}{m!}-\frac{1 }{(n+1)(n+2)\cdots(n+m)}\right), \tag{2.1c}\]
where \(m=1,2,3,\dots\).
First, we give an elliptic extension of the sum of the first \(n\) odd integers.
**Theorem 4**.: _For \(n\) a non-negative integer, we have_
\[\sum_{k=0}^{n}W_{a,b;q,p}(k)\biggl{(}[k+1]_{a,b;q,p}[2]_{aq^{2k},bq^{k};q,p}- 1\biggr{)}=W_{a,b;q,p}(1)[n+1]_{a,b;q,p}[n+1]_{aq^{2},bq;q,p}. \tag{2.2}\]
Proof.: We apply Lemma 3 and take
\[u_{k}=[k+1]_{a,b;q,p}[k+1]_{aq^{2},bq;q,p};\] \[v_{k}=u_{k-1}=[k]_{a,b;q,p}[k]_{aq^{2},bq;q,p}=[k+1]_{a,b;q,p}[k- 1]_{aq^{2},bq;q,p}+W_{aq^{2},bq;q,p}(k-1).\]
Thus
\[t_{k}=u_{k}-v_{k}=W_{aq^{2},bq;q,p}(k-1)\biggl{(}[k+1]_{a,b;q,p}[2]_{aq^{2k}, bq^{k};q,p}-1\biggr{)}\text{ and }t_{0}=u_{0}-v_{0}=1.\]
We thus obtain (1.8) with these choices of \(u_{k}\), \(v_{k}\) and \(t_{k}\). Multiplication of both sides of the identity by \(W_{a,b;q,p}(1)\) gives the result.
_Remark_.: The elliptic analogue of \(n\), namely, \([n]_{a,b;q,p}\) contains extensions of \(n^{2}\) and of \(\binom{n+1}{2}\), besides other extensions. Take \(z=n\) in (1.9b), the \(a;q\)-number of \(n\). For \(a\to\infty\) this reduces to \([n]_{q}\), for \(a=1\) to \(([n]_{q})^{2}q^{1-n}\), and for \(a=q\) to \(q^{1-n}[n]_{q}[n+1]_{q}/[2]_{q}\). That is, the telescoping sum over odd elliptic numbers also extends a sum over odd squares, and to a sum over binomial coefficients. The examples in this section illustrate some of the possibilities to obtain interesting identities by specialization.
**Special cases of (2.2).**
1. Three immediate specializations of (2.2) are as follows. \(\bullet\) For the \(a,b;q\)-analogue, take \(p=0\). \[\sum_{k=0}^{n}W_{a,b;q}(k)([k+1]_{a,b;q}[2]_{aq^{2k},bq^{k};q}-1)=W_{a,b;q}(1 )[n+1]_{a,b;q}[n+1]_{aq^{2},bq;q}.\] (2.3) This identity has two parameters, \(a\) and \(b\), in addition to the base \(q\).
* For the \(a;q\)-analogue, take \(b\to 0\) or \(b\to\infty\) in (2.3). This gives \[\sum_{k=0}^{n}W_{a;q}(k)([k+1]_{a;q}[2]_{aq^{2k};q}-1)=W_{a;q}(1)[n+1]_{a;q}[n+1 ]_{aq^{2};q}.\] (2.4)
* For the \((b;q)\)-analogue, take \(a\to 0\) or \(a\to\infty\) in (2.3). This gives \[\sum_{k=0}^{n}W_{(b;q)}(k)([k+1]_{(b;q)}[2]_{(bq^{k};q)}-1)=W_{(b;q)}(1)[n+1]_{ (b;q)}[n+1]_{(bq;q)}.\] (2.5)
* We further specialize \(a\) and \(b\) to obtain two new \(q\)-analogues of (2.1a).
* Take \(a\to\infty\) in (2.4), or \(b\to 0\) in (2.5), to get \[\sum_{k=0}^{n}q^{k-1}([2]_{q}[k+1]_{q}-1)=[n+1]_{q}^{2}.\]
* Take the limit \(a\to 0\) in (2.4), or \(b\to\infty\) in (2.5), to obtain \[\sum_{k=0}^{n}q^{2n-2k}([2]_{q}[k+1]_{q}-q^{k+1})=[n+1]_{q}^{2}.\]
* Take \(a\to 1\) in (2.4), respectively, \(b\to 1\) in (2.5), to obtain the following pair of identities: \[\sum_{k=0}^{n}q^{2n-2k}\bigg{(}[2]_{q}[k+1]_{q}^{2}[2k+2]_{q}-q^{k+ 1}[2k+1]_{q}\bigg{)}=[n+1]_{q}^{3}[n+3]_{q};\] \[\sum_{k=0}^{n}\frac{q^{k-1}}{[k+1]_{q}[k+2]_{q}}\bigg{(}\frac{[k+1]_{q}[2 ]_{q}^{2}}{[k+3]_{q}}-1\bigg{)}=\frac{[n+1]_{q}^{2}}{[n+2]_{q}[n+3]_{q}}.\]
* Next, take \(a\to q\) in (2.4), respectively, \(b\to q\) in (2.5), to obtain the following pair of identities: \[\sum_{k=0}^{n}q^{2n-2k}\bigg{(}[k+1]_{q}[k+2]_{q}[2k+3]_{q}-q^{k+ 1}[2k+2]_{q}\bigg{)}=\frac{[n+1]_{q}^{2}[n+2]_{q}[n+4]_{q}}{[2]_{q}};\] \[\sum_{k=0}^{n}\frac{q^{k-1}}{[k+2]_{q}[k+3]_{q}}\bigg{(}\frac{[k+1]_{q }[2]_{q}[3]_{q}}{[k+4]_{q}}-1\bigg{)}=\frac{[n+1]_{q}^{2}}{[n+3]_{q}[n+4]_{q}}.\]
Next, we give an elliptic extension of (2.1b).
**Theorem 5**.: _For \(n,m\) non-negative integers, we have_
\[\sum_{k=0}^{n} W_{a,b;q,p}(k)[m+1]_{aq^{2k},bq^{k};q,p}\Big{(}[k+1]_{a,b;q,p}[k+2 ]_{a,b;q,p}\ldots[k+m]_{a,b;q,p}\Big{)}\] \[=[n+1]_{a,b;q,p}[n+2]_{a,b;q,p}\ldots[n+m+1]_{a,b;q,p}. \tag{2.6}\]
Proof.: We apply Lemma 3 and take
\[u_{k} =[k+1]_{a,b;q,p}[k+2]_{a,b;q,p}\ldots[k+m+1]_{a,b;q,p};\] \[v_{k}=u_{k-1} =[k]_{a,b;q,p}[k+1]_{a,b;q,p}\ldots[k+m]_{a,b;q,p},\]
so that,
\[t_{k} =W_{a,b;q,p}(k)[m+1]_{aq^{2k},bq^{k};q,p}\Big{(}[k+1]_{a,b;q,p}[k+2]_{a,b; q,p}\ldots[k+m]_{a,b;q,p}\Big{)}.\]
With these substitutions, we have (1.8) which immediately gives us (2.6).
We take \(m=1\) and shift the index \(k\mapsto k-1\) and replace \(n\) by \(n-1\) in (2.6), to get the elliptic analogue of the sum of first \(n\) even integers.
\[\sum_{k=1}^{n}W_{a,b;q,p}(k-1)[2]_{aq^{2k-2},bq^{k-1};q,p}[k]_{a,b;q,p}=[n]_{a,b ;q,p}[n+1]_{a,b;q,p}. \tag{2.7}\]
This can be regarded as an elliptic extension of the formula for the sum of the first \(n\) natural numbers:
\[1+2+3+\cdots+n=\frac{n(n+1)}{2}. \tag{2.8}\]
We list further special cases of the elliptic analogue of this elementary identity below.
**Special cases of (2.7).**
1. For the \(a,b;q\)-analogue, take \(p=0\). \[\sum_{k=1}^{n}W_{a,b;q}(k-1)[2]_{aq^{2k-2},bq^{k-1};q}[k]_{a,b;q}=[n]_{a,b;q}[n +1]_{a,b;q}.\] (2.9)
2. For the \(a;q\)-analogue, take \(b\to 0\) or \(b\to\infty\) in (2.9). This gives \[\sum_{k=1}^{n}W_{a;q}(k-1)[2]_{aq^{2k-2};q}[k]_{a;q}=[n]_{a;q}[n+1]_{a;q}.\] (2.10)
3. For the \((b;q)\)-analogue, take \(a\to 0\) or \(a\to\infty\) in (2.9). This gives \[\sum_{k=1}^{n}W_{(b;q)}(k-1)[2]_{(bq^{k-1};q)}[k]_{(b;q)}=[n]_{(b;q)}[n+1]_{(b ;q)}.\] (2.11)
4. Two \(q\)-analogues of (2.8) * Take the limit \(a\to\infty\) in (2.10), or \(b\to 0\) in (2.11): \[\sum_{k=1}^{n}q^{k-1}[k]_{q}=\genfrac{[}{]}{0.0pt}{}{n+1}{2}_{q}.\] * A \(q\)-analogue due to Warnaar [25, Eq. 2]: Take the limit \(a\to 0\) in (2.10), or \(b\to\infty\) in (2.11). \[\sum_{k=1}^{n}q^{2n-2k}[k]_{q}=\genfrac{[}{]}{0.0pt}{}{n+1}{2}_{q}.\]
5. Some assorted \(q\)-analogues. * A \(q\)-analogue of the formula for the sum of cubes due to Warnaar [25, Eq. 2]: take \(a\to 1\) in (2.10). \[\sum_{k=1}^{n}q^{2n-2k}\frac{[k]_{q}^{2}[2k]_{q}}{[2]_{q}}=\genfrac{[}{]}{0.0 pt}{}{n+1}{2}_{q}^{2}.\] (2.12)
* Take \(b\to 1\) in (2.11). \[\sum_{k=1}^{n}q^{k-1}\frac{[2]_{q}}{[k+1]_{q}[k+2]_{q}}=\frac{[n]_{q}}{[n+2]_ {q}}.\]
* Take \(a\to q\) in (2.10). \[\sum_{k=1}^{n}q^{2n-2k}[k]_{q}[k+1]_{q}[2k+1]_{q}=\frac{[n]_{q}[n+1]_{q}^{2}[n+2]_ {q}}{[2]_{q}}.\]
* Take \(b\to q\) in (2.11). \[\sum_{k=1}^{n}q^{k-1}\frac{[2]_{q}^{2}[k]_{q}}{[k+1]_{q}[k+2]_{q}[k+3]_{q}}= \frac{[n]_{q}[n+1]_{q}}{[n+2]_{q}[n+3]_{q}}.\]
_Remark_.: There is another \(q\)-analogue of the sum of the first \(n\) cubes given by Garrett and Hummel [8, Equation 2]. This can also be obtained by telescoping (take \(u_{k}=(1-q^{k+2})\) and \(v_{k}=-(1-q^{k})\) in Lemma 3). Their elliptic extensions are immediate and are not included here. Further such \(q\)-analogues are obtained by Cigler [7], again by telescoping.
Now, instead of taking \(m=1\) in (2.6), we take \(m=2\), shift the index \(k\mapsto k-1\) in (2.6) and replace \(n\) by \(n-1\). We then obtain
\[\sum_{k=1}^{n} W_{a,b;q,p}(k-1)[3]_{aq^{2k-2},bq^{k-1};q,p}[k]_{a,b;q,p}[k+1]_{a,b;q,p}\\ =[n]_{a,b;q,p}[n+1]_{a,b;q,p}[n+2]_{a,b;q,p}. \tag{2.13}\]
**Some special cases of (2.13).** We note some special cases of the \(a;q\)-special case of (2.13) (which is obtained by first letting \(p\to 0\), followed by letting \(b\to 0\) in (2.13)), i.e.,
\[\sum_{k=1}^{n}W_{a;q}(k-1)[3]_{aq^{2k-2};q}[k]_{a;q}[k+1]_{a;q}=[n]_{a;q}[n+1] _{a;q}[n+2]_{a;q}. \tag{2.14}\]
* Take \(a\to 0\) in (2.14) to obtain \[\sum_{k=1}^{n}q^{3n-3k}[k]_{q}[k+1]_{q}=\frac{[n]_{q}[n+1]_{q}[n+2]_{q}}{[3]_{q}}.\]
* Take \(a\to 1\) in (2.14) to obtain \[\sum_{k=1}^{n}q^{3n-3k}([k]_{q}[k+1]_{q})^{2}[2k+1]_{q}=\frac{([n]_{q}[n+1]_{q} [n+2]_{q})^{2}}{[3]_{q}}.\]
* Next, take \(a\to q\) in (2.14) to obtain \[\sum_{k=1}^{n}q^{3n-3k}[k]_{q}[k+1]_{q}^{2}[k+2]_{q}[2k+2]_{q}=\frac{[n]_{q}[n+ 1]_{q}^{2}[n+2]_{q}^{2}[n+3]_{q}}{[3]_{q}}.\]
* The following pair of identities is obtained by first replacing \(q\) by \(q^{2}\) and then letting \(a\to q\), respectively, \(a\to q^{-1}\): \[\sum_{k=1}^{n} q^{6n-6k}[2k]_{q}[2k+1]_{q}[2k+2]_{q}[2k+3]_{q}[4k+3]_{q}\] \[=\frac{[2n]_{q}[2n+1]_{q}[2n+2]_{q}[2n+3]_{q}[2n+4]_{q}[2n+5]_{q}}{[6 ]_{q}}.\] \[\sum_{k=1}^{n} q^{6n-6k}[2k-1]_{q}[2k]_{q}[2k+1]_{q}[2k+2]_{q}[4k+1]_{q}\]
\[=\frac{[2n-1]_{q}[2n]_{q}[2n+1]_{q}[2n+2]_{q}[2n+3]_{q}[2n+5]_{q}}{[6]_{q}}.\]
Finally, before closing this section, we note the elliptic extension of (2.1c).
**Theorem 6**.: _For \(n,m\) non-negative integers, we have,_
\[\sum_{k=1}^{n} \frac{W_{a,b;q,p}(k)\left[m\right]_{aq^{2k},bq^{k};q,p}}{[k]_{a,b;q,p}[k+1]_{a,b;q,p}\ldots[k+m]_{a,b;q,p}}\] \[=\bigg{(}\frac{1}{[m]_{a,b;q,p}!}-\frac{1}{[n+1]_{a,b;q,p}[n+2]_{a, b;q,p}\ldots[n+m]_{a,b;q,p}}\bigg{)}, \tag{2.15}\]
_where \([m]_{a,b;q,p}!:=[m]_{a,b;q,p}[m-1]_{a,b;q,p}\cdots[1]_{a,b;q,p}\) is an elliptic analogue of the factorial of \(m\)._
Proof.: We apply Lemma 3 and take
\[u_{k} =\frac{1}{[k+2]_{a,b;q,p}[k+3]_{a,b;q,p}\ldots[k+m+1]_{a,b;q,p}},\] \[v_{k} =u_{k-1} =\frac{1}{[k+1]_{a,b;q,p}[k+2]_{a,b;q,p}\ldots[k+m]_{a,b;q,p}};\]
so that
\[t_{k} =\frac{-W_{a,b;q,p}(k+1)[m]_{aq^{2k+2},bq^{k+1};q,p}}{[k+1]_{a,b;q, p}[k+2]_{a,b;q,p}\ldots[k+m+1]_{a,b;q,p}}\text{ and }t_{0} =\frac{-W_{a,b;q,p}(1)[m]_{aq^{2},bq;q,p}}{[m+1]_{a,b;q,p}!}.\]
With these substitutions, we have (1.8), and after replacing \(n\) by \(n-1\) and shifting the index of the sum (such that \(k\) runs from \(1\) to \(n\), instead of from \(0\) to \(n-1\)) we readily obtain (2.15).
## 3. Special cases of elliptic and multibasic hypergeometric series identities
In [13], the indefinite summation formula
\[\sum_{k=0}^{n}\frac{(1-aq^{2k})}{(1-a)}\frac{\left(a,b;q\right)_{k}}{\left(q, aq/b;q\right)_{k}}b^{n-k}=\frac{\left(aq,bq;q\right)_{n}}{\left(q,aq/b;q \right)_{n}} \tag{3.1}\]
is used to obtain \(q\)-analogues of several elementary sums. This includes Warnaar's [25]\(q\)-analogue of the sum of the first \(n\) cubes. In this section, we use the same idea, but use the following elliptic analogue of (3.1):
\[\sum_{k=0}^{n}\frac{\theta(aq^{2k};p^{2})}{\theta(a;p^{2})} \frac{\left(a,b,cp;q;p^{2}\right)_{k}}{\left(q,aq/b,bcpq;q,p^{2} \right)_{k}}\frac{\left(bcp/a;q^{-1},p^{2}\right)_{k}}{\left(cp/aq;q^{-1},p^{2 }\right)_{k}}b^{n-k}\] \[=\frac{\left(aq,bq,cpq;q,p^{2}\right)_{n}}{\left(q,aq/b,bcpq;q,p^ {2}\right)_{n}}\frac{\left(bcp/aq;q^{-1},p^{2}\right)_{n}}{\left(cp/aq;q^{-1},p^{2}\right)_{n}}. \tag{3.2}\]
We first give some remarks before the proof. Clearly, (3.2) reduces to (3.1) when \(p=0\). We cannot take \(c=0\) in (3.2) while keeping the nome \(p^{2}\), as \(c=0\) appears as an essential singularity on each side of (3.2). The extra parameter \(c\) ensures that the elliptic balancing condition holds for the terms appearing in (3.1). The way the \(q\)-series identity (3.1) is extended to the elliptic identity in (3.2) is analogous to the way the \(q\)-Saalschutz summation is extended to the elliptic case as described in [9, Sec. 11.4, p. 323]. Notice that the indefinite summation (3.2) can also be obtained by telescoping (just as (3.1)).
Proof of (3.2).: A direct way to obtain (3.2) is to deduce it from the Frenkel and Turaev \({}_{10}V_{9}\) summation [9, Eq. (11.4.1)], which is an elliptic analogue of Jackson's very-well-poised \({}_{8}\phi_{7}\) summation. Specifically, taking \(e\to aq^{n+1}\) in [9, Equation (11.4.1)] we obtain
\[\sum_{k=0}^{n}\frac{\theta(aq^{2k};p)}{\theta(a;p)}\frac{(a,b,c,a/bc;q,p)_{k}} {(q,aq/b,aq/c,bcq;q,p)_{k}}q^{k}=\frac{(aq,bq,cq,aq/bc;q,p)_{n}}{(q,aq/b,aq/c, bcq;q,p)_{n}}.\]
Now replace \(p\) by \(p^{2}\) and subsequently replace \(c\) by \(cp\) and use
\[\frac{\big{(}a/bcp;q,p^{2}\big{)}_{k}}{(aq/cp;q,p^{2}\big{)}_{k}}=\frac{1}{b^ {k}q^{k}}\frac{\big{(}bcp/a;q^{-1},p^{2}\big{)}_{k}}{(cp/aq;q^{-1},p^{2})_{k}}.\]
This immediately gives (3.2).
It is easy to use (3.2) to obtain elliptic extensions of results from [13]. However, these results necessarily have the additional parameter \(c\), which cannot be specialized to \(0\) or \(\infty\) before letting \(p=0\). As an example, we give another extension of Warnaar's result in [25, Equation (2)], which is a \(q\)-analogue of the sum of cubes.
Replace \(n\) by \(n-1\), shift the index of summation \(k\to k-1\), and set \(a=b=q^{2}\) in (3.2) to obtain:
\[\sum_{k=1}^{n}\frac{\theta(q^{2k};p^{2})}{\theta(q^{2};p^{2})} \frac{\big{(}q^{2},q^{2},cp;q;p^{2}\big{)}_{k-1}}{(q,q,cpq^{3};q,p^{2})_{k-1} }\frac{\big{(}cp;q^{-1},p^{2}\big{)}_{k-1}}{(cp/q^{3};q^{-1},p^{2})_{k-1}}q^{ 2(n-k)}\] \[=\frac{\big{(}q^{3},q^{3},cpq;q,p^{2}\big{)}_{n-1}}{(q,q,cpq^{3}; q,p^{2})_{n-1}}\frac{\big{(}cp/q;q^{-1},p^{2}\big{)}_{n-1}}{(cp/q^{3};q^{-1},p^{2} )_{n-1}}.\]
When \(p=0\), this reduces to (2.12).
_Remark_.: A special case of (3.1) is the following \(q\)-analogue of the formula for the sum of the first \(n\) odd numbers (cf. [13, Equation (3.9)]):
\[\sum_{k=0}^{n-1}[2k+1]_{q}q^{-k}=[n]_{q}^{2}q^{1-n}. \tag{3.3}\]
An extension of (3.3) to cubic basic hypergeometric series can be given as follows:
\[\sum_{k=0}^{n-1}q^{-k}\frac{(aq;q^{3})_{k}}{(aq^{5};q^{3})_{k}}\frac{(1-q^{2k +1})}{1-q}\frac{(1-aq^{2k+1})^{2}}{(1-aq)^{2}}=\frac{(1-q^{n})^{2}(1-aq^{n})}{ (1-q)^{2}(1-aq)}\frac{\big{(}aq^{4};q^{3}\big{)}_{n-1}}{(aq^{5};q^{3})_{n-1}}q ^{1-n}. \tag{3.4}\]
(For \(a\to 0\) this reduces to (3.3).) It is easy to verify that this sum telescopes.
_Remark_.: Another general indefinite elliptic summation that can be specialized to obtain various extensions of classical results is the following special case of a multibasic theta function identity by Gasper and Schlosser [10, Equation (3.19), \(t=q\)]:
\[\sum_{k=0}^{n}\frac{\theta(ad(rs)^{k},br^{k}/dq^{k},cs^{k}/dq^{k} ;p)}{\theta(ad,b/d,c/d;p)}\] \[\quad\times\frac{(ad^{2}/bc;q,p)_{k}(b;r,p)_{k}(c;s,p)_{k}(a;rs/q,p)_{k}}{(dq;q,p)_{k}(adr/c;r,p)_{k}(ads/b;s,p)_{k}(bcrs/dq;rs/q,p)_{k}}q^{k}\] \[=\frac{\theta(a,b,c,ad^{2}/bc;p)}{d\,\theta(ad,b/d,c/d,ad/bc;p)}\]
\[\times\frac{(ad^{2}q/bc;q,p)_{n}(br;r,p)_{n}(cs;s,p)_{n}(ars/q;rs/q,p)_{n}}{( dq;q,p)_{n}(adr/c;r,p)_{n}(ads/b;s,p)_{n}(bcrs/dq;rs/q,p)_{n}}\]
\[-\frac{\theta(d,ad/b,ad/c,bc/d;p)}{d\,\theta(ad,b/d,c/d,ad/bc;p)}. \tag{3.5}\]
## 4. The proof of Theorem 1 and some special cases
We have seen that telescoping leads to several elementary identities. All the telescoping identities are special cases of Euler's telescoping lemma, Lemma 3. In order to apply the telescoping lemma, we would like to use sequences \(u_{k}\), \(v_{k}\) such that \(t_{k}=u_{k}-v_{k}\) can be simplified.
We now turn to the proof of Theorem 1. The motivation behind this theorem is to use (1.7) so that \(t_{k}=u_{k}-v_{k}\) becomes an analogue of a factorized product of linear factors in \(k\).
Proof of Theorem 1.: We combine Lemma 3 with a special instance of the difference equation (1.7). Let
\[x =(gk-g+c)(hk+d),\] \[y =-(gk+c)(hk-h+d),\] \[r =2ghk+ch+dg.\]
With this assignment of variables, we have \(x+r=(gk+c)(hk+h+d)\), \(y-r=-(gk+g+c)(hk+d)\), and \(r+x-y=2(gk+c)(hk+d)\). Substituting these values into (1.7), we have
\[\big[(gk-g+c)(hk+d)\big]_{a,b;q,p}\,\big[-(gk+c)(hk-h+d)\big]_{aq^{4(gk+c)(hk+d)},bq^{2(gk+c)(hk+d)};q,p}\\ -\,\big[(gk+c)(hk+h+d)\big]_{a,b;q,p}\,\big[-(gk+g+c)(hk+d)\big]_{aq^{4(gk+c)(hk+d)},bq^{2(gk+c)(hk+d)};q,p}\\ =\big[2(gk+c)(hk+d)\big]_{a,b;q,p}\,\big[2ghk+ch+dg\big]_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\\ \quad\times W_{aq^{4(gk+c)(hk+d)},bq^{2(gk+c)(hk+d)};q,p}\big(-(gk+g+c)(hk+d)\big),\]
which is equivalent to
\[-\,\big[(gk-g+c)(hk+d)\big]_{a,b;q,p}\,\big[(gk+c)(hk-h+d)\big]_{aq^{2(gk+c)(hk+h+d)},bq^{(gk+c)(hk+h+d)};q,p}\\ \quad\times W_{aq^{2(gk+c)(hk+h+d)},bq^{(gk+c)(hk+h+d)};q,p}\big((gk+c)(hk-h+d)\big)^{-1}\\ +\big[(gk+c)(hk+h+d)\big]_{a,b;q,p}\,\big[(gk+g+c)(hk+d)\big]_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\\ \quad\times W_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\big((gk+g+c)(hk+d)\big)^{-1}\\ =\big[2(gk+c)(hk+d)\big]_{a,b;q,p}\,\big[2ghk+ch+dg\big]_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\\ \quad\times W_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\big((gk+g+c)(hk+d)\big)^{-1}.\]
Multiplication of both sides of this relation by the factor
\[W_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\big((gk+g+c)(hk+d)\big)\]
and application of the reduction
\[\frac{W_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\big((gk+g+c)(hk+d)\big)}{W_{aq^{2(gk+c)(hk+h+d)},bq^{(gk+c)(hk+h+d)};q,p}\big((gk+c)(hk-h+d)\big)}\\ =W_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\big(2ghk+ch+dg\big)\]
gives the identity
\[\big[(gk+c)(hk+h+d)\big]_{a,b;q,p}\,\big[(gk+g+c)(hk+d)\big]_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\]
\[-\big{[}(gk-g+c)(hk+d)\big{]}_{a,b;q,p}\left[(gk+c)(hk-h+d)\right]_{ aq^{2(gk+c)(hk+h+d)},bq^{(gk+c)(hk+h+d)};q,p}\\ \times W_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}\big{(}2ghk +ch+dg\big{)}\\ =\big{[}2(gk+c)(hk+d)\big{]}_{a,b;q,p}\left[2ghk+ch+dg\right]_{ aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p}. \tag{4.1}\]
Thus, in order to apply Lemma 3, we let
\[t_{k} =\big{[}2(gk+c)(hk+d)\big{]}_{a,b;q,p}\left[2ghk+ch+dg\right]_{ aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p},\] \[u_{k} =\big{[}(gk+c)(hk+h+d)\big{]}_{a,b;q,p}\left[(gk+g+c)(hk+d)\right] _{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p},\] \[v_{k} =\big{[}(gk-g+c)(hk+d)\big{]}_{a,b;q,p}\left[(gk+c)(hk-h+d)\right] _{aq^{2(gk+c)(hk+h+d)},bq^{(gk+c)(hk+h+d)};q,p}\] \[\quad\times W_{aq^{2(gk-g+c)(hk+d)},bq^{(gk-g+c)(hk+d)};q,p} \big{(}2ghk+ch+dg\big{)}.\]
Now by (4.1) we have \(t_{k}=u_{k}-v_{k}\), and (1.8) gives the desired result.
**Some special cases of (1.4).**
1. An \(a;q\)-analogue: take \(p\to 0\) and \(b\to 0\). \[\sum_{k=0}^{n}\bigg(\frac{[2(gk+c)(hk+d)]_{a;q}[2ghk+ch+dg]_{aq^{2(gk-g+c)(hk+d)};q}}{[2cd]_{a;q}[ch+dg]_{aq^{2(c-g)d};q}}\\ \times\prod_{j=0}^{k-1}\frac{[(gj+g+c)(hj+d)]_{aq^{2(gj-g+c)(hj+d)};q}}{[(gj+g+c)(hj+d)]_{aq^{2(gj+g+c)(hj+2h+d)};q}}\\ \times\prod_{j=0}^{k-1}W_{aq^{2(gj+c)(hj+h+d)};q}(2ghj+2gh+ch+dg)^{-1}\bigg)\\ =\frac{[(gn+c)(hn+h+d)]_{a;q}[(g+c)d]_{aq^{2(c-g)d};q}}{[2cd]_{a;q}[ch+dg]_{aq^{2(c-g)d};q}}\prod_{j=1}^{n}\frac{[(gj+g+c)(hj+d)]_{aq^{2(gj-g+c)(hj+d)};q}}{[(gj+c)(hj-h+d)]_{aq^{2(gj+c)(hj+h+d)};q}}\\ \times\prod_{j=1}^{n}W_{aq^{2(gj-g+c)(hj+d)};q}(2ghj+ch+dg)^{-1}\\ -\frac{[(c-g)d]_{a;q}[c(d-h)]_{aq^{2c(h+d)};q}}{[2cd]_{a;q}[ch+dg]_{aq^{2(c-g)d};q}}W_{aq^{2(c-g)d};q}(ch+dg).\] (4.2)
2. A \(q\)-analogue. Take \(a\to 0\) in (4.2). \[\sum_{k=0}^{n}\bigg{(}\frac{[2(gk+c)(hk+d)]_{q}[2ghk+ch+dg]_{q}}{[2 cd]_{q}[ch+dg]_{q}}q^{-\big{(}ghk^{2}+(ch+dg+gh)k\big{)}}\bigg{)}\\ =\frac{[(gn+c)(hn+h+d)]_{q}[(gn+g+c)(hn+d)]_{q}}{[2cd]_{q}[ch+dg] _{q}}q^{-\big{(}ghn^{2}+(ch+dg+gh)n\big{)}}\\ -\frac{[c(d-h)]_{q}[(c-g)d]_{q}}{[2cd]_{q}[ch+dg]_{q}}q^{ch+dg}.\] (4.3)
3. We can further specialize \(c,d,g\) and \(h\) in (4.3) to obtain more \(q\)-analogues, highlighted in SS1. In particular, we have the following: 1. Take \(c,d,g\to 1\) and \(h\to 0\), shift the index to run from \(k=1\) to \(n+1\), and replace \(n+1\) by \(n\) to obtain (1.5a). 2. Take \(c,d,g,h\to 1\) to get (1.5b).
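As a quick sanity check of the \(q\)-analogue (4.3) listed in item 2 above, one can verify it symbolically for small integer choices of the parameters. The following sketch (our own code, written with the SymPy library; it is not part of the derivation) does so for \(c,d,g,h\in\{1,2\}\) and \(n\leq 2\):

```python
from itertools import product
from sympy import symbols, cancel

q = symbols('q')

def qnum(z):
    # the q-number [z]_q = (1 - q**z) / (1 - q)
    return (1 - q**z) / (1 - q)

def lhs(n, c, d, g, h):
    return sum(qnum(2*(g*k + c)*(h*k + d)) * qnum(2*g*h*k + c*h + d*g)
               / (qnum(2*c*d) * qnum(c*h + d*g))
               * q**(-(g*h*k**2 + (c*h + d*g + g*h)*k))
               for k in range(n + 1))

def rhs(n, c, d, g, h):
    first = (qnum((g*n + c)*(h*n + h + d)) * qnum((g*n + g + c)*(h*n + d))
             / (qnum(2*c*d) * qnum(c*h + d*g))
             * q**(-(g*h*n**2 + (c*h + d*g + g*h)*n)))
    second = (qnum(c*(d - h)) * qnum((c - g)*d)
              / (qnum(2*c*d) * qnum(c*h + d*g)) * q**(c*h + d*g))
    return first - second

for c, d, g, h in product([1, 2], repeat=4):
    for n in range(3):
        assert cancel(lhs(n, c, d, g, h) - rhs(n, c, d, g, h)) == 0
```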
## Acknowledgements
The research of Michael J. Schlosser was partially supported by the Austrian Science Fund (FWF), grant P 32305.
|
2309.13968 | Global analysis and LHC study of a vector-like extension of the Standard
Model with extra scalars | We perform a global analysis of a vector-like extension of the Standard
Model, which also features additional doublet and singlet scalars. The usual
Yukawa interactions are forbidden in this setup by an extra U(1) global
symmetry and the masses of the second and third family quarks and leptons are
generated via the mixing with the vector-like sector. We identify three
best-fit benchmark scenarios which satisfy the constraints imposed by the
stability of the scalar potential, the perturbativity of the coupling
constants, the measurement of the muon anomalous magnetic moment and the
non-observation of the flavor violating tau decays. We show that dominant
contributions to the muon $(g-2)$ originate in this model from the charged
Higgs/neutral lepton one-loop diagrams, thus correcting an inaccurate statement
than can be found in the literature. We also perform a detailed LHC analysis of
the benchmark scenarios. We investigate the experimental constraints stemming
from direct searches for vector-like quarks, vector-like leptons and exotic
scalars. While we show that the model is not currently tested by any collider
experiment, we point out that decays of a heavy Higgs boson into two tau
leptons may offer a smoking gun signature for the model verification in
upcoming runs at the LHC. | A. E. Cárcamo Hernández, Kamila Kowalska, Huchan Lee, Daniele Rizzo | 2023-09-25T09:06:28Z | http://arxiv.org/abs/2309.13968v1 | # Global analysis and LHC study of a vector-like extension of the Standard Model with extra scalars
###### Abstract
We perform a global analysis of a vector-like extension of the Standard Model, which also features additional doublet and singlet scalars. The usual Yukawa interactions are forbidden in this setup by an extra U(1) global symmetry and the masses of the second and third family quarks and leptons are generated via the mixing with the vector-like sector. We identify three best-fit benchmark scenarios which satisfy the constraints imposed by the stability of the scalar potential, the perturbativity of the coupling constants, the measurement of the muon anomalous magnetic moment and the non-observation of the flavor violating tau decays. We show that dominant contributions to the muon (\(g-2\)) originate in this model from the charged Higgs/neutral lepton one-loop diagrams, thus correcting an inaccurate statement than can be found in the literature. We also perform a detailed LHC analysis of the benchmark scenarios. We investigate the experimental constraints stemming from direct searches for vector-like quarks, vector-like leptons and exotic scalars. While we show that the model is not currently tested by any collider experiment, we point out that decays of a heavy Higgs boson into two tau leptons may offer a smoking gun signature for the model verification in upcoming runs at the LHC.
###### Contents
* I Introduction
* II Generation of fermion masses and mixing
* II.1 Hierarchy of masses
* II.2 CKM mixing matrix
* III Scalar potential constraints
* III.1 Scalar masses in the alignment limit
* III.2 Bounded-from-below limits
* III.3 Vacuum stability
* IV Flavor physics constraints
* IV.1 Muon anomalous magnetic moment
* IV.2 Lepton flavor violating decays
* IV.3 CKM anomaly
* V Perturbativity constraints
* VI Numerical analysis and benchmark scenarios
* VI.1 Scanning methodology
* VI.2 Benchmark scenarios
* VII LHC study of the benchmark scenarios
* VII.1 Vector-like quarks
* VII.2 Vector-like leptons
* VII.3 Exotic scalars
* VIII Conclusions
* A Fermion mass matrices
* A.1 Charged leptons
* A.2 Up-type quarks
* A.3 Down-type quarks
* A.4 Neutrino sector
* B Scalar mass matrices
* C Derivation of the bounded-from-below conditions
* D Renormalization group equations
## I Introduction
The origin of the flavor structure of the Standard Model (SM), i.e. the observed hierarchy between fermion masses and mixing angles of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, is one of the greatest mysteries of particle physics that still lacks a convincing and commonly accepted explanation. A number of New Physics (NP) ideas have been put forward in recent decades to address the flavor puzzle, among which the Froggatt-Nielsen (FN) mechanism [1] and extra dimensions [2; 3; 4] are those that admittedly received the most attention and applications. The underlying concept is to introduce a new quantity that in some sense would be "larger" than the electroweak symmetry breaking (EWSB) scale. This could be the vacuum expectation value (vev) of a flavon field, or a distance of a fermion field from the infra-red brane. Such a hierarchy of scales can be then translated into a hierarchy of masses and mixing angles of the SM quarks and leptons. A similar idea gave rise to the famous seesaw mechanism of neutrino mass generation [5; 6; 7; 8; 9; 10; 11], where tiny values of the SM neutrino masses arise as a result of suppression of the EWSB scale by a very large Majorana mass.
In Ref. [12] a FN-inspired model was proposed to explain the observed masses and mixing patterns of the SM fermions. The SM Yukawa interactions are forbidden in this setup by an extra abelian symmetry U(1)\({}_{X}\), which could be either global or local. The particle content of the model corresponds to the two-Higgs-doublet model (2HDM) extended by a full family of vector-like (VL) fermions charged under U(1)\({}_{X}\), and one U(1)\({}_{X}\)-breaking singlet scalar which plays the role of a FN flavon. The large third-family Yukawa couplings are then effectively generated via mixing of the SM quarks and leptons with the SU(2)\({}_{L}\) doublet VL fermions, while the Yukawa couplings of the second family emerge from a seesaw-like construction, mediated by the heavy VL SU(2)\({}_{L}\) singlets.
The rich structure of the model introduced in Ref. [12] makes it a perfect framework for providing a combined explanation both for the flavor pattern of the SM and for the miscellaneous anomalies which emerged in recent years in collider experiments. In this context, lepton-flavor violating anomalies in the rare semi-leptonic decays of the \(B\) mesons were analyzed in Refs. [12; 13], \(Z\)-mediated Flavour Changing Neutral Currents in Ref. [14], and a deviation from the SM prediction in the measured value of the anomalous magnetic moment of muon in Refs. [15; 13; 16]. In the latter study, in which the extra U(1)\({}_{X}\) symmetry was assumed to be global, five benchmark points were identified that could account for the muon (\(g-2\)) anomaly and, at the same time, give rise to the mass and mixing patterns of the SM fermions. The scenarios pinpointed in Ref. [16] were characterized by relatively low (\(\sim 200\) GeV) masses of the VL lepton doublets and large (\(\sim 10\)) quartic couplings of the scalar potential, which may indicate a loss of perturbativity at scales very close to the typical scale set by the masses of the NP particles in the analyzed model.
In this study, we reassess the findings of Ref. [16] improving and extending its analysis in several different directions. Firstly, we thoroughly discuss the impact of the most recent bounds from direct NP searches at the Large Hadron Collider (LHC) on the allowed parameter space of the model, a topic which was not addressed in detail in Refs. [13; 14; 15; 16]. While we show that the model is not currently tested by any collider experiment, we point out that decays of a heavy Higgs boson into two tau leptons may offer a smoking gun signature for the detection of the model in the upcoming runs of the LHC.
Secondly, we demonstrate that the quartic and Yukawa couplings of the model are subject to strong constraints from their renormalization group (RG) running. In Ref. [16] it was required that all the dimensionless parameters of the lagrangian remain perturbative (in a loose sense of being smaller than \(\sqrt{4\pi}\) for the gauge/Yukawa and smaller than \(4\pi\) for the scalar potential couplings) at the characteristic energy scale of the model. We argue that such a simplistic implementation of the perturbativity bounds should be taken with a grain of salt. The breakdown of perturbativity usually calls for an extension of the theoretical setup by extra degrees of freedom in order to
cure the pathological behavior of the running couplings, and/or for an inclusion of non-perturbative effects (like bound-state formation). If any of those arose at the scale specific to the original NP model, they would most likely affect its phenomenological predictions. Therefore, it is more correct to apply the perturbativity bounds to the running couplings evaluated at an energy scale which is high enough that the phenomenology of the specific NP model can be trusted. Once this improvement was implemented in our study, we discovered that all the benchmark points found previously in Ref. [16] were disfavored.
Last but not least, we refine the derivation of the stability conditions for the scalar potential which in Refs. [13; 16] was simplified to the 2HDM case by integrating out the singlet flavon field. In the current work we derive all the relevant stability conditions in the full three-scalar setup, obtaining additional constraints on the quartic couplings.
With all the improvements in place, we identify three benchmark scenarios that satisfy our theoretical and experimental requirements. While these best-fit points emerge from a random numerical scan, they present features that are generic for the model under study. Most importantly, we point out that a charged Higgs/heavy neutrino loop is a dominant contribution to the muon (\(g-2\)) anomaly. This results from the fact that the competing neutral scalar/heavy charged lepton contributions are governed by the same Yukawa coupling that determines the tree-level muon mass and is thus required to be small. Once more, this finding is qualitatively different from the conclusions obtained in Refs. [13; 16], where only the charged lepton loops were considered.
The structure of the paper is the following. In Sec. II we briefly review the field content of the model. We also show how the SM fermion masses and the CKM matrix are generated in this framework. Sec. III is dedicated to the scalar sector of the theory. Tree-level scalar masses in the alignment limit are presented, as well as three-field potential stability conditions. Experimental constraints from the flavor physics observables (muon (\(g-2\)), rare \(\tau\) decays, CKM anomaly) are examined in Sec. IV. In Sec. V we discuss the RG flow of the model couplings and we derive the corresponding perturbativity bounds. Sec. VI comprises the numerical analysis of the model. We discuss the setup of our numerical scan and we identify three benchmark scenarios that satisfy all the theoretical and phenomenological constraints. In Sec. VII we present a detailed analysis of the LHC searches that may test the parameter space of the model. We summarize our findings in Sec. VIII. Appendices feature, respectively, explicit forms of the fermion (Appendix A) and scalar (Appendix B) mass matrices, derivation of the bounded-from-below constraints (Appendix C), and the RG equations (Appendix D).
## II Generation of fermion masses and mixing
We begin our study by reviewing the structure and the main properties of the model introduced in Ref. [12]. In the following, we focus mostly on these features of the model which play a pivotal role in the subsequent phenomenological analysis. Technical details of the model, including the analytical diagonalization of the fermion mass matrices and the derivation of the interaction vertices in the mass basis, can be found in Refs. [12; 13; 14; 16].
The particle content of the model is summarized in Table 1. The SM fermion sector, collectively denoted as \(\psi_{i}\) (\(\psi_{i}=Q_{iL}\), \(u_{iR}\), \(d_{iR}\), \(L_{iL}\), \(e_{iR}\) and \(i=1,2,3\) stands for a generation index) is extended by one full family of VL fermions, indicated collectively as (\(\psi_{4}\), \(\widetilde{\psi}_{4}\)). We adopt the convention of using the left-chiral two-component Weyl spinors, therefore the subscripts \(L,R\) indicate the names of the fermions, not the chiralities. The scalar sector contains, besides the usual SU(2)\({}_{L}\) Higgs doublet dubbed as \(H_{u}\), an extra scalar doublet \(H_{d}\) and a scalar singlet \(\phi\). Note that all the NP particles and the Higgs doublet \(H_{u}\) are charged under an extra global gauge symmetry U(1)\({}_{X}\), while the SM fermions are U(1)\({}_{X}\) singlets. As a result, the ordinary SM Yukawa interactions are
forbidden.
All the renormalizable Yukawa interactions between the SM and NP fermions which are allowed by the extended gauge symmetry can be schematically written as:
\[\mathcal{L}_{\text{ren}}^{\text{Yukawa}}=y_{i4}^{\psi}\psi_{iL}H \psi_{4R}+y_{4j}^{\psi}\psi_{4L}H\psi_{jR}+x_{i4}^{\psi}\psi_{iL}\phi\,\widetilde {\psi}_{4R}+x_{4j}^{\psi}\widetilde{\psi}_{4L}\phi\,\psi_{jR}\\ +M_{4}^{\psi}\,\psi_{4L}\widetilde{\psi}_{4R}+M_{4}^{\widetilde{ \psi}}\,\widetilde{\psi}_{4L}\psi_{4R}+\text{h.c.}, \tag{1}\]
where \(H\) is either \(H_{u}\) or \(H_{d}\) and \(M_{4}^{\psi}\) (\(M_{4}^{\widetilde{\psi}}\)) denotes the VL doublet (singlet) mass parameter. Note that with the U(1)\({}_{X}\) charges given in Table 1 the scalar \(H_{u}\) only couples to the up-type quarks, while \(H_{d}\) to the down-type quarks and charged leptons, reminiscent of the 2HDM Type-II model.
### Hierarchy of masses
Once the neutral components of the scalar fields develop their vevs, the \(5\times 5\) fermion mass matrices are generated. Since their upper \(3\times 3\) blocks contain only zeros (we recall that the SM Yukawa couplings are forbidden by the U(1)\({}_{X}\) symmetry), one has the freedom to rotate the first three families. It can easily be shown [12] that this allows one to choose a flavor basis in which the fermion mass matrices acquire the following form:
\[\mathcal{M}_{\psi}=\left(\begin{array}{c|ccccc}&\psi_{1R}&\psi_{2R}&\psi_{3R}&\psi_{4R}&\widetilde{\psi}_{4R}\\ \hline\psi_{1L}&0&0&0&(y_{14}^{\psi}\langle H^{0}\rangle)&0\\ \psi_{2L}&0&0&0&y_{24}^{\psi}\langle H^{0}\rangle&0\\ \psi_{3L}&0&0&0&y_{34}^{\psi}\langle H^{0}\rangle&x_{34}^{\psi}\langle\phi\rangle\\ \psi_{4L}&0&0&y_{43}^{\psi}\langle H^{0}\rangle&0&M_{4}^{\psi}\\ \widetilde{\psi}_{4L}&0&x_{42}^{\psi}\langle\phi\rangle&x_{43}^{\psi}\langle\phi\rangle&M_{4}^{\widetilde{\psi}}&0\\ \end{array}\right)\,. \tag{2}\]
In the above, the term in parentheses assumes a non-zero value in the mass matrix of the down-type quarks, while it is zero for the up-type quarks and charged leptons. The exact forms of the matrices \(\mathcal{M}_{u}\), \(\mathcal{M}_{d}\) and \(\mathcal{M}_{e}\) are presented in Appendix A.
In order to calculate the masses of the physical quarks and leptons, the \(5\times 5\) matrices of Eq. (2) need to be diagonalized. Due to a large number of free parameters in the Yukawa sector one may expect that the resulting functional dependence of the eigenvalues of \(\mathcal{M}_{\psi}\) on the couplings \(y_{i4}^{\psi}\), \(y_{43}^{\psi}\), \(x_{4i}^{\psi}\), \(x_{34}^{\psi}\) and the masses \(M_{4}^{\psi(\widetilde{\psi})}\) is highly nontrivial. It turns out, however, that it is not necessarily the case and that simplified expressions for the fermion masses can be derived. Denoting the scalar vevs as
\[\langle H_{u}^{0}\rangle=v_{u}/\sqrt{2},\qquad\langle H_{d}^{0}\rangle=v_{d}/ \sqrt{2},\qquad\langle\phi\rangle=v_{\phi}/\sqrt{2} \tag{3}\]
\begin{table}
\begin{tabular}{c|c c c c c|c c c c c c|c c c c c c|c c c} \hline \hline Field & \(Q_{iL}\) & \(u_{iR}\) & \(d_{iR}\) & \(L_{iL}\) & \(e_{iR}\) & \(Q_{4L}\) & \(u_{4R}\) & \(d_{4R}\) & \(L_{4L}\) & \(e_{4R}\) & \(\nu_{4R}\) & \(\widetilde{Q}_{4R}\) & \(\widetilde{u}_{4L}\) & \(\widetilde{d}_{4L}\) & \(\widetilde{L}_{4R}\) & \(\widetilde{e}_{4L}\) & \(\widetilde{\nu}_{4L}\) & \(\phi\) & \(H_{u}\) & \(H_{d}\) \\ \hline SU(3)\({}_{C}\) & \(\mathbf{3}\) & \(\mathbf{\bar{3}}\) & \(\mathbf{\bar{3}}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{3}\) & \(\mathbf{\bar{3}}\) & \(\mathbf{\bar{3}}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{\bar{3}}\) & \(\mathbf{3}\) & \(\mathbf{3}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) \\ SU(2)\({}_{L}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{2}\) \\ U(1)\({}_{Y}\) & \(\frac{1}{6}\) & \(-\frac{2}{3}\) & \(\frac{1}{3}\) & \(-\frac{1}{2}\) & \(1\) & \(\frac{1}{6}\) & \(-\frac{2}{3}\) & \(\frac{1}{3}\) & \(-\frac{1}{2}\) & \(1\) & \(0\) & \(-\frac{1}{6}\) & \(\frac{2}{3}\) & \(-\frac{1}{3}\) & \(\frac{1}{2}\) & \(-1\) & \(0\) & \(0\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) \\ U(1)\({}_{X}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(-1\) & \(-1\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The particle content of the NP model considered in this study.
and defining \(\tan\beta=v_{u}/v_{d}\), the masses of the third and second family quarks and leptons are approximately given by (see also Ref. [12] for a related derivation)
\[m_{t} \approx \frac{1}{\sqrt{2}}\frac{y_{43}^{u}x_{34}^{Q}v_{\phi}v_{u}}{\sqrt{ (x_{34}^{Q}v_{\phi})^{2}+2(M_{4}^{Q})^{2}}},\qquad m_{c}\approx\frac{y_{24}^{u }x_{42}^{u}v_{\phi}v_{u}}{2\,M_{4}^{u}} \tag{4}\] \[m_{b} \approx \frac{1}{\sqrt{2}}\frac{y_{43}^{d}x_{34}^{Q}v_{\phi}v_{d}}{\sqrt{ (x_{34}^{Q}v_{\phi})^{2}+2(M_{4}^{Q})^{2}}},\qquad m_{s}\approx\frac{y_{24}^{d }x_{42}^{d}v_{\phi}v_{d}}{2\,M_{4}^{d}}\] (5) \[m_{\tau} \approx \frac{1}{\sqrt{2}}\frac{y_{43}^{e}x_{34}^{L}v_{\phi}v_{d}}{\sqrt {(x_{34}^{L}v_{\phi})^{2}+2(M_{4}^{L})^{2}}},\qquad m_{\mu}\approx\frac{y_{24}^ {e}x_{42}^{e}v_{\phi}v_{d}}{2\,M_{4}^{e}}\,. \tag{6}\]
While Eqs. (4)-(6) allow determination of the SM fermion masses with an accuracy within a factor of \(2-3\) only, they can be used to gain intuition about which NP Yukawa couplings play a dominant role in establishing the correct masses of particular fermions. For example, large \(x_{34}^{Q}\) and \(y_{43}^{u}\) are expected to fit \(m_{t}\), while \(y_{24}^{u}\ll 1\) or \(x_{42}^{u}\ll 1\) would be required to suppress the charm mass. Similarly, large \(y_{43}^{d}\) is needed to generate \(m_{b}=4.18\) GeV. Additionally, in order to obtain the correct value of the top quark mass, the singlet scalar vev \(v_{\phi}\) should be of the same order as the VL mass parameter \(M_{4}^{Q}\). Note, however, that in our phenomenological analysis we always perform the numerical diagonalization of the mass matrices (A2), (A5) and (A8).
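For concreteness, the numerical diagonalization amounts to a singular value decomposition of the matrix in Eq. (2). The sketch below is a minimal illustration only (the numerical inputs are arbitrary placeholder values chosen by us, not one of the benchmark points of this work); it builds the up-type matrix and extracts the mass eigenvalues as its singular values.

```python
import numpy as np

# Placeholder inputs (GeV); illustrative only, not a fitted benchmark.
vu, vphi = 240.0, 1000.0              # ~ v*sin(beta) for large tan(beta), and <phi> scale
y24u, y34u, y43u = 0.01, 0.5, 1.0     # Yukawa couplings of Eq. (1)
x34Q, x42u, x43u = 1.0, 0.02, 0.5
M4Q, M4u = 1500.0, 2000.0             # vector-like mass parameters

s = 1.0 / np.sqrt(2.0)                # <H^0> = v/sqrt(2), <phi> = v_phi/sqrt(2)
Mu = np.array([
    [0.0, 0.0,          0.0,         0.0,        0.0],
    [0.0, 0.0,          0.0,         s*y24u*vu,  0.0],
    [0.0, 0.0,          0.0,         s*y34u*vu,  s*x34Q*vphi],
    [0.0, 0.0,          s*y43u*vu,   0.0,        M4Q],
    [0.0, s*x42u*vphi,  s*x43u*vphi, M4u,        0.0],
])

# Singular values = physical masses: a massless first-family state,
# the charm- and top-like masses, and the two heavy vector-like quarks.
print(np.sort(np.linalg.svd(Mu, compute_uv=False)))
```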
One important observation which can be deduced from Eqs. (4) and (5) is that the ratio of the top and bottom masses, \(m_{t}/m_{b}\approx 34\), puts relevant constraints on the allowed parameter space of the model. In fact, we have
\[\frac{m_{t}}{m_{b}}\approx\frac{y_{43}^{u}}{y_{43}^{d}}\tan\beta\,. \tag{7}\]
The relation (7) leads to two distinct classes of solutions. In the first one, with both the Yukawa couplings of order one, \(\tan\beta\sim\mathcal{O}(10)\) is required. In the other one, with \(\tan\beta\sim\mathcal{O}(1)\), a large hierarchy between the up and down sector couplings, \(y_{43}^{d}\ll y_{43}^{u}\), must be imposed.
The masses of VL fermions are given, to a very good approximation, by the corresponding VL mass parameters with small contributions stemming from their mixing with the second and the third family,
\[M_{U_{1}} \approx \sqrt{(M_{4}^{Q})^{2}+\frac{1}{2}(v_{\phi}x_{34}^{Q})^{2}-\frac{ (M_{4}^{Q}y_{43}^{u}v_{u})^{2}}{(x_{34}^{Q}v_{\phi})^{2}+2(M_{4}^{Q})^{2}}} \tag{8}\] \[M_{U_{2}} \approx \sqrt{(M_{4}^{u})^{2}+\frac{1}{2}(v_{\phi}x_{43}^{u})^{2}+\frac{ 1}{2}(v_{\phi}x_{42}^{u})^{2}+\frac{2(M_{4}^{u}y_{43}^{u}v_{u})^{2}}{2(M_{4}^ {u})^{2}+(v_{\phi}x_{43}^{u})^{2}+(v_{\phi}x_{42}^{u})^{2}}}\] (9) \[M_{D_{1}} \approx \sqrt{(M_{4}^{Q})^{2}+\frac{1}{2}(v_{\phi}x_{34}^{Q})^{2}},\qquad M _{D_{2}}=\sqrt{(M_{4}^{d})^{2}+\frac{1}{2}(v_{\phi}x_{43}^{d})^{2}+\frac{1}{2} (v_{\phi}x_{42}^{d})^{2}}\] (10) \[M_{E_{1}} \approx \sqrt{(M_{4}^{L})^{2}+\frac{1}{2}(v_{\phi}x_{34}^{L})^{2}},\qquad M _{E_{2}}=\sqrt{(M_{4}^{e})^{2}+\frac{1}{2}(v_{\phi}x_{43}^{e})^{2}+\frac{1}{2} (v_{\phi}x_{42}^{e})^{2}}. \tag{11}\]
In the neutrino sector, the corresponding mass matrix is \(7\times 7\) and its explicit form can be found in Eq. (A12). The resulting masses of the heavy neutrinos read
\[M_{N_{1}}=M_{N_{2}}\approx M_{4}^{\nu},\qquad M_{N_{3}}=M_{N_{4}}\approx\sqrt{( M_{4}^{L})^{2}+\frac{1}{2}(v_{\phi}x_{34}^{L})^{2}}. \tag{12}\]
By comparing Eq. (12) with Eq. (11), we can pinpoint two generic features of the model considered in this study: heavy neutrinos \(N_{1,2}\) are the lightest VL leptons in the spectrum, while the pair \(N_{3,4}\) is mass-degenerate (at the tree level) with the charged VL lepton \(E_{1}\). We will later see that this mass pattern has important consequences for the resulting phenomenology.
As a final remark, let us notice that one complete VL family allows us to give masses to the second and third family of the SM fermions only. To generate the masses for the first family as well, one extra VL family is required (for an example of such a construction, see Ref. [16]). Since such an extension would only increase the number of free parameters in the model without affecting any phenomenological findings, in this study we limit ourselves to its most economical version.
### CKM mixing matrix
The full \(5\times 5\) mixing matrix takes the following form [14]:
\[V_{\rm mixing}=V_{L}^{u}.\,{\rm diag}\,(1,1,1,1,0)\,.V_{L}^{d\dagger} \tag{13}\]
where \(V_{L}^{u}\) and \(V_{L}^{d}\) are the left-handed mixing matrices of Eqs. (101) and (102) which diagonalize the up- and down-type quark mass matrices \({\cal M}_{u}\) and \({\cal M}_{d}\). The zero element of the matrix (13) indicates the fact that the singlet VL quarks do not interact with the SM gauge bosons \(W^{\pm}\). Following the strategy of Ref. [14] and working under the assumption that \(v_{u,d}/M_{4}^{Q,u,d}\ll 1\), we can approximate the \(3\times 3\) CKM matrix as
\[V_{\rm CKM}^{3\times 3}\approx\left(\begin{array}{ccc}1-x_{ud}^{2}/2&x_{ud}&x_ {ud}x_{d}\\ -x_{ud}&1-x_{ud}^{2}/2&x_{d}-x_{u}\\ -x_{u}x_{ud}&x_{u}-x_{d}&1\end{array}\right)\,, \tag{14}\]
where
\[x_{d}=\frac{y_{24}^{d}x_{43}^{d}M_{4}^{Q}}{y_{43}^{d}x_{34}^{Q}M_{4}^{d}}\,, x_{u}=\frac{y_{24}^{u}x_{43}^{u}M_{4}^{Q}}{y_{43}^{u}x_{34}^{Q}M_{4}^{u }}\,, x_{ud}=\frac{y_{14}^{d}}{y_{24}^{d}}\,. \tag{15}\]
Based on the conclusions from Sec. II.1 one expects \(x_{u},x_{d}\ll 1\). Note also that:
* The element \(V_{us}\) of the CKM matrix is given by \(y_{14}^{d}/y_{24}^{d}\) in our model. The presence of a non-zero coupling \(y_{14}^{d}\) is thus crucial to generate the Cabibbo angle of the right size. We also expect \(y_{14}^{d}\approx 0.22\,y_{24}^{d}\).
* The correct value of the element \(V_{ud}\) is generated automatically once the Cabibbo angle is set.
* To reproduce the correct value of the element \(V_{ub}\), one needs \(x_{d}\approx 0.017\). It then follows that \(x_{u}\approx-0.023\) is required in order to fit the element \(V_{cb}\) (it also implies \(x_{43}^{u}\) of order one).
* The only element of the CKM matrix that cannot be accurately reproduced is \(V_{td}\).
To analyze this issue more quantitatively, it is convenient to rewrite the CKM matrix (14) in terms of the Wolfenstein parameters [17]. Defining, for example,
\[x_{d}=A\,\lambda^{3}\,\sqrt{\eta^{2}+\rho^{2}}\,, x_{u}=A\,\lambda^{2}(\sqrt{\eta^{2}+\rho^{2}}\,-1)\,, x_{ud}=\frac{x_{u}\,x_{d}}{\lambda}\,, \tag{16}\]
one obtains
\[|V_{\rm CKM}^{3\times 3}|=\left(\begin{array}{ccc}1-\lambda^{2}/2&\lambda&A\, \lambda^{3}\,\sqrt{\eta^{2}+\rho^{2}}\\ \lambda&1-\lambda^{2}/2&A\,\lambda^{2}\\ A\,\lambda^{3}\left(1-\sqrt{\eta^{2}+\rho^{2}}\right)&A\,\lambda^{2}&1\end{array} \right)+{\cal O}(\lambda^{4})\,. \tag{17}\]
Plugging the Wolfenstein parameters extracted from the global fit [18] into Eq. (17) and comparing it with the experimental determination of the CKM matrix elements reported in Ref. [18], one can estimate to what extent the measured structure of the CKM matrix can be reproduced in our model. One obtains
\[\frac{|V_{\rm CKM}^{\rm exp}|-|V_{\rm CKM}^{3\times 3}|}{\delta|V_{\rm CKM}^{ \rm exp}|}=\left(\begin{array}{ccc}0&0&0\\ 0&0.04&0\\ 8.88&0.23&0.01\end{array}\right)\,. \tag{18}\]
It results from Eq. (18) that in the framework of our model we may not be able to correctly reproduce all the elements of the CKM matrix (this observation will be later confirmed by our numerical scan). 1 Once more, this issue could be solved by introducing an extra VL family.
Footnote 1: Note that by modifying the definitions of the parameters \(x_{u}\), \(x_{d}\) and \(x_{ud}\) in Eq. (16), one could fit the element \(V_{td}\) better, but at the price of losing accuracy in reproducing \(V_{cb}\).
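The origin of the \(V_{td}\) tension in Eq. (18) can be illustrated with a few lines of Python: once \(\lambda\), \(A\) and \(\sqrt{\eta^{2}+\rho^{2}}\) are fixed by the measured \(|V_{us}|\), \(|V_{cb}|\) and \(|V_{ub}|\) (central values from Table 3), Eq. (17) leaves no freedom in \(|V_{td}|\). This is only a rough cross-check; the exact entry 8.88 of Eq. (18) is obtained with the global-fit Wolfenstein parameters and includes the \({\cal O}(\lambda^{4})\) pieces.

```python
import numpy as np

# experimental central values and errors, cf. Table 3
Vus, Vcb, Vub = 0.22450, 0.04100, 0.00382
Vtd_exp, Vtd_err = 0.00800, 0.00030

# fix the parameters of Eq. (16) from V_us, V_cb and V_ub
lam = Vus
A = Vcb / lam**2
r = Vub / (A * lam**3)                 # r = sqrt(eta^2 + rho^2)

# Eq. (17) then predicts |V_td| with no remaining freedom
Vtd_model = A * lam**3 * (1.0 - r)
pull = (Vtd_exp - Vtd_model) / Vtd_err
print(f"|V_td| predicted: {Vtd_model:.5f}, deviation: {pull:.1f} sigma")
# ~8.7 sigma, in line with the (3,1) entry of Eq. (18)
```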
To conclude this section, we would like to stress again that the approximation adopted in the foregoing discussion hinges on the assumption of a specific mass hierarchy in the NP sector, which may not be exactly realized. Therefore, in the phenomenological analysis we always calculate all the elements of the CKM matrix numerically.
## III Scalar Potential Constraints
In this section, we discuss the constraints stemming from the scalar potential of the model. In particular, we define the alignment limit of the SM-like Higgs boson, we derive the conditions for the scalar potential to be bounded from below in the presence of three independent scalar fields, and we verify whether the electroweak (EW) vacuum is stable.
### Scalar masses in the alignment limit
In the interaction basis, the most generic renormalizable scalar potential of the model defined in Table 1 takes the form [16]:
\[\begin{split} V&=\mu_{u}^{2}(H_{u}^{\dagger}H_{u})+ \mu_{d}^{2}(H_{d}^{\dagger}H_{d})+\mu_{\phi}^{2}(\phi^{*}\phi)-\frac{1}{2}\mu _{\rm sb}^{2}\left(\phi^{2}+\phi^{*2}\right)\\ &+\frac{1}{2}\lambda_{1}(H_{u}^{\dagger}H_{u})^{2}+\frac{1}{2} \lambda_{2}(H_{d}^{\dagger}H_{d})^{2}+\lambda_{3}(H_{u}^{\dagger}H_{u})(H_{d }^{\dagger}H_{d})+\lambda_{4}(H_{u}^{\dagger}H_{d})(H_{d}^{\dagger}H_{u})\\ &-\frac{1}{2}\lambda_{5}(\epsilon_{ij}H_{u}^{i}H_{d}^{j}\phi^{2}+ \text{H.c.})+\frac{1}{2}\lambda_{6}(\phi^{*}\phi)^{2}+\lambda_{7}(\phi^{*} \phi)(H_{u}^{\dagger}H_{u})+\lambda_{8}(\phi^{*}\phi)(H_{d}^{\dagger}H_{d}), \end{split} \tag{19}\]
where \(\mu_{u,d,\phi}^{2}\) are dimensionful mass parameters, \(\lambda_{1,2,\cdots,8}\) denote dimensionless quartic coupling constants, and \(\mu_{\rm sb}^{2}\) is an extra mass term which softly violates the global U(1)\({}_{X}\) symmetry. The main reason to introduce the latter is to prevent a massless Goldstone boson of the spontaneously broken U(1)\({}_{X}\) from appearing in the spectrum. As we will see below, the soft-breaking term does not
affect the CP-even and the charged scalar masses since it only enters the mass matrix of the pseudoscalars.
Expanding the fields \(H_{u}\), \(H_{d}\) and \(\phi\) around their vacuum states,
\[H_{u}=\begin{pmatrix}H_{u}^{+}\\ \frac{1}{\sqrt{2}}\left(v_{u}+\operatorname{Re}H_{u}^{0}+i\operatorname{Im}H_ {u}^{0}\right)\end{pmatrix},\qquad H_{d}=\begin{pmatrix}\frac{1}{\sqrt{2}} \left(v_{d}+\operatorname{Re}H_{d}^{0}+i\operatorname{Im}H_{d}^{0}\right)\\ H_{d}^{-}\end{pmatrix},\] \[\phi=\frac{1}{\sqrt{2}}\left(v_{\phi}+\operatorname{Re}\phi+i \operatorname{Im}\phi\right), \tag{20}\]
where the vevs are defined in Eq. (3), one can use the minimization conditions for the scalar potential (19) to express the dimensionful mass parameters in terms of the quartic couplings and the vevs,
\[\mu_{u}^{2}=-\frac{1}{2}\left(\lambda_{1}v_{u}^{2}+\lambda_{3}v_ {d}^{2}+\lambda_{7}v_{\phi}^{2}\right)-\frac{1}{4}\lambda_{5}\left(\frac{v_{d }}{v_{u}}\right)v_{\phi}^{2},\] \[\mu_{d}^{2}=-\frac{1}{2}\left(\lambda_{2}v_{d}^{2}+\lambda_{3}v_ {u}^{2}+\lambda_{8}v_{\phi}^{2}\right)-\frac{1}{4}\lambda_{5}\left(\frac{v_{u }}{v_{d}}\right)v_{\phi}^{2}, \tag{21}\] \[\mu_{\phi}^{2}=-\frac{1}{2}\left(\lambda_{6}v_{\phi}^{2}+\lambda_ {5}v_{d}v_{u}+\lambda_{7}v_{u}^{2}+\lambda_{8}v_{d}^{2}\right)+\mu_{\rm sb}^{ 2}\,.\]
One must have
\[\mu_{u}^{2}<0\,,\qquad\mu_{d}^{2}<0\,,\qquad\mu_{\phi}^{2}<0 \tag{22}\]
in order to generate the non-zero vevs for all the scalar fields.
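In practice the tadpole relations of Eq. (21) are used to trade \(\mu_{u,d,\phi}^{2}\) for the quartic couplings and the vevs; the minimal sketch below (with placeholder inputs of the size preferred by the scan of Sec. VI) also checks the sign condition of Eq. (22).

```python
def mass_parameters(lam, vu, vd, vphi, mu_sb2):
    """Tadpole conditions of Eq. (21); 'lam' maps the index i to lambda_i."""
    mu_u2 = -0.5*(lam[1]*vu**2 + lam[3]*vd**2 + lam[7]*vphi**2) \
            - 0.25*lam[5]*(vd/vu)*vphi**2
    mu_d2 = -0.5*(lam[2]*vd**2 + lam[3]*vu**2 + lam[8]*vphi**2) \
            - 0.25*lam[5]*(vu/vd)*vphi**2
    mu_p2 = -0.5*(lam[6]*vphi**2 + lam[5]*vd*vu
                  + lam[7]*vu**2 + lam[8]*vd**2) + mu_sb2
    return mu_u2, mu_d2, mu_p2

# placeholder couplings and vevs (GeV, GeV^2) of the size found in the scan
lam = {1: 0.258, 2: 0.5, 3: 0.26, 5: -0.05, 6: 0.4, 7: 0.001, 8: 0.3}
mus = mass_parameters(lam, vu=245.0, vd=20.0, vphi=1000.0, mu_sb2=1.4e5)
print(mus, all(m < 0 for m in mus))   # Eq. (22): all three must be negative
```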
The explicit forms of the scalar mass matrices derived from the potential (19) are collected in Appendix B. The real parts of the scalar fields, \(\operatorname{Re}H_{u}^{0}\), \(\operatorname{Re}H_{d}^{0}\) and \(\operatorname{Re}\phi\), account for three CP-even Higgs bosons. The corresponding mass matrix \(\mathbf{M}_{\rm CP-even}^{2}\) (see Eq. (104)) can be diagonalized with a mixing matrix \(R_{h}\) defined in Eq. (105). The masses of three physical neutral scalars, \(h_{1}\), \(h_{2}\) and \(h_{3}\), correspond to the eigenvalues of \(\mathbf{M}_{\rm CP-even}^{2}\),
\[\operatorname{diag}\{M_{h_{1}}^{2},M_{h_{2}}^{2},M_{h_{3}}^{2}\}=R_{h}( \mathbf{M}_{\rm CP-even}^{2})R_{h}^{T}\,. \tag{23}\]
In the following, we will want to identify the SM Higgs boson with the lightest neutral scalar \(h_{1}\). To this end, we choose to work in the so-called alignment limit, defined as a set of constraints on the quartic couplings \(\lambda_{i}\) under which \(h_{1}\) features the same tree-level couplings with the SM particles as the SM Higgs. We show in Appendix B that this assumption requires
\[\lambda_{8}\,\cos^{2}\beta+\lambda_{7}\,\sin^{2}\beta+\lambda_{5} \,\sin\beta\cos\beta = 0 \tag{24}\] \[\lambda_{2}\,\cos^{2}\beta-\lambda_{1}\,\sin^{2}\beta-\lambda_{ 3}(\cos^{2}\beta-\sin^{2}\beta) = 0\,, \tag{25}\]
where the equality imposes a perfect alignment condition. The masses of the CP-even scalars in the alignment limit read
\[M_{h_{1}}^{2} = v^{2}\left(\lambda_{1}\,\sin^{2}\beta+\lambda_{3}\,\cos^{2}\beta\right) \tag{26}\] \[M_{h_{2}}^{2} = \lambda_{6}\,v_{\phi}^{2}-\frac{1}{8\sin\beta\cos\beta}\left(B_{2 3}+\sqrt{4A_{23}^{2}+B_{23}^{2}}\right)\] (27) \[M_{h_{3}}^{2} = \lambda_{6}\,v_{\phi}^{2}-\frac{1}{8\sin\beta\cos\beta}\left(B_{2 3}-\sqrt{4A_{23}^{2}+B_{23}^{2}}\right)\,, \tag{28}\]
with \(A_{23}\) and \(B_{23}\) defined in Eq. (104) and Eq. (105), respectively, and \(v=\sqrt{v_{u}^{2}+v_{d}^{2}}=246\) GeV.
The CP-odd scalar mass matrix in the basis \(\left(\text{Im}\,H_{u}^{0},\,\text{Im}\,H_{d}^{0},\,\text{Im}\,\phi\right)\), \(\text{M}_{\rm CP-odd}^{2}\), is defined in Eq. (144). After the diagonalization, the physical CP-odd spectrum consists of one massless Goldstone boson and two massive pseudoscalars, \(a_{1}\) and \(a_{2}\),
\[\text{diag}\{0,M_{a_{1}}^{2},M_{a_{2}}^{2}\}=R_{a}(\text{M}_{\rm CP-odd}^{2})R _{a}^{T}\,, \tag{29}\]
with the masses given by
\[M_{a_{1}}^{2} = -\frac{\lambda_{5}}{2\sin 2\beta}\left(v^{2}\sin^{2}2\beta+v_{ \phi}^{2}\right) \tag{30}\] \[M_{a_{2}}^{2} = 2\,\mu_{\rm sb}^{2}\,. \tag{31}\]
Note that \(\lambda_{5}<0\) and \(\mu_{\rm sb}^{2}>0\) are required to guarantee the positivity of \(M_{a_{1}}^{2}\) and \(M_{a_{2}}^{2}\).
Finally, the charged scalar mass matrix in the basis \(\left(H_{u}^{\pm},H_{d}^{\pm}\right)\), \(\text{M}_{\rm Charged}^{2}\), is defined in Eq. (152). After the diagonalization with a mixing matrix \(R_{\beta}\), one is left with a massless charged Goldstone boson and a charged Higgs boson,
\[\text{diag}\{0,M_{h^{\pm}}^{2}\}=R_{\beta}(\text{M}_{\rm Charged}^{2})R_{ \beta}^{T}\,. \tag{32}\]
The corresponding mass reads in this case
\[M_{h^{\pm}}^{2}=\frac{\lambda_{4}v^{2}}{2}-\frac{\lambda_{5}v_{\phi}^{2}}{2 \sin 2\beta}\,. \tag{33}\]
As a closing remark, let us notice that the alignment condition (25) indicates
\[\lambda_{2}=\lambda_{3}+\tan^{2}\beta(\lambda_{1}-\lambda_{3})\,. \tag{34}\]
In order to preserve the perturbativity of \(\lambda_{2}\) (more on this in Sec. V), the term in parentheses needs to be fine-tuned with a precision \(\mathcal{O}(1/\tan^{2}\beta)\) or better, effectively fixing \(\lambda_{3}\approx\lambda_{1}\) with the same accuracy. On the other hand, Eq. (26) implies that we can identify \(\lambda_{1}\) with the quartic coupling of the SM, \(\lambda_{1}=0.258\), as long as \(\tan\beta\gtrsim 3\). Similarly, the alignment condition (24) gives
\[\lambda_{8}=-\tan\beta\left(\lambda_{7}\tan\beta+\lambda_{5}\right)\,. \tag{35}\]
Perturbativity of \(\lambda_{8}\) then requires \(\lambda_{7}\sim\mathcal{O}(1/\tan^{2}\beta)\) and \(\lambda_{5}\sim\mathcal{O}(1/\tan\beta)\).
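Collecting the alignment relations (34)-(35) and the mass formulae (26), (30), (31) and (33), the light part of the scalar spectrum can be estimated in a few lines. The inputs below are illustrative (of the size found later in the scan); \(M_{h_{2}}\) and \(M_{h_{3}}\) are omitted since they involve the Appendix-B quantities \(A_{23}\) and \(B_{23}\).

```python
import numpy as np

def aligned_scalar_spectrum(lam1, lam3, lam4, lam5, lam7,
                            tan_beta, v, v_phi, mu_sb2):
    """Alignment-limit relations (34)-(35) and masses (26), (30), (31), (33)."""
    beta = np.arctan(tan_beta)
    sb, cb, s2b = np.sin(beta), np.cos(beta), np.sin(2*beta)
    lam2 = lam3 + tan_beta**2 * (lam1 - lam3)                   # Eq. (34)
    lam8 = -tan_beta * (lam7*tan_beta + lam5)                   # Eq. (35)
    Mh1  = np.sqrt(v**2 * (lam1*sb**2 + lam3*cb**2))            # Eq. (26)
    Ma1  = np.sqrt(-lam5/(2*s2b) * (v**2*s2b**2 + v_phi**2))    # Eq. (30)
    Ma2  = np.sqrt(2.0*mu_sb2)                                  # Eq. (31)
    Mhpm = np.sqrt(lam4*v**2/2 - lam5*v_phi**2/(2*s2b))         # Eq. (33)
    return dict(lam2=lam2, lam8=lam8, Mh1=Mh1, Ma1=Ma1, Ma2=Ma2, Mhpm=Mhpm)

# illustrative inputs (GeV, GeV^2); with lam1 at the SM value, Mh1 comes out
# close to 125 GeV
print(aligned_scalar_spectrum(lam1=0.258, lam3=0.257, lam4=0.55, lam5=-0.04,
                              lam7=0.001, tan_beta=13, v=246.0,
                              v_phi=1015.0, mu_sb2=1.4e5))
```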
### Bounded-from-below limits
To guarantee that the minimum around which we expand the scalar potential (19) is physically meaningful, we must ensure that the potential is bounded from below, which means that it cannot tend to negative infinity along any direction in the field space. This requirement puts additional restrictions on the allowed values of the couplings \(\lambda_{i}\). To derive the 'bounded-from-below' constraints, one should analyze all possible directions along which the scalar fields \(H_{u}\), \(H_{d}\) and \(\phi\) can flow towards arbitrarily large values. The details of our derivation are presented in Appendix C. Here we summarize our findings in the form of inequality conditions which need to be satisfied by
the quartic couplings of the potential (19):
\[\begin{split}\lambda_{8}+\sqrt{\lambda_{2}\lambda_{6}}& >0\\ \lambda_{7}+\sqrt{\lambda_{1}\lambda_{6}}&>0\\ \lambda_{3}+\sqrt{\lambda_{2}\lambda_{1}}&>0\\ \lambda_{3}+\lambda_{4}+\sqrt{\lambda_{2}\lambda_{1}}& >0\\ -\frac{1}{4}\frac{(\operatorname{Re}\lambda_{5})^{2}+(\operatorname {Im}\lambda_{5})^{2}}{\lambda_{a}}+\lambda_{4}&>0\\ 4\lambda_{b}^{2}-(\operatorname{Re}\lambda_{5})^{2}+ \operatorname{Re}\lambda_{5}\operatorname{Im}\lambda_{5}&>0\\ 4\lambda_{b}^{2}-(\operatorname{Im}\lambda_{5})^{2}+ \operatorname{Re}\lambda_{5}\operatorname{Im}\lambda_{5}&>0\\ \end{split} \tag{36}\]
where \(\lambda_{a}=\frac{3}{2}\lambda_{6}+\lambda_{3}\frac{\lambda_{6}}{\sqrt{\lambda_{1}\lambda_{2}}}+\lambda_{7}\sqrt{\frac{\lambda_{6}}{\lambda_{1}}}+\lambda_{8}\sqrt{\frac{\lambda_{6}}{\lambda_{2}}}\) and \(\lambda_{b}=\sqrt{\lambda_{a}\lambda_{4}}\). Since in this study we do not investigate CP violation, we assume that all the parameters of the lagrangian are real, implying \(\operatorname{Im}\lambda_{5}=0\). Note also that several conditions in Eq. (36) are new with respect to the findings of Refs. [13; 16].
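In the numerical scan these conditions act as a simple boolean filter; a minimal implementation for real couplings (\(\operatorname{Im}\lambda_{5}=0\)) is sketched below.

```python
import math

def bounded_from_below(l1, l2, l3, l4, l5, l6, l7, l8):
    """Tree-level conditions of Eq. (36) for real quartic couplings."""
    la = 1.5*l6 + l3*l6/math.sqrt(l1*l2) + l7*math.sqrt(l6/l1) \
         + l8*math.sqrt(l6/l2)
    lb2 = la*l4                    # lambda_b^2 = lambda_a * lambda_4
    return all([
        l8 + math.sqrt(l2*l6) > 0,
        l7 + math.sqrt(l1*l6) > 0,
        l3 + math.sqrt(l1*l2) > 0,
        l3 + l4 + math.sqrt(l1*l2) > 0,
        -0.25*l5**2/la + l4 > 0,
        4*lb2 - l5**2 > 0,         # last two lines of Eq. (36) coincide for real lambda_5
    ])

# a point of the size preferred by the scan passes the filter
print(bounded_from_below(0.258, 0.5, 0.26, 0.55, -0.04, 0.37, 0.001, 0.3))
```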
### Vacuum stability
In theories which feature an extended scalar sector, the scalar potential can easily develop more than one local minimum. As a result, the theory may tunnel from one minimum to another. In principle, color and charge breaking minima deeper than the EWSB minimum of Eq. (3) can arise in our model (see, e.g. [19]). Moreover, several charge and color conserving minima can coexist, in which case we do not know _a priori_ which of them corresponds to the desired EWSB minimum.
The strong vacuum stability condition for the scalar potential requires that the EWSB vacuum corresponds to a global minimum. In such a case the potential is said to be stable. If, on the other hand, the EWSB minimum is a local minimum but the tunneling time to a true global minimum exceeds the age of the Universe, the potential is said to be metastable. In this study we employ the publicly available numerical package Vevacious++ [20] (the C++ version of [21]) to find all tree- and one-loop level minima of the scalar potential defined in Eq. (19) and to calculate the tunneling time from the EWSB minimum to the deepest minimum found.
## IV Flavor physics constraints
In this section, we review additional constraints which may affect the allowed parameter space of the analyzed model. These extra restrictions come from the experimental measurements of several flavor observables, including the anomalous magnetic moment of the muon, the lepton flavor violating decays of the tau lepton, and the elements of the CKM matrix. We discuss them in the following one by one.
### Muon anomalous magnetic moment
The discrepancy between the SM prediction [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43] and the experimental measurement of the anomalous magnetic moment of the muon has been confirmed separately by the Brookhaven National Laboratory [44] and the Fermilab experimental groups [45; 46], giving rise to the combined
5.1 \(\sigma\) anomaly:
\[\Delta a_{\mu}=a_{\mu}^{\rm exp}-a_{\mu}^{\rm SM}=(2.49\pm 0.48)\times 10^{-9}. \tag{37}\]
In a generic NP model which features heavy scalars \(\phi_{i}\) and fermions \(\psi_{j}\) coupled to the SM muons via the Yukawa-type interactions \(y_{L}^{ij}\phi_{i}\,\bar{\psi}_{j}P_{L}\,\mu\) and \(y_{R}^{ij}\phi_{i}\,\bar{\psi}_{j}P_{R}\,\mu\) (where \(P_{L,R}=(1\mp\gamma^{5})/2\) are the usual projection operators), a well-known one-loop contribution to the muon anomalous magnetic moment reads
\[\Delta a_{\mu}=\sum_{i,j}\left\{-\frac{m_{\mu}^{2}}{16\pi^{2}M_{ \phi_{i}}^{2}}\left(|y_{L}^{ij}|^{2}+|y_{R}^{ij}|^{2}\right)\left[Q_{j}{\cal F }_{1}\left(x_{ij}\right)-Q_{i}{\cal G}_{1}\left(x_{ij}\right)\right]\right.\\ \left.-\frac{m_{\mu}\,M_{\psi_{j}}}{16\pi^{2}M_{\phi_{i}}^{2}}{ \rm Re}\left(y_{L}^{ij}y_{R}^{ij*}\right)\left[Q_{j}{\cal F}_{2}\left(x_{ij} \right)-Q_{i}{\cal G}_{2}\left(x_{ij}\right)\right]\right\}, \tag{38}\]
where \(M_{\phi_{i}}\) is the physical mass of a heavy scalar, \(M_{\psi_{j}}\) is the physical mass of a heavy fermion, \(x_{ij}=M_{\psi_{j}}^{2}/M_{\phi_{i}}^{2}\), and the electric charges of \(\phi_{i}\) and \(\psi_{j}\) are related as \(Q_{i}+Q_{j}=-1\). The loop functions are defined in the following way:
\[{\cal F}_{1}(x) =\frac{1}{6\left(1-x\right)^{4}}\left(2+3x-6x^{2}+x^{3}+6x\ln x\right) \tag{39}\] \[{\cal F}_{2}(x) =\frac{1}{\left(1-x\right)^{3}}\left(-3+4x-x^{2}-2\ln x\right)\] \[{\cal G}_{1}(x) =\frac{1}{6\left(1-x\right)^{4}}\left(1-6x+3x^{2}+2x^{3}-6x^{2} \ln x\right)\] \[{\cal G}_{2}(x) =\frac{1}{\left(1-x\right)^{3}}\left(1-x^{2}+2x\ln x\right)\,.\]
The first addend in Eq. (38) captures the loop chirality-conserving contributions to \(\Delta a_{\mu}\). These are known to be generically too small to account for the anomaly (37) when the most recent LHC bounds on the NP masses are taken into account [47; 48]. We will thus focus on the second addend in Eq. (38), which corresponds to the loop chirality-flipping contributions to \(\Delta a_{\mu}\).
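For concreteness, the loop functions of Eq. (39) and the chirality-flipping term of Eq. (38) are collected in the short snippet below; the masses and couplings in the example call are placeholders, chosen only to land at the \(10^{-9}\) scale of Eq. (37).

```python
import numpy as np

def F2(x): return (-3 + 4*x - x**2 - 2*np.log(x)) / (1 - x)**3
def G2(x): return (1 - x**2 + 2*x*np.log(x)) / (1 - x)**3

def delta_amu_flip(m_mu, M_psi, M_phi, yL, yR, Q_phi, Q_psi):
    """Chirality-flipping (second) term of Eq. (38) for a single loop;
    Q_phi + Q_psi = -1 and x = M_psi^2/M_phi^2 (avoid x == 1 exactly)."""
    x = M_psi**2 / M_phi**2
    return -m_mu*M_psi/(16*np.pi**2*M_phi**2) * np.real(yL*np.conj(yR)) \
           * (Q_psi*F2(x) - Q_phi*G2(x))

# placeholder inputs: a 400 GeV neutral scalar and a 500 GeV charged lepton
print(delta_amu_flip(m_mu=0.1057, M_psi=500.0, M_phi=400.0,
                     yL=0.5, yR=0.005, Q_phi=0.0, Q_psi=-1.0))  # ~2.5e-9
```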
In the framework of the model defined in Table 1, two classes of contributions to the anomalous magnetic moment of the muon can arise, induced by one-loop diagrams with an exchange of neutral (pseudo)scalars and charged VL leptons, as shown in Fig. 1(a), or charged scalars and neutral VL leptons, as shown in Fig. 1(b). In the first case, the chirality-flipping contributions to \(\Delta a_{\mu}\) read
\[\Delta a_{\mu}^{Eh^{0}}\approx\frac{1}{16\pi^{2}}\sum_{j=1}^{2}\sum_{i=1}^{3} \left[\frac{m_{\mu}\,M_{E_{j}}}{M_{h_{i}^{0}}^{2}}{\rm Re}\left(c_{L}c_{R}^{* }\right)^{E_{j},h_{i}^{0}}{\cal F}_{2}\left(M_{E_{j}}^{2}/M_{h_{i}^{0}}^{2} \right)\right] \tag{40}\]
for the CP-even scalars and
\[\Delta a_{\mu}^{Ea}\approx\frac{1}{16\pi^{2}}\sum_{j=1}^{2}\sum_{i=1}^{2}\left[ \frac{m_{\mu}\,M_{E_{j}}}{M_{a_{i}}^{2}}{\rm Re}\left(c_{L}c_{R}^{*}\right)^{E_ {j},a_{i}}{\cal F}_{2}\left(M_{E_{j}}^{2}/M_{a_{i}}^{2}\right)\right] \tag{41}\]
for the CP-odd scalars. The one-loop contributions to \(\Delta a_{\mu}\) from the neutral leptons and charged scalars are given by
\[\Delta a_{\mu}^{Nh^{\pm}}\approx-\frac{1}{16\pi^{2}}\sum_{j=1}^{4}\left[\frac{m _{\mu}\,M_{N_{j}}}{M_{h^{\pm}}^{2}}{\rm Re}\left(c_{L}c_{R}^{*}\right)^{N_{j},h ^{\pm}}{\cal G}_{2}\left(M_{N_{j}}^{2}/M_{h^{\pm}}^{2}\right)\right]. \tag{42}\]
The parameters \(c_{L/R}\) denote the effective couplings arising from the muon-(pseudo)scalar-VL fermion vertices in the mass basis. They depend on the lepton Yukawa couplings of Eqs. (1) and (111), as well as on the elements of the mixing matrices \(R_{h}\) (Eq. (23)), \(R_{a}\) (Eq. (29)), and \(V_{L/R}^{e}\) (Eq. (A3)). The explicit forms of \(c_{L/R}\) are rather complex and we refrain from showing them here. Note, however, that in our numerical analysis we are going to compute all the contributions to \(\Delta a_{\mu}\) with the numerical package SPheno[49; 50].
### Lepton flavor violating decays
Due to the non-zero mixing between the second and the third generation of fermions, charged lepton flavour violating processes may occur. The \(\tau\to\mu\gamma\) decay receives contributions from the one-loop diagrams analogous to those of \(\Delta a_{\mu}\). The corresponding branching ratio (BR) is given by [51]
\[\text{BR}(\tau\to\mu\gamma)=\frac{\alpha_{\text{em}}m_{\tau}^{3}}{4\,\Gamma_{ \tau}}\sum_{i,j}\left(|A_{L}^{ij}|^{2}+|A_{R}^{ij}|^{2}\right)\,, \tag{43}\]
where \(\Gamma_{\tau}=2.3\times 10^{-12}\) GeV [18] denotes the total decay width of the tau, \(\alpha_{\text{em}}\) is the fine-structure constant, and the decay amplitude \(A_{L}^{ij}\) reads
\[A_{L}^{ij}=\frac{1}{32\pi^{2}M_{\tilde{\phi}_{i}}^{2}}\left\{m_ {\tau}\left(y_{\tau,L}^{ij}y_{\mu,L}^{ij*}\right)\left[Q_{j}\mathcal{F}_{1} \left(x_{ij}\right)-Q_{i}\mathcal{G}_{1}\left(x_{ij}\right)\right]\right.\\ \left.+M_{\psi_{j}}\left(y_{\tau,L}^{ij}y_{\mu,R}^{ij*}\right) \left[Q_{j}\mathcal{F}_{2}\left(x_{ij}\right)-Q_{i}\mathcal{G}_{2}\left(x_{ij }\right)\right]\right\}\,. \tag{44}\]
The corresponding amplitude \(A_{R}^{ij}\) is obtained from Eq. (44) by replacing \(L\leftrightarrow R\). Just like it was in the \(\Delta a_{\mu}\) case, the main contribution to \(\text{BR}(\tau\to\mu\gamma)\) originates from the second addend in Eq. (44). The current experimental 90% confidence level (C.L.) upper bound on \(\text{BR}\left(\tau\to\mu\gamma\right)\) from the Belle collaboration reads [52]:
\[\text{BR}\left(\tau\to\mu\gamma\right)_{\text{exp}}<4.2\times 10^{-8}\,. \tag{45}\]
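Since the chirality-flipping amplitude dominates here as well, a rough numerical estimate of Eq. (43) requires only a few lines; all inputs below are placeholders (a single neutral-scalar/charged-lepton loop with \(A_{R}=0\)), chosen to land near the bound of Eq. (45).

```python
import numpy as np

def F2(x): return (-3 + 4*x - x**2 - 2*np.log(x)) / (1 - x)**3
def G2(x): return (1 - x**2 + 2*x*np.log(x)) / (1 - x)**3

def br_tau_mu_gamma(M_phi, M_psi, ytauL, ymuR, Q_phi, Q_psi,
                    m_tau=1.777, Gamma_tau=2.3e-12, alpha_em=1.0/137.036):
    """Eq. (43) keeping a single chirality-flipping amplitude of Eq. (44)
    (A_R set to zero); masses in GeV, Gamma_tau in GeV."""
    x = M_psi**2 / M_phi**2
    AL = M_psi*(ytauL*np.conj(ymuR))/(32*np.pi**2*M_phi**2) \
         * (Q_psi*F2(x) - Q_phi*G2(x))
    return alpha_em*m_tau**3/(4*Gamma_tau) * abs(AL)**2

# placeholder couplings/masses: BR comes out at the few x 10^-8 level
print(br_tau_mu_gamma(M_phi=400.0, M_psi=500.0, ytauL=0.2, ymuR=3e-3,
                      Q_phi=0.0, Q_psi=-1.0))
```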
The \(\tau\to 3\,\mu\) decay can proceed through the one-loop penguin and box diagrams. The latter are subdominant in our model as they do not receive the chiral enhancement. The corresponding formulae for the penguin-diagram BRs are lengthy and not particularly enlightening. They can be
Figure 1: The one-loop chirality-flipping contributions to \(\Delta a_{\mu}\) mediated by (a) a neutral (pseudo)scalar/charged lepton exchange, and (b) a charged scalar/neutral lepton exchange.
found, for example, in Eq. (37) of Ref. [51]. The 90% C.L. upper bound on \(\mathrm{BR}\left(\tau\to 3\,\mu\right)\) by the Belle collaboration reads [53]:
\[\mathrm{BR}\left(\tau\to 3\,\mu\right)_{\mathrm{exp}}<2.1\times 10^{-8}\,. \tag{46}\]
### CKM anomaly
Among the experimental puzzles which are not explained by the SM we should also mention various tensions between three different determinations of the Cabibbo angle. This observable can be extracted from \(\beta\)-decay data (supplemented with short-distance radiative corrections), from the experimental data on kaon decays, and from lattice calculations [54; 55; 56; 57]. These determinations are in tension with each other, giving rise to two interesting anomalies.
The first anomaly is related to the violation of the CKM matrix unitarity when one compares the values of \(\left|V_{ud}\right|\) and \(\left|V_{us}\right|\) resulting from the \(\beta\) decay and from the kaon decays. The second anomaly originates from two different measurements of \(\left|V_{us}\right|\): from the semileptonic \(K\to\pi l\nu\) and the leptonic \(K\to\mu\nu\) decay, respectively.
The experimental upper bound on the CKM deviation from the unitarity reads [18]
\[\Delta_{\mathrm{CKM}}=\sqrt{1-V_{ud}^{2}-V_{us}^{2}-V_{ub}^{2}}<0.05. \tag{47}\]
To explain the anomaly of Eq. (47), one can consider extensions of the SM in which the fermion sector is enlarged by VL quarks mixing at the tree level with the SM quarks [55; 58; 59; 60]. In such a setting deviations from the unitarity of the three-dimensional CKM matrix can arise quite naturally. Since the model defined in Table 1 contains all the necessary ingredients to account for the CKM anomaly, we include it in our list of constraints.
## V Perturbativity constraints
The model defined in Table 1 is intended as a phenomenological scenario which correctly describes the physics around the energy scale set by the typical masses in the NP sector. Nevertheless, it is important to understand the range of validity of such a model or, in other words, the energy scale at which the model cannot be trusted anymore and should be embedded in some more fundamental UV completion. While such a "cut-off" scale lacks a truly rigorous definition, one can estimate it by requiring that whatever extra degrees of freedom emerge above this scale to make the model UV complete, they do not affect its phenomenological predictions.
As an example, let us consider the muon anomalous magnetic moment operator, which in the low-energy effective field theory (EFT) reads
\[\frac{e}{2\,m_{\mu}}\Delta a_{\mu}\left(\bar{\mu}\,\sigma_{\mu\nu}\,F^{\mu\nu} \mu\right)\equiv\frac{C}{\Lambda}\left(\bar{\mu}\,\sigma_{\mu\nu}\,F^{\mu\nu} \mu\right)\,. \tag{48}\]
Here \(\Lambda\) is a cut-off scale of the examined EFT while \(C\) denotes a generic Wilson coefficient. Note that since the operator in Eq. (48) is chirality flipping, it is more convenient to define \(C=\tilde{C}\,m_{\mu}/\Lambda\). One can now derive from Eq. (48) rough estimates of the energy scale associated with a hypothetical NP contributing to \(\Delta a_{\mu}\) at different loop orders,
\[\mathrm{tree\ level}: \tilde{C}\approx 1, \Lambda\approx 3000\ \mathrm{GeV} \tag{49}\] \[1\ \mathrm{loop}: \tilde{C}\approx 1/16\pi^{2}, \Lambda\approx 230\ \mathrm{GeV}\] (50) \[2\ \mathrm{loop}: \tilde{C}\approx(1/16\pi^{2})^{2}, \Lambda\approx 20\ \mathrm{GeV} \tag{51}\]
and so on.
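The numbers in Eqs. (49)-(51) follow from inverting Eq. (48); the sketch below reproduces them under the (convention) assumption that the electromagnetic coupling is absorbed into \(\tilde{C}\).

```python
import math

m_mu = 0.1057          # GeV
delta_amu = 2.49e-9    # central value of Eq. (37)

# Eq. (48) with C = Ctilde * m_mu / Lambda gives
# Lambda = m_mu * sqrt(2 * Ctilde / delta_amu)   (e absorbed into Ctilde)
for label, Ctilde in [("tree level", 1.0),
                      ("1 loop", 1.0/(16*math.pi**2)),
                      ("2 loop", (1.0/(16*math.pi**2))**2)]:
    Lam = m_mu * math.sqrt(2.0*Ctilde/delta_amu)
    print(f"{label:10s}  Lambda ~ {Lam:6.0f} GeV")
# gives roughly 3000, 240 and 20 GeV, in line with Eqs. (49)-(51)
```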
Going beyond the EFT approximation, let us investigate a one-loop chirality flipping contribution to \(\Delta a_{\mu}\) like the one in Eq. (38). Assuming that it arises from an unspecified UV completion of our model above the scale \(\Lambda\), it can be estimated by the corresponding UV mass \(m_{\Lambda}\) and the UV Yukawa couplings \(y_{L/R}(\Lambda)\) as
\[\Delta a_{\mu}^{\Lambda}\sim\frac{1}{16\pi^{2}}\frac{m_{\mu}\,v}{m_{\Lambda}^ {2}}y_{L}(\Lambda)\,y_{R}(\Lambda). \tag{52}\]
By demanding that the new contribution (52) does not shift our phenomenological predictions for \(\Delta a_{\mu}\) by more than \(3\,\sigma\), we can derive a lower bound on the UV mass,
\[m_{\Lambda}\gtrsim\sqrt{y_{L}(\Lambda)y_{R}(\Lambda)}\,15\,{\rm TeV}\,. \tag{53}\]
For the Yukawa couplings at the upper edge of perturbativity, \(y_{L}(\Lambda)=y_{R}(\Lambda)=\sqrt{4\pi}\), Eq. (53) translates into a conservative estimation of the scale of validity of our phenomenological model,
\[m_{\Lambda}\gtrsim 50\ {\rm TeV}. \tag{54}\]
In other words, the model cannot be UV completed below \(m_{\Lambda}\).
An immediate consequence of Eq. (54) is that the commonly employed perturbativity bounds, which read \(\lesssim\sqrt{4\pi}\) for the gauge/Yukawa and \(\lesssim 4\pi\) for the quartic couplings, need to be imposed on the running parameters of the model evaluated at the scale \(\Lambda\) rather than on the bare couplings of the lagrangian (as it was done, for example, in Ref. [16]).
To implement the RG-based perturbativity constraints, we follow the RG flow of all the coupling constants from the scale \(\mu_{0}=1.5\) TeV, which is a proxy for the NP scale in our model, to \(\Lambda=50\) TeV. The one-loop RG equations (RGEs) were computed using the publicly available numerical code SARAH[61, 62] and are summarized in Appendix D. Due to the large number of Yukawa and quartic interactions in our model, it is not possible to perform the perturbativity analysis in a generic way as the RGEs are non-linear differential equations that cannot be solved analytically. On the other hand, the perturbativity bounds are expected to be relevant only for those couplings whose values must be of order 1 (or larger) for phenomenological reasons. This observation allows us to reduce the RGE system and to simplify the analysis.
In the Yukawa sector, the couplings of interest are \(x_{34}^{Q}\), \(y_{43}^{u}\), \(y_{34}^{u}\) and \(x_{43}^{u}\) (see Sec. II for the discussion). We find that the modulus of their value cannot exceed 1.4 at \(\mu_{0}\) if they are to remain perturbative up to 50 TeV. This conclusion is derived under the assumption that all the other couplings (but two) are set to 1 at the initial scale \(\mu_{0}\). The two exceptions are \(y_{14}^{d}\) and \(y_{24}^{e}\) (expected to be much smaller than 1 as the Yukawas of the second generation), whose values at \(\mu_{0}\) are set to 0.7.
In the scalar sector, the perturbativity bounds are presumably most relevant for the couplings \(\lambda_{1}\), \(\lambda_{6}\) and \(\lambda_{7}\), whose RGEs feature a power-four dependence on the large Yukawa couplings \(y_{43}^{u}\), \(x_{43}^{u}\) and \(x_{34}^{Q}\) (cf. Eq. (45), Eq. (49) and Eq. (50), respectively). In Fig. 2 we illustrate the RG running of \(\lambda_{1}\), \(\lambda_{6}\) and \(\lambda_{7}\) for a randomly chosen benchmark point which satisfies all the constraints discussed in Secs. II and III. The running of all the remaining quartic couplings is very slow in the considered energy range and does not pose any danger from the point of view of their perturbativity. Once the whole system is analyzed with the alignment conditions (24) and (25) in place, it turns out that the perturbativity requires the modulus of the quartic couplings to be smaller than 2. A straightforward consequence of this result is that all the benchmark points found previously in Ref. [16] are disfavored.
Finally, let us comment on another constraint which may arise in our model, the so called perturbative unitarity. Although the \(S\)-matrix for a scattering process must be unitary in the full
theory, it may happen that at some order in the perturbative expansion unitarity is violated, signaling the breakdown of the expansion. This is usually related to some of the couplings becoming too large. Perturbative unitarity translates into conditions on the partial-wave amplitudes, which have to be smaller than \(1/2\). To examine such constraints in our model we use SPheno, which computes the maximal eigenvalue of the \(2\to 2\) scattering matrix at tree level. On the other hand, since we already require all the quartic couplings to remain perturbative up to the energy scale of \(50\) TeV, we may suspect that the perturbative unitarity bounds are automatically satisfied. As we will see in the next section, this is indeed the case.
## VI Numerical analysis and benchmark scenarios
In this section we perform a global numerical analysis of the model. We begin by discussing the employed scanning methodology, the definition of the chi-square (\(\chi^{2}\)) statistics and the initial ranges for all the model's parameters. Next, we present three best-fit benchmark scenarios which arise from the minimization of the \(\chi^{2}\) function. Finally, we provide a discussion of some experimental signatures that these benchmark scenarios could produce.
### Scanning methodology
In Table 2 we summarize the scanning ranges for all the parameters of the model. These include the quartic couplings and the soft-breaking term of the scalar potential (19), the non-zero Yukawa couplings and the mass parameters of the lagrangian (1), the vev of the singlet scalar, and \(\tan\beta\).
In the scalar sector the alignment conditions (24) and (25) are imposed, leading to the limited scanning ranges for \(\lambda_{3}\), \(\lambda_{5}\) and \(\lambda_{7}\) (cf. Eqs. (34) and (35)). For all the other quartic couplings the perturbativity bounds discussed in Sec. V are enforced. Similarly, the Yukawa couplings are scanned in the ranges consistent with their RGE perturbativity constraints. Finally, small values of some of the neutrino coupling constants are necessary to generate tiny neutrino masses.
Figure 2: The RG running of the quartic couplings \(\lambda_{1}\), \(\lambda_{6}\) and \(\lambda_{7}\) for a randomly chosen benchmark point which satisfies all the constraints discussed in Secs. II and III. The renormalization scale \(\mu\) ranges from \(1.5\) TeV to \(1000\,\mathrm{TeV}\). \(\mu_{0}=1.5\) TeV is a reference scale. We do not show the RG evolution of other quartic and Yukawa couplings as it is very slow in the considered energy range.
A tentative lower bound of \(1200\,\mathrm{GeV}\) is imposed on the VL quark mass parameters. This is a rough (and conservative) approximation of the constraints from the direct NP searches at the LHC, which will be discussed in more detail in Sec. VII.1. The scanning range for \(v_{\phi}\) then follows from the requirement of reproducing the correct mass of the top quark, as discussed in Sec. II.1. Similarly, we adopt \(200\) GeV lower bounds on the VL lepton mass parameters in order to be roughly consistent with the corresponding LHC constraints, which we examine in Sec. VII.2. Finally, the range for \(\mu_{\rm sb}^{2}\) was chosen to make sure that the mass of the associated CP-odd state (cf. Eq. (31)) is not excluded by the current experimental searches [18].
The experimental constraints employed in our numerical scan are listed in Table 3. The central values and the experimental errors for the quark and lepton masses and for the CKM matrix elements are taken from the PDG report [18]. Since the uncertainties for \(m_{\mu}\) and \(m_{\tau}\) are very small, which would render the fitting procedure numerically challenging, we adopt an error of \(10\%\) for these two observables. The experimental constraints from flavor physics were discussed in Sec. IV.
We construct the \(\chi^{2}\)-statistic function as
\[\chi^{2}=\sum_{i}\frac{\left(\mathcal{O}_{i}^{\mathrm{model}}-\mathcal{O}_{i} ^{\mathrm{cen}}\right)^{2}}{(\mathcal{O}_{i}^{\mathrm{err}})^{2}}\,, \tag{55}\]
where \(\mathcal{O}_{i}^{\mathrm{model}}\) indicates the value of an observable calculated in our model, \(\mathcal{O}_{i}^{\mathrm{cen}}\) is the central value of its experimental measurement, \(\mathcal{O}_{i}^{\mathrm{err}}\) is the corresponding experimental error, and the sum runs over all the measured observables listed in Table 3. The upper bounds, corresponding to the last three rows of Table 3, are not included in the \(\chi^{2}\) function, but applied as hard-cuts instead (a point in the parameter space is rejected if such a condition is not satisfied).
To minimize the \(\chi^{2}\) function, we adopt the following strategy. First, we perform an initial scan of the parameter space consistent with Table 2. As a result, we obtain a seed which is then used to minimize the \(\chi^{2}\) function by iterating a random walk algorithm with an adaptive step function. The step function is chosen such that at each iteration all input parameters are updated by less than \(\kappa\%\), and \(\kappa\) reduces with an exponential decay law throughout the minimization procedure.
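Schematically, the minimization loop looks as follows. The Python sketch below is a generic illustration of the adaptive random walk described above: the functions `model_predictions` and `passes_hard_cuts` are placeholders standing in for the SPheno evaluation of the observables and for the hard cuts, and the decay constant of the step size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_predictions(params):
    # placeholder standing in for the SPheno computation of the observables
    return np.array([params[0]*params[1], params[2]**2])

def passes_hard_cuts(params):
    # placeholder for the upper bounds on BR(tau -> mu gamma), Delta_CKM, etc.
    return bool(np.all(np.abs(params) < 10.0))

def chi2(params, central, error):
    """Eq. (55) for the vector of observables computed from 'params'."""
    return float(np.sum(((model_predictions(params) - central)/error)**2))

def adaptive_random_walk(p0, central, error, n_iter=20000,
                         kappa0=0.05, tau=5000.0):
    """Random walk whose relative step size kappa decays exponentially."""
    best_p = np.array(p0, float)
    best_c2 = chi2(best_p, central, error)
    for it in range(n_iter):
        kappa = kappa0*np.exp(-it/tau)
        trial = best_p*(1.0 + kappa*rng.uniform(-1.0, 1.0, size=best_p.size))
        if not passes_hard_cuts(trial):
            continue
        c2 = chi2(trial, central, error)
        if c2 < best_c2:
            best_p, best_c2 = trial, c2
    return best_p, best_c2

# toy usage: fit three parameters to two fake "measurements"
print(adaptive_random_walk([1.0, 1.0, 1.0], central=np.array([2.0, 4.0]),
                           error=np.array([0.1, 0.2])))
```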
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \hline \multicolumn{10}{c}{Scalar sector} \\ \hline \(\tan\beta\) & \([2,50]\) & \(v_{\phi}\) & \([1000,1500]\) & \(\mu_{sb}^{2}\) & \([4,64]\times 10^{4}\) & \(\lambda_{2}\) & \([-2.0,+2.0]\) & \(\lambda_{3}\) & \([0.24,0.28]\) \\ \(\lambda_{4}\) & \([-2.0,+2.0]\) & \(\lambda_{5}\) & \([-0.2,0.0]\) & \(\lambda_{6}\) & \([-2.0,+2.0]\) & \(\lambda_{7}\) & \([-0.01,+0.01]\) & \(\lambda_{8}\) & \([-1.0,+1.0]\) \\ \hline \hline \multicolumn{10}{c}{Lepton sector} \\ \hline \(y_{24}^{e}\) & \([-0.7,+0.7]\) & \(y_{34}^{e}\) & \([-1.0,+1.0]\) & \(y_{14}^{\nu}\) & \([-1.0,+1.0]\times 10^{-10}\) & \(y_{14}^{\nu\nu}\) & \([-1.0,+1.0]\) & \(M_{4}^{e}\) & \(\pm[200,1000]\) \\ \(y_{34}^{e}\) & \([-1.0,+1.0]\) & \(x_{42}^{e}\) & \([-1.0,+1.0]\) & \(y_{24}^{\nu}\) & \([-1.0,+1.0]\times 10^{-10}\) & \(y_{24}^{\nu\nu}\) & \([-1.0,+1.0]\) & \(M_{4}^{\nu}\) & \(\pm[200,1000]\) \\ \(x_{34}^{L}\) & \([-1.0,+1.0]\) & \(x_{43}^{e}\) & \([-1.0,+1.0]\) & \(y_{34}^{\nu\nu}\) & \([-1.0,+1.0]\) & \(M_{4}^{L}\) & \(\pm[200,1000]\) \\ \hline \hline \multicolumn{10}{c}{Quark sector} \\ \hline \(y_{24}^{u}\) & \([-1.0,+1.0]\) & \(y_{34}^{u}\) & \([-1.4,+1.4]\) & \(y_{14}^{d}\) & \([-0.7,+0.7]\) & \(y_{43}^{d}\) & \([-1.0,+1.0]\) & \(M_{4}^{d}\) & \(\pm[1200,4000]\) \\ \(y_{34}^{u}\) & \([-1.4,+1.4]\) & \(x_{42}^{u}\) & \([-1.0,+1.0]\) & \(y_{24}^{d}\) & \([-1.0,+1.0]\) & \(x_{42}^{d}\) & \([-1.0,+1.0]\) & \(M_{4}^{u}\) & \(\pm[1200,4000]\) \\ \(x_{34}^{Q}\) & \([-1.0,+1.0]\) & \(x_{43}^{u}\) & \([-1.4,+1.4]\) & \(y_{34}^{d}\) & \([-1.0,+1.0]\) & \(x_{43}^{d}\) & \([-1.0,+1.0]\) & \(M_{4}^{Q}\) & \(\pm[1200,4000]\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Scanning ranges for the input parameters of the model defined in Table 1. The alignment limit (cf. Sec. III), the RGE perturbativity constraints (cf. Sec. V) and a tentative lower bound on the VL mass parameters (see the text) are imposed. In the Yukawa sector only the non-zero couplings are shown. Dimensionful quantities are given in GeV and GeV\({}^{2}\).
During each iteration, we discard all the points that do not satisfy the upper bounds on \(\Delta_{\rm CKM}\), \({\rm BR}(\tau\to\mu\gamma)\) and \({\rm BR}(\tau\to 3\mu)\), as well as the boundedness constraints on the scalar potential given in Eq. (36) and the perturbative unitarity. Moreover, we investigate the vacuum stability with Vevacious++ [20] and we keep only those points whose vacuum is identified as "stable".
### Benchmark scenarios
In Table 4 we present input parameters for three best-fit benchmark scenarios identified by performing the numerical scan discussed in Sec. VI.1. The corresponding mass spectra are summarized in Table 5 while the breakdown of individual contributions to the \(\chi^{2}\) function is shown in Table 6.
In general, the three benchmark scenarios demonstrate quite similar features, both in terms of the input parameters and of the resulting NP spectra. This is largely due to the fact that we aim at reproducing masses and mixings of the SM fermions and this, as we discussed in Sec. II, puts strong constraints on (some of) the model's parameters.
Let us first notice that the masses of all the SM fermions of the third and second generation can be fitted very precisely. Each individual contribution to the \(\chi^{2}\) function is smaller than 0.7, with the exception of \(m_{s}\) in BP1, for which \(\chi^{2}_{s}=1.6\). We can also observe that, as we anticipated in Sec. II, the Yukawa couplings which link the VL sector with the SM fermions of the third generation are in general larger than those associated with the second generation.
On the other hand, fitting the CKM matrix is somewhat trickier, and the bulk of the total \(\chi^{2}\) stems from this sector. As anticipated in Sec. II.2, the main contribution to the \(\chi^{2}\) function is given by the element \(|V_{td}|\), with the corresponding \(\chi^{2}_{V_{td}}\) ranging from 16 for BP1 to 25 for BP3. Smaller yet still relevant contributions come from the entries \(|V_{ub}|\) and \(|V_{ts}|\). Finally, an \({\cal O}(10)\) contribution to the \(\chi^{2}\) function from the element \(|V_{us}|\) is mainly due to the very small experimental error associated with this particular observable. All other elements of the CKM matrix are fitted
\begin{table}
\begin{tabular}{c|c|c||c|c|c} \hline \hline Measurement & Central Value & Exp. Error & Measurement & Central Value & Exp. Error \\ \hline \hline \(m_{\mu}\) & 0.10566 & 10\% & \(|V_{ud}|\) & 0.97370 & 0.00014 \\ \(m_{\tau}\) & 1.77686 & 10\% & \(|V_{us}|\) & 0.22450 & 0.00080 \\ \(m_{c}\) & 1.270 & 0.020 & \(|V_{ub}|\) & 0.00382 & 0.00024 \\ \(m_{s}\) & 0.0934 & 0.0034 & \(|V_{cd}|\) & 0.22100 & 0.00400 \\ \(m_{b}\) & 4.18 & 0.02 & \(|V_{cs}|\) & 0.98700 & 0.01100 \\ \(m_{t}\) & 172.76 & 0.30 & \(|V_{cb}|\) & 0.04100 & 0.00140 \\ \(\Delta a_{\mu}\) & \(2.49\times 10^{-9}\) & \(0.48\times 10^{-9}\) & \(|V_{td}|\) & 0.00800 & 0.00030 \\ & & & \(|V_{ts}|\) & 0.03880 & 0.00110 \\ & & & \(|V_{tb}|\) & 1.01300 & 0.03000 \\ \hline Measurement & Upper bound & & & & \\ \hline \({\rm BR}\left(\tau\to\mu\gamma\right)\) & \(<4.2\times 10^{-8}\) & & & & \\ \({\rm BR}\left(\tau\to 3\mu\right)\) & \(<2.1\times 10^{-8}\) & & & & \\ \(\Delta_{\rm CKM}\) & \(<\)0.05 & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: The experimental measurements which we employ in our numerical scan. Masses are in GeV.
within their \(1\sigma\) experimental ranges. As an illustration, we present below the full \(5\times 5\) CKM
\begin{table}
\begin{tabular}{c|c|c|c||c|c|c|c} \hline \hline & BP1 & BP2 & BP3 & & BP1 & BP2 & BP3 \\ \hline \(\tan\beta\) & 13 & 8 & 12 & \(\lambda_{1}\) & 0.258 & 0.258 & 0.258 \\ \(v_{u}\) & 245.3 & 244.3 & 245.2 & \(\lambda_{2}\) & 0.514 & 0.153 & 0.623 \\ \(v_{d}\) & 18.9 & 30.5 & 20.4 & \(\lambda_{3}\) & 0.257 & 0.260 & 0.256 \\ \(v_{\phi}\) & 1015 & 1077 & 1012 & \(\lambda_{4}\) & 0.552 & 0.304 & 0.167 \\ \(\mu_{u}^{2}\) & \(-7.8\times 10^{3}\) & \(-6.6\times 10^{3}\) & \(-7.6\times 10^{3}\) & \(\lambda_{5}\) & \(-0.039\) & \(-0.072\) & \(-0.061\) \\ \(\mu_{d}^{2}\) & \(-8.2\times 10^{3}\) & \(-8.6\times 10^{4}\) & \(-3.4\times 10^{4}\) & \(\lambda_{6}\) & 0.370 & 0.487 & 0.663 \\ \(\mu_{\phi}^{2}\) & \(-4.9\times 10^{4}\) & \(-9.4\times 10^{4}\) & \(-2.3\times 10^{5}\) & \(\lambda_{7}\) & 0.001 & 0.002 & 0.002 \\ \(\mu_{\rm sb}^{2}\) & \(1.4\times 10^{5}\) & \(1.9\times 10^{5}\) & \(1.1\times 10^{5}\) & \(\lambda_{8}\) & 0.254 & 0.423 & 0.417 \\ \hline \hline \multicolumn{8}{c||}{Quark sector} & \multicolumn{8}{c}{Lepton sector} \\ \hline & BP1 & BP2 & BP3 & & BP1 & BP2 & BP3 \\ \hline \(y_{24}^{u}\) & \(-0.051\) & \(-0.049\) & 0.050 & \(y_{24}^{e}\) & 0.028 & \(-0.015\) & 0.022 \\ \(y_{34}^{u}\) & \(-0.980\) & 1.185 & \(-1.024\) & \(y_{34}^{e}\) & \(-0.895\) & 0.612 & 0.790 \\ \(x_{34}^{Q}\) & 0.924 & \(-0.842\) & \(-0.877\) & \(x_{34}^{L}\) & 0.616 & \(-0.729\) & 0.724 \\ \(y_{43}^{u}\) & 1.382 & 1.093 & \(-1.337\) & \(y_{43}^{e}\) & \(-0.223\) & 0.144 & \(-0.191\) \\ \(x_{42}^{u}\) & 0.550 & 0.821 & \(-0.595\) & \(x_{42}^{e}\) & 0.156 & 0.165 & 0.188 \\ \(x_{43}^{u}\) & 1.286 & 1.261 & 1.263 & \(x_{43}^{e}\) & \(-0.168\) & 0.228 & \(-0.205\) \\ \hline \(y_{14}^{d}\) & \(-0.022\) & 0.035 & 0.026 & \(y_{14}^{\nu}\) & \(-2\times 10^{-11}\) & \(5\times 10^{-11}\) & \(3\times 10^{-11}\) \\ \(y_{24}^{d}\) & 0.096 & 0.151 & \(-0.113\) & \(y_{24}^{\nu}\) & \(3\times 10^{-11}\) & \(8\times 10^{-12}\) & \(6\times 10^{-11}\) \\ \(y_{34}^{d}\) & \(-0.684\) & 0.274 & 0.267 & \(y_{34}^{\nu}\) & \(-5\times 10^{-11}\) & \(9\times 10^{-11}\) & \(9\times 10^{-11}\) \\ \(y_{43}^{d}\) & \(-0.672\) & \(-0.489\) & 0.656 & \(y_{14}^{\nu\nu}\) & \(-0.824\) & \(-0.674\) & \(-0.674\) \\ \(x_{42}^{d}\) & \(-0.371\) & \(-0.110\) & 0.225 & \(y_{24}^{\nu}\) & \(-0.895\) & \(-0.874\) & \(-0.896\) \\ \(x_{43}^{d}\) & \(-0.160\) & 0.072 & \(-0.127\) & \(y_{34}^{\nu\nu}\) & 0.701 & 0.744 & \(-0.812\) \\ \hline \multicolumn{8}{c}{Mass parameters} \\ \hline & BP1 & BP2 & BP3 & & BP1 & BP2 & BP3 \\ \hline \(M_{4}^{u}\) & \(-1317\) & 1405 & 1334 & \(M_{4}^{e}\) & \(-517\) & \(-575\) & 533 \\ \(M_{4}^{d}\) & \(-3644\) & 3068 & \(-2882\) & \(M_{4}^{\nu}\) & 204 & \(-212\) & 217 \\ \(M_{4}^{Q}\) & \(-1384\) & 1443 & 1322 & \(M_{4}^{L}\) & \(-206\) & \(-222\) & \(-202\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Input parameters for three best-fit benchmark scenarios. Dimensionful quantities are given in GeV and GeV\({}^{2}\).
matrix for the benchmark scenario BP1,
\[|V_{\rm CKM}|^{({\rm BP1})}=\left(\begin{array}{cccc|c}0.97394&0.22679&0.00298& 1.4\times 10^{-7}&0.00008\\ 0.22671&0.97301&0.04296&0.00003&0.00042\\ 0.00681&0.04236&0.99821&0.00968&0.00221\\ \hline 0.00054&0.00270&0.02853&0.86532&0.00114\\ 0.00082&0.00390&0.02996&0.50113&0.00076\\ \end{array}\right)\,. \tag{56}\]
The two remaining best-fit points follow the same pattern. Incidentally, note that the CKM anomaly is \({\cal O}(10^{-4})\) in our setup, well below the experimental upper bound of Eq. (47).
Interestingly, in all three cases each quartic (Yukawa) coupling remains smaller than \(4\pi\) (\(\sqrt{4\pi}\)) up to \(1000\,{\rm TeV}\). We can therefore conclude that the validity range of our model extends well beyond the putative scale of \(50\,{\rm TeV}\). We also checked that the maximal eigenvalue of the scattering matrix computed by SPheno is \({\cal O}(10^{-2})\) for all the benchmark scenarios, indicating that the perturbative unitarity bound is satisfied as well.
Masses of the NP leptons are determined, to a large extent, by correctly fitting the experimental value of \(\Delta a_{\mu}\) (an overall contribution from this observable to the total \(\chi^{2}\) function does not exceed 0.7 in all the benchmark scenarios). Contributions to \(\Delta a_{\mu}\) from the individual one-loop diagrams of Fig. 1 are summarized in Table 7. We present separately the fractions of \(\Delta a_{\mu}\) generated by the charged scalars \(h^{\pm}\) and the neutral leptons \(N_{1,2,3,4}\), by the CP-odd scalars \(a_{1,2}\) and the charged leptons \(E_{1,2}\), and by the CP-even scalars \(h_{1,2,3}\) and the charged leptons \(E_{1,2}\). We also show the sum of all the contributions of a given type, indicated by a subscript "tot".
\begin{table}
\begin{tabular}{c|c|c|c||c|c|c} \hline \hline & \multicolumn{6}{c}{SM fermions} \\ \hline & BP1 & BP2 & BP3 & & BP1 & BP2 & BP3 \\ \hline \(m_{c}\) & 1.262 & 1.282 & 1.259 & \(m_{\mu}\) & 0.110 & 0.110 & 0.110 \\ \(m_{t}\) & 172.7 & 172.8 & 172.6 & \(m_{\tau}\) & 1.864 & 1.756 & 1.765 \\ \(m_{s}\) & 0.089 & 0.093 & 0.091 & \(m_{\nu_{2}}\left[10^{-10}\right]\) & 4.659 & 6.587 & 0.252 \\ \(m_{b}\) & 4.169 & 4.196 & 4.175 & \(m_{\nu_{3}}\left[10^{-10}\right]\) & 8.253 & 18.38 & 20.95 \\ \hline \multicolumn{6}{c}{NP fermions} \\ \hline \multicolumn{6}{c}{Quark sector} \\ \hline & BP1 & BP2 & BP3 & & BP1 & BP2 & BP3 \\ \hline \(M_{U_{1}}\) & 1495 & 1561 & 1440 & \(M_{E_{1}}\) & 487 & 596 & 554 \\ \(M_{U_{2}}\) & 1708 & 1842 & 1704 & \(M_{E_{2}}\) & 543 & 615 & 570 \\ \(M_{D_{1}}\) & 1534 & 1579 & 1464 & \(M_{N_{1,2}}\) & 205 & 214 & 218 \\ \(M_{D_{2}}\) & 3655 & 3070 & 2888 & \(M_{N_{3,4}}\) & 488 & 598 & 556 \\ \hline \multicolumn{6}{c}{Scalars} \\ \hline & BP1 & BP2 & BP3 & & BP1 & BP2 & BP3 \\ \hline \(M_{h_{1}}\) & 125 & 125 & 125 & \(M_{a_{1}}\) & 362 & 411 & 433 \\ \(M_{h_{2}}\) & 362 & 412 & 435 & \(M_{a_{2}}\) & 532 & 614 & 469 \\ \(M_{h_{3}}\) & 617 & 752 & 824 & \(M_{h^{\pm}}\) & 384 & 423 & 440 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Mass spectra for three best-fit benchmark scenarios. All masses are in GeV.
We observe that the largest contributions to \(\Delta a_{\mu}\) arise from the charged scalar/heavy neutrino loops. We thus disprove the conclusions of Refs. [13; 16] where it was assumed that the charged lepton loops were the only NP contributions to muon (\(g-2\)) present in the model. As we show in our analysis, all possible one-loop diagrams contributing to \(\Delta a_{\mu}\) should be treated on an equal footing and none of them should be discarded _a priori_.
Even more interestingly, the observed dominance of the heavy neutrino contributions to the
\begin{table}
\begin{tabular}{c|c|c|c||c|c|c|c} \hline \hline \multicolumn{6}{c||}{Quarks masses} & \multicolumn{6}{c}{CKM elements} \\ \hline & BP1 & BP2 & BP3 & & BP1 & BP2 & BP3 \\ \hline \(\chi^{2}_{c}\) & 0.154 & 0.360 & 0.280 & \(\chi^{2}_{V_{us}}\) & 8.225 & 8.290 & 5.986 \\ \(\chi^{2}_{t}\) & 0.022 & 0.018 & 0.119 & \(\chi^{2}_{V_{ub}}\) & 12.33 & 10.36 & 9.327 \\ \(\chi^{2}_{s}\) & 1.569 & 0.014 & 0.450 & \(\chi^{2}_{V_{td}}\) & 15.69 & 18.30 & 24.58 \\ \(\chi^{2}_{b}\) & 0.330 & 0.640 & 0.052 & \(\chi^{2}_{V_{ts}}\) & 10.45 & 9.796 & 6.559 \\ \hline \(\chi^{2}_{Q}\) & 2.075 & 1.031 & 0.901 & \(\chi^{2}_{V}\) & 55.45 & 55.26 & 55.92 \\ \hline \multicolumn{6}{c||}{Charged leptons masses} & \multicolumn{6}{c}{\(\Delta a_{\mu}\)} \\ \hline \(\chi^{2}_{\mu}\) & 0.210 & 0.170 & 0.207 & \(\chi^{2}_{\Delta a_{\mu}}\) & 0.328 & 0.657 & 0.375 \\ \(\chi^{2}_{\tau}\) & 0.241 & 0.014 & 0.004 & \multicolumn{6}{c}{Total} \\ \hline \(\chi^{2}_{L}\) & 0.451 & 0.183 & 0.211 & \(\chi^{2}_{\rm TOT}\) & 58.30 & 57.31 & 57.41 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Breakdown of the \(\chi^{2}\) contributions from various observables implemented in the \(\chi^{2}\) function of Eq. (55). The CKM contributions which are smaller than 3 are not shown. \(\chi^{2}_{Q}\), \(\chi^{2}_{L}\) and \(\chi^{2}_{V}\) indicate total \(\chi^{2}\) contributions from the quark masses, lepton masses, and the CKM matrix elements, respectively. \(\chi^{2}_{\rm TOT}\) stands for the total \(\chi^{2}\) function of each best-fit scenario.
\begin{table}
\begin{tabular}{c|c|c|c||c|c|c} \hline \hline \multicolumn{6}{c}{Contributions to \(\Delta a_{\mu}\times 10^{9}\)} \\ \hline \multicolumn{6}{c}{Charged scalars} & \multicolumn{6}{c}{CP-even scalars} \\ \hline Loop & BP1 & BP2 & BP3 & Loop & BP1 & BP2 & BP3 \\ \hline \(h^{\pm},N_{1,2}\) & \(-1.076\) & \(-0.792\) & \(-0.942\) & \(h_{1},E_{1}\) & \(-0.003\) & \(-0.001\) & \(-0.009\) \\ \(h^{\pm},N_{3,4}\) & 3.300 & 2.898 & 3.153 & \(h_{1},E_{2}\) & 0.003 & 0.001 & 0.009 \\ \hline \(h^{\pm},N_{\rm tot}\) & 2.225 & 2.106 & 2.211 & \(h_{2},E_{1}\) & \(-0.409\) & \(-0.520\) & \(-0.969\) \\ \hline \multicolumn{6}{c||}{CP-odd scalars} & \multicolumn{1}{c}{\(h_{2},E_{2}\)} & 0.437 & 0.548 & 0.994 \\ \hline \(a_{1},E_{1}\) & 0.425 & 0.528 & 0.938 & \(h_{3},E_{1}\) & 0.018 & 0.115 & 0.076 \\ \(a_{1},E_{2}\) & \(-0.544\) & \(-0.611\) & \(-1.529\) & \(h_{3},E_{2}\) & \(-0.017\) & \(-0.127\) & \(-0.076\) \\ \(a_{2},E_{1}\) & \(-0.033\) & \(-0.135\) & \(-0.071\) & \(h,E_{\rm tot}\) & 0.032 & 0.027 & 0.025 \\ \(a_{2},E_{2}\) & 0.110 & 0.196 & 0.621 & \multicolumn{6}{c}{Total} \\ \hline \(a,E_{\rm tot}\) & \(-0.015\) & \(-0.023\) & \(-0.041\) & \(\Delta a_{\mu}\) & 2.215 & 2.101 & 2.196 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Contributions to \(\Delta a_{\mu}\) from the individual one-loop diagrams shown in Fig. 1. The subscript "tot" indicates the sum of all the contributions of a given type.
anomalous magnetic moment of the muon seems to be a generic feature of the model which does not pertain exclusively to the identified benchmark scenarios. The charged scalar/heavy neutrino loops are determined, among other parameters, by combinations of the \(y^{\prime\nu}\) couplings, which are not constrained by any SM masses or mixings and thus can become relatively large. In contrast, the Yukawa coupling which generates the neutral (pseudo)scalar/charged lepton loops is also responsible for the correct tree-level mass of the muon. It is thus required to be small, and the corresponding contributions to \(\Delta a_{\mu}\) are suppressed. Additionally, one also observes cancellations between the individual contributions to \(\Delta a_{\mu}\) stemming from the (pseudo)scalar diagrams with different VL leptons, which is a known and common feature of many NP models with VL fermions (see, e.g., [63; 64] for a discussion).
Finally, we should mention the size of the BRs for the lepton flavor violating decays \(\tau\to\mu\gamma\) and \(\tau\to 3\mu\) in our model, which are of the order \((3-4)\times 10^{-8}\) for the former and \((6-9)\times 10^{-10}\) for the latter. Given that in the future the Belle-II collaboration is expected to improve their current exclusion bounds by an order of magnitude or more [65; 66], it may turn out that the tau leptonic decays offer the best experimental way of verifying the predictions of the NP model analyzed in this study.
## VII LHC study of the benchmark scenarios
In this section we confront the benchmark scenarios identified in Sec. VI with the null results of the direct NP searches at the LHC. We analyze, one by one, the constraints originating from considering the production of VL quarks, VL leptons, and exotic scalars.
### Vector-like quarks
The VL quarks (VLQs) can be copiously produced at the LHC, either in pairs through the strong interactions or singly through an exchange of the EW gauge bosons. In the former case, the dominant production channels at the leading order are gluon fusion and quark-antiquark annihilation, whose production cross sections depend on the VLQ mass and its SU(3)\({}_{C}\) quantum numbers only (see Refs. [67; 68] for analytical formulae). Therefore, the experimental lower bounds on VLQ masses are expected to be, to a large extent, model independent, baring only a slight dependence on the relative strength of the individual VLQ decay channels.
The most recent analysis from ATLAS, based on the data from proton-proton collisions at a centre-of-mass energy of \(\sqrt{s}=13\) TeV, corresponding to an integrated luminosity of 139 fb\({}^{-1}\)[69], considered the pair production of VL top partners \(T\) and VL bottom partners \(B\) with the decay channels \(T\to Zt\), \(ht\), \(Wb\) and \(B\to Zb\), \(hb\), \(Wt\) and with large missing transverse momentum. The
Figure 3: Pair production of the VLQs \(T\) and \(B\) via the gluon fusion at the LHC considered by the ATLAS collaboration in Ref. [69].
corresponding Feynman diagrams are depicted in Fig. 3. The strongest 95% C.L. lower bounds on the VLQ mass derived in Ref. [69] read
\[M_{T/B}>1.41\,\text{TeV} \tag{57}\]
for the EW doublets2 and
Footnote 2: For the VLQ mass larger than 800 GeV this indicates BR(\(T\to Zt\))=BR(\(T\to ht\)) = 50% and BR(\(B\to Wt\)) = 100% [69].
\[M_{T}>1.26\,\text{TeV},\qquad M_{B}>1.33\,\text{TeV} \tag{58}\]
for the EW singlets.3 The analogous results from CMS can be found in Ref. [70]. Similarly, by assuming that at least one of the VLQs decays into a \(Z\) boson with the BR=100%, the 13 TeV ATLAS search [71] obtained even stronger bounds,
Footnote 3: For the VLQ mass larger than 800 GeV this indicates BR(\(T\to Zt\)) = 25%, BR(\(T\to ht\)) = 25%, BR(\(T\to Wb\)) = 50%, BR(\(B\to Zb\)) = 25%, BR(\(B\to hb\)) = 25% and BR(\(B\to Wt\)) = 50% [69].
\[M_{T}>1.60\,\text{TeV},\qquad M_{B}>1.42\,\text{TeV}. \tag{59}\]
At first glance it may seem that all our benchmark scenarios are consistent with the VLQ exclusion bounds. On the other hand, in the model considered in this study the couplings of the physical heavy quarks \(U_{1,2}\) and \(D_{1,2}\) with the third-generation SM quarks and the EW gauge (Higgs) bosons are generated via tree-level mixing after the EW and U(1)\({}_{X}\) symmetries are spontaneously broken (cf. Sec. II and Appendix A). Since there is _a priori_ no reason for the resulting BRs to correspond to any of the benchmark cases considered by ATLAS and CMS in their analyses (exotic decays to the charged Higgs are possible, for example), we need to reexamine the experimental results in the framework of our model.
To this end, we calculate with MadGraph5 MC@NLO[72] the cross sections for the pair production of \(U_{1}\), \(U_{2}\), \(D_{1}\) and \(D_{2}\). The results are presented in Table 8. By comparing these numbers with the observed experimental 95% C.L. upper bounds on the signal cross section from Ref. [69] (to give an example, \(\sigma_{95\%}^{\text{exp}}(p\,p\to\bar{T}T)=4\times 10^{-3}\) pb for \(M_{\text{VLQ}}=1.5\) TeV), we conclude that our benchmark scenarios are indeed not excluded by the current LHC searches for the VLQs, irrespective of the actual sizes of their BRs.
It is instructive to investigate the prospects of testing our model in future runs at the LHC. The total cross section for the VLQ pair production, followed by a decay into the third generation quarks and the EW gauge/Higgs bosons, can be expressed using the narrow width approximation (NWA) as
\[\widetilde{\sigma}\left(pp\to\overline{Q}Q\to f\overline{f}\,VV\right)\approx \sigma\left(pp\to Q\overline{Q}\right)\text{BR}\left(Q\to fV\right)\text{BR} \left(\overline{Q}\to\overline{f}V\right), \tag{60}\]
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline & \(M_{U_{1}}\) & \(\sigma(pp\to U_{1}U_{1})\) & \(M_{U_{2}}\) & \(\sigma(pp\to U_{2}U_{2})\) & \(M_{D_{1}}\) & \(\sigma(pp\to D_{1}D_{1})\) & \(M_{D_{2}}\) & \(\sigma(pp\to D_{2}D_{2})\) \\ \hline BP1 & 1495 & \(1.3\times 10^{-3}\) & 1708 & \(3.9\times 10^{-4}\) & 1534 & \(1.0\times 10^{-3}\) & 3655 & \(4.5\times 10^{-9}\) \\ BP2 & 1561 & \(8.9\times 10^{-4}\) & 1842 & \(1.9\times 10^{-4}\) & 1579 & \(8.1\times 10^{-4}\) & 3070 & \(1.6\times 10^{-7}\) \\ BP3 & 1440 & \(1.8\times 10^{-3}\) & 1704 & \(4.0\times 10^{-4}\) & 1464 & \(1.6\times 10^{-3}\) & 2888 & \(5.0\times 10^{-7}\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Cross sections (in pb) for the pair production of the VLQs for our three benchmark scenarios. Masses are in GeV.
where \(Q=U_{1,2},D_{1,2}\), \(f=t,b\) and \(V=W,Z,h_{1}\). Under the assumption that \(\mathrm{BR}\left(Q\to fV\right)=\mathrm{BR}\left(Q\to h_{1}t/b\right)+\mathrm{BR} \left(Q\to Zt/b\right)+\mathrm{BR}\left(Q\to Wb/t\right)=1\), the cross section (60) reduces to the signal cross section constrained by the experimental collaborations, \(\sigma^{\mathrm{exp}}_{95\%}\approx\sigma\left(pp\to Q\overline{Q}\right)\). If, on the other hand, the three BRs do not sum to one, we expect the resulting exclusion bounds to be weaker than the bounds reported by ATLAS and CMS.
The lightest VLQ in our model, \(U_{1}\), is characterized by the following BRs:
\[\mathrm{BP1}: \mathrm{BR}\left(U_{1}\to h_{1}t\right)=0.188,\quad\mathrm{BR} \left(U_{1}\to Zt\right)=0.146,\quad\mathrm{BR}\left(U_{1}\to Wb\right)=0.040 \tag{61}\] \[\mathrm{BP2}: \mathrm{BR}\left(U_{1}\to h_{1}t\right)=0.135,\quad\mathrm{BR} \left(U_{1}\to Zt\right)=0.101,\quad\mathrm{BR}\left(U_{1}\to Wb\right)=0.044\] \[\mathrm{BP3}: \mathrm{BR}\left(U_{1}\to h_{1}t\right)=0.209,\quad\mathrm{BR} \left(U_{1}\to Zt\right)=0.165,\quad\mathrm{BR}\left(U_{1}\to Wb\right)=0.037\]
with the resulting cross sections \(\widetilde{\sigma}^{\mathrm{BP1}}=1.8\times 10^{-4}\) pb, \(\widetilde{\sigma}^{\mathrm{BP2}}=7.0\times 10^{-5}\) pb and \(\widetilde{\sigma}^{\mathrm{BP3}}=3.0\times 10^{-4}\) pb. The second-to-the-lightest VLQ, \(D_{1}\), has
\[\mathrm{BP1}: \mathrm{BR}\left(D_{1}\to h_{1}b\right)=0.001,\quad\mathrm{BR} \left(D_{1}\to Zb\right)=0.001,\quad\mathrm{BR}\left(D_{1}\to Wt\right)=0.375 \tag{62}\] \[\mathrm{BP2}: \mathrm{BR}\left(D_{1}\to h_{1}b\right)=0.001,\quad\mathrm{BR} \left(D_{1}\to Zb\right)=0.001,\quad\mathrm{BR}\left(D_{1}\to Wt\right)=0.221\] \[\mathrm{BP3}: \mathrm{BR}\left(D_{1}\to h_{1}b\right)=0.001,\quad\mathrm{BR} \left(D_{1}\to Zb\right)=0.001,\quad\mathrm{BR}\left(D_{1}\to Wt\right)=0.387\]
with \(\widetilde{\sigma}^{\mathrm{BP1}}=1.4\times 10^{-4}\) pb, \(\widetilde{\sigma}^{\mathrm{BP2}}=4.1\times 10^{-5}\) pb and \(\widetilde{\sigma}^{\mathrm{BP3}}=2.4\times 10^{-4}\) pb. We can thus conclude that, in order to probe the VL masses featured by our benchmark scenarios, at least an order-of-magnitude improvement in the experimental sensitivity of the VLQ searches is required.
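The quoted signal cross sections are simply Eq. (60) evaluated with the pair-production cross sections of Table 8 and the BRs of Eqs. (61)-(62), assuming identical BRs for the quark and the antiquark; the short sketch below makes the bookkeeping explicit.

```python
# Eq. (60): sigma_signal = sigma(pp -> Q Qbar) * [sum of BR(Q -> {h1, Z, W} q)]^2
sigma_pair = {                      # pb, from Table 8
    ("BP1", "U1"): 1.3e-3, ("BP2", "U1"): 8.9e-4, ("BP3", "U1"): 1.8e-3,
    ("BP1", "D1"): 1.0e-3, ("BP2", "D1"): 8.1e-4, ("BP3", "D1"): 1.6e-3,
}
br_sum = {                          # summed BRs from Eqs. (61)-(62)
    ("BP1", "U1"): 0.188 + 0.146 + 0.040, ("BP1", "D1"): 0.001 + 0.001 + 0.375,
    ("BP2", "U1"): 0.135 + 0.101 + 0.044, ("BP2", "D1"): 0.001 + 0.001 + 0.221,
    ("BP3", "U1"): 0.209 + 0.165 + 0.037, ("BP3", "D1"): 0.001 + 0.001 + 0.387,
}
for key, sigma in sigma_pair.items():
    print(key, f"{sigma*br_sum[key]**2:.1e} pb")
# reproduces 1.8e-4, 7.0e-5, 3.0e-4 pb (U1) and 1.4e-4, 4.1e-5, 2.4e-4 pb (D1)
```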
Finally, we analyze the possibility of testing the model via processes in which the VLQs are produced one at a time. Single VL \(T\) quark production was analyzed by ATLAS in Refs. [73; 74; 75], and single VL \(B\) quark production in Ref. [76]. The corresponding Feynman diagrams are shown in Fig. 4. The 95% C.L. experimental upper bounds on the relevant signal cross sections are of the order of (\(10^{-2}-10^{-1}\)) pb.
We calculated the cross sections for single VLQ production in our three benchmark scenarios using the NWA. The hadronic cross sections were obtained with MadGraph5 MC@NLO and the BRs with SPheno. We found that the cross section for single production of the VLQ \(U_{1}\) is \(\mathcal{O}(10^{-5})\) pb, while for \(D_{1}\) it amounts to \(\mathcal{O}(10^{-7})\) pb. We can thus conclude that in our model single production is a less promising search strategy than pair production. This was to be expected, as single production is in general less competitive than pair production for Yukawa couplings smaller than 1, see e.g. [77; 78].
### Vector-like leptons
At the tree level, the VL leptons (VLLs) are pair produced at the LHC via the Drell-Yan processes. The corresponding cross sections for our three benchmark scenarios are collected in
Figure 4: Single production of the VLQs \(T\) and \(B\) via the EW gauge boson exchange at the LHC, as considered by the ATLAS collaboration in Refs. [73; 74; 75] for \(T\) and in Ref. [76] for \(B\).
Table IX. The analysis of all the possible experimental signatures is in this case much more involved than for the VLQs, as the lepton decay BRs depend strongly on whether exotic scalars lighter than the VLLs are present in the spectrum. The following mass hierarchies are observed:
\[\begin{array}{ll}{\rm BP1}:&M_{N_{1,2}}<M_{h_{2}},M_{a_{1}},M_{h^{\pm}}<M_{E_ {1}},M_{N_{3,4}}<M_{a_{2}}<M_{E_{2}}<M_{h_{3}}\\ {\rm BP2}:&M_{N_{1,2}}<M_{h_{2}},M_{a_{1}},M_{h^{\pm}}<M_{E_{1}},M_{N_{3,4}}<M_{ a_{2}}<M_{E_{2}}<M_{h_{3}}\\ {\rm BP3}:&M_{N_{1,2}}<M_{h_{2}},M_{a_{1}},M_{h^{\pm}}<M_{a_{2}}<M_{E_{1}},M_{N_ {3,4}}<M_{E_{2}}<M_{h_{3}}\,.\end{array} \tag{63}\]
In all the cases the lightest VLLs, neutrinos \(N_{1,2}\), originate predominantly from the SU(2)\({}_{L}\) singlets and their production cross section at the LHC is suppressed, \({\cal O}(10^{-5})\) pb.
The second-to-the-lightest VLLs, \(E_{1}\) and heavy neutrinos \(N_{3,4}\), come from the same SU(2)\({}_{L}\) doublets and are almost degenerate in mass. Therefore, three production channels should be considered simultaneously: \(p\,p\to Z/\gamma\to E_{1}\bar{E}_{1}\), \(p\,p\to Z/\gamma\to N_{3,4}N_{3,4}\) and \(p\,p\to W^{\pm}\to E_{1}N_{3,4}\). The dominant branching ratios for the subsequent decays of \(E_{1}\) and \(N_{3,4}\), evaluated with SPheno, are collected in Table X. In all three cases the VLLs decay predominantly to the SM muons, which is a direct consequence of the fact that we impose \(\Delta a_{\mu}\) as a constraint in our likelihood function and the largish muon-lepton-scalar Yukawa couplings are preferred.
A closer look at Table X reveals that the relative strengths of various VLL decay channels are, to some extent, scenario dependent. Moreover, the final experimental signatures hinge on the subsequent decay channels of the scalar particles, which are themselves rather involved (we discuss this in more detail in Sec. VII.3). As an example, let us consider the process \(p\,p\to E_{1}\,\bar{E}_{1}\to\mu\,\bar{\mu}\,a_{1}\,a_{1}\) for the benchmark scenario BP1. The lightest pseudoscalar can decay in this case either to a \(b\,\bar{b}\) pair (with a BR of 28%) or to \(\nu\,N_{1,2}\) (with a BR of 69%). The decay of the heavy neutrinos then proceeds as \(N_{1,2}\to e^{\pm}/\mu^{\pm}\,W^{\mp}\) with a BR of 56%, or \(N_{1,2}\to\nu\,Z\) with a BR of 28%. We can thus expect the following distinctive experimental signatures emerging from the \(p\,p\to E_{1}\,\bar{E}_{1}\to\mu\,\bar{\mu}\,a_{1}\,a_{1}\)
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline BR & BP1 & BR & BP2 & BR & BP3 \\ \hline \(E_{1}\to\mu\,a_{1}\) & 37\% & \(E_{1}\to\mu\,a_{1}\) & 23\% & \(E_{1}\to\mu\,a_{1}\) & 21\% \\ \(E_{1}\to\mu\,h_{2}\) & 37\% & \(E_{1}\to\mu\,h_{2}\) & 24\% & \(E_{1}\to\mu\,h_{2}\) & 25\% \\ & & \(E_{1}\to N_{1,2}\,W^{\pm}\) & 26\% & \(E_{1}\to\tau\,a_{1}\) & 11\% \\ \hline BR & BP1 & BR & BP2 & BR & BP3 \\ \hline \(N_{3,4}\to\mu\,h^{\pm}\) & 70\% & \(N_{3,4}\to\mu\,h^{\pm}\) & 51\% & \(N_{3,4}\to\mu\,h^{\pm}\) & 56\% \\ & & \(N_{3,4}\to N_{1,2}\,Z\) & 12\% & & \\ & & \(N_{3,4}\to N_{1,2}\,h_{1}\) & 11\% & & \\ \hline \hline \end{tabular}
\end{table}
Table X: Dominant BRs for the decays of \(E_{1}\) and \(N_{3,4}\).
process: a) 2 muons + \(b\)-jets, b) multileptons + missing energy, c) multileptons + jets + missing energy, with the total signal cross section reduced w.r.t. the production cross section reported in Table IX by the product of the subsequent BRs.
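To illustrate how the signal is diluted along such a decay chain, the following minimal sketch multiplies out the BP1 BRs quoted above for a few representative final states. It is only meant to make the BR bookkeeping explicit; the variable names are ours.

```python
# Sketch: dilution of the p p -> E1 E1bar signal by the cascade BRs (BP1 numbers from the text).
br_E1_mu_a1 = 0.37      # BR(E1 -> mu a1), Table X
br_a1_bb    = 0.28      # BR(a1 -> b bbar)
br_a1_nuN   = 0.69      # BR(a1 -> nu N_{1,2})
br_N_lW     = 0.56      # BR(N_{1,2} -> e/mu W)

both_to_mu_a1 = br_E1_mu_a1**2                            # both legs give mu + a1
frac_2mu_bjets = both_to_mu_a1 * br_a1_bb**2              # 2 muons + b-jets
frac_multilep  = both_to_mu_a1 * (br_a1_nuN * br_N_lW)**2 # multileptons + missing energy

print(f"2 muons + b-jets fraction:          {frac_2mu_bjets:.4f}")
print(f"multileptons + missing E fraction:  {frac_multilep:.4f}")
```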
To make things even more challenging, there are not many LHC analyses that explicitly look for the VLLs. The only dedicated ATLAS search, based on 139 fb\({}^{-1}\) of data from the 13 TeV run [79], looks for VLLs coupled predominantly to taus. The analogous CMS analysis, based on 77.4 fb\({}^{-1}\) of data, can be found in Ref. [80]. The decay chains considered by the two collaborations are shown in Fig. 5. In both cases, the total cross sections for VLL production can be probed down to \(10^{-3}\) pb (of course, the actual value is mass dependent).
In our model all three benchmark scenarios feature very low BRs for \(E_{1}\) and \(N_{3,4}\) decaying to taus, which do not exceed 10%. We can thus expect a strong suppression of the resulting signal w.r.t. the experimental analysis. Indeed, using the NWA the total cross section for the process considered in Refs. [79] and [80] can be written as follows:
\[\sigma\left(p\,p\to\tau^{-}\tau^{+}l^{-}l^{+}q\,\overline{q}\right) \approx \sigma\left(p\,p\to E_{1}\overline{E}_{1}\right)\mathrm{BR} \left(E_{1}\to\tau^{-}l^{-}l^{+}\right)\mathrm{BR}\left(\overline{E}_{1}\to \tau^{+}q\,\overline{q}\right) \tag{64}\] \[+ \sigma\left(p\,p\to E_{1}N_{3,4}\right)\mathrm{BR}\left(E_{1} \to\tau^{-}l^{-}l^{+}\right)\mathrm{BR}\left(N_{3,4}\to\tau^{+}q\,\overline{ q}\right)\,.\]
Combining the cross sections from Table IX with the relevant BRs calculated with SPheno, we obtain
\[\mathrm{BP1}:\;\sigma\left(p\,p\to\tau^{-}\tau^{+}l^{-}l^{+}q\, \overline{q}\right) =8.3\times 10^{-7}\ \mathrm{pb} \tag{65}\] \[\mathrm{BP2}:\;\sigma\left(p\,p\to\tau^{-}\tau^{+}l^{-}l^{+}q\, \overline{q}\right) =8.1\times 10^{-6}\ \mathrm{pb}\] \[\mathrm{BP3}:\;\sigma\left(p\,p\to\tau^{-}\tau^{+}l^{-}l^{+}q\, \overline{q}\right) =5.3\times 10^{-7}\ \mathrm{pb}\,.\]
If we now compare the predictions of Eq. (65) with the corresponding experimental 95% C.L. exclusion cross sections from Ref. [79], we can conclude that the benchmark scenarios identified in Sec. VI are not excluded by the dedicated LHC searches for the VLLs.4 Moreover, it may also be challenging to test the VLL sector of our model in future runs at the LHC, if no dedicated experimental strategies for the muon final state signatures are proposed.
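Eq. (64) itself is straightforward to evaluate once the production cross sections and BRs are known. A minimal sketch follows; the numerical inputs are purely illustrative placeholders, not the values from Table IX or from SPheno.

```python
# Sketch of Eq. (64) in the narrow-width approximation.
# All numerical inputs below are illustrative placeholders only.
def vll_tau_signal(sig_E1E1, sig_E1N, br_E1_tau_ll, br_E1_tau_qq, br_N_tau_qq):
    """sigma(p p -> tau- tau+ l- l+ q qbar), summing the E1 E1bar and E1 N_{3,4} channels."""
    return (sig_E1E1 * br_E1_tau_ll * br_E1_tau_qq
            + sig_E1N * br_E1_tau_ll * br_N_tau_qq)

# hypothetical inputs (pb and dimensionless), for illustration only
print(vll_tau_signal(1e-3, 2e-3, 0.02, 0.05, 0.05))
```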
Figure 5: Pair production of the VLLs \(E_{1}\) and \(N_{3,4}\) at the LHC, as considered by the ATLAS [79] and CMS [80] collaborations.
### Exotic scalars
Finally, we investigate the possibility of testing the predictions of our model via the LHC searches in the scalar sector. There is a plethora of experimental analyses, both by ATLAS and CMS, that look for the non-SM Higgs bosons (see Ref. [82] for a recent review in the framework of the 2HDM). At the same time, in our benchmark scenarios the exotic scalars can decay through a variety of channels (the dominant BRs, obtained with SPheno, are collected in Table 11).
To facilitate the analysis, we use the publicly available code HiggsTools[83], a toolbox for evaluating bounds from the direct searches for the exotic scalar particles at LEP and the LHC, whose database contains 258 different limits. We find that all our benchmark scenarios are tagged as "allowed" by HiggsTools.
It is instructive to take a closer look at the output of HiggsTools, as it indicates which searches are most sensitive to the spectra featured by our best-fit scenarios. This is quantified by a parameter called "observed ratio", \(R_{\rm obs}\), which is the ratio of the predicted cross section and the experimental limit at the 95% C.L. The point in the parameter space is excluded if \(R_{\rm obs}>1\). We observe that the highest values of \(R_{\rm obs}\) (0.6 for BP1 and BP3, 0.14 for BP2) are reached for the \(h_{2}\to\tau^{+}\tau^{-}\) and \(a_{1}\to\tau^{+}\tau^{-}\) decays constrained by the ATLAS \(139\,{\rm fb}^{-1}\) analysis [84], despite very low decay BRs in this channel.
To investigate this in more detail, we calculated the \(a_{1}/h_{2}\to\tau^{+}\tau^{-}\) cross sections with MadGraph5 MC@NLO. The results are reported in the last three columns of Table 12. These are to be compared with the 95% C.L. experimental upper bounds on the cross section reported in the third column of Table 12. We find very good agreement with the output of HiggsTools in terms of the parameter \(R_{\rm obs}\), thus confirming that the decays of the exotic scalars into taus are going to be the most promising way of testing the predictions of the model at the LHC.
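The observed ratio is simply the predicted cross section divided by the most constraining excluded one. A minimal sketch using the \(\mu^{+}\mu^{-}\) and \(b\bar{b}\) numbers of Table 12 for BP1 (the function and variable names are ours):

```python
# Sketch: HiggsTools-style "observed ratio" R_obs = sigma_predicted / sigma_excluded(95% C.L.).
def r_obs(sigma_pred, sigma_excluded_list):
    # the most constraining analysis is the one with the smallest excluded cross section
    return sigma_pred / min(sigma_excluded_list)

# BP1 numbers from Table 12 (pb)
print(r_obs(1.3e-4, [0.007, 0.009]))   # a1/h2 -> mu+ mu-:  ~0.019
print(r_obs(0.554,  [6.0]))            # a1/h2 -> b bbar:   ~0.09
```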
In Table 12 we also present other decay channels of \(a_{1}\) and \(h_{2}\) that feature high sensitivity. While the current experimental bounds on those searches are weaker than those for the \(\tau^{-}\tau^{+}\) final state, they may offer complementary signatures of the model in future LHC runs.
Incidentally, note that the BRs for the decays of \(h_{2}\) and \(a_{1}\) to the EW gauge bosons, \(\gamma\gamma\), \(ZZ\)
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Process & BP1 & BP2 & BP3 \\ \hline \(a_{1}\to\nu\,N_{1,2}\) & 69\% & 82\% & 72\% \\ \(a_{1}\to\bar{b}\,b\) & 28\% & 14\% & 25\% \\ \(a_{2}\to\bar{t}\,t\) & 73\% & 73\% & 36\% \\ \(a_{2}\to\bar{b}\,b\) & & & 15\% \\ \(a_{2}\to\nu\,N_{1,2}\) & & & 46\% \\ \hline \(h_{2}\to\nu\,N_{1,2}\) & 69\% & 84\% & 72\% \\ \(h_{2}\to\bar{b}\,b\) & 28\% & 14\% & 25\% \\ \(h_{3}\to\bar{t}\,t\) & 50\% & 43\% & 34\% \\ \(h_{3}\to\bar{\tau}\,E_{1}\) & 10\% & 9\% & 20\% \\ \hline \(h^{\pm}\to\bar{\mu}\,N_{1,2}\) & 41\% & 51\% & 48\% \\ \(h^{\pm}\to\bar{e}\,N_{1,2}\) & 34\% & 30\% & 26\% \\ \(h^{\pm}\to\bar{b}\,t\) & 19\% & 13\% & 19\% \\ \hline \hline \end{tabular}
\end{table}
Table 11: Dominant BRs (\(>5\%\)) for the decays of the exotic scalars.
and \(WW\), are \({\cal O}(10^{-8})\) and \({\cal O}(10^{-9})\), respectively, which is orders of magnitude below the current experimental bounds.
Finally, let us comment on the possibility of testing the \(a_{1}/h_{2}\to t\bar{t}\) decay through the measurement of an effective coupling \(g_{a_{1}/h_{2}tt}\). This scenario was investigated by CMS in Ref. [85]. Comparing the values of the \(g_{a_{1}/h_{2}tt}\) coupling evaluated with SPheno (0.084 for BP2, 0.096 for BP3) with the experimental 95% C.L. upper bounds (0.80 for BP2 and 0.70 for BP3),5 we conclude that no additional constraint on our model arises from this particular search.
Footnote 5: The CMS analysis [85] does not cover the scalar masses below 400 GeV.
## VIII Conclusions
In this study, we performed a global analysis of an extension of the SM which contains one full family of VL fermions, an extra SU(2)\({}_{L}\) scalar doublet and an SU(2)\({}_{L}\) scalar singlet. It also features a U(1)\({}_{X}\) global symmetry spontaneously broken by the singlet scalar vev. This scenario was originally proposed in Ref. [12] to generate the masses of the third and the second family of the SM fermions, as well as to account correctly for their mixing patterns.
In our analysis we confronted the model with the experimental bounds from flavor physics observables, which include the anomalous magnetic moment of the muon and the rare decays of the tau lepton. Additionally, the model was subjected to the theoretical constraints stemming from the stability of the scalar potential and from the perturbativity of the renormalized couplings. Importantly, we revisited and corrected the bounded-from-below conditions and the alignment limit, which in the context of the same model were previously discussed in Refs. [13; 16]. In particular, we showed that additional constraints on the quartic couplings arise if the full three-scalar potential is considered. We also argued that the perturbativity bounds should not be imposed on the low-scale parameters of the lagrangian but on the running couplings evaluated at the renormalization scale
\begin{table}
\begin{tabular}{c|c|c c c|c|c|c} \hline \hline Channel & Experiment & \(\sigma_{95\%}^{\rm exp}\) (BP1, BP2, BP3) & \(\sigma^{\rm BP1}\) & \(\sigma^{\rm BP2}\) & \(\sigma^{\rm BP3}\) \\ \hline \(a_{1}/h_{2}\to\tau^{+}\tau^{-}\) & CMS [86] & 0.060 & 0.030 & 0.020 & & & \\ \cline{2-7} & ATLAS [84] & 0.050 & 0.020 & 0.016 & & & \\ \hline \(a_{1}/h_{2}\to\mu^{+}\mu^{-}\) & CMS [87] & 0.007 & 0.006 & 0.005 & \(1.3\times 10^{-4}\) & \(1.4\times 10^{-5}\) & \(4.4\times 10^{-5}\) \\ \cline{2-7} & ATLAS [88] & 0.009 & 0.004 & 0.003 & & & \\ \hline \(a_{1}/h_{2}\to b^{+}b^{-}\) & CMS [89] & 6.0 & 3.5 & 3.0 & 0.554 & 0.061 & 0.061 \\ \cline{2-7} & ATLAS [90] & \(-\) & \(-\) & \(-\) & & & \\ \hline \hline \(h^{\pm}\to\bar{t}b\) & CMS [91] & 0.40 & 0.30 & 0.25 & \(7.6\times 10^{-3}\) & \(2.1\times 10^{-3}\) & \(4.5\times 10^{-3}\) \\ \cline{2-7} & ATLAS [92] & 0.45 & 0.30 & 0.25 & & & \\ \hline \hline \end{tabular}
\end{table}
Table 12: An overview of the LHC scalar searches which present the highest sensitivity to the benchmark scenarios identified in Sec. VI. The columns show, respectively, the decay channel, the experimental analyses investigating this channel, the experimental 95% C.L. upper bound on the cross section evaluated at the mass of the corresponding scalar in each benchmark scenario, and the cross section calculated in that benchmark scenario.
which sets an upper limit on the model's validity. These RG-based perturbativity conditions require the low-scale scalar couplings to be smaller than 2 and the Yukawa couplings smaller than 1.5.
With all the constraints in place, we performed a numerical scan of the model's parameter space and we identified three benchmark scenarios that satisfied all the theoretical and experimental requirements. One distinctive feature of these solutions is that the charged scalar/heavy neutrino loops provide dominant contributions to the observable \(\Delta a_{\mu}\). This finding is qualitatively different from the conclusions obtained in Refs. [13; 16] where only the charged lepton loops were considered. We would like to emphasize that the dominance of the heavy neutrino contribution to \(\Delta a_{\mu}\) is a generic characteristic of the model and not a mere artifact of the specific benchmark scenarios. The main reason behind this feature is that the same coupling which generates the neutral scalar/charged lepton loops is also responsible for the correct tree-level mass of the muon and thus it is required to be small.
We also performed a detailed LHC analysis of our three best-fit scenarios. We investigated the experimental constraints stemming from the direct searches for VLQs, VLLs and exotic scalars. We found that none of the currently available exclusion bounds can test the spectra featured by the benchmark scenarios. This provides a proof of concept that the model in study is feasible as an explanation of both the SM masses and mixings and of the relevant experimental phenomena.
Regarding future prospects for experimental verification of the model, several observations can be made. Firstly, both charged and neutral VLLs decay predominantly to muons in our framework. On the other hand, all currently available LHC analyses focus on taus in the final state, for which the cross sections obtained in our model are several orders of magnitude below the experimental sensitivity. Therefore, we would like to encourage the experimental collaborations to provide dedicated analyses of VLLs coupled to the second family of the SM fermions. Such a study would not only allow one to test the predictions of our model, but would also prove very useful in any phenomenological research that aims at explaining the muon \((g-2)\) anomaly in a NP framework with VLLs.
Secondly, the experimental searches for the VLQs can become a fruitful testing ground for our model already in the current run of the LHC. The cross sections for the pair production of VLQs featured by the benchmark scenarios are one order of magnitude smaller than the current experimental upper bounds and should be within reach of dedicated VLQ searches based on larger data samples.
Finally, we observed that the most constraining decay channel for the exotic scalars is \(a_{1}/h_{2}\to\tau^{+}\tau^{-}\), for which the ratio of the predicted to the experimental cross sections is close to 1. It may thus provide complementary signatures of our model in future runs at the LHC.
However, the ultimate verification of the NP scenario considered in this study may come from flavor physics. The Belle-II collaboration plans to improve, by at least one order of magnitude, its experimental bounds on the rare leptonic decays of the tau lepton. As the corresponding branching ratios in our three benchmark scenarios are very close to the current 90% C.L. exclusion limits, these rare decays could be the first experimental signatures to be tested.
## Acknowledgements
K.K. would like to thank Enrico Sessolo for discussions and comments on the manuscript. A.E.C.H is supported by ANID-Chile FONDECYT 1210378, ANID PIA/APOYO AFB180002 and Milenio-ANID-ICN2019_044. K.K., H.L. and D.R. are supported by the National Science Centre (Poland) under the research Grant No. 2017/26/E/ST2/00470. The use of the CIS computer cluster at the National Centre for Nuclear Research in Warsaw is gratefully acknowledged.
## Appendix A Fermion mass matrices
### Charged leptons
The mass matrix for the charged leptons, \({\cal M}_{e}\), can be derived from Eq. (1) after identifying the generic fermions \(\psi\) with the corresponding lepton fields from Table 1 and the generic scalar \(H\) with \(H_{d}\). One thus has
\[\psi_{iR}=e_{iR},\qquad\psi_{iL}=L_{iL},\qquad\psi_{4R}=e_{4R},\qquad\psi_{4L}=L_ {4L},\qquad\widetilde{\psi}_{4R}=\widetilde{L}_{4R},\qquad\widetilde{\psi}_{4L }=\widetilde{e}_{4L} \tag{10}\]
with the following components of the SU(2)\({}_{L}\) doublets: \(L_{iL}=(\nu_{iL},e_{iL})^{T}\), \(L_{4L}=(\nu_{4L},e_{4L})^{T}\) and \(\widetilde{L}_{4R}=(\widetilde{\nu}_{4R},\widetilde{e}_{4R})^{T}\). As a result, the mass matrix reads
\[{\cal M}_{e}=\left(\begin{array}{c|ccccc}&e_{1R}&e_{2R}&e_{3R}&e_{4R}&\widetilde{e}_{4R}\\ \hline e_{1L}&0&0&0&0&0\\ e_{2L}&0&0&0&y_{24}^{e}\frac{v_{d}}{\sqrt{2}}&0\\ e_{3L}&0&0&0&y_{34}^{e}\frac{v_{d}}{\sqrt{2}}&-x_{34}^{L}\frac{v_{\phi}}{\sqrt{2}}\\ e_{4L}&0&0&y_{43}^{e}\frac{v_{d}}{\sqrt{2}}&0&-M_{4}^{L}\\ \widetilde{e}_{4L}&0&x_{42}^{e}\frac{v_{\phi}}{\sqrt{2}}&x_{43}^{e}\frac{v_{\phi}}{\sqrt{2}}&M_{4}^{e}&0\end{array}\right)\,, \tag{11}\]
where \(M_{4}^{L}\) (\(M_{4}^{e}\)) denotes the mass of the VL lepton doublet (singlet) and \(x_{34}^{L}\equiv x_{34}^{e}\). To facilitate the comparison with the corresponding mass matrix defined in SARAH, we adopt the sign convention used in the code. Note, however, that such a choice does not affect the conclusions drawn in our study as we allow all the Yukawa couplings and all the VL mass parameters to assume both positive and negative values in our numerical scan.
The \(5\times 5\) charged lepton mass matrix \({\cal M}_{e}\) can be diagonalized by means of two unitary transformations \(V_{L}^{e}\) and \(V_{R}^{e}\),
\[V_{L}^{e}{\cal M}_{e}V_{R}^{e\dagger}={\rm diag}\left(0,m_{\mu},m_{\tau},M_{E_ {1}},M_{E_{2}}\right). \tag{12}\]
In Sec. II the approximate expressions for the eigenvalues \(m_{\mu}\) and \(m_{\tau}\) were provided in Eq. (6), whereas the analogous formulae for the eigenvalues \(M_{E_{1}}\) and \(M_{E_{2}}\) were given in Eq. (11). While those equations are very useful for getting a general idea of which lagrangian parameters are relevant for generating the physical charged lepton masses, in our numerical analysis we diagonalize all the fermion mass matrices numerically, employing the SPheno code generated by SARAH.
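Numerically, this bi-unitary diagonalization can be carried out with a singular-value decomposition. The sketch below is not the SPheno implementation: it uses arbitrary placeholder inputs with the texture of the charged-lepton mass matrix above, and merely illustrates that the SVD factors play the role of \(V_{L}^{e}\) and \(V_{R}^{e}\).

```python
import numpy as np

# Sketch: bi-unitary diagonalization of a 5x5 charged-lepton mass matrix via SVD.
# The entries below are arbitrary placeholders (in GeV), not the benchmark values of the paper.
v_d, v_phi = 10.0, 1000.0
y24e, y34e, y43e = 0.02, 0.05, 0.03
x42e, x43e, x34L = 0.10, 0.20, 0.15
M4L, M4e = 1200.0, 900.0

M_e = np.zeros((5, 5))
M_e[1, 3] = y24e * v_d / np.sqrt(2)
M_e[2, 3] = y34e * v_d / np.sqrt(2)
M_e[2, 4] = -x34L * v_phi / np.sqrt(2)
M_e[3, 2] = y43e * v_d / np.sqrt(2)
M_e[3, 4] = -M4L
M_e[4, 1] = x42e * v_phi / np.sqrt(2)
M_e[4, 2] = x43e * v_phi / np.sqrt(2)
M_e[4, 3] = M4e

U, s, Vh = np.linalg.svd(M_e)      # M_e = U diag(s) Vh
VL, VR = U.conj().T, Vh            # so that VL M_e VR^dagger = diag(s)
print(np.sort(s))                  # (0, m_mu, m_tau, M_E1, M_E2)-like spectrum, one exact zero
```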
### Up-type quarks
In analogy to the charged lepton sector, the mass matrix for the up-type quarks, \({\cal M}_{u}\), can be derived from Eq. (1) after taking \(H=H_{u}\) and making the following identification:
\[\psi_{iR}=u_{iR},\qquad\psi_{iL}=Q_{iL},\qquad\psi_{4R}=u_{4R},\qquad\psi_{4L}= Q_{4L},\qquad\widetilde{\psi}_{4R}=\widetilde{Q}_{4R},\quad\widetilde{\psi}_{4L }=\widetilde{u}_{4L}\,. \tag{13}\]
In Eq. (13) the SU(2)\({}_{L}\) doublets have the following components: \(Q_{iL}=(u_{iL},d_{iL})^{T}\), \(Q_{4L}=(u_{4L},d_{4L})^{T}\) and \(\widetilde{Q}_{4R}=(\widetilde{u}_{4R},\widetilde{d}_{4R})^{T}\). The corresponding mass matrix with the SARAH sign convention reads
\[\mathcal{M}_{u}=\left(\begin{array}{c|cccccc}&u_{1R}&u_{2R}&u_{3R}&u_{4R}& \widetilde{u}_{4R}\\ \hline u_{1L}&0&0&0&0&0\\ u_{2L}&0&0&0&-y_{24}^{u}\frac{v_{u}}{\sqrt{2}}&0\\ u_{3L}&0&0&0&-y_{34}^{u}\frac{v_{u}}{\sqrt{2}}&x_{34}^{Q}\frac{v_{\phi}}{\sqrt{ 2}}\\ u_{4L}&0&0&-y_{43}^{u}\frac{v_{u}}{\sqrt{2}}&0&-M_{4}^{Q}\\ \widetilde{u}_{4L}&0&x_{42}^{u}\frac{v_{\phi}}{\sqrt{2}}&x_{43}^{u}\frac{v_{ \phi}}{\sqrt{2}}&M_{4}^{u}&0\\ \end{array}\right)\,, \tag{10}\]
with \(x_{34}^{Q}\equiv x_{34}^{u}\). The up-type quark mass matrix \(\mathcal{M}_{u}\) can be diagonalized via the mixing matrices \(V_{L}^{u}\) and \(V_{R}^{u}\) as
\[V_{L}^{u}\mathcal{M}_{u}V_{R}^{u\dagger}=\mathrm{diag}\left(0,m_{c},m_{t},M_{ U_{1}},M_{U_{2}}\right). \tag{11}\]
The approximate expressions for the eigenvalues \(m_{c}\) and \(m_{t}\) can be found in Eq. (4), and for the eigenvalues \(M_{U_{1}}\) and \(M_{U_{2}}\) in Eqs. (8) and (9), respectively.
### Down-type quarks
The down-type quark lagrangian can be obtained from the generic lagrangian (1) by taking \(H=H_{d}\) and making the following replacements of the fermion fields,
\[\psi_{iR}=d_{iR},\qquad\psi_{iL}=Q_{iL},\qquad\psi_{4R}=d_{4R},\qquad\psi_{4L} =Q_{4L},\qquad\widetilde{\psi}_{4R}=\widetilde{Q}_{4R},\quad\widetilde{\psi} _{4L}=\widetilde{d}_{4L}\,. \tag{12}\]
The corresponding down-type quark mass matrix with the SARAH sign convention reads
\[\mathcal{M}_{d}=\left(\begin{array}{c|cccccc}&d_{1R}&d_{2R}&d_{3R}&d_{4R}& \widetilde{d}_{4R}\\ \hline d_{1L}&0&0&0&y_{14}^{d}\frac{v_{d}}{\sqrt{2}}&0\\ d_{2L}&0&0&0&y_{24}^{d}\frac{v_{d}}{\sqrt{2}}&0\\ d_{3L}&0&0&0&y_{34}^{d}\frac{v_{d}}{\sqrt{2}}&-x_{34}^{Q}\frac{v_{\phi}}{\sqrt {2}}\\ d_{4L}&0&0&y_{43}^{d}\frac{v_{d}}{\sqrt{2}}&0&M_{4}^{Q}\\ \widetilde{d}_{4L}&0&x_{42}^{d}\frac{v_{\phi}}{\sqrt{2}}&x_{43}^{d}\frac{v_{ \phi}}{\sqrt{2}}&M_{4}^{d}&0\\ \end{array}\right)\,. \tag{13}\]
Note that, unlike in the case of the up-type quarks and charged leptons, it is impossible to rotate away the \((1,4)\) element of the matrix \(\mathcal{M}_{d}\). The reason is that the mixing between the SM doublets \(Q_{1L}\) and \(Q_{2L}\) has already been used in the up-quark sector to rotate away the corresponding entry of \(\mathcal{M}_{u}\)[12]. As a result, the Yukawa coupling \(y_{14}^{d}\) is present in \(\mathcal{M}_{d}\). The down-type quark mass matrix can be diagonalized by the unitary matrices \(V_{L}^{d}\) and \(V_{R}^{d}\),
\[V_{L}^{d}\mathcal{M}_{d}V_{R}^{d\dagger}=\mathrm{diag}\left(0,m_{s},m_{b},M_{ D_{1}},M_{D_{2}}\right). \tag{14}\]
The approximate formulae for the eigenvalues \(m_{s}\) and \(m_{b}\) can be found in Eq. (5), and for the eigenvalues \(M_{D_{1}}\) and \(M_{D_{2}}\) in Eq. (10).
Incidentally, the presence of the matrix element \(y_{14}^{d}v_{d}\) has important consequences for the phenomenology of the model defined in Table 1. As it was discussed in Sec. II, the first generation of the SM fermions remains massless if only one complete VL family is added to the spectrum. On the other hand, the mixing of the \(d\) quark with the strange and bottom quarks is mediated by \(y_{14}^{d}v_{d}\). As a result, the full CKM matrix can be generated in this setup and one needs to include its elements in the global fit.
### Neutrino sector
Finally, we discuss the neutrino mass matrix which emerges from the particle content given in Table 1. The corresponding lagrangian can be deduced from Eq. (1) after the following identification:
\[\psi_{iL}=L_{iL},\qquad\psi_{4R}=\nu_{4R},\qquad\psi_{4L}=L_{4L},\qquad\widetilde {\psi}_{4R}=\widetilde{L}_{4R},\qquad\widetilde{\psi}_{4L}=\widetilde{\nu}_{4L },\qquad H=H_{u}\,, \tag{111}\]
where the SU(2)\({}_{L}\) doublets \(L_{iL}\), \(L_{4L}\) and \(\widetilde{L}_{4R}\) are defined in Sec. A.1. Note that since there is no \(\nu_{iR}\) field in our model, the couplings \(y^{\nu}_{4j}\) and \(x^{\nu}_{4j}\) vanish. On the other hand, the VL neutrino \(\nu_{4R}\) is a singlet under the SM gauge symmetry, so an extra term with \(H^{*}_{d}\) replacing \(H_{u}\) arises. In the end, the neutrino lagrangian reads:
\[\mathcal{L}^{\rm Yukawa}_{\rm ren,\nu}=y^{\nu}_{i4}L_{iL}H_{u}\nu_{4R}+x^{L}_{i4}L_{iL}\phi\widetilde{L}_{4R}+x^{\nu}_{i4}L_{iL}H^{*}_{d}\widetilde{\nu}_{4L}+M^{L}_{4}L_{4L}\widetilde{L}_{4R}+M^{\nu}_{4}\widetilde{\nu}_{4L}\nu_{4R}+\text{h.c.}. \tag{112}\]
Eq. (112) defines a mixed Majorana-Dirac neutrino sector, which after the EWSB gives rise to a \(7\times 7\) Majorana neutrino mass matrix
\[\mathcal{M}_{\nu}=\left(\begin{array}{c|ccccccc}&\nu_{1L}&\nu_{2L}&\nu_{3L}&\nu_{4L}&\nu_{4R}&\widetilde{\nu}_{4L}&\widetilde{\nu}_{4R}\\ \hline\nu_{1L}&0&0&0&0&-y^{\nu}_{14}\frac{v_{u}}{\sqrt{2}}&x^{\nu}_{14}\frac{v_{d}}{\sqrt{2}}&0\\ \nu_{2L}&0&0&0&0&-y^{\nu}_{24}\frac{v_{u}}{\sqrt{2}}&x^{\nu}_{24}\frac{v_{d}}{\sqrt{2}}&0\\ \nu_{3L}&0&0&0&0&-y^{\nu}_{34}\frac{v_{u}}{\sqrt{2}}&x^{\nu}_{34}\frac{v_{d}}{\sqrt{2}}&x^{L}_{34}\frac{v_{\phi}}{\sqrt{2}}\\ \nu_{4L}&0&0&0&0&0&0&M^{L}_{4}\\ \nu_{4R}&-y^{\nu}_{14}\frac{v_{u}}{\sqrt{2}}&-y^{\nu}_{24}\frac{v_{u}}{\sqrt{2}}&-y^{\nu}_{34}\frac{v_{u}}{\sqrt{2}}&0&0&M^{\nu}_{4}&0\\ \widetilde{\nu}_{4L}&x^{\nu}_{14}\frac{v_{d}}{\sqrt{2}}&x^{\nu}_{24}\frac{v_{d}}{\sqrt{2}}&x^{\nu}_{34}\frac{v_{d}}{\sqrt{2}}&0&M^{\nu}_{4}&0&0\\ \widetilde{\nu}_{4R}&0&0&x^{L}_{34}\frac{v_{\phi}}{\sqrt{2}}&M^{L}_{4}&0&0&0\\ \end{array}\right), \tag{113}\]
where once again we chose to work with the SARAH sign convention.
Since the neutrino mass matrix is symmetric, it can be diagonalized via an orthogonal mixing matrix \(V^{\nu}\),
\[V^{\nu}\mathcal{M}_{\nu}V^{\nu\dagger}=\text{diag}\left(0,m_{\nu_{2}},m_{\nu_ {3}},M_{N_{1}},M_{N_{2}},M_{N_{3}},M_{N_{4}}\right). \tag{114}\]
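For a real symmetric \(\mathcal{M}_{\nu}\), the diagonalization above can be implemented numerically with a standard eigendecomposition, taking the absolute values of the (possibly negative) eigenvalues as the physical masses after a field rephasing. A minimal sketch with purely illustrative placeholder entries (not the benchmark values):

```python
import numpy as np

# Sketch: diagonalization of a real symmetric Majorana mass matrix.
# Negative eigenvalues correspond to physical masses |m| after a field rephasing.
def majorana_masses(M_nu):
    M_nu = np.asarray(M_nu, dtype=float)
    assert np.allclose(M_nu, M_nu.T), "Majorana mass matrix must be symmetric"
    eigvals, V = np.linalg.eigh(M_nu)     # V^T M_nu V = diag(eigvals)
    return np.sort(np.abs(eigvals)), V

# Placeholder 7x7 example with the texture of the matrix above (values illustrative only, GeV)
M = np.zeros((7, 7))
M[0, 4] = M[4, 0] = -1.0     # -y14^nu v_u / sqrt(2)
M[2, 6] = M[6, 2] = 50.0     #  x34^L  v_phi / sqrt(2)
M[3, 6] = M[6, 3] = 1200.0   #  M4^L
M[4, 5] = M[5, 4] = 900.0    #  M4^nu
masses, V = majorana_masses(M)
print(masses)
```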
## Appendix B Scalar mass matrices
In this Appendix we collect the explicit formulae for the scalar mass matrices derived from the scalar potential (19) under the spontaneous symmetry breaking conditions (3).
The CP-even scalar mass matrix in the basis \((\text{Re}\,H^{0}_{u},\,\text{Re}\,H^{0}_{d},\,\text{Re}\,\phi)\) evaluated at the vacuum reads
\[\mathbf{M}^{2}_{\rm CP-even}=\left(\begin{array}{ccc}\lambda_{1}v^{2}_{u}-\lambda_{5}\frac{v_{d}v^{2}_{\phi}}{4v_{u}}&\lambda_{3}v_{u}v_{d}+\lambda_{5}\frac{v^{2}_{\phi}}{4}&\lambda_{7}v_{u}v_{\phi}+\lambda_{5}\frac{v_{d}v_{\phi}}{2}\\ \lambda_{3}v_{u}v_{d}+\lambda_{5}\frac{v^{2}_{\phi}}{4}&\lambda_{2}v^{2}_{d}-\lambda_{5}\frac{v_{u}v^{2}_{\phi}}{4v_{d}}&\lambda_{8}v_{d}v_{\phi}+\lambda_{5}\frac{v_{u}v_{\phi}}{2}\\ \lambda_{7}v_{u}v_{\phi}+\lambda_{5}\frac{v_{d}v_{\phi}}{2}&\lambda_{8}v_{d}v_{\phi}+\lambda_{5}\frac{v_{u}v_{\phi}}{2}&\lambda_{6}v^{2}_{\phi}\\ \end{array}\right). \tag{115}\]
The matrix (115) can be diagonalized by an orthogonal matrix \(R_{h}\) parameterized with three mixing angles. We will denote them as \(\alpha_{12}\) for the \((H_{u},H_{d})\) mixing, \(\alpha_{13}\) for the \((H_{u},\phi)\) mixing, and \(\alpha_{23}\) for the \((H_{d},\phi)\) mixing. In this parametrization, the mixing matrix \(R_{h}\) is given by
\[R_{h}=\left(\begin{array}{ccc}c_{12}c_{13}&s_{12}c_{13}&s_{13}\\ -s_{12}c_{23}-c_{12}s_{13}s_{23}&c_{12}c_{23}-s_{12}s_{13}s_{23}&c_{13}s_{23} \\ s_{12}s_{23}-c_{12}s_{13}c_{23}&-c_{12}s_{23}-s_{12}s_{13}c_{23}&c_{13}c_{23} \\ \end{array}\right), \tag{116}\]
with the standard notation \(s_{ij}=\sin\alpha_{ij}\) and \(c_{ij}=\cos\alpha_{ij}\).
The elements of the matrix \(R_{h}\) determine the couplings of the physical Higgs bosons with the SM particles. It is convenient to define a reduced coupling as the ratio between the coupling of the physical Higgs scalar \(h_{i}\) and the corresponding coupling of the SM Higgs,
\[c_{h_{i}XX}=\frac{g_{h_{i}XX}}{g_{h_{\rm SM}XX}}\,, \tag{111}\]
where \(X\) stands for the SM fermions and gauge bosons. For the model defined in Table 1, the reduced couplings to quarks and charged leptons are given by
\[c_{h_{i}tt}=\frac{(R_{h})_{i1}}{\sin\beta},\qquad c_{h_{i}bb}=\frac{(R_{h})_{i2}}{\cos\beta},\qquad c_{h_{i}\tau\tau}=\frac{(R_{h})_{i2}}{\cos\beta}\,, \tag{112}\]
while the reduced couplings to the EW gauge bosons read
\[c_{h_{i}ZZ}=c_{h_{i}WW}=(R_{h})_{i1}\sin\beta+(R_{h})_{i2}\cos \beta\,. \tag{113}\]
In this study we choose to work in the alignment limit, which is defined as a set of constraints on the quartic couplings \(\lambda_{i}\) under which the lightest CP-even scalar \(h_{1}\) has the same tree-level couplings with the SM particles as the SM Higgs. This means that the reduced couplings to fermions should be very close to 1,
\[\frac{\cos\alpha_{12}\cos\alpha_{13}}{\sin\beta}\approx 1,\qquad\qquad\frac{\sin\alpha_{12}\cos\alpha_{13}}{\cos\beta}\approx 1\,. \tag{114}\]
It can be easily verified that Eq. (114) leads to the following conditions on the CP-even scalar mixing angles,
\[\alpha_{12}+\beta=\frac{\pi}{2}+n\,\pi,\qquad\alpha_{13}=2\,n\pi \,,\quad\text{with }n=0,1,2\ldots \tag{115}\]
indicating no mixing between the doublet \(H_{u}\) and the singlet \(\phi\). In this setting, the two SU(2)\({}_{L}\) scalar doublets mix with the mixing angle \(\frac{\pi}{2}-\beta\), while the doublet \(H_{d}\) mixes with the singlet \(\phi\) with the mixing angle \(\alpha_{23}\). The CP-even scalars mixing matrix thus reduces to
\[R_{h}^{\rm alignment}=\left(\begin{array}{ccc}s_{\beta}&c_{ \beta}&0\\ -c_{\beta}c_{23}&s_{\beta}c_{23}&s_{23}\\ c_{\beta}s_{23}&-s_{\beta}s_{23}&c_{23}\end{array}\right)=\left(\begin{array} []{ccc}1&0&0\\ 0&c_{23}&s_{23}\\ 0&-s_{23}&c_{23}\end{array}\right)\times\left(\begin{array}{ccc}s_{\beta}& c_{\beta}&0\\ -c_{\beta}&s_{\beta}&0\\ 0&0&1\end{array}\right)\,. \tag{116}\]
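One can verify numerically that the alignment-limit mixing matrix above indeed gives SM-like reduced couplings for the lightest state: inserting it into the reduced-coupling definitions above yields unity for any \(\beta\) and \(\alpha_{23}\). A minimal sketch:

```python
import numpy as np

# Sketch: check that the alignment-limit R_h gives SM-like reduced couplings for h1.
def reduced_couplings_h1(beta, a23):
    sb, cb = np.sin(beta), np.cos(beta)
    s23, c23 = np.sin(a23), np.cos(a23)
    R = np.array([[ sb,        cb,       0.0],
                  [-cb * c23,  sb * c23, s23],
                  [ cb * s23, -sb * s23, c23]])
    c_tt = R[0, 0] / sb                       # reduced coupling to up-type fermions
    c_bb = R[0, 1] / cb                       # reduced coupling to down-type fermions and leptons
    c_VV = R[0, 0] * sb + R[0, 1] * cb        # reduced coupling to W/Z
    return c_tt, c_bb, c_VV

print(reduced_couplings_h1(beta=1.0, a23=0.3))   # approximately (1, 1, 1) for any beta, alpha_23
```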
The alignment conditions (114) translate into non-trivial relations between the scalar potential couplings,
\[\lambda_{8}\,c_{\beta}^{2}+\lambda_{7}\,s_{\beta}^{2}+\lambda_{5 }\,s_{\beta}c_{\beta} = 0 \tag{117}\] \[\lambda_{2}\,c_{\beta}^{2}-\lambda_{1}\,s_{\beta}^{2}-\lambda_{3 }(c_{\beta}^{2}-s_{\beta}^{2}) = 0\,. \tag{118}\]
One can also express the mixing angle \(\alpha_{23}\) in terms of the parameters of the scalar potential,
\[\cos(2\alpha_{23})=-\frac{B_{23}}{\sqrt{4A_{23}^{2}+B_{23}^{2}}}\,,\qquad\sin(2\alpha_{23})=-\frac{2A_{23}}{\sqrt{4A_{23}^{2}+B_{23}^{2}}}\,, \tag{119}\]
where we define
\[A_{23} = 2\,v\,v_{\phi}s_{\beta}(c_{\beta}\lambda_{5}+2\lambda_{7}s_{ \beta}) \tag{120}\] \[B_{23} = \lambda_{5}v_{\phi}^{2}+4s_{\beta}c_{\beta}\left(\lambda_{6}v_{ \phi}^{2}-(\lambda_{1}-\lambda_{3})v^{2}s_{\beta}^{2}\right)\,. \tag{121}\]
The CP-odd mass matrix in the basis \(\left(\mathrm{Im}\,H_{u}^{0},\mathrm{Im}\,H_{d}^{0},\mathrm{Im}\,\phi\right)\) reads
\[\mathbf{M}_{\mathrm{CP-odd}}^{2}=-\lambda_{5}\left(\begin{array}{ccc}\frac{v_{d}v_{\phi}^{2}}{4v_{u}}&\frac{v_{\phi}^{2}}{4}&\frac{v_{d}v_{\phi}}{2}\\ \frac{v_{\phi}^{2}}{4}&\frac{v_{u}v_{\phi}^{2}}{4v_{d}}&\frac{v_{u}v_{\phi}}{2}\\ \frac{v_{d}v_{\phi}}{2}&\frac{v_{u}v_{\phi}}{2}&v_{u}v_{d}-2\frac{\mu_{\mathrm{th}}^{2}}{\lambda_{5}}\end{array}\right)\,. \tag{110}\]
Finally, the charged scalar mass matrix is given by
\[\mathbf{M}_{\mathrm{Charged}}^{2}=\left(\begin{array}{cc}\lambda_{4}\frac{v_{d}^{2}}{2}-\lambda_{5}\frac{v_{d}v_{\phi}^{2}}{4v_{u}}&\lambda_{4}\frac{v_{u}v_{d}}{2}-\lambda_{5}\frac{v_{\phi}^{2}}{4}\\ \lambda_{4}\frac{v_{u}v_{d}}{2}-\lambda_{5}\frac{v_{\phi}^{2}}{4}&\lambda_{4}\frac{v_{u}^{2}}{2}-\lambda_{5}\frac{v_{u}v_{\phi}^{2}}{4v_{d}}\end{array}\right). \tag{111}\]
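The mass matrices above are easily evaluated and diagonalized numerically. The sketch below builds the CP-even matrix (115) for an arbitrary, purely illustrative choice of quartic couplings and vevs (not a benchmark point of the paper) and returns its eigenvalues; the CP-odd and charged matrices can be treated in exactly the same way.

```python
import numpy as np

# Sketch: CP-even scalar mass matrix, Eq. (115), for illustrative couplings and vevs.
l1, l2, l3, l5, l6, l7, l8 = 0.3, 0.4, 0.1, -0.2, 0.5, 0.05, 0.05
vu, vd, vphi = 240.0, 40.0, 1000.0   # placeholder vevs in GeV

M2 = np.array([
    [l1*vu**2 - l5*vd*vphi**2/(4*vu), l3*vu*vd + l5*vphi**2/4,         l7*vu*vphi + l5*vd*vphi/2],
    [l3*vu*vd + l5*vphi**2/4,         l2*vd**2 - l5*vu*vphi**2/(4*vd), l8*vd*vphi + l5*vu*vphi/2],
    [l7*vu*vphi + l5*vd*vphi/2,       l8*vd*vphi + l5*vu*vphi/2,       l6*vphi**2],
])

m2, R = np.linalg.eigh(M2)     # eigenvalues = squared masses; columns of R are mass eigenstates
print(np.sqrt(np.abs(m2)))     # |m_{h_i}| in GeV for this illustrative point
# Note: the rotation R_h used in the text corresponds to the transpose of R returned here.
```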
## Appendix C Derivation of the bounded-from-below conditions
In this Appendix, we derive the scalar potential bounded-from-below conditions shown in Eq. (36). In doing so, we follow the approach of Ref. [93] and we extend it to the three-field case.
In order to determine the shape of the scalar potential (19) in the limit of the large fields, it is enough to investigate the behavior of the quartic terms,
\[\begin{split} V_{4}&=\frac{1}{2}\lambda_{1}(H_{u}^{\dagger }H_{u})^{2}+\frac{1}{2}\lambda_{2}(H_{d}^{\dagger}H_{d})^{2}+\lambda_{3}(H_{u }^{\dagger}H_{u})(H_{d}^{\dagger}H_{d})+\lambda_{4}(H_{u}^{\dagger}H_{d})(H_{ d}^{\dagger}H_{u})\\ &-\frac{1}{2}\lambda_{5}(\epsilon_{ij}H_{u}^{i}H_{d}^{j}\phi^{2}+ \mathrm{H.c.})+\frac{1}{2}\lambda_{6}(\phi^{*}\phi)^{2}+\lambda_{7}(\phi^{*} \phi)(H_{u}^{\dagger}H_{u})+\lambda_{8}(\phi^{*}\phi)(H_{d}^{\dagger}H_{d}). \end{split} \tag{112}\]
It is convenient to parameterize each quartic term in the following way,
\[\begin{split} a&=H_{u}^{\dagger}H_{u}\\ b&=H_{d}^{\dagger}H_{d}\\ c&=\phi^{*}\phi\\ d&=\mathrm{Re}\,H_{u}^{\dagger}H_{d}\\ e&=\mathrm{Im}\,H_{u}^{\dagger}H_{d}\\ f&=\mathrm{Re}\,\epsilon_{ij}H_{u}^{i}H_{d}^{j}\phi^{2}\\ g&=\mathrm{Im}\,\epsilon_{ij}H_{u}^{i}H_{d}^{j}\phi^{2}\,. \end{split} \tag{113}\]
To make our results more general, we allow \(\lambda_{5}\) to be complex. Note that \(a,b,c\geq 0\) by definition, and
\[\begin{split} a\,b&\geq d^{2}+e^{2}\\ a\,b\,c^{2}&\geq f^{2}+g^{2}\geq 2fg.\end{split} \tag{114}\]
In terms of the new parameters, the scalar potential (112) can be rewritten as
\[\begin{split} V_{4}&=\frac{1}{4}\left(\sqrt{\lambda_ {1}}a-\sqrt{\lambda_{2}}b\right)^{2}+\left(\frac{1}{2}\sqrt{\lambda_{1} \lambda_{2}}+\lambda_{3}\right)a\,b\\ &+\frac{1}{4}\left(\sqrt{\lambda_{1}}a-\sqrt{\lambda_{6}}c\right) ^{2}+\left(\frac{1}{2}\sqrt{\lambda_{1}\lambda_{6}}+\lambda_{7}\right)a\,c\\ &+\frac{1}{4}\left(\sqrt{\lambda_{2}}b-\sqrt{\lambda_{6}}c\right) ^{2}+\left(\frac{1}{2}\sqrt{\lambda_{2}\lambda_{6}}+\lambda_{8}\right)b\,c\\ &+\lambda_{4}\left(d^{2}+e^{2}\right)-\left(\mathrm{Re}\,\lambda _{5}f-\mathrm{Im}\,\lambda_{5}g\right)\,.\end{split} \tag{115}\]
We are now ready to analyze the asymptotic behaviour of the potential (C4) in different field directions.
\(\mathbf{a=0.}\)
The parameters \(d,e,f,g\) automatically vanish, see Eq. (C3), and the global potential reduces to
\[V_{4}\left(a=d=e=f=g=0\right)=\frac{1}{2}\left(\sqrt{\lambda_{2}}b-\sqrt{ \lambda_{6}}c\right)^{2}+\left(\lambda_{8}+\sqrt{\lambda_{2}\lambda_{6}}\right) bc\,,\] (C5)
giving rise to the condition
\[\lambda_{8}+\sqrt{\lambda_{2}\lambda_{6}}>0.\] (C6)
\(\mathbf{b=0.}\)
In analogy to the previous case, one obtains
\[V_{4}\left(b=d=e=f=g=0\right)=\frac{1}{2}\left(\sqrt{\lambda_{1}}a-\sqrt{ \lambda_{6}}c\right)^{2}+\left(\lambda_{7}+\sqrt{\lambda_{1}\lambda_{6}} \right)ac,\] (C7)
which gives
\[\lambda_{7}+\sqrt{\lambda_{1}\lambda_{6}}>0.\] (C8)
\(\mathbf{c=0.}\)
This time, only the parameters \(f\) and \(g\) vanish and the reduced scalar potential reads
\[V_{4}\left(c=f=g=0\right)=\frac{1}{2}\left(\sqrt{\lambda_{1}}a-\sqrt{\lambda_{ 2}}b\right)^{2}+\left(\lambda_{3}+\sqrt{\lambda_{1}\lambda_{2}}\right)ab+ \lambda_{4}\left(d^{2}+e^{2}\right)\,.\] (C9)
In order to determine the fate of the scalar potential \(V_{4}\) at the large field values, we need to analyze additional directions in the field space. We first choose a direction along which \(a=\sqrt{\frac{\lambda_{2}}{\lambda_{1}}}b\) and \(d=e=0\). Inserting these expressions into Eq. (C9), we arrive to the following condition
\[\lambda_{3}+\sqrt{\lambda_{1}\lambda_{2}}>0.\] (C10)
Choosing another direction, \(a=\sqrt{\frac{\lambda_{2}}{\lambda_{1}}}b\) and \(ab=d^{2}+e^{2}\), we obtain
\[\lambda_{3}+\lambda_{4}+\sqrt{\lambda_{1}\lambda_{2}}>0.\] (C11)
\(\mathbf{a=\sqrt{\frac{\lambda_{6}}{\lambda_{1}}}c,\,b=\sqrt{\frac{\lambda_{6 }}{\lambda_{2}}}c.}\)
Under this assumption the scalar potential (C4) reduces to
\[V_{4}=\lambda_{a}c^{2}+\lambda_{4}\left(d^{2}+e^{2}\right)-\left(\text{Re}\, \lambda_{5}f-\text{Im}\,\lambda_{5}g\right),\] (C12)
where one defines
\[\lambda_{a}=\frac{3}{2}\lambda_{6}+\lambda_{3}\frac{\lambda_{6}}{\sqrt{\lambda _{1}\lambda_{2}}}+\lambda_{7}\frac{\lambda_{6}}{\lambda_{1}}+\lambda_{8}\frac {\lambda_{6}}{\lambda_{2}}.\] (C13)
From Eq. (C3) one has
\[c^{2}\geq\frac{f^{2}+g^{2}}{d^{2}+e^{2}}\,,\] (C14)
leading to
\[V_{4}\geq\lambda_{a}\frac{f^{2}+g^{2}}{d^{2}+e^{2}}+\lambda_{4}\left(d^{2}+e^{2}\right)-\left(\operatorname{Re}\lambda_{5}\,f-\operatorname{Im}\lambda_{5}\,g\right). \tag{101}\]
The r.h.s. of Eq. (101) can now be rewritten as
\[\operatorname{R.H.S}=c_{1}\left(f-\frac{\operatorname{Re}\lambda_{5}}{2c_{1}}\right)^{2}+c_{1}\left(g+\frac{\operatorname{Im}\lambda_{5}}{2c_{1}}\right)^{2}-\frac{1}{4c_{1}}\left((\operatorname{Re}\lambda_{5})^{2}+(\operatorname{Im}\lambda_{5})^{2}\right)+\lambda_{4}\left(d^{2}+e^{2}\right)\,, \tag{102}\]
where
\[c_{1}=\frac{\lambda_{a}}{d^{2}+e^{2}}. \tag{103}\]
Choosing an additional direction in the field space, \(f=\frac{\operatorname{Re}\lambda_{5}}{2c_{1}}\) and \(g=-\frac{\operatorname{Im}\lambda_{5}}{2c_{1}}\), we can derive the following condition,
\[-\frac{1}{4}\frac{(\operatorname{Re}\lambda_{5})^{2}+(\operatorname{Im} \lambda_{5})^{2}}{\lambda_{a}}+\lambda_{4}>0\,. \tag{104}\]
Finally, let us rewrite the r.h.s. of Eq. (101) in yet another way,
\[\operatorname{R.H.S}=\left(\frac{\sqrt{c_{2}}}{\sqrt{d^{2}+e^{2}}}-\sqrt{ \lambda_{4}\left(d^{2}+e^{2}\right)}\right)^{2}+2\sqrt{c_{2}\lambda_{4}}-( \operatorname{Re}\lambda_{5}f-\operatorname{Im}\lambda_{5}g)\, \tag{105}\]
where
\[c_{2}=\lambda_{a}\left(f^{2}+g^{2}\right),\quad\lambda_{b}=\sqrt{\lambda_{a} \lambda_{4}}\,. \tag{106}\]
Analyzing the quartic potential along the direction \(\sqrt{c_{2}}=\sqrt{\lambda_{4}}\left(d^{2}+e^{2}\right)\), we obtain
\[V_{4}\geq\left(4\lambda_{b}^{2}-(\operatorname{Re}\lambda_{5})^{2}+ \operatorname{Re}\lambda_{5}\operatorname{Im}\lambda_{5}\right)f^{2}+\left(4 \lambda_{b}^{2}-(\operatorname{Im}\lambda_{5})^{2}+\operatorname{Re}\lambda_ {5}\operatorname{Im}\lambda_{5}\right)g^{2}\,, \tag{107}\]
leading straightforwardly to the last two conditions,
\[\begin{split} 4\lambda_{b}^{2}-(\operatorname{Re}\lambda_{5})^{2}+ \operatorname{Re}\lambda_{5}\operatorname{Im}\lambda_{5}>0\\ 4\lambda_{b}^{2}-(\operatorname{Im}\lambda_{5})^{2}+ \operatorname{Re}\lambda_{5}\operatorname{Im}\lambda_{5}>0.\end{split} \tag{108}\]
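The conditions derived in this Appendix can be collected into a simple numerical filter. The sketch below checks Eqs. (C6), (C8), (C10), (C11) and the \(\lambda_{4}\) condition (104) for real \(\lambda_{5}\); the positivity of \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{6}\) (single-field directions) and of \(\lambda_{a}\) is assumed as a prerequisite. The function and variable names are ours.

```python
import math

# Sketch: bounded-from-below filter for real lambda_5, collecting the conditions above.
# lam is a dict of the quartic couplings of the scalar potential.
def bounded_from_below(lam):
    l1, l2, l3, l4 = lam["l1"], lam["l2"], lam["l3"], lam["l4"]
    l5, l6, l7, l8 = lam["l5"], lam["l6"], lam["l7"], lam["l8"]
    if min(l1, l2, l6) <= 0:                 # positivity along single-field directions
        return False
    lam_a = 1.5*l6 + l3*l6/math.sqrt(l1*l2) + l7*l6/l1 + l8*l6/l2   # Eq. (C13)
    conds = [
        lam_a > 0,
        l8 + math.sqrt(l2*l6) > 0,           # Eq. (C6)
        l7 + math.sqrt(l1*l6) > 0,           # Eq. (C8)
        l3 + math.sqrt(l1*l2) > 0,           # Eq. (C10)
        l3 + l4 + math.sqrt(l1*l2) > 0,      # Eq. (C11)
        l4 - 0.25*l5**2/lam_a > 0,           # Eq. (104), real lambda_5
    ]
    return all(conds)

print(bounded_from_below({"l1": 0.3, "l2": 0.4, "l3": 0.1, "l4": 0.2,
                          "l5": -0.2, "l6": 0.5, "l7": 0.05, "l8": 0.05}))
```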
## Appendix D Renormalization group equations
In this Appendix, we collect the one-loop RGEs of our model computed with SARAH[61; 62]. We denote
\[\beta(X)\equiv\mu\frac{dX}{d\mu}\equiv\frac{1}{16\pi^{2}}\beta^{(1)}(X)\,. \tag{109}\]
\[\beta^{(1)}(g_{1}) =\frac{103g_{1}^{3}}{15} \tag{110}\] \[\beta^{(1)}(g_{2}) =-\frac{g_{2}^{3}}{3}\] (111) \[\beta^{(1)}(g_{3}) =-\frac{13g_{3}^{3}}{3} \tag{112}\]
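As an illustration of how the RG-based perturbativity bounds are imposed, the one-loop gauge beta functions above can be integrated numerically. The sketch below evolves \(g_{1}\), \(g_{2}\), \(g_{3}\) in \(t=\ln(\mu/\mu_{0})\) with scipy; the initial values are rough, illustrative electroweak-scale inputs, not the ones used in the scan.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: one-loop running of the gauge couplings,
# dg_i/dt = beta^(1)(g_i) / (16 pi^2), with t = ln(mu/mu_0).
def beta_gauge(t, g):
    g1, g2, g3 = g
    pref = 1.0 / (16.0 * np.pi**2)
    return [pref * (103.0 / 15.0) * g1**3,
            pref * (-1.0 / 3.0) * g2**3,
            pref * (-13.0 / 3.0) * g3**3]

g0 = [0.46, 0.65, 1.2]                 # illustrative GUT-normalized couplings at mu_0
sol = solve_ivp(beta_gauge, t_span=(0.0, np.log(1e13)), y0=g0)
print(sol.y[:, -1])                    # couplings after ~13 decades of running
```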
\[\beta^{(1)}(\lambda_{1}) =-\frac{9}{5}g_{1}^{2}\lambda_{1}-9g_{2}^{2}\lambda_{1}+\frac{27g_{1} ^{4}}{100}+\frac{9g_{2}^{4}}{4}+\frac{9}{10}g_{1}^{2}g_{2}^{2}+12\lambda_{1}^{2} +4\lambda_{3}^{2}+2\lambda_{4}^{2}+2\lambda_{7}^{2}+4\lambda_{3}\lambda_{4}\] \[+12\lambda_{1}(y_{43}^{u})^{2}+12\lambda_{1}\left[(y_{24}^{u})^{2} +(y_{34}^{u})^{2}\right]-12(y_{43}^{u})^{4}-12\left[(y_{24}^{u})^{2}+(y_{34}^{u })^{2}\right]{}^{2}\] \[+4\lambda_{1}\left[(y_{14}^{\nu})^{2}+(y_{24}^{\nu})^{2}+(y_{34}^{ \nu})^{2}\right]-4\left[(y_{14}^{\nu})^{2}+(y_{24}^{\nu})^{2}+(y_{34}^{\nu})^{2 }\right]^{2}\] (121) \[\beta^{(1)}(\lambda_{2}) =-\frac{9}{5}g_{1}^{2}\lambda_{2}-9g_{2}^{2}\lambda_{2}+\frac{27g _{1}^{4}}{100}+\frac{9g_{2}^{4}}{4}+\frac{9}{10}g_{1}^{2}g_{2}^{2}+12\lambda_{2 }^{2}+4\lambda_{3}^{2}+2\lambda_{4}^{2}+2\lambda_{8}^{2}+4\lambda_{3}\lambda_{4}\] \[+12\lambda_{2}(y_{43}^{d})^{2}+12\lambda_{2}\left[(y_{14}^{d})^{2} +(y_{24}^{d})^{2}+(y_{34}^{d})^{2}\right]-12(y_{43}^{d})^{4}-12\left[(y_{14}^{ d})^{2}+(y_{24}^{d})^{2}+(y_{34}^{d})^{2}\right]{}^{2}\] \[+4\lambda_{2}(y_{43}^{e})^{2}+4\lambda_{2}\left[(y_{24}^{e})^{2}+ (y_{34}^{e})^{2}\right]-4(y_{43}^{e})^{4}-4\left[(y_{24}^{e})^{2}+(y_{34}^{e}) ^{2}\right]{}^{2}\] \[+4\lambda_{2}\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^ {\nu})^{2}\right]-4\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^{\nu})^ {2}\right]{}^{2}\] (1222) \[\beta^{(1)}(\lambda_{3}) =-\frac{9}{5}g_{1}^{2}\lambda_{3}-9g_{2}^{2}\lambda_{3}+\frac{27g _{1}^{4}}{100}+\frac{9g_{4}^{2}}{4}+\frac{9}{10}g_{1}^{2}g_{2}^{2}+4\lambda_{3 }^{2}+2\lambda_{4}^{2}+\lambda_{5}^{2}+6\lambda_{1}\lambda_{3}+6\lambda_{2} \lambda_{3}+2\lambda_{1}\lambda_{4}\] \[+2\lambda_{3}(y_{43}^{e})^{2}+6\lambda_{3}\left[(y_{24}^{u})^{2}+ (y_{34}^{d})^{2}\right]+2\lambda_{3}\left[(y_{24}^{e})^{2}+(y_{34}^{e})^{2}\right]\] \[-4\left[x_{14}^{\nu}y_{14}^{\nu}+x_{24}^{\nu}y_{24}^{\nu}+x_{34}^ {\nu}y_{34}^{\nu}\right]{}^{2}+2\lambda_{3}\left[(y_{14}^{\nu})^{2}+(y_{24}^{ \nu})^{2}+(y_{34}^{\nu})^{2}\right]\] \[\beta^{(1)}(\lambda_{4}) =-\frac{9}{5}g_{1}^{2}\lambda_{4}-9g_{2}^{2}\lambda_{4}-\frac{9}{ 5}g_{1}^{2}g_{2}^{2}+4\lambda_{4}^{2}-\lambda_{5}^{2}+2\lambda_{1}\lambda_{4} +2\lambda_{2}\lambda_{4}+8\lambda_{3}\lambda_{4}+6\lambda_{4}(y_{43}^{u})^{2}+6 \lambda_{4}(y_{43}^{d})^{2}\] \[+6\lambda_{4}\left[(y_{24}^{u})^{2}+(y_{34}^{u})^{2}\right]-12(y_ {43}^{d})^{2}(y_{43}^{u})^{2}-12\left[y_{24}^{d}y_{24}^{u}+y_{34}^{d}y_{34}^{ u}\right]{}^{2}+6\lambda_{4}\left[(y_{14}^{d})^{2}+(y_{24}^{d})^{2}+(y_{34}^{d})^{2}\right]\] \[+2\lambda_{4}(y_{43}^{e})^{2}+2\lambda_{4}\left[(y_{24}^{e})^{2}+ (y_{34}^{e})^{2}\right]-4\left[y_{24}^{e}y_{24}^{\nu}+y_{34}^{e}y_{34}^{\nu} \right]{}^{2}+2\lambda_{4}\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^{ \nu})^{2}\right]\] \[+4\left[x_{14}^{\nu}y_{14}^{\nu}+x_{24}^{\nu}y_{24}^{\nu}+x_{34}^ {\nu}y_{34}^{\nu}\right]{}^{2}+2\lambda_{4}\left[(y_{14}^{\nu})^{2}+(y_{24}^{ \nu})^{2}+(y_{34}^{\nu})^{2}\right]\] (1223) \[\beta^{(1)}(\lambda_{5}) =\lambda_{5}\Big{\{}-\frac{9}{20}g_{1}^{2}-\frac{9}{4}g_{2}^{2}+ \lambda_{3}-\lambda_{4}+\lambda_{6}+2\lambda_{7}+2\lambda_{8}+6(x_{34}^{Q})^{2} +3\left[(x_{42}^{u})^{2}+(x_{43}^{u})^{2}\right]\] \[+\frac{3}{2}(y_{43}^{u})^{2}+\frac{3}{2}\left[(y_{24}^{u})^{2}+(y _{34}^{u})^{2}\right]+3\left[(x_{42}^{d})^{2}+(x_{43}^{d})^{2}\right]+\frac{3}{2 }(y_{43}^{d})^{2}+\frac{3}{2}\left[(y_{14}^{d})^{2}+(y_{24}^{d})^{2}+(y_{34}^ {d})^{2}\right]\] \[+2(x_{34}^{L})^{2}+\left[(x_{42}^{e})^{2}+(x_{43}^{e})^{2}\right]+ 
\frac{1}{2}(y_{43}^{e})^{2}+\frac{1}{2}\left[(y_{24}^{e})^{2}+(y_{34}^{e})^{2} \right]+\frac{1}{2}\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^{\nu})^{2}\right]\] \[\beta^{(1)}(\lambda_{6}) =2\lambda_{5}^{2}+10\lambda_{6}^{2}+4\lambda_{7}^{2}+4\lambda_{ 8}^{2}+24\lambda_{6}(x_{34}^{Q})^{2}-24(x_{34}^{Q})^{4}+12\lambda_{6}\left[(x_{4 2}^{u})^{2}+(x_{43}^{u})^{2}\right]\] \[-12\left[(x_{42}^{u})^{2}+(x_{43}^{u})^{2}\right]{}^{2}+12 \lambda_{6}\left[(x_{42}^{d})^{2}+(x_{43}^{d})^{2}\right]-12\left[(x_{42}^{d})^{ 2}+(x_{43}^{d})^{2}\right]{}^{2}\] \[+8\lambda_{6}(x_{34}^{L})^{2}-8(x_{34}^{L})^{4}+4\lambda_{6}\left[( x_{42}^{e})^{2}+(x_{43}^{e})^{2}\right]-4\left[(x_{42}^{e})^{2}+(x_{43}^{e})^{2} \right]{}^{2}\] (123) \[\beta^{(1)}(\lambda_{7}) =-\frac{9}{10}g_{1}^{2}\lambda_{7}-\frac{9}{2}g_{2}^{2}\lambda_{7 }+2\lambda_{5}^{2}+4\lambda_{7}^{2}+6\lambda_{1}\
\[+2\lambda_{8}\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^{ \nu})^{2}\right]\] (112) \[\beta^{(1)}(y_{43}^{d}) =-\frac{1}{4}g_{1}^{2}y_{43}^{d}-\frac{9}{4}g_{2}^{2}y_{43}^{d}-8 g_{3}^{2}y_{43}^{d}+y_{43}^{d}\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^{ \nu})^{2}\right]+\frac{1}{2}(x_{43}^{d})^{2}y_{43}^{d}+\frac{1}{2}y_{43}^{d}(y _{43}^{u})^{2}\] \[+3y_{43}^{d}\left[(y_{14}^{d})^{2}+(y_{24}^{d})^{2}+(y_{34}^{d})^{ 2}\right]+y_{43}^{d}\left[(y_{24}^{e})^{2}+(y_{34}^{e})^{2}\right]+y_{43}^{d}(y _{43}^{e})^{2}+\frac{9}{2}(y_{43}^{d})^{3}\] (113) \[\beta^{(1)}(y_{14}^{d}) =-\frac{1}{4}g_{1}^{2}y_{14}^{d}-\frac{9}{4}g_{2}^{2}y_{14}^{d}-8 g_{3}^{2}y_{14}^{d}+y_{14}^{d}\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^{ \nu})^{2}\right]\] \[+\frac{9}{2}y_{14}^{d}\left[(y_{14}^{d})^{2}+(y_{24}^{d})^{2}+(y_{ 34}^{d})^{2}\right]+y_{14}^{d}\left[(y_{24}^{e})^{2}+(y_{34}^{e})^{2}\right]+3 y_{14}^{d}(y_{43}^{d})^{2}+y_{14}^{d}(y_{43}^{e})^{2}\] (114) \[\beta^{(1)}(y_{24}^{d}) =-\frac{1}{4}g_{1}^{2}y_{24}^{d}-\frac{9}{4}g_{2}^{2}y_{24}^{d}-8 g_{3}^{2}y_{24}^{d}+y_{24}^{d}\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^{ \nu})^{2}\right]\] \[+\frac{9}{2}y_{24}^{d}\left[(y_{14}^{d})^{2}+(y_{24}^{d})^{2}+(y_{ 34}^{d})^{2}\right]+y_{24}^{d}\left[(y_{24}^{e})^{2}+(y_{34}^{e})^{2}\right]+ \frac{1}{2}y_{24}^{d}(y_{24}^{u})^{2}\] \[+3y_{24}^{d}(y_{43}^{d})^{2}+y_{24}^{d}(y_{43}^{e})^{2}+\frac{1}{ 2}y_{24}^{u}y_{34}^{d}y_{34}^{u}\] (115) \[\beta^{(1)}(y_{34}^{d}) =-\frac{1}{4}g_{1}^{2}y_{34}^{d}-\frac{9}{4}g_{2}^{2}y_{34}^{d}-8 g_{3}^{2}y_{34}^{d}+y_{34}^{d}\left[(x_{14}^{\nu})^{2}+(x_{24}^{\nu})^{2}+(x_{34}^{ \nu})^{2}\right]+\frac{1}{2}(x_{34}^{Q})^{2}y_{34}^{d}\] \[+\frac{9}{2}y_{34}^{d}\left[(y_{14}^{d})^{2}+(y_{24}^{d})^{2}+(y_ {34}^{d})^{2}\right]+\frac{1}{2}y_{24}^{d}y_{24}^{u}y_{34}^{u}+y_{34}^{d}\left[ (y_{24}^{e})^{2}+(y_{34}^{e})^{2}\right]\] \[+\frac{1}{2}y_{34}^{d}(y_{34}^{u})^{2}+3y_{34}^{d}(y_{43}^{d})^{2 }+y_{34}^{d}(y_{43}^{e})^{2}\] (116) \[\beta^{(1)}(y_{43}^{u}) =-\frac{17}{20}g_{1}^{2}y_{43}^{u}-\frac{9}{4}g_{2}^{2}y_{43}^{u}- 8g_{3}^{2}y_{43}^{u}+\frac{1}{2}(x_{43}^{u})^{2}y_{43}^{u}+y_{43}^{u}\left[(y_ {14}^{\nu})^{2}+(y_{24}^{\nu})^{2}+(y_{34}^{\nu})^{2}\right]\] \[+3y_{43}^{u}\left[(y_{24}^{u})^{2}+(y_{34}^{u})^{2}\right]+\frac{ 1}{2}(y_{43}^{d})^{2}y_{43}^{u}+\frac{9}{2}(y_{43}^{u})^{3}\] (117) \[\beta^{(1)}(y_{24}^{u}) =-\frac{17}{20}g_{1}^{2}y_{24}^{u}-\frac{9}{4}g_{2}^{2}y_{24}^{u}- 8g_{3}^{2}y_{24}^{u}+y_{24}^{u}\left[(y_{14}^{\nu})^{2}+(y_{24}^{\nu})^{2}+(y_{ 34}^{\nu})^{2}\right]+\frac{1}{2}(y_{24}^{d})^{2}y_{24}^{u}\] \[+\frac{1}{2}y_{24}^{d}y_{34}^{d}y_{34}^{u}+\frac{9}{2}y_{24}^{u} \left[(y_{24}^{u})^{2}+(y_{34}^{u})^{2}\right]+3y_{24}^{u}(y_{43}^{u})^{2}\] (118) \[\beta^{(1)}(y_{34}^{u}) =-\frac{17}{20}g_{1}^{2}y_{34}^{u}-\frac{9}{4}g_{2}^{2}y_{34}^{u}- 8g_{3}^{2}y_{34}^{u}+\frac{1}{2}(x_{34}^{Q})^{2}y_{34}^{u}+y_{34}^{u}\left[(y_ {14}^{\nu})^{2}+(y_{24}^{\nu})^{2}+(y_{34}^{\nu})^{2}\right]\] \[+\frac{1}{2}y_{24}^{d}y_{24}^{u}y_{34}^{d}+\frac{9}{2}y_{34}^{u} \left[(y_{24}^{u})^{2}+(y_{34}^{u})^{2}\right]+\frac{1}{2}(y_{34}^{d})^{2}y_{34} ^{u}+3y_{34}^{u}(y_{43}^{u})^{2}\] (119) \[\beta^{(1)}(x_{42}^{d}) =-\frac{2}{5}g_{1}^{2}x_{42}^{d}-8g_{3}^{2}x_{42}^{d}+2(x_{34}^{L})^ {2}x_{42}^{d}+6(x_{34}^{Q})^{2}x_{42}^{d}+x_{42}^{d}(x_{42}^{e})^{2}+3x_{42}^{d} \left[(x_{42}^{u})^{2}+(x_{43}^{u})^{2}\right]\] \[+4x_{42}^{d}\left[(x_{42}^{d})^{2}+(x_{43}^{d})^{2}\right]+x_{42}^ {d}(x_{43}^{e})^{2}\] (120) \[\beta^{(1)}(x_{43}^{d}) 
=-\frac{2}{5}g_{1}^{2}x_{43}^{d}-8g_{3}^{2}x_{43}^{d}+2(x_{34}^{L})^ {2}x_{43}^{d}+6(x_{34}^{Q})^{2}x_{43}^{d}+4x_{43}^{d}\left[(x_{42}^{d})^{2}+(x_{43 }^{d})^{2}\right]+(x_{42}^{e})^{2}x_{43}^{d}\] \[+3x_{43}^{d}\left[(x_{42}^{u})^{2}+(x_{43}^{u})^{2}\right]+x_{43}^ {d}(x_{43}^{e})^{2}+x_{43}^{d}(y_{43}^{d})^{2}\] (121) \[\beta^{(1)}(x_{34}^{d}) =-\frac{1}{10}g_{1}^{2}x_{34}^{Q}-\frac{9}{2}g_{2}^{2}x_
\[\beta^{(1)}(y^{e}_{24}) =-\frac{9}{4}g_{1}^{2}y^{e}_{24}-\frac{9}{4}g_{2}^{2}y^{e}_{24}+y^{e} _{24}\left[(x^{\nu}_{14})^{2}+(x^{\nu}_{24})^{2}+(x^{\nu}_{34})^{2}\right]-\frac{ 3}{2}x^{\nu}_{24}x^{\nu}_{34}y^{e}_{34}-\frac{3}{2}(x^{\nu}_{24})^{2}y^{e}_{24}\] \[+\frac{1}{2}y^{e}_{24}(y^{\nu}_{24})^{2}+3y^{e}_{24}\left[(y^{d}_ {14})^{2}+(y^{d}_{24})^{2}+(y^{d}_{34})^{2}\right]+\frac{5}{2}y^{e}_{24}\left[( y^{e}_{24})^{2}+(y^{e}_{34})^{2}\right]\] \[+3y^{e}_{24}(y^{d}_{43})^{2}+y^{e}_{24}(y^{e}_{43})^{2}+\frac{1}{2 }y^{\nu}_{24}y^{e}_{34}y^{\nu}_{34}\] (125) \[\beta^{(1)}(y^{e}_{34}) =-\frac{9}{4}g_{1}^{2}y^{e}_{34}-\frac{9}{4}g_{2}^{2}y^{e}_{34}+y^ {e}_{34}\left[(x^{\nu}_{14})^{2}+(x^{\nu}_{24})^{2}+(x^{\nu}_{34})^{2}\right]- \frac{3}{2}x^{\nu}_{24}x^{\nu}_{34}y^{e}_{24}-\frac{3}{2}(x^{\nu}_{34})^{2}y^{ e}_{34}\] \[+\frac{1}{2}x^{L}_{34}x^{Q}_{34}y^{e}_{34}+3y^{e}_{34}\left[(y^{d} _{14})^{2}+(y^{d}_{24})^{2}+(y^{d}_{34})^{2}\right]+\frac{1}{2}y^{e}_{24}y^{ \nu}_{24}y^{\nu}_{34}+\frac{5}{2}y^{e}_{34}\left[(y^{e}_{24})^{2}+(y^{e}_{34}) ^{2}\right]\] \[+\frac{1}{2}y^{e}_{34}(y^{\nu}_{34})^{2}+3y^{e}_{34}(y^{d}_{43})^ {2}+y^{e}_{34}(y^{e}_{43})^{2}\] (126) \[\beta^{(1)}(y^{e}_{43}) =-\frac{9}{4}g_{1}^{2}y^{e}_{43}-\frac{9}{4}g_{2}^{2}y^{e}_{43}+y^ {e}_{43}\left[(x^{\nu}_{14})^{2}+(x^{\nu}_{24})^{2}+(x^{\nu}_{34})^{2}\right]+ \frac{1}{2}(x^{e}_{43})^{2}y^{e}_{43}+y^{e}_{43}\left[(y^{e}_{24})^{2}+(y^{e}_{ 34})^{2}\right]\] \[+3y^{e}_{43}\left[(y^{d}_{14})^{2}+(y^{d}_{24})^{2}+(y^{d}_{34}) ^{2}\right]+3(y^{d}_{43})^{2}y^{e}_{43}+\frac{5}{2}(y^{e}_{43})^{3}\] (127) \[\beta^{(1)}(y^{\nu}_{14}) =-\frac{9}{20}g_{1}^{2}y^{\nu}_{14}-\frac{9}{4}g_{2}^{2}y^{\nu}_{1 4}+\frac{1}{2}x^{\nu}_{14}x^{\nu}_{24}y^{\nu}_{24}+\frac{1}{2}x^{\nu}_{14}x^{ \nu}_{34}y^{\nu}_{34}+\frac{1}{2}(x^{\nu}_{14})^{2}y^{\nu}_{14}\] \[+\frac{5}{2}y^{\nu}_{14}\left[(y^{\nu}_{14})^{2}+(y^{\nu}_{24})^{2 }+(y^{\nu}_{34})^{2}\right]+3y^{\nu}_{14}\left[(y^{u}_{24})^{2}+(y^{u}_{34}) ^{2}\right]+3y^{\nu}_{14}(y^{u}_{43})^{2}\] (128) \[\beta^{(1)}(y^{\nu}_{24}) =-\frac{9}{20}g_{1}^{2}y^{\nu}_{24}-\frac{9}{4}g_{2}^{2}y^{\nu}_{2 4}+\frac{1}{2}x^{\nu}_{14}x^{\nu}_{24}y^{\nu}_{14}+\frac{1}{2}x^{\nu}_{24}x^{ \nu}_{34}y^{\nu}_{34}+\frac{1}{2}(x^{\nu}_{24})^{2}y^{\nu}_{24}+3y^{\nu}_{24 }\left[(y^{u}_{24})^{2}+(y^{u}_{34})^{2}\right]\] \[+\frac{5}{2}y^{\nu}_{24}\left[(y^{\nu}_{14})^{2}+(y^{\nu}_{24})^{2 }+(y^{\nu}_{34})^{2}\right]+\frac{1}{2}(y^{e}_{24})^{2}y^{\nu}_{24}+\frac{1}{2 }y^{e}_{24}y^{e}_{34}y^{\nu}_{34}+3y^{\nu}_{24}(y^{u}_{43})^{2}\] (129) \[\beta^{(1)}(y^{\nu}_{34}) =-\frac{9}{20}g_{1}^{2}y^{\nu}_{34}-\frac{9}{4}g_{2}^{2}y^{\nu}_{3 4}+\frac{1}{2}x^{\nu}_{14}x^{\nu}_{34}y^{\nu}_{14}+\frac{1}{2}x^{\nu}_{24}x^{ \nu}_{34}y^{\nu}_{24}+\frac{1}{2}(x^{L}_{34})^{2}y^{\nu}_{34}+\frac{1}{2}(x^{ \nu}_{34})^{2}y^{\nu}_{34}+3y^{\nu}_{34}(y^{u}_{43})^{2}\] \[+\frac{5}{2}y^{\nu}_{34}\left[(y^{\nu}_{14})^{2}+(y^{\nu}_{24})^{2 }+(y^{\nu}_{34})^{2}\right]+\frac{1}{2}y^{e}_{24}y^{\nu}_{24}y^{e}_{34}+3y^{ \nu}_{34}\left[(y^{u}_{24})^{2}+(y^{u}_{34})^{2}\right]+\frac{1}{2}(y^{e}_{34}) ^{2}y^{\nu}_{34}\] (130) \[\beta^{(1)}(x^{e}_{42}) =-\frac{18}{5}g_{1}^{2}x^{e}_{42}+2(x^{L}_{34})^{2}x^{e}_{42}+6(x^ {Q}_{34})^{2}x^{e}_{42}+2x^{e}_{42}(x^{e}_{43})^{2}+2(x^{e}_{42})^{3}\] \[+3x^{e}_{42}\left[(x^{d}_{42})^{2}+(x^{d}_{43})^{2}\right]+3x^{e}_ {42}\left[(x^{u}_{42})^{2}+(x^{u}_{43})^{2}\right]\] (131) \[\beta^{(1)}(x^{e}_{43}) =-\frac{18}{5}g_{1}^{2}x^{e}_{43}+2(x^{L}_{34})^{2}x^{e}_{43}+6(x 
^{Q}_{34})^{2}x^{e}_{43}+3x^{e}_{43}\left[(x^{d}_{42})^{2}+(x^{d}_{43})^{2} \right]+2(x^{e}_{42})^{2}x^{e}_{43}\] \[+3x^{e}_{43}\left[(x^{u}_{42})^{2}+(x^{u}_{43})^{2}\right]+x^{e}_ {43}(y^{e}_{43})^{2}+2(x^{e}_{43})^{3}\] (132) \[\beta^{(1)}(x^{L}_{34}) =-\frac{9}{10}g_{1}^{2}x^{L}_{34}-\frac{9}{2}g_{2}^{2}x^{L}_{34}+ \frac{1}{2}x^{L}_{34}(x^{\nu}_{34})^{2}+6x^{L}_{34}(x^{Q}_{34})^{2}+3x^{L}_{34 }\left[(x^{d}_{42})^{2}+(x^{d}_{43})^{2}\right]+x^{L}_{34}(x^{e}_{42})^{2}\] \[+3x^{L}
\[\beta^{(1)}(x^{\nu}_{34}) =-\frac{9}{20}g1^{2}x^{\nu}_{34}-\frac{9}{4}g2^{2}x^{\nu}_{34}+\frac {5}{2}x^{\nu}_{34}\left[(x^{\nu}_{14})^{2}+(x^{\nu}_{24})^{2}+(x^{\nu}_{34})^{2} \right]+\frac{1}{2}x^{\nu}_{14}y^{\nu}_{14}y^{\nu}_{34}-\frac{3}{2}x^{\nu}_{24} y^{e}_{24}y^{e}_{34}\] \[+\frac{1}{2}x^{\nu}_{24}y^{\nu}_{24}y^{\nu}_{34}+\frac{1}{2}(x^{L }_{34})^{2}x^{\nu}_{34}+3x^{\nu}_{34}\left[(y^{d}_{14})^{2}+(y^{d}_{24})^{2}+(y ^{d}_{34})^{2}\right]-\frac{3}{2}x^{\nu}_{34}(y^{e}_{34})^{2}\] \[+x^{\nu}_{34}\left[(y^{e}_{24})^{2}+(y^{e}_{34})^{2}\right]+\frac {1}{2}x^{\nu}_{34}(y^{\nu}_{34})^{2}+3x^{\nu}_{34}(y^{d}_{43})^{2}+x^{\nu}_{34 }(y^{e}_{43})^{2} \tag{100}\]
|
2309.14961 | Adding Value to JWST Spectra and Photometry: Stellar Population and Star
Formation Properties of Spectroscopically Confirmed JADES and CEERS Galaxies
at $z > 7$ | In this paper, we discuss measurements of the stellar population and star
forming properties for 43 spectroscopically confirmed publicly available
high-redshift $z > 7$ JWST galaxies in the JADES and CEERS observational
programs. We carry out a thorough study investigating the relationship between
spectroscopic features and photometrically derived ones, including from
spectral energy distribution (SED) fitting of models, as well as morphological
and structural properties. We find that the star formation rates (SFRs)
measured from H$\beta$ line emission are higher than those estimated from
Bayesian SED fitting and UV luminosity, with ratios SFR$_{H\beta}$/ SFR$_{UV}$
ranging from 2~13. This is a sign that the star formation history is
consistently rising given the timescales of H$\beta$ vs UV star formation
probes. In addition, we investigate how well equivalent widths (EWs) of
H$\beta$ $\lambda$4861, [O III] $\lambda$4959, and [O III] $\lambda$5007 can be
measured from photometry, finding that on average the EW derived from
photometric excesses in filters is 30% smaller than the direct spectroscopic
measurement. We also discover that a stack of the line emitting galaxies shows
a distinct morphology after subtracting imaging that contains only the
continuum. This gives us a first view of the line or ionized gas emission from
$z > 7$ galaxies, demonstrating that this material has a similar distribution,
statistically, as the continuum. We also compare the derived SFRs and stellar
masses for both parametric and non-parametric star formation histories, where
we find that 35% of our sample formed at least 30% of their stellar mass in
recent (< 10 Myr) starburst events. | Qiao Duan, Christopher J. Conselice, Qiong Li, Thomas Harvey, Duncan Austin, Katherine Ormerod, James Trussler, Nathan Adams | 2023-09-26T14:29:02Z | http://arxiv.org/abs/2309.14961v1 | Adding Value to JWST Spectra and Photometry: Stellar Population and Star Formation Properties of Spectroscopically Confirmed JADES and CEERS Galaxies at \(z>7\)
###### Abstract
In this paper, we discuss measurements of the stellar population and star forming properties for 43 spectroscopically confirmed publicly available high-redshift \(z>7\) JWST galaxies in the JADES and CEERS observational programs. We carry out a thorough study investigating the relationship between spectroscopic features and photometrically derived ones, including from spectral energy distribution (SED) fitting of models, as well as morphological and structural properties. We find that the star formation rates (SFRs) measured from H\(\beta\) line emission are higher than those estimated from Bayesian SED fitting and UV luminosity, with ratios SFR\({}_{\rm H\beta}\)/SFR\({}_{\rm UV}\) ranging from \(\sim 2-13\). This is a sign that the star formation history is consistently rising given the timescales of H\(\beta\) vs UV star formation probes. In addition, we investigate how well equivalent widths (EWs) of H\(\beta\)\(\lambda\)4861, [O iii] \(\lambda\)4959, and [O iii] \(\lambda\)5007 can be measured from photometry, finding that on average the EW derived from photometric excesses in filters is 30% smaller than the direct spectroscopic measurement. We also discover that a stack of the line emitting galaxies shows a distinct morphology after subtracting imaging that contains only the continuum. This gives us a first view of the line or ionized gas emission from \(z>7\) galaxies, demonstrating that this material has a similar distribution, statistically, as the continuum. We also compare the derived SFRs and stellar masses for both parametric and non-parametric star formation histories, where we find that 35% of our sample formed at least 30% of their stellar mass in recent (\(<10\) Myr) starburst events.
keywords: galaxies:high-redshift - galaxies: formation - galaxies: general - galaxies: photometry - galaxies: star formation
## 1 Introduction
The high redshift universe is now being studied in depth by JWST as shown by a number of key papers on early galaxy discoveries in the past year (Austin et al., 2023; Adams et al., 2023; Doman et al., 2023; Finkelstein et al., 2023; Harikane et al., 2022; Atek et al., 2022; Castellano et al., 2022; Donnan et al., 2022; Trussler et al., 2023; Bouwens et al., 2023; McLeod et al., 2023; Franco et al., 2023; Casey et al., 2023; Naidu et al., 2022; Curtis-Lake et al., 2022; Hainline et al., 2023). These studies have found that there are many more distant candidate galaxies at \(z>7\) than previously inferred from HST observations. However, uncovering their properties is still in its infancy, and a major way to understand these systems is through spectroscopy. There are also many questions which we need to answer before we can reach the ultimate goal of using spectroscopy and imaging together to infer the physical properties of galaxies and therefore to determine galaxy evolution. A major one is how well spectra and imaging agree in terms of deriving the physical properties of galaxies.
It is clear that spectroscopy with, in particular, NIRSpec, and also NIRCam/NIRISS in grism mode, is and will continue to be of major importance for the study of the first galaxies. At the same time, it will never be the case that we will obtain spectroscopy for all, or even a large fraction, of the most distant galaxies. The systems are too faint, and in many cases, too abundant to effectively obtain many spectra. Thus, we must resort to imaging, down to the completeness limit, to derive galaxy properties for understanding the galaxy population. This is a well-worn path, and many papers have used imaging for the measurements of photometric redshifts, stellar masses, and derived star formation rates, amongst other properties (e.g., Adams et al., 2023; Austin et al., 2023; Fujimoto et al., 2023; Atek et al., 2023).
The purpose of this paper is therefore two-fold. We investigate how well we can derive properties of distant galaxies from their photometry by comparing the same properties as derived from spectroscopy. This includes a redshift comparison, \(z_{\rm Phot}\) vs. \(z_{\rm Spec}\), as well as measures of star formation rates and stellar masses. For example, it might be the case that there is a systematic difference in the measurements of these quantities, such that the ones derived from photometry are systematically lower than those from spectroscopy. If this is the case then we will need to account for this in future analyses. We can also use spectroscopy and imaging together to derive unique properties of galaxies. An example of this is using the location of emission lines seen in spectroscopy which exist, and contribute flux, within various imaging filters. When this is well understood and well known
(e.g., without uncertain redshifts) we can obtain an image of the line emission alone through subtracting filters that only contain continuum (no emission lines) from filters with flux arising from emission lines (Hatch et al., 2013).
This type of analysis has been carried out in other ways before, but never quite addressing the same questions we address here. Previous similar work includes examining how well star formation and stellar masses can be measured based on comparisons with models and with different fitting codes and methods (e.g., Mobasher et al., 2015; Pacifici et al., 2023). This is also the case for different photometric redshift codes (Dahlen et al., 2013), where tests can be done to determine which methods and codes are the 'best' for recovering correct photometric redshifts. Recently this has been examined in terms of the stellar population properties of galaxies as derived through photometry, finding that stellar mass is consistent between different codes, although other properties derived from SED fitting can vary quite significantly (Pacifici et al., 2023). Here we examine similar questions, but we take a more detailed approach, comparing within the same code and with the same initial conditions how well the properties of galaxies can be derived based on photometry vs. spectroscopy. That is, we determine the same features of galaxies using spectroscopic measurements, sometimes directly from the detected line emission, and otherwise by fitting the spectrum.
Thus, in this paper we investigate the spectroscopic properties of a sample of \(z>7\) galaxies with reliable spectroscopic redshifts from NIRSpec on JWST within two different fields - CEERS (Finkelstein et al., 2023) and JADES (Rieke et al., 2023; Eisenstein et al., 2023).
The structure of this paper is outlined as follows. In Section 2, we detail the dataset sourced from the JADES and CEERS fields. Our main findings and analysis are presented in Section 3. A summary of our conclusions is provided in Section 5. Throughout this work, we adhere to a standard cosmology with \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), \(\Omega_{\mathrm{M}}=0.3\) and \(\Omega_{\Lambda}=0.7\) to facilitate comparison with other observational studies. All magnitudes reported are consistent with the AB magnitude system (Oke, 1974; Oke & Gunn, 1983).
## 2 Data and reduction
The launch of the James Webb Space Telescope in December 2021 (Rigby et al., 2023) provides an unprecedented opportunity to study the distant universe. Over the past year, several Cycle 1 observation programs have been conducted. In this paper, we analyze data from the JADES and CEERS programs, both in terms of imaging and spectroscopy. Below we give some details of which data we use and how this data was reduced and processed.
### JADES NIRSpec Observations
We use the first JADES released NIRSpec (Ferruit et al., 2022) data (PI: Eisenstein, N. Lutzgendorf, ID:1180, 1210), spanning the time-frame September 2022 to October 2022, with a focus on the publicly released data in the GOODS-S field. The spectra are obtained through the application of both disperser/filter and PRISM/clear configurations. Specifically, the PRISM data covers 253 galaxies, and 198 of them have disperser/filter data. Four different disperser/filter combinations are used to acquire the spectroscopy: G140M/F070LP, G235M/F170LP, G395M/F290LP, and G395H/F290LP, with a wavelength coverage of \(0.70-1.27\mu\)m, \(1.66-3.07\mu\)m, \(2.87-5.10\mu\)m, and \(2.87-5.14\mu\)m, respectively. The three medium resolution filters have a nominal resolving power of R \(\approx 1000\), while the high resolution data can reach R \(\approx 2700\). In this paper, we primarily utilize the PRISM data, which covers a wavelength range of \(0.6\,\mu\)m to \(5.3\,\mu\)m, and exhibits a spectral resolution of \(R\approx 30-330\) (Ji & Giavalisco, 2022).
Among the 253 observed galaxies, 13 are situated at \(z_{\mathrm{spec}}>7.0\), with 11 of them having NIRCam observations. During these observations, three micro-shutters were activated for each target. An exposure protocol was implemented consisting of a three-point nodding sequence along the slit, with each nod lasting 8403 seconds, and the entire sequence repeated four times. This culminated in a total PRISM exposure time of up to 28 hours for some sources. The subsequent extraction of flux-calibrated spectra was carried out using specialized pipelines developed by both the ESA NIRSpec Science Operations Team and the NIRSpec GTO Team (Bushouse et al., 2023). A more detailed examination of the JADES/HST-DEEP spectra and the criteria used for sample selection is provided by Eisenstein et al. (2023).
### JADES NIRCam Observations
The JADES NIRCam imaging observations (Rieke et al., 2023) cover both the GOODS-S and GOODS-N fields. In this paper, we focus on the GOODS-S field data (PI: Eisenstein, N. Lutzgendorf, ID:1180, 1210). The observations utilise nine filter bands: F090W, F115W, F150W, F200W, F277W, F335M, F356W, F410M, and F444W, encompassing a spatial extent of 24.4 - 25.8 arcmin\({}^{2}\). A minimum of six dither points was used for each observation, with exposure times spanning 14-60 ks. Correspondingly, the \(5\sigma\) depths are within the range from 3.4 to 5.9 nJy, with flux aperture sizes varying between 1.26 and 1.52 arcsec. Across all filter bands, JADES ensures a high level of pixel diversity (Rieke et al., 2023), thereby significantly reducing the impact of flat-field inaccuracies, cosmic ray interference, and other issues at the pixel level. In this paper, we utilize the publicly released JADES data and reductions.
### CEERS NIRSpec Observations
The CEERS NIRSpec spectroscopic data (Fujimoto et al., 2023; Haro et al., 2023) were procured as part of the ERS program (PI: Steven L. Finkelstein, ID:1345). This dataset was designed to optimize the overlap with observations from both NIRCam and HST, using three medium resolution gratings (\(R\approx 1000\)) and the PRISM (\(R\approx 100\)). The PRISM data presented here are a reschedule of the original observations affected by an electrical short in CEERS Epoch 2 (December 2022). These rescheduled observations were executed in CEERS Epoch 3, February 2023. During this period, both NIRSpec pointings, namely NIRSpec11 and NIRSpec12, adhered to the standard CEERS MSA observational guidelines. Specifically, they encompassed three integrations with 14 groups in the NRSIRS2 readout mode per visit, leading to a total exposure time of 3107 s. Within these observations a trio of shutters was used to form slitlets, facilitating a three-point nodding sequence to enhance background subtraction. The PRISM disperser covers a wavelength range of 0.6-5.3 \(\mu\)m at low spectral resolution. In this paper, we use the NIRSpec data reduced by the Cosmic Dawn Center, which is published on the DAWN JWST Archive (DJA)\({}^{1}\). From this data set, there are 32 galaxies at \(z_{\mathrm{spec}}>7\), which we analyse in the following sections.

Footnote 1: https://dawn-cph.github.io/da/blog/2023/07/18/nirspec-data-products/
### CEERS NIRCam Imaging
The CEERS (ID=1345) NIRCam imaging (Bagley et al., 2023) includes data across seven distinct filters: F115W, F150W, F200W, F277W, F356W, F410M, and F444W, with a 5\(\sigma\) depth of 28.6 AB magnitudes using 0.1 arcsec circular apertures. The dataset encompasses observations collected during June 2022, accounting for 40% of the total NIRCam area covered for CEERS, with the remainder observed in the latter half of the same year.
In this paper we utilise our own bespoke reduction of this data from the Cosmic Evolution Early Release Science Survey in the Extended Groth Strip field (EGS). We have reduced this data independently ourselves using a custom set-up of the JWST pipeline version 1.6.2 with the in-flight calibration files available through CRDS context 0942. We provide an extensive description of this process and the resulting data quality in Ferreira et al. (2022) and Adams et al. (2023a).
In parallel, we use the v1.9 HST mosaics of the EGS field produced by the CEERS team. These are processed following the methodologies outlined in Koekemoer et al. (2011), which notably include enhancements in calibration and astrometric accuracy beyond what is available from the default HST archival pipeline, with a pixel scale of 0.03". For the HST data, two filters, namely F606W and F814W, are employed in our analyses due to their superior spatial resolution and depth when compared to HST/WFC3 images, and the fact that they are bluer than the JWST data. We find that using these two HST filters within CEERS is critical for measuring accurate redshifts and other physical properties as this JWST dataset is missing the crucially important F090W band.
### Photometric Redshifts
Analysing the quality and robustness of photometric redshift estimates is a key aspect of this paper, and thus we go into some detail in describing how they are measured here. We use two different photometric redshift codes throughout this paper - EAZY-PY (hereafter EAZY) is our primary code, and then LePhare as a check on these values, both of which we describe below. Most of our results however are discussed mainly in terms of the EAZY code.
Our primary photometric redshifts arise from fitting our derived SEDs from the EAZY photometric redshift code (Brammer et al., 2008). This is the standard code used to measure photo-zs from the EPOCHS sample (Adams et al., 2023; Conselice et al., 2023, in prep). To carry out the photometric redshift analysis we use the BC03 template sets with a Chabrier initial mass function for our analyses, with details discussed in Bruzual & Charlot (2003) and Chabrier (2002), respectively. The templates we use include both exponential and constant star formation histories, within which we use 10 characteristic timescales ranging from \(0.01<\tau<13\) Gyr. In addition to this we use 57 different ages for the model galaxies spanning 0 to 13 Gyr. We include galaxy models at redshifts ranging from \(0<z<25\). Dust is accounted for by using the prescription of Calzetti et al. (2000). We allow for \(E(B-V)\) values up to 3.5, to include any very dusty galaxies that may exist at these very high redshifts, and to determine the likely errors from low redshift contamination. Our fitting of the photo-zs incorporates treatment for emission lines, and we apply the intergalactic medium (IGM) attenuation derived from Madau (1995) when considering our fits. The very blue templates we use are presented in Larson et al. (2022), as well as those used by the JADES team (Hainline et al., 2023). These templates build upon the default template sets and incorporate galaxies that exhibit bluer colors and stronger emission lines, which are expected to be more appropriate for modelling the spectral energy distributions (SEDs) for those systems that are at \(z>7\).
In addition to EAZY we use photometric redshifts calculated with the LePhare code. The setup that we use is the same as we have used for the EAZY results described above. However, most of our results when using photometric redshifts arise from EAZY, and LePhare is used as a check on these. By utilizing multiple photometric redshift codes, we are able to cross-check the results for consistency and identify potential contaminants, thus ensuring the reliability of our final sample.
We do not use methods to fine-tune the zero points of the photometric bands, as the NIRCam modules consist of multiple individual chips (8 in the blue and 2 in the red), each with their own independent calibrations and photometric zero point offsets. Applying zero point modifications on a chip-by-chip basis, instead of on the final mosaic, would be necessary due to the small field of view covered by each chip, which results in a limited number of objects with spectroscopic redshifts within each chip, and leads to potential unnecessary biases determined by the positions of the galaxies in the NIRCam pointing. Doing this would also introduce potential biases towards systems with certain colors, which depend on the types of spectroscopically confirmed galaxies within each module. Discussions with members of the community have indicated that residual zero point errors were anticipated to be around 5 percent. Therefore, we have implemented a minimum 5 percent error on the measured photometry to account for potential zero point issues within the NIRCam reduction pipeline.
## 3 Results
In this section we describe the basic results of our study by comparing photometric and spectroscopic data, and what can be learned by combining the two. We include a comparison of the galaxy properties derived separately from the photometric and spectroscopic data, and how accurately we can derive properties from photometry by comparing with spectroscopy, assuming that the spectroscopic derivations are more accurate in some cases. We later discuss the likelihood of this assumption.
### Photometric vs. Spectroscopic Redshifts
By far the most common way to estimate the distances of galaxies is through photometric redshifts. This is due to the fact that photometric redshifts can be measured when imaging is available for different galaxies in a variety of filters; this allows us to compare to templates of known redshifts and thus determine which is the best 'fit'. In this section we carry out a comparison of how we measure the photometric redshifts for distant galaxies and how well these compare to the known high quality spectroscopic redshifts available from NIRSpec JWST data.
There are however, two issues that we have to discuss concerning comparing the photometric and spectroscopic redshifts. The first is the selection of sources. It is not enough to blindly measure photometric redshifts for everything that enters a catalogue, as the quality of those redshifts depends strongly on the quality of the data at all wavelengths, and how many filters a galaxy is detected within.
As described, the photometric redshift technique that we use to measure redshifts comes from EAZY-PY (Brammer et al., 2008) and uses a variety of approaches discussed in Section 2.5.1. These methods and details of the photometric redshifts are further described in detail in Adams et al. (2023b) and Conselice et al. (2023, in prep). For spectroscopic redshifts, we utilize data from the publicly
available JADES catalog (Bunker et al., 2023), as well as from the DAWN JWST Archive (DJA) for CEERS galaxies. We re-measure these spectroscopic redshifts ourselves using the [O iii] \(\lambda 5007\) line and find a good agreement with the published ones which we use throughout this paper. For this initial comparison we just compare the photometric redshifts we obtain for all 43 galaxies in our sample (11 from JADES and 32 from CEERS), without consideration for whether these galaxies would be selected for observation based on other criteria, which we discuss in more detail below.
The outcomes of our redshift comparison are visually represented in Figure 1. We evaluate two statistical measures for all the galaxy samples: the outlier fraction \(\eta\) and the Normalised Median Absolute Deviation (NMAD). These two parameters are defined by the following expressions:
\[\eta=\frac{N_{1.15}+N_{0.85}}{N_{\rm total}}, \tag{1}\]

where \(N_{1.15}\) and \(N_{0.85}\) represent the counts of points lying above the line \(z_{\rm phot}=1.15\times(z_{\rm spec}+1)\) and below the line \(z_{\rm phot}=0.85\times(z_{\rm spec}+1)\), respectively. These counts indicate the presence of extreme outliers in the sample. The equation for calculating the NMAD is given by (e.g., Duncan et al., 2019):
\[\rm NMAD=1.48\times{\rm med}\left|\frac{z_{\rm spec}-z_{\rm phot}}{1+z_{\rm spec }}\right|. \tag{2}\]
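As a concrete illustration (a minimal sketch, not the code used in our pipeline), these two statistics follow directly from Eqs. 1 and 2:

```python
import numpy as np

def photz_statistics(z_spec, z_phot):
    """Outlier fraction (eta, Eq. 1) and NMAD (Eq. 2) for photometric redshifts."""
    z_spec, z_phot = np.asarray(z_spec), np.asarray(z_phot)

    # Extreme outliers: points above z_phot = 1.15 (z_spec + 1) or
    # below z_phot = 0.85 (z_spec + 1), as defined in the text.
    n_high = np.sum(z_phot > 1.15 * (z_spec + 1.0))
    n_low = np.sum(z_phot < 0.85 * (z_spec + 1.0))
    eta = (n_high + n_low) / z_spec.size

    # Normalised Median Absolute Deviation
    nmad = 1.48 * np.median(np.abs((z_spec - z_phot) / (1.0 + z_spec)))
    return eta, nmad
```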
The values for these parameters, as applied to our data set, are detailed in Table 1. As is evident, our photometric redshift measurements show an exceptional concordance with the spectroscopically measured values. Notably, a mere 2.6% of our samples qualify as extreme outliers in terms of their photometric redshifts. We find a very similar trend when using the LePhare photometric redshifts.
We now would like to consider how the selection method we and others use in high redshift papers would allow these galaxies to be correctly identified as high redshift (e.g., Adams et al., 2023; Conselice et al., 2023, in prep). The selection procedure in these papers, and others similar to them, uses more than just the best-fitting photo-z solution, including issues such as the limits on potential low-z solutions and the detection confidence of the photometry. In addition to having a high-z solution, these high-z papers often require that there be a low probability for the photometric redshift to be at lower-z. Robust selection of high-redshift galaxies involves additional criteria, such as \(<3\sigma\) detections (i.e., non-detections) in bands blueward of the Lyman break, the integral of the photometric redshift PDF within \(\pm 0.1\,z\) of the best-fit solution being greater than 60% of the total, and \(\chi^{2}\) values less than 6. These criteria are designed to balance contamination against sample completeness. Thus we can test our methodology with this sample to see how many galaxies from this spectroscopic sample we would have included in our photometric samples in the EPOCHS papers.
In accordance with the selection criteria explained in our previous work (Adams et al., 2023; Conselice et al., 2023, in prep), 16 out of the 32 CEERS galaxies would be categorized as robust galaxies. The reasons that 16 galaxies would not have survived our selection are varied and depend on a few factors. Among the 16 galaxies that would make up this non-robust sample, 4 systems are excluded due to being near image edges or diffraction spikes. 1 galaxy is excluded for lacking observations in bands blueward of the Lyman break, and 11 are rejected owing to flux detections below 5\(\sigma\) above the noise in the first, second, or both bands redward of the Lyman break. It is noteworthy that the CEERS team likely selected these 11 galaxies based on using smaller, 0.2 arcsec apertures for their photometry. Despite their faintness, our analysis still gets their redshifts correct. Thus, overall we only miss those galaxies which are too faint for reliable photometric redshifts or those that are in non-ideal regions of the images.
We generate both primary and secondary photometric redshift solutions for each galaxy in our study. The secondary redshift solutions are constrained to have a maximum allowable redshift of \(z=6\). In our robust galaxy samples, these secondary solutions typically exhibit an inferior fit quality compared to the primary solutions. This is substantiated by an average \(\Delta\chi^{2}\) value which is \(\sim 35\) higher than that of the primary solutions, for which the mean \(\chi^{2}\) is 7.47.
Universe. In addition, the Kroupa (2001) IMF, the Bruzual & Charlot (2003) SPS model, and the Calzetti et al. (2000) dust attenuation model are adopted. For each model, the only effect we examine is that of using different SFR timescales -- 5 Myr, 10 Myr, and 100 Myr -- on the derived properties; these timescales only impact the measured SFR.
Since there are no significant differences in galaxy parameters derived from various models, we have chosen to focus our analysis on the results obtained using the log-normal SFH model. For each property computed from Bagpipes, the derived values are represented by the median of their respective PDF. The lower and upper uncertainties are determined as the differences between the 50th percentile and the 16th, and between the 84th and the 50th percentiles, respectively.
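For orientation, a log-normal SFH with a Calzetti dust law can be configured in Bagpipes along the lines sketched below. This is an illustrative sketch only: the prior ranges, placeholder filter paths, data loader, and object ID are assumptions for demonstration, not the exact settings used in our fits.

```python
import numpy as np
import bagpipes as pipes

# Placeholder filter curves and photometry loader (assumed, for illustration).
filt_list = ["filters/F115W.dat", "filters/F150W.dat", "filters/F444W.dat"]

def load_phot(ID):
    # Return an (N_band, 2) array of flux and flux error (microJy) for this ID.
    return np.ones((len(filt_list), 2))

# Log-normal star formation history with illustrative prior ranges.
lognormal = {"tmax": (0.1, 15.0),        # Gyr, time of peak star formation
             "fwhm": (0.1, 15.0),        # Gyr, width of the star formation episode
             "massformed": (5.0, 12.0),  # log10(M_formed / M_sun)
             "metallicity": (0.0, 2.5)}  # Z / Z_sun

dust = {"type": "Calzetti", "Av": (0.0, 3.0)}   # Calzetti et al. (2000)

fit_instructions = {"lognormal": lognormal,
                    "dust": dust,
                    "redshift": (6.5, 13.5)}    # fixed to z_spec when known

galaxy = pipes.galaxy("example_id", load_phot, filt_list=filt_list,
                      spectrum_exists=False)
fit = pipes.fit(galaxy, fit_instructions)
fit.fit(verbose=False)
```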
In our spectroscopic fitting, we incorporate three additional considerations (Carnall et al., 2019): velocity dispersion, flux calibrations, and noise. The velocity dispersion is modelled by setting the width of the Gaussian kernel in velocity space to be convolved with the spectroscopic output, within a range of \([1,1000]\) km/s. For flux calibrations, we address potential discrepancies between photometric and spectroscopic measurements by fitting a Chebyshev polynomial perturbation to the spectroscopic data (Carnall et al., 2019). This method assists in correcting calibration issues and aligning the models. To account for noise, we introduce a factor that applies a multiplicative adjustment to all spectroscopic uncertainties. Moreover, to evaluate potential slit losses, we simulate photometric flux using the observed spectral data. Our analysis reveals a maximum discrepancy of \(\sim\)20% between the observed photometric flux points and the simulated data, predominantly in the NIRCam filter F090W. This discrepancy is likely attributed to the fact that this band is blueward of the Lyman break for our sample galaxies at redshifts \(z>7\), resulting in a significant drop in flux. Consequently, the noise dominates in this band. For other filter bands, no discernible differences are observed.
We produce a scatter plot with photometrically-derived values on the y-axis and spectroscopically-derived values on the x-axis, for Bagpipes derived stellar masses, formed masses, SFRs, and dust extinction values (A\({}_{\rm V}\)). Using the Bayesian Markov Chain Monte Carlo (MCMC) method, we compute the line of best fit for each plot via the emcee package (Foreman-Mackey et al., 2013). Specifically, we employ 100,000 steps and 50 walkers to generate candidate gradients and y-intercept values. For both sets of values, we adopt the mean as the representative value and use the 1\(\sigma\) deviation as the associated uncertainty, as the distributions closely follow a Gaussian. In addition, the Pearson correlation coefficient between the spectroscopic and photometrically derived values is determined, and its uncertainty is calculated using the Fisher transformation. Specifically, the Pearson correlation coefficient \(r\) is transformed into a \(z\)-score using the Fisher transformation, which is given by \(z=\frac{1}{2}\ln\left(\frac{1+r}{1-r}\right)\). This transformation ensures that the distribution of \(z\) is approximately normal. Once \(z\) is obtained, the 95% confidence interval for it is calculated. Subsequently, this confidence interval is transformed back to the correlation coefficient scale using the inverse Fisher transformation, represented by \(r=\frac{e^{2z}-1}{e^{2z}+1}\). This provides the 95% confidence interval for the original correlation coefficient \(r\). The results of gradients, intercepts, and correlation coefficients using a 100 Myr SFR timescale are presented in Table 2.
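A minimal sketch of this Fisher-transformation step is given below (illustrative only; the \(1/\sqrt{n-3}\) standard error is the usual large-sample approximation for the Fisher \(z\)-score):

```python
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, confidence=0.95):
    """Pearson r and its confidence interval via the Fisher transformation."""
    x, y = np.asarray(x), np.asarray(y)
    r, _ = stats.pearsonr(x, y)

    z = 0.5 * np.log((1 + r) / (1 - r))            # Fisher z-transformation
    se = 1.0 / np.sqrt(x.size - 3)                 # approximate standard error of z
    z_crit = stats.norm.ppf(0.5 + confidence / 2)  # e.g. 1.96 for 95%
    z_lo, z_hi = z - z_crit * se, z + z_crit * se

    # Inverse transformation back to the correlation scale
    r_lo = (np.exp(2 * z_lo) - 1) / (np.exp(2 * z_lo) + 1)
    r_hi = (np.exp(2 * z_hi) - 1) / (np.exp(2 * z_hi) + 1)
    return r, (r_lo, r_hi)
```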
#### 3.2.1 Quality of the Bagpipes fits
In this short section we discuss how well we can fit the SEDs of our galaxies with the Bagpipes fits and the underlying models which we use. These are standard models which have been used throughout the literature for years, but it might be the case that at these higher redshifts galaxy SEDs might be better fit by, for example, models in which the IMF differs from the assumption (perhaps top-heavy) or by models which incorporate binary stars (e.g., BPASS) (e.g., Eldridge and Stanway, 2009). One way to determine this is through examining how well our SEDs are fit by these models as determined through the \(\chi^{2}_{\rm reduced}\) values of these fits.
We evaluate the goodness of fit for our models by calculating the \(\chi^{2}_{\rm reduced}\) for both photometric and spectroscopic fitting. Both our JADES and CEERS samples exhibit comparable photometric \(\chi^{2}_{\rm reduced}\) values, meaning that there is not one particular sample which is better fit by our methods than the other. Precisely, the mean photometric \(\chi^{2}_{\rm reduced}\) for the JADES samples is \(1.5\pm 0.6\), whereas for CEERS samples, it stands at \(2.0\pm 1.1\). This indicates a similar and good level of photometric fitting quality for these two sets of galaxy samples.
However, the spectroscopic fitting quality for CEERS samples appears to be slightly inferior based on this statistic. The mean \(\chi^{2}_{\rm reduced}\) for JADES is \(1.54\pm 0.65\). In contrast, the corresponding value for the CEERS sample rises to \(3.19\pm 1.34\), nearly double that of JADES. We speculate that the worse fitting quality for CEERS is primarily attributed to its shorter exposure time. Some JADES galaxies have exposure times extending up to 28 hours, whereas CEERS employs an exposure time of less than an hour. Whilst the larger errors on the fainter observations should account for this, it is possible that these are being underestimated in our fits, and therefore resulting in higher \(\chi^{2}_{\rm reduced}\) values. In any case, we do not observe large \(\chi^{2}_{\rm reduced}\) values that would suggest the models we fit are inherently flawed. However, a more detailed analysis is warranted and necessary, but this is beyond the scope of this paper.
#### 3.2.2 Measuring Galaxy Stellar masses
In this section, we examine the various different ways in which stellar mass and formed mass are derived from Bagpipes using the spectroscopic and photometric data. Stellar mass represents the present-day mass of the galaxy, while the formed mass incorporates the observed mass plus the return mass, accounting for the mass from exploded stars that contribute to the formation of new stars. Consequently, the formed mass is always greater than the stellar mass. Also, the stellar mass is the only quantity we can directly compare with given that this is what we are observing. In addition, different SFR timescales dictate the duration over which the star formation rate is averaged, and these do not influence the derived galaxy masses. Thus, we present only the 100 Myr averaged SFR results here. In Table 2, we show the correlation coefficient and the parameters of the best-fit line for the spectroscopically and photometrically derived values of these two quantities. Generally speaking, these two masses derived from both methods are in moderate agreement, with high scatter.
We present a graphical comparison for stellar masses in Figure 2. For our galaxy samples, the stellar masses for both CEERS and JADES range from \(\log_{10}({\rm M_{*}/M_{\odot}})=6.8\) to 9.3, with individual means of 8.0 for both fields, consistent with the findings of Fujimoto et al. (2023). The correlation coefficient for spectroscopically and photometrically derived stellar masses is \(0.62^{+0.39}_{-0.28}\), which indicates moderate agreement between these two methods. However, the 1\(\sigma\) residual of 0.37 \(\log_{10}({\rm M_{*}/M_{\odot}})\) from the best fit line suggests high scatter in the data. We hypothesize that this scatter arises from some photometric bands being affected by strong emission lines of H\(\beta\) and [O iii], thereby reducing the accuracy of the stellar masses. Further investigation reveals that galaxies with this pronounced scatter generally exhibit high star formation rates. Although there isn't a universally strong agreement across all mass ranges, a notably better alignment is observed within the mass range \(\log_{10}(\mathrm{M_{*}/M_{\odot}})=[7.6,8.2]\).
#### 3.2.3 Measuring Star Formation Rates
In this section, we employ three methods to measure the SFR, which we refer to as: Bagpipes, UV luminosity, and H\(\beta\) line luminosity. For the Bagpipes method, we not only analyze the correlation in SFR derived both photometrically and spectroscopically, but also study the variations in the derived SFR values when employing different timescales: 100 Myr, 10 Myr, and 5 Myr. We use this to investigate the star formation history of our sample and to determine when the stellar masses of these galaxies formed. We then compare these SFR measurements with those from direct line and UV measures. The specific parameters for the Bagpipes fitting are detailed in Section 3.2.
Beyond the insights provided by the Bagpipes method, we further measure the SFR directly using H\(\beta\) line luminosity from spectrum, and UV luminosity derived from the photometry. Each technique, as elaborated in this section, calculates the SFR over distinct timescales. For instance, the Hydrogen \(\beta\) method predominantly captures recent SFRs--about 10 Myr prior to observations. In contrast, the UV luminosity method gauges the SFR over a longer window, specifically the \(\sim 100\) Myr preceding the observations (Kennicutt Jr & Evans, 2012).
For SFRs measured from the H\(\beta\) line, we employ the calibration proposed by Kennicutt Jr & Evans (2012). The approach harnesses synthetic stellar populations and SEDs to calibrate various SFR tracers, relying on a standard IMF for enhanced results over previous calibrations. Typically, the H\(\alpha\) luminosity is used for SFR calculations due to its direct relationship with recent star formation, and this relationship is expressed as:
\[\log\mathrm{SFR}\,(\mathrm{M_{\odot}\,yr^{-1}})=\log L_{\mathrm{H\alpha}}-\log C_{\mathrm{H\alpha}}, \tag{3}\]
where \(L_{\mathrm{H\alpha}}\) is the H\(\alpha\) luminosity and \(C_{\mathrm{H\alpha}}\) is the calibration constant with \(\log C_{\mathrm{H\alpha}}=41.27\). However, in our high-redshift galaxy samples, the H\(\alpha\) line is redshifted beyond the NIRSpec wavelength range. We therefore measure the H\(\beta\) line luminosity, from which we derive the H\(\alpha\) luminosity using the ratio \(L_{\mathrm{H\alpha}}/L_{\mathrm{H\beta}}=2.86\), applicable in dust-free star-forming regions (Kennicutt Jr & Evans, 2012). We later discuss how viable this assumption is and how it might influence our measurements.
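In practice this calibration reduces to a few lines; the sketch below follows the numbers quoted above, and the example luminosity is arbitrary rather than a measurement from our sample:

```python
import numpy as np

LOG_C_HALPHA = 41.27   # Kennicutt & Evans (2012) calibration constant
BALMER_RATIO = 2.86    # intrinsic L(Halpha) / L(Hbeta) for dust-free regions

def sfr_from_hbeta(L_hbeta):
    """SFR in Msun/yr from the Hbeta line luminosity (erg/s), no dust correction."""
    L_halpha = BALMER_RATIO * L_hbeta
    return 10.0 ** (np.log10(L_halpha) - LOG_C_HALPHA)

# Example: an Hbeta luminosity of 1e41 erg/s corresponds to SFR ~ 1.5 Msun/yr
print(sfr_from_hbeta(1e41))
```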
To estimate SFRs directly from the photometry, we employ the conversion from the UV luminosity directly measured \(L_{\mathrm{UV}}\) to SFR as presented in Equation 4. In this case we do correct for dust obscuration by measuring the rest-frame UV using a technique that involves utilising the UV \(\beta\) slope. We fit a power law to the rest-frame UV photometry of the galaxy to determine the proportionality constant, \(\beta\). The dust corrected SFR in solar mass per year is then computed using the equation from Madau & Dickinson (2014):
\[\mathrm{SFR_{UV}}=\kappa\cdot L_{\mathrm{UV}}\cdot 10^{0.4\,(4.43+1.99\beta)}\,, \tag{4}\]
where \(\kappa=1.15\times 10^{-28}\) M\({}_{\odot}\) yr\({}^{-1}\) erg\({}^{-1}\) s Hz, is the proportionality constant that accounts for the efficiency of star formation and the IMF (Salpeter, 1955), \(4.43+1.99\beta\) is the dust correction factor \(A_{\mathrm{UV}}\)(Meurer et al., 1999), and \(L_{\mathrm{UV}}\) is the UV luminosity of the galaxy. We use these star formation calibrations and measurements in the following subsections.
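A sketch of Equation 4 in code form is given below; clipping negative attenuations to zero anticipates the dust-free treatment described in Section 3.2.6, and the example values are arbitrary:

```python
KAPPA = 1.15e-28   # Msun/yr per erg/s/Hz (Salpeter IMF; Madau & Dickinson 2014)

def sfr_from_uv(L_uv, beta, dust_correct=True):
    """SFR in Msun/yr from the rest-frame UV luminosity L_uv (erg/s/Hz), Eq. 4."""
    A_uv = 4.43 + 1.99 * beta if dust_correct else 0.0  # Meurer et al. (1999)
    A_uv = max(A_uv, 0.0)       # negative attenuation treated as dust-free
    return KAPPA * L_uv * 10.0 ** (0.4 * A_uv)

# Example: L_uv = 1e28 erg/s/Hz with a UV slope beta = -2.2 gives A_UV ~ 0.05 mag
print(sfr_from_uv(1e28, beta=-2.2))
```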
#### 3.2.4 Photometry vs. Spectroscopy SFRs
In this subsection we investigate how well fits to spectroscopy compare with fits to the photometry for measuring star formation rates within our sample of galaxies. The reason for doing this is to determine how well we can measure the SFR in terms of internal consistency, but also if we assume that the star formation rate measured from spectroscopy is somehow more 'correct' than with photometry, how different these two measures would be. In Figure 3 we show a comparison of Bagpipes derived spectroscopic and photometric
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Property & Correlation & Gradient & Intercept & Residual 1\(\sigma\) \\
\hline
Stellar Mass [\(\log_{10}(\mathrm{M_{\odot}})\)] & \(0.62^{+0.39}_{-0.28}\) & \(0.55\pm 0.11\) & \(3.49\pm 0.90\) & 0.37 [\(\log_{10}(\mathrm{M_{\odot}})\)] \\
Mass Formed [\(\log_{10}(\mathrm{M_{\odot}})\)] & \(0.58^{+0.34}_{-0.35}\) & \(0.53\pm 0.12\) & \(3.72\pm 0.95\) & 0.41 [\(\log_{10}(\mathrm{M_{\odot}})\)] \\
Star Formation Rate [\(\mathrm{M_{\odot}\,yr^{-1}}\)] & \(0.64^{+0.15}_{-0.12}\) & \(0.67\pm 0.13\) & \(0.42\pm 0.30\) & 1.37 [\(\mathrm{M_{\odot}\,yr^{-1}}\)] \\
Dust Extinction (A\({}_{\rm V}\)) [AB mag] & \(0.61^{+0.16}_{-0.24}\) & \(0.49\pm 0.10\) & \(0.15\pm 0.04\) & 0.20 [AB mag] \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Linear regression and Pearson correlation analysis between spectroscopic and photometric results for different galaxy properties derived from Bagpipes using a 100 Myr SFR timescale. The gradient and y-intercept of the regression model are computed, and the uncertainty in the correlation coefficient is calculated using the Fisher transformation. The 1\(\sigma\) values (or scatter) for residuals between the best fit line and the data points are also shown.
Figure 2: The comparison of galaxy stellar masses derived from spectroscopic and photometric data using Bagpipes fitting, based on a log-normal star formation history with a 100 Myr SFR timescale. The best fit line for all data points has a gradient of \(0.55\pm 0.11\), and an intercept of \(3.49\pm 0.90\) log\({}_{10}(\mathrm{M_{*}/M_{\odot}})\), as shown by the dashed line. The solid line shows the 1:1 relation between the two masses. The correlation coefficient between the spectroscopic and photometric measurements is \(0.62^{+0.39}_{-0.28}\). In general, we find a better agreement between these methods of measuring stellar masses at intermediate masses. At the lowest masses the photometric method gives larger masses, whereas at the higher masses the spectroscopic measurement of stellar mass is larger.
SFR using a 100 Myr timescale. In our sample, the majority of galaxies exhibit an SFR ranging from \(\sim 0.3\) to \(\sim 3\) M\({}_{\odot}\)yr\({}^{-1}\), with a number of systems having higher SFRs, reaching up to \(\sim 9\) M\({}_{\odot}\)yr\({}^{-1}\). We find that the JADES sources with NIRSpec data typically exhibit a lower mean SFR of 1.6 M\({}_{\odot}\)yr\({}^{-1}\), compared to those from the CEERS field which have a mean SFR of 5.6 M\({}_{\odot}\)yr\({}^{-1}\). However, it is important to note that this is within the errors of these measurements. These differences underline the significance of selection biases in studying diverse high-redshift galaxies, emphasizing the need for a more comprehensive spectroscopic approach in future endeavors.
Furthermore, the correlation coefficient between these star formation measurements is \(0.64^{+0.15}_{-0.22}\), signifying a good agreement between the two methods. Notably, there is an especially strong concordance between photometrically and spectroscopically derived SFRs, for SFR values up to 2 M\({}_{\odot}\)yr\({}^{-1}\). It is only at the higher end of the star formation where we find that the photometry is higher. However, it is important to keep in mind that these differences are at about the level of the uncertainty in these values.
#### 3.2.5 Bursty Star Formation Events
In this section, we present three ways in which the bursty nature of the star formation histories of our galaxy sample is identified and verified within these high redshift galaxies. We are able to do this as we have the ability to determine the SFR accurately, knowing the correct redshift of our systems.
Firstly, we turn our attention to the comparison between SFR derived from H\(\beta\) line emission and the UV luminosity. Of our samples, 5 out of 11 JADES galaxies and 20 out of 32 CEERS galaxies exhibit an H\(\beta\) line which we can measure. The comparison for these galaxies is illustrated in Figure 4. Given that the H\(\beta\) results have not been corrected for dust, we opted for a consistent comparison by assuming a dust-free condition for the UV-derived SFR as well. As these are low mass high redshift galaxies, they are unlikely to be very dusty in any case. Consequently, the term \(10^{0.4(4.43+1.99\beta)}\) as outlined in Equation 4 is omitted from these comparisons.
It is worth noting that if dust correction is taken into account, then the effect is stronger in the rest-frame UV than in the rest-frame optical where H\(\beta\) is located. Upon analysis, 60% of the CEERS samples show a higher SFR from the H\(\beta\) line luminosity measurement compared to that from the UV luminosity, while this observation is true for all the JADES samples. The SFR derived from H\(\beta\) line luminosity can be as much as 2.4 times higher for JADES Samples and 13.5 times for CEERS samples, which may well be due to photometric selection biases in the way these galaxies are selected.
The higher SFR from the H\(\beta\) line method most likely arises from the differing timescales each method probes. The UV luminosity reflects the SFR over the previous 100 Myr, while H\(\beta\) traces the SFR over much shorter timescales of \(\sim\)10 Myr. Such findings suggest a bursty phase of star formation in these galaxies over the recent few million years (see below for further proof of this). One factor that may bias the sample towards higher SFR during the past 10 Myr is that we are only showing the H\(\beta\) SFRs for galaxies with an identifiable H\(\beta\) detection. Another issue which we have ignored in this calculation is the dust content. It might be the case that the dust extinction is high enough to attenuate the UV light more than the H\(\beta\) line flux such that it only appears to be lower. We investigate the dust in more detail in Section 3.2.6, however we give some indication for its impact here. Using the Calzetti et al. (2000) dust law we find an attenuation of A\({}_{\rm UV}=0.25\), A\({}_{\rm H\beta}=0.13\) for our galaxies. This leads to a relative increase in UV SFR over H\(\beta\) by about 10% (25% increase in UV vs. a 12% increase in H\(\beta\) flux) which is not nearly enough to create UV star formation rates that match the observed H\(\beta\). Thus, we can conclude that there is an intrinsic difference in what these two star formation rates are measuring.
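As a quick sanity check of these numbers (a sketch using the attenuation values quoted above):

```python
A_UV, A_Hbeta = 0.25, 0.13          # mag, Calzetti-law values quoted above
boost_uv = 10 ** (0.4 * A_UV)       # ~1.26: a ~25% increase in the UV flux
boost_hbeta = 10 ** (0.4 * A_Hbeta) # ~1.13: a ~12% increase in the Hbeta flux
print(boost_uv / boost_hbeta)       # ~1.1: UV SFR rises only ~10% relative to Hbeta
```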
To investigate the bursty nature of the SFH of these galaxies more thoroughly, we utilize the non-parametric 'Continuity' model presented by Leja et al. (2019). Our analyses yield consistent findings: galaxies with higher H\(\beta\)-derived SFR do indeed exhibit a notable burst in their SFH when interpreted through the Continuity model. Specifically, for a majority of these cases, the timing of these star formation bursts is identified to occur within a timeframe spanning 0.3 to 0.7 Gyr.
Another aspect that indicates a bursty SFH is from the specific star formation rate, defined as
\[\rm{sSFR}=\frac{M_{formed}(<t)/t}{M_{*}}, \tag{5}\]
where M\({}_{\rm formed}\) represents the mass formed within the past \(t\) years, and M\({}_{*}\) is the observed stellar mass of the galaxy. If a galaxy formed all its mass within the past \(t\) years, then M\({}_{\rm formed}=\) M\({}_{*}\), neglecting any stellar mass loss through stellar evolution processes, resulting in a maximum sSFR of sSFR = \((1/t)\).
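A minimal sketch of Eq. 5 and its maximum value (the example numbers are arbitrary):

```python
import numpy as np

def log_ssfr(mass_formed_recent, stellar_mass, t=1e8):
    """log10(sSFR / yr^-1) from Eq. 5, with t in years (default 100 Myr)."""
    return np.log10((mass_formed_recent / t) / stellar_mass)

# If all the stellar mass formed within the last t years, sSFR = 1/t,
# i.e. log10(sSFR) = -8 for a 100 Myr timescale and -7 for 10 Myr.
# Example: a galaxy that formed 30% of its 1e8 Msun within the last 10 Myr:
print(log_ssfr(0.3e8, 1e8, t=1e7))   # ~ -7.5
```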
We utilise Bagpipes to derive the sSFR both spectroscopically and photometrically. The majority of our sample display higher values of photometrically derived log\({}_{10}\)(sSFR) compared to the spectroscopically derived values, with the most significant discrepancy being 11% observed in both the CEERS and JADES samples. Furthermore, in Figure 5, we show log\({}_{10}\)(sSFR) for our samples derived from Bagpipes spectroscopic fitting under 10 Myr and 100 Myr SFR timescales. Most galaxies attain the maximum log\({}_{10}\)(sSFR) value of log\({}_{10}(1/t)=-8\) using a 100 Myr SFR timescale. This implies that most galaxies are consistent with forming most of their stars within the past 100 Myr. Additionally, we also find 45%\(\pm\)20% of JADES galaxies and 34%\(\pm\)11% of CEERS galaxies formed at least 30% of their total mass within the past 10 Myr. In addition, from this 10 Myr timescale model, two CEERS samples achieve a log\({}_{10}\)(sSFR) value of \(-7\), suggesting they formed their entire stellar mass within this period, while two JADES galaxies reach \(-7.2\), indicating approximately 60% of their stellar mass was formed during the past 10 Myr, both signifying periods of intense star formation. These observations underscore the bursty nature of star formation in the last few million years for these galaxies. A comparative analysis using a 5 Myr SFR timescale does not produce results significantly different in sSFR from those obtained with a 10 Myr SFR timescale, indicating a relatively stable star formation rate across these two timescales.
#### 3.2.6 Dust attenuation and Star Formation Rate
It is crucial to emphasize the dust correction factor used in our SFR measurements across different methodologies, as well as when we do and do not use it. As mentioned, we employ three methods to derive SFR measures: the measurements from the Bagpipes code, the SFR derived from UV luminosity with dust corrections, and the SFR calculated from the H\(\beta\) line luminosity. Dust attenuation effects are only considered for the SFR derived using the first two methods, assuming a Calzetti et al. (2000) dust attenuation model.
Specifically, Bagpipes fitting applies this dust law to derive \(A_{\rm V}\), representing the attenuation in the V-band. In contrast, when calculating SFRs via the UV luminosity method, we utilize the same dust model but determine \(A_{\rm UV}\) using the formula by Meurer et al. (1999): \(A_{\rm UV}=4.43+1.99\beta\). Essentially this formula allows us to determine the dust extinction in the UV by measuring the UV slope \(\beta\), which we do using methods outlined in Austin et al. (2023). It is worth noting
that the dust law from Meurer et al. (1999) is primarily tailored for \(z\sim 4\) galaxies and thus may not be directly applicable for our sample at \(z>7\). We chose to use it in the absence of a currently widely accepted dust attenuation law for high-redshift galaxies. A comprehensive discussion regarding this choice can be found in Austin et al. (in prep). One way that we can see this problem is that some of the values for the \(A_{\rm UV}\) are actually negative using this method, which is meaningless in this context.
Considering the potential unsuitability of the Meurer et al. (1999) dust attenuation relation for our high-redshift samples, it is essential to gauge its influence. We address this by comparing the dust-corrected SFR values derived from UV luminosity with those derived from Bagpipes. As the UV luminosity method computes the SFR over an approximate 100 Myr timescale, assuming a lognormal SFH, we adhere to the same parameters in our Bagpipes fits. In addition, while both methodologies employ the Calzetti et al. (2000) dust model, their applications differ. The UV luminosity method calculates the attenuation \(A_{\rm UV}\) in the UV band, whereas Bagpipes determines the attenuation \(A_{\rm V}\) in the V band. To ensure a consistent comparison we convert the UV luminosity's dust correction factor from \(A_{\rm UV}\) to \(A_{\rm V}\), accounting for the discrepancies in dust attenuation between the two methods. This conversion leverages the relationship \(A_{\rm UV}/A_{\rm V}=S\), with \(\log_{10}S=0.40\) (Salim & Narayanan, 2020).
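The conversion itself is a one-liner (a sketch; the example \(\beta\) is arbitrary):

```python
LOG10_S = 0.40   # log10(A_UV / A_V), Salim & Narayanan (2020)

def a_v_from_uv_slope(beta):
    """A_V implied by the Meurer et al. (1999) A_UV; negative values set to zero."""
    a_uv = 4.43 + 1.99 * beta
    return max(a_uv / 10.0 ** LOG10_S, 0.0)

print(a_v_from_uv_slope(-2.0))   # A_UV = 0.45 mag  ->  A_V ~ 0.18 mag
```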
After these SFR values measured using both methods are aligned in terms of the same dust correction factor (\(A_{\rm V}\)), we can assess the potential discrepancies between the two. The comparison of SFR derived from these two methods is shown in Figure 6. Yellow points in this figure indicate galaxies with negative \(A_{\rm UV}\) and, consequently, \(A_{\rm V}\) values. For our analyses, we treat these negative values as zero - implying that these galaxies are 'dust free'. We will further elaborate on the rationale and implications of this decision in the subsequent paragraph. From this figure, we find the correlation coefficient is \(\sim 0.7\). This discrepancy, resulting from the application of the dust attenuation relation in the UV luminosity method, underscores the necessity for a refined scaling relation, which will likely bring the correlation coefficient closer to unity, assuming that this dust measurement method is the culprit.
Building upon the above discussion, we detail the values of \(A_{\rm UV}\) and \(A_{\rm V}\) utilised in our study to appreciate the scale and implications of our dust corrections. By employing the dust scaling relation from Meurer et al. (1999) to determine \(A_{\rm UV}\) and hence \(A_{\rm V}\), we find that 55% of JADES samples and 73% of CEERS samples exhibit negative \(A_{\rm V}\) values. These negative values indicate both an absence of dust corrections and issues with calibration; consequently, they are reset to zero. This is due to the very blue nature of the SEDs of these high-redshift galaxies. These are bluer than the systems that were used to calibrate the Meurer relation. The comparison between \(A_{\rm V}\) derived from UV \(\beta\) slope and Bagpipes is shown in Figure 7. Among the galaxies that have positive \(A_{\rm V}\) values derived from the UV \(\beta\) slope, most of the JADES and CEERS sample exhibit \(A_{\rm V}\) values below 0.5, with a median value of 0.10. Taking into account that 68% of total samples exhibit negative \(A_{\rm V}\), this suggests that even for the cases with accurate dust attenuation correction, the magnitude of the necessary correction is typically modest. This conclusion is also shown in Figure 6, where there are no discernible observations showing that the dust-corrected UV SFR deviates from the uncorrected samples.
Our findings underscore the importance of a refined dust scaling relation for high redshift samples. New UV dust scaling relations are
Figure 4: Comparison of SFR derived from H\(\beta\) lines and UV luminosity for 25 galaxies exhibiting H\(\beta\) lines, under a dust-free assumption. 68% of the galaxies have higher H\(\beta\)-derived SFR values than those obtained using the UV luminosity method, by factors of up to \(2.4-13.5\).
Figure 5: Comparison of \(\log_{10}\)(sSFR) for our samples derived from Bagpipes spectroscopic fitting using 10 Myr and 100 Myr SFR timescales.
being developed, and are needed to make progress on this front. For instance, a comprehensive cosmological hydrodynamical simulation of dust attenuation is presented in Wilkins et al. (2018). Moreover, a promising technique to recover the dust content of galaxies using machine learning methods is being explored (Fu et al., 2023, in prep). Concurrently, a new empirical relation is also under construction (Austin et al., 2023, in prep).
#### 3.2.7 Relations of SFR, Mass and Redshift
In this section we aim to determine the relationship between SFR, masses, and redshifts for our samples. To illustrate our findings, we have created several figures.
Figure 8 presents the plot of SFR versus stellar masses, derived from Bagpipes using a short 10 Myr SFR timescale. This is compared with the results from the FLARES simulation (Wilkins et al., 2022) and the main sequence relations at \(z\sim 2\)(Iyer et al., 2018) and \(z\sim 6\)(Santini et al., 2017). Our findings are in close alignment with these three established studies. We determine a best-fit line, represented by \(y=0.61x-4.49\), and find that the \(1\sigma\) scatter of the residuals is 0.43, indicating low scatter. Although the gradient of our best-fit line is less steep than those found in the aforementioned studies, it should be noted that this discrepancy may be attributable to bias in the selection of our sample.
We present two sets of scatter plots that illustrate the relationships among SFR, stellar masses, and redshifts, as shown in Figure 9 and Figure 10. Each set contains two sub-plots: in the left sub-plot, the SFR is calculated using H\(\beta\) line emission and UV luminosity, while the stellar mass is derived using Bagpipes. In the right sub-plot, both the SFR and stellar mass are determined via Bagpipes. We subsequently compute the ratio SFR\({}_{\rm H\beta}\)/SFR\({}_{\rm UV}\) for the left plot, and SFR\({}_{\rm 10Myr}\)/SFR\({}_{\rm 100Myr}\) for the right plot, for further analysis. Dust corrections are only considered in the Bagpipes case.
From the right panel (derived SFR using Bagpipes) of Figure 9, it is evident that more massive galaxies generally exhibit comparable SFR values derived from both 10 Myr and 100 Myr timescales, consistent across all redshifts in our samples. This demonstrates the absence of a significant recent burst in SFR for high-redshift galaxies that are more massive (\(\log_{10}({\rm M_{*}}/{\rm M_{\odot}})>8.6\)). However, this observation is not mirrored in the left panel, which might be largely attributable to the absence of a dust attenuation correction for the SFR derived from UV and H\(\beta\) luminosity. If an accurate dust scaling relation for UV luminosity is developed, then we expect the left result to be similar to the right.
From Figure 10, we find that the SFR\({}_{\rm 10\,Myr}\)/SFR\({}_{\rm 100\,Myr}\) ratio is higher on average for galaxies with a lower SFR as determined by the 100 Myr timescale. The results from this figure's left and right images support this observation. This underscores the recent bursty star-formation patterns, and such bursty star formation histories are particularly pronounced in younger and less massive galaxies, aligning with the findings of Looser et al. (2023). Furthermore, we do not observe any significant correlations between redshifts and either stellar mass or SFRs for our sample galaxies in the range \(z_{\rm spec}=7-13.2\). This suggests that galaxies within this high-redshift interval may exhibit a diverse range of behaviors.
### Emission Line Characteristics
We investigate the emission line attributes in the four distinct JADES galaxies that prominently display strong H\(\beta\)\(\lambda\)4861, [O iii] \(\lambda\)4959, and [O iii] \(\lambda\)5007 emission lines, using the specutils package (Astropy-Specutils Development Team, 2019). These lines, within the NIRSpec wavelength range coverage, exhibit the strongest S/N ratio compared to other potential lines. Our choice of these galaxies is informed by two primary factors. Firstly, these JADES galaxies have longer NIRSpec exposure times than the CEERS galaxies, leading to a superior S/N ratio. Secondly, of the 13 JADES galaxies with \(z_{\rm spec}>7\), two systems are without NIRCam images, and 4 (at \(z=10.3\) - 13.2) are identified as metal-poor galaxies (Curtis-Lake et al., 2022). Among the remaining 7, only 4 of these galaxies distinctly exhibit
Figure 6: Comparison of the SFR determined from UV luminosity to those derived via Bagpipes spectroscopic fitting, using log-normal SFH and a 100 Myr SFR timescale. We convert the UV luminosity dust attenuation from \(A_{\rm UV}\) to \(A_{\rm V}\) to ensure consistency in the dust attenuation factor with Bagpipes. Yellow points represent galaxies with negative \(A_{\rm UV}\), which is physically meaningless; these are thus set to 0 (dust-free). Overall, the correlation coefficient is \(\sim 0.7\), although an ideal correlation would yield a value of 1. This discrepancy stems from the erroneous \(A_{\rm UV}\) value we calculated, using the scaling relation from Meurer et al. (1999), which is only applicable at lower redshifts (\(z\sim 4\)).
Figure 7: Comparison of \(A_{\rm V}\) values obtained from Meurer et al. (1999) and those determined through spectroscopic fitting using Bagpipes. Points marked in yellow represent negative \(A_{\rm V}\) values as per Meurer et al. (1999), which we reset to zero (dust free). These instances constitute 55% of the JADES samples and 73% of the CEERS samples. The best-fit line for data points with positive \(A_{\rm V}\) values yields a gradient \(m=0.24\pm 0.02\). The \(1\sigma\) scatter of the residuals, defined as the differences \(x-y\), is measured to be 0.79, indicating high scatter.
the aforementioned three emission lines. The associated spectra for these galaxies are laid out in Appendix A, and Table 3 shows the line flux and equivalent width (EW) of these three lines.
To compare the spectra of these systems with their photometry we attempt to estimate equivalent widths from the photometry. This is a technique to learn about galaxy emission lines without spectra, something which has been done using Spitzer photometry to determine properties of high redshift galaxies (e.g., Smit et al., 2016). To test this idea using JWST data we compare the sum of the equivalent widths for these three lines as derived spectroscopically with their photometric counterparts. The computation of photometric equivalent widths hinges on the differential broad-band magnitudes, specifically between the bands featuring emission lines and those devoid of them. The aggregate equivalent width inherent within the band harboring emission lines can be mathematically expressed as:
\[\Delta m=-2.5\log\left(1+\frac{\mathrm{EW_{Sum}}(1+z)}{\mathrm{Bandwidth}} \right), \tag{6}\]
where \(\Delta m\) is the magnitude difference between the filter band containing the emission lines and the continuum band, 'Bandwidth' represents the width of the band that includes the emission lines, and \(\mathrm{EW_{Sum}}\) represents the cumulative equivalent width of all emission lines within that filter band. A detailed derivation of this equation can be found in Duncan et al. (2023) and Marmol-Queralto et al. (2016). This formula succinctly captures the incremental contribution of the emission line to the overall flux of the band. Among our four JADES galaxies, two display emission in the F444W band, using the F410M band as continuum. The other two show emission in the F410M band, with the F356W band serving as the continuum.
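Inverting Equation 6 gives the rest-frame summed EW directly from the broad-band excess; a sketch follows, in which the example magnitudes and bandwidth are purely illustrative, not measurements from our sample:

```python
def ew_sum_from_excess(m_line_band, m_continuum_band, bandwidth, z):
    """Rest-frame summed EW (same units as bandwidth) from the filter excess, Eq. 6.

    m_line_band: AB magnitude in the filter containing the emission lines
    m_continuum_band: AB magnitude in the adjacent line-free (continuum) filter
    bandwidth: width of the line-containing filter (e.g. Angstrom)
    """
    delta_m = m_line_band - m_continuum_band       # negative for a line excess
    return bandwidth * (10.0 ** (-0.4 * delta_m) - 1.0) / (1.0 + z)

# Example: a 0.3 mag excess in a ~4300 A wide medium band at z = 7.5
print(ew_sum_from_excess(26.2, 26.5, 4300.0, 7.5))   # ~160 A rest-frame
```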
Figure 11 presents a comparative analysis of the sum of the EW of H\(\beta\)\(\lambda\)4861, [O iii] \(\lambda\)4959, and [O iii] \(\lambda\)5007 lines as determined through both photometric and spectroscopic techniques. To ensure a comprehensive study, we incorporate all JADES galaxies at \(z\approx 3\) that display these three emission lines in the F200W filter. Additionally, six JADES galaxies at \(z\approx 6\) with these lines detected in the F335M filter are also included. The gradient of the line of best fit for all samples is \(0.49\pm 0.11\), indicating a moderate agreement between the results obtained from both spectroscopic and photometric approaches. Generally, the photometric method yields sums of EW that are about \(30\%\pm 20\%\) lower compared to those derived spectroscopically. We attribute this discrepancy to a potential overestimation of the photometric continuum, leading to diminished EW measurements. While the spectroscopic spectra are uncontaminated, there can be sources of contamination in the photometric data. One possible cause is the assumption that the continuum in the spectrum is flat within the filter band's wavelength range; however, spectra can display various shapes across these wavelengths. In addition, the presence of noise in the spectra can directly influence the size of the continuum, thereby affecting the spectroscopic EW values.
Among the four \(z>7\) JADES galaxies, those with the presence of the three specific emission lines in the F410M medium band (indicated by blue stars in Figure 11) exhibit more precise photometrically-derived EW values in comparison to galaxies with emission lines in the wide band (F444W). However, this conclusion does not hold as strongly for the \(z\approx 3\) samples, which have emissions in the F200W wide band. We believe that the primary underlying factor is still the detection of the continuum. From the spectra of the \(z\approx 3\) samples, the continuum is clearly observable and detectable. In contrast, for the four high-redshift samples, the continuum is hardly discernible, as evidenced in Appendix A. As a result, when deriving the spectroscopic EW, the continuum introduces uncertainty, leading to deviations from its photometric counterparts. Given the above considerations, some caution should be used when measuring and interpreting EW measurements from broad-band photometry, especially for galaxies with high EW emission lines.
Finally, we compare our results with Withers et al. (2023), which studies the sum of the EWs of the same emission lines (H\(\beta\) and [O iii]) for galaxies at redshifts between 1.7 and 6.7, and find a good agreement with our samples within this redshift range.
### Morphological and Photometric Size Effects from Line Emission
In our study of the line-emitting sample, we note that the photometric fluxes in line-emitting bands are sometimes stronger than neighboring bands. This brightness can likely be attributed to line emission, as discussed in the previous section. Our primary inquiry in this section is to discern the impact of this line emission on the morphological attributes of galaxies. This is achieved by subtracting and subsequently analyzing the residuals from bands that exhibit line emissions in contrast to those that do not.
A particular focus of our examination is the emission lines H\(\beta\)\(\lambda\)4861, [O iii] \(\lambda\)4959, and [O iii] \(\lambda\)5007, evident in four high-redshift JADES galaxies as discussed in Section 3.3. Of these galaxies, two display the lines in the F444W band (NIRSpec ID: 8013, 21842), while the others do so in the F410M band (NIRSpec ID: 20961, 10013682). To delineate further, the F410M and F356W bands act as the continuum for these sets respectively. We use these as the continuum as they are the bands closest to those with emission lines, without themselves having emission lines present. Thus, our dataset encompasses two galaxy sets, each offering data from a pair of filter bands - one with emission lines present and its counterpart containing only the continuum. These can be subtracted from each other to show the location of the line emission spatially.
Our methodology of subtraction is very similar to that used in Hatch et al. (2013), whereby essentially the line emission structure is found by subtracting a normalised image which contains no lines from the image in the filter where line emission exists. The idea is that the residuals show the distribution of the gas which produces the line emission. To do this we first carry out a background subtraction for each image: we mask the target galaxy and any other galaxies in each image, derive the median value for the background level, and subtract it from each image. This is followed by the normalization of every galaxy image set; this is a critical step, as we have to ensure that all the continuum light is removed from the band with the line emission to reveal the underlying emission.
Figure 8: Plot of SFR from Bagpipes with a 10 Myr timescale versus stellar masses. Results from the FLARES simulation with a 10 Myr timescale and main sequence relations at \(z\sim 2\) (Iyer et al., 2018) and \(z\sim 6\) (Santini et al., 2017) are also shown. The best-fit line has a gradient of \(0.61\pm 0.01\). Despite the slightly lower gradient in our results, close agreement with these established studies is observed.
To do this we use an aperture of consistent size across the frames (154 pixels, roughly the size of all our galaxies) for each of the galaxies within these images, and we compute the total flux within this aperture. The image with the highest flux summation is used for normalization, from which the normalization constants for the other images are determined; these are obtained by dividing that maximum flux sum by each image's individual flux sum. These constants are then multiplied with the background-subtracted images, resulting in images that are both normalized and devoid of background.
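The normalization step described above can be summarised as a short sketch; the aperture radius and the array names here are illustrative assumptions, and the cutouts are assumed to be background-subtracted arrays of equal size.

```python
import numpy as np

def aperture_flux(img, radius):
    """Total flux inside a central circular aperture of the given pixel radius."""
    ny, nx = img.shape
    yy, xx = np.mgrid[:ny, :nx]
    mask = (yy - ny / 2.0) ** 2 + (xx - nx / 2.0) ** 2 <= radius ** 2
    return img[mask].sum()

def normalisation_constants(images, radius):
    """Scale every background-subtracted cutout to the one with the largest
    aperture flux: constant_i = max(flux sums) / flux_sum_i."""
    sums = np.array([aperture_flux(im, radius) for im in images])
    return sums.max() / sums

# Usage sketch (placeholder names): cutouts is a list of equal-sized,
# background-subtracted arrays centred on the galaxy.
# consts = normalisation_constants(cutouts, radius=12)
# normalised = [c * im for c, im in zip(consts, cutouts)]
```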
We used this procedure on individual galaxies; however, when this was carried out, no single galaxy showed line emission that could be detected. Therefore, we concluded that stacking these images was potentially a way to retrieve a signal. To do this, for every galaxy set a weighted stack of these images - both emission and continuum - is created. This involves calculating the standard deviation of the background noise for each image and subsequently assigning weights to each, based on the inverse of the noise standard deviation. The final stacked image is constructed by computing the weighted flux sum and then dividing this by the total weight (the sum of the weights of all images). This procedure is executed separately for the emission and continuum images of every galaxy set.
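A minimal sketch of the weighted stack described above, assuming the background noise standard deviation of each normalised cutout has already been estimated; the weights follow the inverse of the noise standard deviation, as stated in the text.

```python
import numpy as np

def weighted_stack(images, sky_sigmas):
    """Weighted stack with w_i = 1 / sigma_i (inverse of the background noise
    standard deviation): stack = sum_i(w_i * image_i) / sum_i(w_i)."""
    weights = 1.0 / np.asarray(sky_sigmas, dtype=float)
    stacked = np.zeros_like(images[0], dtype=float)
    for w, im in zip(weights, images):
        stacked += w * im
    return stacked / weights.sum()

# Usage sketch (placeholder names): one stack per band and per galaxy set.
# stack_emission  = weighted_stack(norm_emission_cutouts, sigmas_emission)
# stack_continuum = weighted_stack(norm_continuum_cutouts, sigmas_continuum)
```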
Figure 10: Plots akin to Figure 9, but with the y-axis representing the SFR ratio and the x-axis displaying the average SFR over a 100 Myr timescale, while the color denotes redshifts. The SFR ratio is more noticeable for galaxies with lower average SFR during the past 100 Myr.
Figure 9: Scatter plots depicting the relationship between stellar masses and the redshift, with color coding representing the SFR ratio (10 Myr / 100 Myr) values. Stellar masses are derived from Bagpipes in both plots. The left plot showcases the SFR calculated using the H\(\beta\) line emission and UV luminosity methods, while the right plot displays the SFR as determined by Bagpipes over 10 Myr and 100 Myr timescales. Only the Bagpipes-derived SFR adopts a dust correction factor. The left figure has fewer data points because not all galaxies exhibit an H\(\beta\) emission line (26/43). As can be seen from the right plot, galaxies with higher masses tend to have more comparable SFR derived between 10 Myr and 100 Myr timescales.
To ensure the consistency of the PSF with the F444W band, we employ a two-step process involving the convolution of emission and continuum images with their respective PSF kernels. The PSF models for our bands are generated using WebbPSF (Perrin et al., 2012, 2014). The kernels for this convolution are derived using pypher (Boucaud et al., 2016). These kernels are designed such that when convolved with the PSFs of their specific bands (either emission or continuum), the resultant PSFs match that of the F444W band. Due to the emission and continuum residing in different bands, two distinct kernels were crafted and applied for the convolution. After this, the continuum images are subtracted from the emission ones, effectively revealing the location of the material producing the line emission. This assumes that the underlying continuum light in the emission line band is similarly distributed at similar wavelengths. We test this by measuring the flux (see below) and find good agreement, showing that we are indeed retrieving the line emission. Notably, this emission is accentuated in the galaxy set associated with the F410M band as the emission band, as depicted in Figure 12. To quantify the flux of the line emission, eight equal-area apertures are positioned around the emission domain, and the flux sum within these is computed. Through the standard deviation of these sums, we deduce that the core line emission flux sum is elevated at \(\sim 11\sigma\) above the background threshold.
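The PSF-matching and subtraction can be sketched as below. The kernels are assumed to have been produced beforehand (e.g. with pypher), and the choice of astropy's convolve_fft and the aperture-based significance estimate are illustrative implementations rather than the exact routines used here.

```python
import numpy as np
from astropy.convolution import convolve_fft

def match_and_subtract(stack_emission, stack_continuum, kernel_em, kernel_cont):
    """Bring both stacks to a common (F444W-like) PSF with their pre-computed
    kernels, then subtract the continuum to leave the line-emitting gas."""
    em_matched = convolve_fft(stack_emission, kernel_em, normalize_kernel=True)
    cont_matched = convolve_fft(stack_continuum, kernel_cont, normalize_kernel=True)
    return em_matched - cont_matched

def detection_significance(residual, core_mask, background_masks):
    """Compare the flux in the central aperture with the scatter of the flux
    sums in equal-area background apertures (eight are used in the text)."""
    core = residual[core_mask].sum()
    bkg = np.array([residual[m].sum() for m in background_masks])
    return (core - bkg.mean()) / bkg.std()
```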
We use these normalization constants to scale the photometric fluxes we measure. Upon analyzing the photometric line flux of this region, as revealed in this image, we obtain a flux measurement of \((203.4\pm 36)\times 10^{-20}\) erg/s/cm\({}^{2}\). This closely aligns with the direct line flux measurements (the sum of the lines in the galaxies stacked), which is found to be \((247.01\pm 12.86)\times 10^{-20}\) erg/s/cm\({}^{2}\), as reported by the JADES team for the same lines in the same galaxies (Bunker et al., 2023). This is a strong indication that we are indeed seeing the spatial extent of the line emission for these systems, and not as a result of a colour gradient or stellar continuum excess at the emission line band wavelength.
Furthermore, to measure the structure of this line emitting gas we employed the GALFIT software (Peng et al., 2002; Peng et al., 2010) for a detailed morphological analysis. The radii and Sersic indices of the two galaxies (NIRSpec ID: 20961, 10013682) across different filter bands are presented in Table 4. The photometric band with the stacked line emission has a fitted radius of \(0.61\pm 0.02\) kpc and a Sersic index of \(n=0.27\pm 0.09\). These values align with the average dimensions of the corresponding galaxies in their individual emission bands. Moreover, as emphasized in Table 4, the size of the galaxy gaseous region is slightly larger than stellar contributions, but the errors on these measurements are quite large. Therefore, we can only conclude with this information that the sizes of the emission line regions are statistically similar to the continuum size. However, the Sersic index for the line emission image is much lower than for the galaxy continuum images that go into the stack, showing that it is perhaps less concentrated (diffuse) than the stellar light itself.
Lastly, we measure the sizes of the four JADES galaxies with emission lines that overlap in wavelength with the NIRCam filters using GALFIT. After visually inspecting the sizes in these bands, we discard any data exhibiting notably high uncertainties or large \(\chi^{2}_{\rm reduced}\) values. The final results are found in Figure 13. Notably, we identified a consistent pattern, mirroring findings from the stacked data: bands exhibiting line emission consistently display a slightly larger size relative to those of the continuum bands, with the exception of NIRSpec ID: 10013682. It is not clear why in that particular case the sizes are not as large. We do note that in this galaxy, however, we find the weakest emission lines amongst these four systems, which may be the reason.
## 4 Discussion
Our results show that photometric quantities are fairly good at representing the properties of galaxies that can be derived through spectroscopy. This is under the assumption, however, that the quantities we derive from spectroscopy are standard 'correct' values. Whilst this is true for the spectroscopic redshift, which is very unlikely to be ambiguous or wrong, this is not necessarily the case for star formation and stellar mass, which we discuss below.
Figure 11: Comparison of the sum of EWs calculated using photometric and spectroscopic methods. The sum represents the combined values of H\(\beta\)\(\lambda\)4861, [O iii] \(\lambda\)4959, and [O iii] \(\lambda\)5007. The red and blue colors denote the emissions from these three lines in wide and medium filter bands, respectively. Galaxies at different redshifts are labelled with circles, diamonds, and stars for \(z_{\rm spec}\sim 3,6,\ \rm and\ >7\), respectively.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline NIRSpec ID & \(\mathcal{Z}_{\rm spec}\) & H\(\beta\)\(\lambda\)4861 & [O iii] \(\lambda\)4959 & [O iii] \(\lambda\)5007 & H\(\beta\) & [O iii] \(\lambda\)4959 & [O iii] \(\lambda\)5007 \\ & & Line Flux & Line Flux & Line Flux & EW & EW & EW \\ & & (\(10^{-20}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & (\(10^{-20}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & (\(10^{-20}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & (Å) & (Å) & (Å) \\ \hline
8013 & 8.473 & \(20.59\pm 2.38\) & \(34.53\pm 2.83\) & \(93.54\pm 6.80\) & \(160.3\pm 72.3\) & \(341.9\pm 67.4\) & \(1102.6\pm 66.3\) \\
21842 & 7.98 & \(35.40\pm 3.13\) & \(64.78\pm 3.54\) & \(184.81\pm 3.76\) & \(278.1\pm 76.1\) & \(620.8\pm 66.7\) & \(1950.4\pm 67.4\) \\
20961 & 7.045 & \(46.91\pm 8.34\) & \(41.97\pm 6.57\) & \(105.04\pm 5.40\) & \(18.3\pm 133.1\) & \(12.2\pm 92.5\) & \(315.8\pm 83.5\) \\
10013682 & 7.275 & \(10.10\pm 2.34\) & \(25.06\pm 3.11\) & \(61.44\pm 3.48\) & \(81.4\pm 57.4\) & \(351.8\pm 71.4\) & \(1039.8\pm 70.3\) \\ \hline \end{tabular}
\end{table}
Table 3: NIRSpec Emission Line Measurements for Four JADES Galaxies: Fluxes and equivalent widths (EWs) for H\(\beta\)\(\lambda\)4861, [O iii] \(\lambda\)4959, and [O iii] \(\lambda\)5007 are detailed. Intriguingly, for each galaxy, the ratio of line fluxes does not align with the ratio of their corresponding EWs. This discrepancy may arise from the continuum. The continuum surrounding these emission lines for the four galaxies is scarcely detectable, hence influencing the derived values.
Even the measurement of line fluxes for SFR values can be incorrect, despite the common lore that these values are better than others. It is especially not clear whether stellar masses and star formation rates are better measured spectroscopically than with photometry. Under the same assumptions about the underlying fitting process, that is the same code and the same star formation history models, we find that galaxy properties derived from photometry and from spectroscopy agree to within about 60% for \(z>7\) galaxies. This is often below the typical random uncertainty limits for these quantities from any measurements we can do now.
We have also shown in this paper that our methods for deriving photometric redshifts using the EPOCHS methods (Adams et al., 2023) reveal a good agreement with spectroscopic redshift measurements. Obtaining reliable photometric samples is crucial for subsequent spectroscopic redshift follow-up. Given that spectroscopic redshifts are resource-intensive and expensive, we cannot anticipate every galaxy to undergo a spectroscopic analysis due to the associated costs. Consequently, the reliance on photometric redshifts remains paramount for studying the broader galaxy population for the foreseeable future. This dependence is underscored by the fact that these photometric redshifts play a fundamental role in our analyses to decipher evolutionary patterns across various fields. This includes datasets like the PEARLS data (Windhorst et al., 2023) and the recent public releases from JWST. Thus tests such as this one are critical for determining the quality of the photometric redshifts as well as determining what fraction of high redshift galaxies at \(z>7\) would even be included in samples of distant galaxies with photometric redshifts. One caveat to all of this, which we showed in this paper, is that the spectroscopic samples from JADES and CEERS are quite different in their underlying properties and these certainly are not representative of the distant galaxy population. More full and complete redshift surveys are needed at these redshifts to determine absolutely how well photometric and selection methods work.
Beyond this we are finding that the gas properties, as measured through emission lines, of these earliest galaxies can be probed by comparing spectroscopy and photometry. This involves extracting the equivalent widths of lines that are present within the photometric bands. This is the method of finding fluxes or equivalent widths by using the excess in a filter over a fitted continuum. We find that this can be done; however, in some instances, the equivalent widths derived from photometry are about 30% \(\pm\) 20% smaller than those measured with spectroscopy. Our conclusion from this is that measurements of emission line properties from fluxes within filters, made without spectroscopy, should be carried out with care.
We also show that new approaches towards understanding galaxy structure in line emission at \(z>7\) can be carried out by subtracting filters with emission lines from those without emission lines to view the entire line emitting structure. We carry this out on a limited sample here, showing that the structure of the gas is slightly diffuse within galaxies. This is an indication that this gas is perhaps not as concentrated as the stars, and gives further evidence for an outside-in formation in these galaxies, assuming that the line emission is produced from star formation events, which from line ratios of these galaxies appears to be the case (Rinaldi et al., 2023; Sun et al., 2023).
## 5 Conclusions
In this paper we investigate galaxies that have spectroscopy taken with NIRSpec on JWST and are confirmed to be at \(z>7\). Our primary sample is those galaxies that have NIRSpec data taken as part of the JADES GTO and the CEERS ERS data sets. Our primary goal is to use this spectroscopy and imaging to determine how well photometrically derived quantities, using methods we have developed, compare with those based on the possibly more reliable spectroscopic measurements. Our findings include:
I. We find that there is an excellent agreement in the comparison of photometric redshifts to spectroscopic redshifts using the EAZY code.
\begin{table}
\begin{tabular}{c c c c} \hline Galaxy & Band & Radius (kpc) & Sérsic Index \\ \hline
20961 & Emission & \(0.48\pm 0.01\) & \(0.59\pm 0.10\) \\
20961 & Continuum & \(0.41\pm 0.01\) & \(0.31\pm 0.11\) \\
10013682 & Emission & \(0.73\pm 0.64\) & \(0.05\pm 0.29\) \\
10013682 & Continuum & \(1.87\pm 0.25\) & \(1.03\pm 0.50\) \\ Stacked Emission & Emission & \(0.66\pm 1.14\) & \(0.03\pm 0.2\) \\ Stacked Continuum & Continuum & \(0.49\pm 1.87\) & \(0.05\pm 0.72\) \\ Stacked Residual & Line Emission & \(0.61\pm 0.02\) & \(0.27\pm 0.09\) \\ \hline \end{tabular}
\end{table}
Table 4: Morphological parameters for two JADES galaxies and the attributes of their stacked images are detailed. The stacked residual is calculated by subtracting the Stacked Continuum from the Stacked Emission, highlighting the contribution from gas emission. The uncertainties associated with the radius and Sérsic Index derived from GALFIT are purely statistical, and do not represent physical errors.
Figure 12: Line emission image obtained by subtracting the stacked continuum-only images from the stacked emission images for the subset of galaxies exhibiting emission lines in the F410M band (NIRSpec ID: 20961, 10013682). A pronounced line emission detection, registering \(11.08\sigma\) above the background, is clearly visible, with a possible distinct shape.
Only two galaxies are classed as outliers within the full sample of 43 galaxies. We also discuss in this paper which galaxies in the spectroscopic sample would not be selected using normal procedures for finding high-z galaxies, depending on their properties.
II. We find a correlation coefficient \(r\sim 0.60\) between the stellar masses derived both photometrically and spectroscopically, and a similar correlation for the SFR, using exactly the same Bagpipes setup to measure both. The moderate agreement between results obtained from these two methods underscores the accuracy of the photometric method, given the assumption that spectroscopically derived values are correct.
III. By comparing the star formation rate measurements for our galaxies using the H\(\beta\) line and UV luminosity, we find that there is a 'mismatch' in the spectroscopic properties of the galaxies compared to those derived through photometry. In nearly all cases we find a systematically higher star formation rate (with ratios ranging from 2.4 to 13.5) as derived through the spectroscopic line fluxes than we get from the photometry itself. This is an indication that the star formation rate is increasing with time, as the H\(\beta\) line measures more recent star formation.
IV. Furthermore, we find that using broad-band filters to measure emission line equivalent widths is possible, but can lead to high uncertainties and possible underestimates by 30% \(\pm\) 20%. Thus, any measurements of line fluxes or equivalent widths using these filter sets should be done with some caution.
V. We also use a new method to find the spatial distribution of the line emission by subtracting NIRCam images in filters with and without emission lines present. Using this method we find that there are no detections of line emission in the individual subtracted images of these galaxies. However, a stacked version of this method with several galaxies yields a significant detection, from which we show that the line emission has a spatial distribution similar to the continuum light.
VI. We measure the morphological and structural properties (size and Sersic indices) of this sample of galaxies as a function of wavelength in the broad-band and medium-band filters. We find that in three out of four cases the sizes of these galaxies are slightly larger in the bands that contain the emission lines compared to neighboring bands which are emission line free. This gives some indication that perhaps the line emission is slightly more extended or less concentrated than the older stellar population. However, when we subtract off the continuum from the bands with emission lines we find that statistically the sizes of the emission region are similar to the size of the continuum light.
Figure 13: Size comparisons of four JADES galaxies with prominent emission lines. To the right of each individual galaxy plot, the average representative radius error for each galaxy is displayed, while each point on the plots indicates the radius that minimises the \(\chi^{2}_{\rm reduced}\) value. We discard any data exhibiting notably high uncertainties or large \(\chi^{2}_{\rm reduced}\) values. Typically, the band with the emission line shows a larger radius compared to other filter bands. This implies an extended gas emission region around these galaxies that extends beyond their star-forming regions. The errors are statistically derived from GALFIT and do not necessarily represent physical uncertainties, and are lower limits.
Overall, we have shown in this paper that the use of photometry to measure galaxy properties is a reliable method of measuring photometric redshifts, stellar masses (or mass to light ratios) and star formation rates. There are slight differences with spectroscopically derived properties, and these should be taken into account when trying to calibrate an absolute scale for the star formation and stellar mass histories of galaxies derived from photometry. In the future, it is clear that more general spectroscopy is needed for early galaxies so that tests like these can be done over a broader range of intrinsic properties.
## Data Availability
Some of the data underlying this article is made available by Adams et al. (2023c), and the DAWN JWST Archive (DJA). The remainder of the data set will be released together with Conselice et al. (2023, in prep). The catalogues of the sample discussed herein may be acquired by contacting the corresponding author.
## Acknowledgement
We thank Elizabeth Stanway for suggestions and thoughts on comparing spectral energy distribution models. We acknowledge support from the ERC Advanced Investigator Grant EPOCHS (788113), as well as a studentship from STFC. LF acknowledges financial support from Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brazil (CAPES) in the form of a PhD studentship. DL acknowledges support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930). CCL acknowledges support from the Royal Society under grant RGF/EA/181016. CT acknowledges funding from the Science and Technology Facilities Council (STFC). This work is based on observations made with the NASA/ESA _Hubble Space Telescope_ (HST) and NASA/ESA/CSA _James Webb Space Telescope_ (JWST) obtained from the Mikulski Archive for Space Telescopes (MAST) at the _Space Telescope Science Institute_ (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST, and NAS 5-26555 for HST. Some of the data products presented herein were retrieved from the Dawn JWST Archive (DJA). DJA is an initiative of the Cosmic Dawn Center, which is funded by the Danish National Research Foundation under grant No. 140. This research made use of the following Python libraries: Numpy (Harris et al., 2020); Scipy (Virtanen et al., 2020); Matplotlib (Hunter, 2007); Astropy (Astropy Collaboration et al., 2013, 2018, 2022); EAZY-PY (Brammer et al., 2008); LePhare (Arnouts et al., 1999; Ilbert et al., 2006); Bagpipes (Carnall et al., 2018); mpi4py (Dalcin & Fang, 2021); specutils (Astropy-Specutils Development Team, 2019); Pickle (Van Rossum, 2020).
|
2306.17435 | Thin-Shell Gravastar Model in $f(Q,T)$ Gravity | In the last few decades, gravastars have been proposed as an alternative to
black holes. The stability of the gravastar has been studied in many modified
theories of gravity along with Einstein's GR. The $f(Q,T)$ gravity, a
successfully modified theory of gravity for describing the current accelerated
expansion of the Universe, has been used in this article to study gravastar in
different aspects. According to Mazur and Mottola (Proc. Natl. Acad. Sci 101,
9545 (2004)), it has three regions with three different equations of state.
Here in this work, we have studied the interior of the gravastar by considering
the $p=-\rho$ EoS to describe the dark sector for the interior region. The next
region is a thin shell of ultrarelativistic stiff fluid, in which we have
investigated several physical properties, viz., the proper length, energy,
entropy, surface energy density, etc. In addition, we have studied the surface
redshift and speed of sound to check the potential stability of our proposed
thin-shell gravastar model. Apart from that, we have used the entropy
maximization technique to verify the stability of the gravastar model. The
gravastar's outer region is a complete vacuum described by exterior
Schwarzschild geometry. Finally, we have presented a stable gravastar model
which is singularity-free and devoid of any incompleteness in classical black
hole theory. | Sneha Pradhan, Debasmita Mohanty, P. K. Sahoo | 2023-06-30T07:14:40Z | http://arxiv.org/abs/2306.17435v1 | # Thin-Shell Gravastar Model in \(f(Q,T)\) Gravity
###### Abstract
In the last few decades, gravastars have been proposed as an alternative to black holes. The stability of the gravastar has been studied in many modified theories of gravity along with Einstein's GR. The \(f(Q,T)\) gravity, a successfully modified theory of gravity for describing the current accelerated expansion of the Universe, has been used in this article to study gravastar in different aspects. According to Mazur and Mottola [1; 2], it has three regions with three different equations of state. Here in this work, we have studied the interior of the gravastar by considering the \(p=-\rho\) EoS to describe the dark sector for the interior region. The next region is a thin shell of ultrarelativistic stiff fluid, in which we have investigated several physical properties, viz., the proper length, energy, entropy, surface energy density, etc. In addition, we have studied the surface redshift and speed of sound to check the potential stability of our proposed thin-shell gravastar model. Apart from that, we have used the entropy maximization technique to verify the stability of the gravastar model. The gravastar's outer region is a complete vacuum described by exterior Schwarzschild geometry. Finally, we have presented a stable gravastar model which is singularity-free and devoid of any incompleteness in classical black hole theory.
**Keywords:** Gravastar; Stability; \(f(Q,T)\) gravity.
## I Introduction
There has been large scientific interest in understanding problems in both cosmology and astrophysics during the past few decades. Compact objects are a crucial resource for this purpose because they provide a platform to test many pertinent ideas in the high-density domain. The Gravitational Vacuum Condensate Star, or simply gravastar, is an excellent notion for an extremely compact object that addresses the singularity problems in classical black hole (CBH) theory. It was first postulated by Mazur and Mottola [1; 2]. They construct a cold, compact object with an internal de Sitter condensate phase and an exterior Schwarzschild geometry of any total mass M, free of all known limitations of the CBH. As a result, this hypothesis has gained popularity among researchers, and it could be seen as an alternative to the CBH.
The gravastar, in particular, has three separate zones with different equations of states (EoS), according to Mazur and Mottola's model:
1. An internal region that is full of dark energy with an isotropic de Sitter vacuum situation.
2. An intermediate thin shell consists of stiff fluid matter.
3. The outer area is completely vacuum, and Schwarzschild geometry represents this situation appropriately.
Recent studies of the brightness of distant type Ia supernovae [3; 4; 5] indicate that the universe is expanding more quickly than previously thought, which suggests that the universe's pressure \(p\) and energy density \(\rho\) should violate the strong energy condition, that is, \(\rho+3p<0\). "Dark energy" is the substance that causes this requirement to be fulfilled at some point in the evolution of the universe [6; 7; 8]. There are several candidates for the role of dark energy. The most well-known contender is a non-vanishing cosmological constant, which is equivalent to a fluid that satisfies the EoS \(p=-\rho\). There are two interfaces (junctions) located at \(R_{1}\) and \(R_{2}\) from the center, where \(R_{1}\) and \(R_{2}\) stand for the thin shell's inner and outer radii. The presence of stiff matter on the shell, with thickness \(R_{2}-R_{1}=\epsilon<<1\), is required to provide the system's stability, which is achieved by exerting an inward force to counteract the repulsion from within.
Astrophysicists proposed a new solution for a compact, spherically symmetric astrophysical object, known as a gravastar, to solve the singularity problem in black hole geometry. There are several arguments for and against the idea that the gravitational waves (GW) detected by LIGO are the consequence of merging gravastars rather than black holes, despite the fact that no experimental observations or discoveries of gravastars have yet been made. A method for identifying gravastars was devised by Sakai et al. [9] by looking at gravastar shadows. Since black holes do not exhibit microlensing effects of maximal brightness, Kubo and Sakai [10] hypothesized that gravitational lensing may be used to find gravastars. The detection of GW150914 [11; 12] by the interferometric LIGO detectors raised the possibility that ringdown signals originated from sources without an event horizon. In a recent examination of the picture taken by the first M87 Event Horizon Telescope (EHT) campaign, a shadow resembling that of a gravastar was reported [13].
One could observe that there are numerous publications on the gravastar available in the literature that focus on various mathematical and physical problems in the framework of general relativity postulated by Albert Einstein [14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Bilic et al. [16] replaced the de Sitter interior with a Chaplygin gas equation of state and treated the system as a Born-Infeld phantom gravastar to examine the gravastar's interior, whereas Lobo [17] replaces the inner vacuum with dark energy. Although it is commonly known that Einstein's general relativity is an exceptional tool for revealing many hidden mysteries of nature, certain observational evidence of the expanding universe and the existence of dark matter has posed a theoretical challenge to this theory. Hence, a number of modified theories have been put forward over time, like \(f(R),f(Q),f(T),f(R,T),f(Q,T)\) gravity, etc. The \(f(R)\) and \(f(R,T)\) gravities are based upon Riemannian geometry, in which the Ricci scalar curvature plays an important role. Another way to represent the gravitational interaction between two particles in space-time is by torsion and non-metricity, upon which the \(f(T)\) and \(f(Q)\) gravity theories have been built, respectively. In the current project, our objective is to investigate the gravastar using one of the alternative theories of gravity, \(f(Q,T)\) gravity, and to examine many physical characteristics and the stability of the object. The \(f(Q,T)\) gravity is an extension of symmetric teleparallel gravity in which the gravitational action is determined by an arbitrary function \(f\) of the nonmetricity \(Q\) and the trace of the matter energy-momentum tensor \(T\), such that \(L=f(Q,T)\). There are very few articles in which compact objects have been studied in the framework of \(f(Q,T)\) gravity [24]. Xu et al. have investigated the cosmological implications of this theory, and they have obtained the cosmological evolution equations for an isotropic, homogeneous, flat geometry [25]. In [26], the author has investigated different FRW models with three specific forms of the \(f(Q,T)\) gravity model. One can see references to recent work on gravastars in the framework of modified gravity in [27; 28; 29; 30; 31; 32]. In the article [33], researchers have studied the gravastar model in \(f(Q)\) gravity. Ghosh et al. [34] have studied the gravastar in Rastall gravity. In the work [35], the author has studied traversable wormhole solutions in the presence of a scalar field. Wormhole solutions in \(f(R,T)\) gravity have been studied in [36]. Elizalde et al. [37] discussed the cosmological dynamics in \(R^{2}\) gravity with a logarithmic trace term. Godani and Samanta [38] discussed the gravitational lensing effect in traversable wormholes. In [39], the researchers have investigated wormhole solutions with a scalar field and electric charge in modified gravity. In [40], the authors studied a cosmologically stable \(f(R)\) model and its wormhole solutions. Salvatore et al. [41] studied non-local gravity wormholes and obtained stable and traversable wormhole solutions. Shamir et al. [42] have explored the behavior of anisotropic compact stars in \(f(R,\phi)\) gravity. Bardeen compact stars in modified \(f(R)\) gravity have been researched in the work [43].
Our paper is organized as follows: In sec I we have given a brief introduction to the gravastar model and the recent research work regarding it. After that, in sec II we provide the geometrical aspects of \(f(Q,T)\) gravity. In sec III we have derived the modified field equations and the modified energy conservation equation in \(f(Q,T)\) gravity. Sec IV gives the solution of the field equations for the different regions using different EoS. After that, in sec V we have studied the junction requirement and the EoS, and we have obtained the limiting range for the radius of the gravastar. The physical features of the model have been analyzed in sec VI. The most important check, the stability of the model, is given in sec VII. Finally, we provide the conclusion of our analysis in sec VIII.
## II Construction of \(f(Q,T)\) gravity
The \(f(Q,T)\) theory of gravity, which introduces an arbitrary function of the scalar non-metricity \(Q\) and the trace \(T\) of the matter energy-momentum tensor, is an intriguing modification to Einstein's theory of gravity. The action
of \(f(Q,T)\) theory coupled with matter Lagrangian \(\mathcal{L}_{m}\) is given by [44]
\[S=\int\sqrt{-g}\left[\frac{1}{16\pi}f(Q,T)+\mathcal{L}_{m}\right]d^{4}x, \tag{1}\]
where \(g\) represents the determinant of \(g_{\mu\nu}\). The non-metricity scalar and the disformation tensor are defined as
\[Q\equiv-g^{\mu\nu}\left(L^{\alpha}_{\ \beta\mu}L^{\beta}_{\ \nu \alpha}-L^{\alpha}_{\ \beta\alpha}L^{\beta}_{\ \mu\nu}\right), \tag{2}\] \[L^{\lambda}_{\ \mu\nu}=-\frac{1}{2}g^{\lambda\gamma}\left(\nabla_{ \nu}g_{\mu\gamma}+\nabla_{\mu}g_{\gamma\nu}-\nabla_{\gamma}g_{\mu\nu}\right). \tag{3}\]
The non-metricity tensor is defined as the covariant derivative of the metric tensor, and its explicit form is
\[Q_{\alpha\mu\nu}\equiv\nabla_{\alpha}g_{\mu\nu}. \tag{4}\]
with the trace of a non-metricity tensor as
\[Q_{\lambda}=Q_{\lambda}{}^{\mu}{}_{\ \mu},\qquad\tilde{Q}_{\lambda}=Q^{\mu}{}_{\ \lambda\mu}.\]
The Superpotential \(P^{\lambda}_{\ \mu\nu}\) is defined as
\[P^{\lambda}_{\ \ \mu\nu}=-\frac{1}{2}L^{\lambda}_{\ \ \mu\nu}+\frac{1}{4}\left(Q^{ \lambda}-\tilde{Q^{\lambda}}\right)g_{\mu\nu}-\frac{1}{4}\delta^{\lambda}_{\ (\mu}Q_{\nu)}, \tag{5}\]
giving the relation of scalar non-metricity as
\[Q=-Q_{\lambda\mu\nu}P^{\lambda\mu\nu}. \tag{6}\]
The field equations of \(f(Q,T)\) theory by varying the action (1) with respect to the metric tensor inverse \(g^{\mu\nu}\) is obtained as
\[-\frac{2}{\sqrt{-g}}\nabla_{\lambda}\left(f_{Q}\sqrt{-g}\,P^{ \lambda}_{\ \mu\nu}\right)-\frac{1}{2}f\,g_{\mu\nu}+f_{T}\left(T_{\mu\nu}+\Theta_{\mu\nu }\right)\] \[-f_{Q}\left(P_{\mu\lambda\alpha}Q^{\ \lambda\alpha}_{\nu}-2Q^{ \lambda\alpha}_{\ \mu}\,P_{\lambda\alpha\nu}\right)=8\pi T_{\mu\nu}. \tag{7}\]
The terms used in the above are defined as
\[\Theta_{\mu\nu} = g^{\alpha\beta}\frac{\delta T_{\alpha\beta}}{\delta g^{\mu\nu}},\quad T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{m})} {\delta g^{\mu\nu}}, \tag{8}\] \[f_{T} = \frac{\partial f(Q,T)}{\partial T},\quad f_{Q}=\frac{\partial f (Q,T)}{\partial Q}. \tag{9}\]
Where \(T_{\mu\nu}\) is known as the energy-momentum tensor.
## III Modified field equation in \(f(Q,T)\)
To derive the modified field equation let's take the static, spherically symmetric line element given by,
\[ds^{2}=e^{v}dt^{2}-e^{\lambda}dr^{2}-r^{2}(d\theta^{2}+sin^{2}\theta d\phi^{2}). \tag{10}\]
To describe the fluid distribution we are going to take the energy-momentum tensor in the form :
\[T_{\mu\nu}=(\rho+p_{t})u_{\mu}u_{\nu}-p_{t}g_{\mu\nu}+(p_{r}-p_{t})v_{\mu}v_{\nu}, \tag{11}\]
where \(\rho\) is the density of the fluid, and \(p_{r}\) and \(p_{t}\) are the pressures of the fluid along the radial direction \(v_{\mu}\) (radial pressure) and orthogonal to it (tangential pressure), respectively. \(u_{\mu}\) is the timelike four-velocity vector, and \(v_{\mu}\) is the unit spacelike vector in the direction of the radial coordinate. Therefore the stress energy momentum tensor \(T_{\mu\nu}\) and the components of \(\Theta_{\mu\nu}\) can be expressed as,
\[T_{\mu\nu}=diag(e^{v}\rho,e^{\lambda}p_{r},r^{2}p_{t},r^{2}p_{t }sin^{2}\theta)\] \[\Theta_{11}=-e^{v}(P+2\rho),\,\Theta_{22}=e^{\lambda}(P-2p_{r}), \tag{12}\] \[\Theta_{33}=r^{2}(P-2p_{t}),\,\,\Theta_{44}=r^{2}sin^{2}\theta( P-2p_{t}).\]
Where we have taken the Lagrangian matter density \(\mathcal{L}_{\rm m}=-P=-\frac{p_{r}+2p_{t}}{3}\). By utilizing the aforementioned constraints, the derived modified field equations for the spherically symmetric metric in \(f(Q,T)\) gravity are,
\[8\pi\rho=\frac{1}{2r^{2}e^{\lambda}}[2rf_{QQ}Q^{\prime}(e^{\lambda}-1)+f_{Q}[( e^{\lambda}-1)(2+rv^{\prime})+(1+e^{\lambda})r\lambda^{\prime}]+fr^{2}e^{ \lambda}]-f_{T}[P+\rho], \tag{13}\]
\[8\pi p_{r}=-\frac{1}{2r^{2}e^{\lambda}}[2rf_{QQ}Q^{\prime}(e^{\lambda}-1)+f_{Q} [(e^{\lambda}-1)(2+rv^{\prime}+r\lambda^{\prime})-2rv^{\prime}]+fr^{2}e^{ \lambda}]+f_{T}[P-p_{r}], \tag{14}\]
\[8\pi p_{t}=-\frac{1}{4re^{\lambda}}[-2rf_{QQ}Q^{\prime}v^{\prime}+f_{Q}[2v^{ \prime}(e^{\lambda}-2)-rv^{\prime 2}+\lambda^{\prime}(2e^{\lambda}+rv^{\prime})-2rv^{ \prime\prime}]+2fr{e^{\lambda}}]+f_{T}[P-p_{t}]. \tag{15}\]
Now we are going to take a particular functional form of \(f(Q,T)\) gravity as \(f(Q,T)=\alpha\,Q+\beta\,T\). One can see that there are many references [24; 25; 44] in which this cosmological model has been studied widely. Then we can rewrite the field equations as:
\[e^{-\lambda}\left(\frac{\lambda^{\prime}}{r}-\frac{1}{r^{2}}\right)+\frac{1}{r^{2 }}=\rho^{eff}, \tag{16}\]
\[e^{-\lambda}\left(\frac{\nu^{\prime}}{r}+\frac{1}{r^{2}}\right)-\frac{1}{r^{2}} =p_{r}^{eff}, \tag{17}\]
\[e^{-\lambda}\left(\frac{\nu^{\prime\prime}}{2}-\frac{\lambda^{\prime}\nu^{ \prime}}{4}+\frac{\nu^{\prime 2}}{4}+\frac{\nu^{\prime}-\lambda^{\prime}}{2r} \right)=p_{t}^{eff}, \tag{18}\]
Where,
\[\rho^{eff}=\frac{8\pi\rho}{\alpha}+\frac{\beta}{3\alpha}(3\rho+p_{r}+2p_{t})- \frac{\beta}{2\alpha}(\rho-p_{r}-2p_{t}), \tag{19}\]
\[p_{r}^{eff}=\frac{8\pi p_{r}}{\alpha}-\frac{2\beta}{3\alpha}(p_{t}-p_{r})+ \frac{\beta}{2\alpha}(\rho-p_{r}-2p_{t}), \tag{20}\]
\[p_{t}^{eff}=\frac{8\pi p_{t}}{\alpha}-\frac{\beta}{3\alpha}(p_{r}-p_{t})+ \frac{\beta}{2\alpha}(\rho-p_{r}-2p_{t}). \tag{21}\]
One can verify that for \(\alpha=1\), \(\beta=0\) i.e. for \(f=Q\) the above field equation reduces to Einstein's GR. However in this article, we limit ourselves to the isotropic scenario in order to establish the simplest possibility where \(p_{r}=p_{t}\). Now the energy conservation equation is given by,
\[\frac{dp^{eff}}{dr}+\frac{\nu^{\prime}}{2}(p^{eff}+\rho^{eff})=0. \tag{22}\]
By using the equation (19) and (20) we get the modified energy conservation equation in \(f(Q,T)\) gravity as:
\[\frac{dp}{dr}+\frac{\nu^{\prime}}{2}\left[(1+\frac{\beta}{8\pi})(p+\rho) \right]+\frac{\beta}{16\pi}(\rho^{\prime}-3p^{\prime})=0. \tag{23}\]
The above equation differs from that obtained in GR; the GR result is retrieved in the limit \(\beta\to 0\).
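As a quick symbolic check of this limit (a sketch, not part of the derivation above), one can substitute \(\alpha=1\) and \(\beta=0\) into the isotropic forms of Eqs. (19) and (20) and confirm that the effective sources reduce to \(8\pi\rho\) and \(8\pi p\), recovering GR.

```python
import sympy as sp

rho, p, alpha, beta = sp.symbols('rho p alpha beta')

# Isotropic (p_r = p_t = p) forms of Eqs. (19) and (20)
rho_eff = 8*sp.pi*rho/alpha + beta/(3*alpha)*(3*rho + 3*p) - beta/(2*alpha)*(rho - 3*p)
p_eff = 8*sp.pi*p/alpha + beta/(2*alpha)*(rho - 3*p)

print(sp.simplify(rho_eff.subs({alpha: 1, beta: 0})))  # -> 8*pi*rho
print(sp.simplify(p_eff.subs({alpha: 1, beta: 0})))    # -> 8*pi*p
```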
## IV Geometry of Gravastar
We are specifically interested in the geometrical interpretation and the related analytical solutions in the three different zones of the gravastar under study. The picture is simple: the interior of the star is encircled by a thin shell made of ultra-relativistic stiff fluid, while the outside region is completely vacuum. The Schwarzschild metric is therefore assumed to be appropriate for this outer region. The shell is believed to be extremely thin, with a limited width ranging over \(R_{1}=R\leq r\leq R+\epsilon=R_{2}\), where \(r\) is the radial coordinate and \(R_{1},R_{2}\) denote the inner and outer radii of the shell.
### Interior Region
In the primary model proposed by Mazur and Mottola [1; 2] the three different zones obey the standard cosmological EoS \(p=\omega\rho\), where \(\omega\) is the EoS parameter, which takes a different value in each region. Here, we suppose that an enigmatic gravitational source is present in the interior region. Dark matter and dark energy are typically assumed to be separate entities, although there is the possibility that they are both just different manifestations of the same thing. To describe the dark sector in the interior region, we consider the EoS given by,
\[p=-\rho. \tag{24}\]
For the aforementioned EoS, the energy conservation Eq. (23) yields a constant critical density \(\rho_{c}\), so the pressure for the interior region is,
\[p=-\rho_{c}. \tag{25}\]
Using Eq.(25) in field equations (16) and (19) we obtained the final expression for metric potential \(e^{-\lambda(r)}\) as,
\[e^{-\lambda(r)}=\frac{2(\beta-4\pi)\rho_{c}r^{3}-3c_{1}}{3\alpha r}+1. \tag{26}\]
To make our solution regular at center we set the integrating constant \(c_{1}=0\). Thus we have,
\[e^{-\lambda(r)}=\frac{2(\beta-4\pi)r^{2}\rho_{c}}{3\alpha}+1. \tag{27}\]
Again using (27) we get another metric potential from (17,20) as,
\[e^{v(r)}=C_{1}\left[2(4\pi-\beta)\rho_{c}r^{2}-3\alpha\right]. \tag{28}\]
It is clear from the aforementioned results that there is no singularity in the interior solutions, which overcomes the issue of a classical black hole's central singularity. For more clarity, we have plotted the variation of the metric potential \(e^{\lambda}\) with respect to the radial parameter \(r\) in Fig.(1).
One can infer from the figure that there is no central singularity: the metric potential is regular at \(r=0\), and it remains finite and positive across the whole interior region. Additionally, the following equation can be used to determine the active gravitational mass of the internal region:
\[\mathcal{M}(\mathcal{R})=\int_{0}^{\mathcal{R}}4\pi r^{2}\rho dr=\frac{4}{3}\pi \mathcal{R}^{3}\rho_{c}. \tag{29}\]
Where \(\mathcal{R}\) represents the radius for the interior area and \(\rho_{c}\) is the critical density.
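The regular behaviour shown in Fig. 1 can be checked directly from Eq. (27); the following minimal sketch evaluates \(e^{-\lambda(r)}\) for the parameter values quoted in the caption of Fig. 1 and confirms that it is finite and positive throughout the interior.

```python
import numpy as np

alpha, beta, rho_c = -4.5, 3.4, 1.0e-3   # values quoted for Fig. 1

def e_minus_lambda(r):
    """Interior metric potential of Eq. (27)."""
    return 2.0 * (beta - 4.0 * np.pi) * rho_c * r ** 2 / (3.0 * alpha) + 1.0

r = np.linspace(0.0, 10.0, 6)            # interior region up to R1 = 10 km
print(e_minus_lambda(r))                 # equals 1 at r = 0 and stays positive
```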
### Shell
The shell is made of ultra-relativistic stiff matter and abides by the EoS \(p=\rho\). Zel'dovich [45; 46] pioneered the concept of this ultra-relativistic fluid, known as stiff fluid, in connection with the cold baryonic universe. We can argue that in the current situation it could arise from thermal excitations with a very low chemical potential or from a conserved number density of gravitational quanta at absolute zero. This kind of fluid has been widely explored by several researchers to investigate different cosmological [47; 48; 49] and astrophysical [50; 51; 52] aspects. One may note that it is extremely challenging to solve the field equations in the non-vacuum region, i.e. the shell. However, an analytical solution can be found within the thin shell limit, i.e. \(0<e^{-\lambda(r)}<<1\). We can argue that the intermediate region between the two space-times must be a thin shell, as suggested by Israel [55]. Moreover, in general, any parameter that is a function of \(r\) can be considered \(<<1\) as \(r\to 0\). Under this type of approximation, our field Eqs.(16)-(18) along with Eqs.(19)-(21) reduce to:
\[\alpha\left(\frac{e^{-\lambda}\lambda^{\prime}(r)}{r}+\frac{1}{r^{2}}\right)= 8\pi\rho+\frac{\beta}{2}(5p+\rho), \tag{30}\]
\[\alpha\left(\frac{-1}{r^{2}}\right)=8\pi p+\frac{\beta}{2}(\rho-3p), \tag{31}\]
\[\alpha\left(\frac{-\lambda^{\prime}v^{\prime}e^{-\lambda(r)}}{4}-\frac{e^{- \lambda}\lambda^{\prime}}{2r}\right)=8\pi p+\frac{\beta}{2}(\rho-3p). \tag{32}\]
Utilizing Eqs.(30)-(32), we obtain the two metric potentials as
\[e^{-\lambda(r)}=\frac{2(\beta+8\pi)\log(r)}{8\pi-\beta}-C_{2}, \tag{33}\]
\[e^{\nu(r)}=C_{3}\left(r(\beta+8\pi)\right)^{-\frac{32\pi}{\beta+8\pi}}. \tag{34}\]
Where \(C_{2}\) and \(C_{3}\) are integrating constants. Furthermore, by plugging the EoS \(p=\rho\) and using Eq.(34) into the energy conservation equation (23) we have obtained the pressure/matter density for the shell region as,
\[p(r)=\rho(r)=\rho_{0}\left(8\pi r-\beta r\right)^{\frac{32\pi}{8\pi-\beta}}. \tag{35}\]
Where \(\rho_{0}\) is the constant of integration. Fig.(2) shows the variation of the pressure or matter density. One can see that the matter density of the shell increases monotonically toward the outer boundary of the shell. Since the shell is made of ultra-relativistic stiff fluid and its pressure or matter density increases monotonically towards the outer surface, we can physically interpret this as the amount of stiff matter accumulating towards the outer border rather than the inner region of the shell.
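A minimal numerical sketch of Eq. (35), using the same model parameters and \(\rho_{0}=1\) (the value adopted later in the text), illustrates the monotonic increase of the shell matter density across the thin shell.

```python
import numpy as np

beta, rho_0 = 3.4, 1.0
R1, R2 = 10.0, 10.01                      # inner and outer radii of the thin shell

def shell_density(r):
    """p(r) = rho(r) of Eq. (35) for the stiff-fluid shell."""
    return rho_0 * ((8.0 * np.pi - beta) * r) ** (32.0 * np.pi / (8.0 * np.pi - beta))

r = np.linspace(R1, R2, 5)
print(shell_density(r))                   # strictly increasing towards r = R2
```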
### Exterior Region
The EoS \(p=\rho=0\) is believed to be obeyed by the outside of the gravastar, indicating that the external portion of the shell is entirely vacuum.
Figure 1: Variation of the metric potential (\(e^{\lambda}\)) with regard to the radial parameter \(r\) for \(\alpha=-4.5\), \(\beta=3.4\), \(\rho_{c}=0.001\).
Thus, utilizing Eq.(16)-Eq.(17) along with Eq.(19)-Eq.(20), we obtain
\[\lambda^{\prime}+v^{\prime}=0. \tag{36}\]
The line element for the outside region may be seen as the well-known Schwarzschild metric, which is provided by the solution to Eq.(36), given by,
\[ds^{2}=\left(1-\frac{2M}{r}\right)dt^{2}-\left(1-\frac{2M}{r}\right)^{-1}dr^{2 }-r^{2}d\Omega^{2}, \tag{37}\]
where \(d\Omega^{2}=(d\theta^{2}+sin^{2}\theta d\phi^{2})\) and \(M\) denotes the total mass of the object.
### Boundary Condition
There are two junctions/interfaces in a gravastar configuration. Let us denote the interface between the interior space-time and the intermediate thin shell (at \(r=R_{1}\)) by junction-\(I\) and the interface between the intermediate thin shell and the exterior space-time (at \(r=R_{2}\)) by junction-\(II\). The metric functions must be continuous at these interfaces for any stable arrangement. We matched the metric functions at these borders in order to find the unknown constants of our study, \(C_{1}\), \(C_{2}\), and \(C_{3}\), and ultimately obtained their values.
* **Junction-I :** \[\frac{2(\beta+8\pi)\log R_{1}}{8\pi-\beta}-C_{2}=\frac{2\rho_{c}(\beta-4\pi)R _{1}^{2}}{3\alpha}+1,\] (38) \[C_{3}\left(R_{1}(\beta+8\pi)\right)^{-\frac{32\pi}{\beta+8\pi}}=C_{1}\left[2 \rho_{c}(4\pi-\beta)R_{1}^{2}-3\alpha\right].\] (39)
* **Junction-II :** \[\frac{2(\beta+8\pi)\log R_{2}}{8\pi-\beta}-C_{2}=1-\frac{2M}{R_{2}},\] (40) \[C_{3}\left(R_{2}(\beta+8\pi)\right)^{-\frac{32\pi}{\beta+8\pi}}=1-\frac{2M}{R_ {2}}.\] (41)
* **Obtained Constants :** \[C_{3}=-\frac{(2M-R_{2})((\beta+8\pi)R_{2})^{\frac{32\pi}{\beta+8\pi}}}{R_{2}},\] (42) \[C_{2}=\frac{2(\beta+8\pi)\log(R_{1})}{8\pi-\beta}-\frac{2\rho_{c}(\beta-4\pi)R_{1}^{2}}{3\alpha}-1,\] (43) \[C_{1}=\frac{(2M-R_{2})((\beta+8\pi)R_{1})^{-\frac{32\pi}{\beta+8\pi}}((\beta+8\pi)R_{2})^{\frac{32\pi}{\beta+8\pi}}}{R_{2}\left(3\alpha+2\beta\rho_{c}R_{1}^{2}-8\pi\rho_{c}R_{1}^{2}\right)}.\] (44)
Now, to find the numerical values of the constants \(C_{1}\), \(C_{2}\) and \(C_{3}\), we have considered the astrophysical object PSR J1614-2230 [53] with \(M=1.97M_{\odot}\), internal radius \(R_{1}=10\) km and exterior radius \(R_{2}=10.01\) km. Apart from that, by varying the values of the model parameters \(\alpha\) and \(\beta\), we have determined a set of numerical values of \(C_{1}\), \(C_{2}\) and \(C_{3}\), which are shown in Table 1.
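The entries of Table 1 can be reproduced directly from Eqs. (42)-(44). The sketch below does this for the first row, with \(M=1.97\) taken in the same length units as \(R_{1}\) and \(R_{2}\) (the choice that recovers the tabulated numbers); it is an illustrative check, not part of the derivation.

```python
import numpy as np

alpha, beta, rho_c = -4.5, 3.4, 1.0e-3
M, R1, R2 = 1.97, 10.0, 10.01            # M in the same length units as R1, R2
k = 32.0 * np.pi / (beta + 8.0 * np.pi)  # exponent of Eqs. (34), (39), (41)

C3 = -(2 * M - R2) * ((beta + 8 * np.pi) * R2) ** k / R2                       # Eq. (42)
C2 = (2 * (beta + 8 * np.pi) * np.log(R1) / (8 * np.pi - beta)
      - 2 * rho_c * (beta - 4 * np.pi) * R1 ** 2 / (3 * alpha) - 1.0)          # Eq. (43)
C1 = ((2 * M - R2) * ((beta + 8 * np.pi) * R1) ** (-k)
      * ((beta + 8 * np.pi) * R2) ** k
      / (R2 * (3 * alpha + 2 * beta * rho_c * R1 ** 2 - 8 * np.pi * rho_c * R1 ** 2)))  # Eq. (44)

print(C1, C2, C3)   # ~0.0397, ~4.910, ~2.72e8: the first row of Table 1
```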
In relation to the numerical solutions for the constants obtained above for particular parameter choices, let us discuss the parameter space of our solution. One could inquire about the following associated questions:
1. For particular choices of \(M,R_{1},R_{2}\), will we always get a singularity-free solution?
2. If one varies the model parameters, will the results remain unique?
We provide some arguments in answer to these concerns. In the current work, we have selected values for a number of parameters to examine the physical behavior of the gravastar. A given choice of \(M\), \(R_{1}\), and \(R_{2}\) provides a unique solution, and we have chosen these values so as to satisfy the ratios \(\frac{2M}{R_{1}}<1\) and \(\frac{2M}{R_{2}}<1\) required for a stable gravastar model. Besides, there are some other criteria: the surface redshift must satisfy \(Z_{\rm s}<2\), and the square of the speed of sound (\(v_{s}^{2}\)) must satisfy the inequality \(0<v_{s}^{2}<1\). Apart from that, to avoid a central singularity, we should maintain \(\frac{2(\beta-4\pi)\rho_{c}r^{2}}{3\alpha}+1\neq 0\). Moreover, we have taken \(\rho_{0}=1\) and \(\rho_{c}=0.001\) in order to maintain \(\rho_{0}>>\rho_{c}\). We are free to choose any combination of \(M\), \(R_{1}\), and \(R_{2}\) that would provide the same findings as those presented in this research, as long as the aforementioned requirements are satisfied.
## V Junction condition and equation of state
It is established that the gravastar is divided into three regions viz. the interior (I), the intermediate thin shell (II), and the exterior (III). This shell connects the internal and external regions. Thus, this region is crucially significant in the construction of the gravastar. According to the fundamental junction requirement, regions I and III must match smoothly at the junction. The derivatives of these metric coefficients may not be continuous at the junction surface, despite the fact that the metric coefficients are continuous there. In order to calculate the surface stresses at the junction, we will now employ the Darmois-Israel [54, 55, 56] condition. The Lanczos equation [57, 58, 59, 60] provides the intrinsic surface stress-energy tensor \(S_{ij}\) in the following manner:
\[S_{ij}=-\frac{1}{8\pi}(k_{ij}-\delta_{ij}k_{\gamma\gamma}). \tag{45}\]
In the above expression, \(k_{ij}=K_{ij}^{+}-K_{ij}^{-}\) denotes the discontinuity in the second fundamental form, which is given by,
\[K_{ij}^{\pm}=-n_{\sigma}^{\pm}\left(\frac{\partial^{2}x^{\sigma}}{\partial\phi^{i}\partial\phi^{j}}+\Gamma_{lm}^{\sigma}\frac{\partial x^{l}}{\partial\phi^{i}}\frac{\partial x^{m}}{\partial\phi^{j}}\right), \tag{46}\]
where \(\phi^{i}\) denote the intrinsic coordinates on the shell, and \(n^{\pm}\) represent the two-sided unit normals to the surface, which can be written as,
\[n^{\pm}=\pm\left|g^{lm}\frac{\partial f}{\partial x^{l}}\frac{\partial f}{ \partial x^{m}}\right|^{-1/2}\frac{\partial f}{\partial x^{\sigma}}, \tag{47}\]
with \(n^{\gamma}n_{\gamma}=1\). Utilizing the Lanczos method [57], the surface stress-energy tensor can be written as \(S^{i}_{\ j}=diag(-\sum,P,P)\), where the surface energy density and surface pressure are denoted by \(\sum\) and \(P\) respectively and are defined by,
\[\sum=-\frac{1}{4\pi\mathcal{R}}\left[\sqrt{e^{-\lambda}}\right]_{-}^{+}, \tag{48}\]
\[P=-\frac{\sum}{2}+\frac{1}{16\pi}\left[\frac{(e^{-\lambda})^{\prime}}{\sqrt{e ^{-\lambda}}}\right]_{-}^{+}, \tag{49}\]
\[\text{Also the EoS}\left(\omega\right)=\frac{P}{\sum}. \tag{50}\]
Here \(-\) and \(+\) represent the interior space-time and the Schwarzschild space-time, respectively. Evaluating Eqs. (48)-(50), we get the expressions for the above quantities as,
\[\sum=\left(-\frac{1}{4\pi R}\right)\left(\sqrt{1-\frac{2M}{R}}-\sqrt{\frac{2(\beta-4\pi)\rho_{c}R^{2}}{3\alpha}+1}\right) \tag{51}\]
\begin{table}
\begin{tabular}{c c c c c} \hline \(\alpha\) & \(\beta\) & \(C_{1}\) & \(C_{2}\) & \(C_{3}\) \\ \hline \(-4.5\) & 3.4 & 0.0396871 & 4.91029 & \(2.72477\times 10^{8}\) \\ \(-4.6\) & 3.3 & 0.0388762 & 4.86301 & \(2.88649\times 10^{8}\) \\ \(-4.7\) & 3.2 & 0.0380979 & 4.81611 & \(3.05892\times 10^{8}\) \\ \(-4.8\) & 3.1 & 0.0373501 & 4.76958 & \(3.24284\times 10^{8}\) \\ \(-4.9\) & 3.0 & 0.0366311 & 4.72344 & \(3.4391\times 10^{8}\) \\ \end{tabular}
\end{table}
Table 1: Different numerical values of the constants for PSR J1614-2230, assuming \(R_{1}=10\) km and \(R_{2}=10.01\) km.
There is a set of conditions, known as energy conditions, that must be satisfied in order for a geometric structure to be physically viable. The well-recognized energy criteria are:
1. **NEC:**\(\sum+P>0\),
2. **WEC:**\(\sum>0,\sum+P>0\),
3. **SEC:**\(\sum+P>0\),\(\sum+3P>0\),
4. **DEC:** \(\sum>0,\sum\pm P>0\).
The presented model is physically feasible if these energy criteria are satisfied. Here, we investigate whether the null energy condition, whose satisfaction or violation indicates the presence of ordinary or exotic matter in the thin shell, is fulfilled. In this context, it is noteworthy that violation of the null energy condition (NEC) leads to violation of the other energy conditions. It is illustrated in Fig.(3) that the NEC is satisfied throughout the entire shell region over a range of model parameter values.
Besides that, we have plotted the variation of surface energy density with respect to the thickness parameter(\(\epsilon\)) which shows that the surface energy density is monotonically decreasing towards the boundary of the shell. The mass of the thin shell now is easily determined using the equation for the surface energy density given by,
\[m_{\rm s}=4\pi R^{2}\sum=-R\left(\sqrt{1-\frac{2M}{R}}-\sqrt{\frac{2(\beta-4\pi )\rho_{c}R^{2}}{3\alpha}+1}\right). \tag{54}\]
Now for determining the real value of shell mass, we have the inequality, \(m_{s}>0\) from which we get the upper bound of the radius as \(R<\left(\frac{3M\alpha}{\rho_{c}(4\pi-\beta)}\right)^{\frac{1}{3}}.\) Thus we get the limiting value on the radius as
\[2M<R<\left(\frac{3M\alpha}{\rho_{c}(4\pi-\beta)}\right)^{\frac{1}{3}}. \tag{55}\]
Figure 4: variation of the surface energy density (\(\sum\)) with respect to thickness \(\epsilon\) (in km) for \(\alpha=-4.5,\beta=3.4\).
Figure 3: Evolution of the NEC (\(\sum+P\)) by varying model parameter \(\alpha\).
## VI Physical features of the model
### Proper thickness
According to Mazur and Mottola's hypotheses [1; 2], the stiff fluid of the shell is positioned at the interface where the two space-times meet. The shell extends from \(R_{1}=R\) (the phase boundary between the interior region and the intermediate thin shell) up to \(R_{2}=R+\epsilon\) (the phase boundary between the intermediate thin shell and the exterior space-time). Using the following formula, one can find the proper length, or proper thickness, of the shell between these two interfaces:
\[l=\int_{R}^{R+\epsilon}\sqrt{e^{\lambda}}dr,\] \[=\int_{R}^{R+\epsilon}\sqrt{\frac{\beta-8\pi}{(8\pi-\beta)C_{2}-2 (\beta+8\pi)\log(r)}}\,dr,\] \[=\left[-e^{-\frac{(\beta-8\pi)C_{2}}{2(\beta+8\pi)}}\sqrt{\frac{ \pi(\beta-8\pi)}{2(\beta+8\pi)}}\text{Erf}\sqrt{\frac{(8\pi-\beta)C_{2}}{2( \beta+8\pi)}-\log(r)}\right]_{R}^{R+\epsilon} \tag{56}\]
The variation of the proper length with respect to the thickness parameter \(\epsilon\) is given in Fig.(5). The figure demonstrates that the proper length rises monotonically as shell thickness increases.
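Eq. (56) can also be cross-checked numerically; the sketch below integrates \(\sqrt{e^{\lambda}}\) over the shell with scipy, using \(C_{2}\) from the first row of Table 1 and an illustrative thickness \(\epsilon\).

```python
import numpy as np
from scipy.integrate import quad

beta, C2 = 3.4, 4.91029                  # C2 taken from the first row of Table 1
R, eps = 10.0, 0.01                      # inner radius and an illustrative thickness (km)

def sqrt_e_lambda(r):
    """Integrand of Eq. (56), with e^{-lambda} taken from Eq. (33)."""
    e_ml = 2.0 * (beta + 8.0 * np.pi) * np.log(r) / (8.0 * np.pi - beta) - C2
    return 1.0 / np.sqrt(e_ml)

length, _ = quad(sqrt_e_lambda, R, R + eps)
print(length)                            # proper thickness of the shell for this epsilon
```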
### Energy
The energy of the shell can be calculated by the formula,
\[E=\int_{R}^{R+\epsilon}4\pi r^{2}\rho\,dr,\] \[E=\int_{R}^{R+\epsilon}4\pi r^{2}\rho_{0}\,\left(8\pi r-\beta r \right)^{\frac{32\pi}{8\pi-\beta}}\,\,dr, \tag{57}\] \[E=\frac{4\pi\rho_{0}r^{2}((8\pi-\beta)r)^{\frac{32\pi}{8\pi- \beta}+1}}{56\pi-3\beta}.\]
The variation of the shell energy is illustrated in Fig.(6). In this graph, it can be observed that the energy rises as the shell's thickness increases. The variation of the energy is comparable to the variation in the matter density. It meets the requirement that the energy of the shell must increase as the radial distance increases.
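As a consistency check of the closed form in Eq. (57) (a sketch, assuming sympy is available), one can differentiate it symbolically and verify that the derivative returns the integrand \(4\pi r^{2}\rho(r)\).

```python
import sympy as sp

r, rho0, beta = sp.symbols('r rho_0 beta', positive=True)
k = 32 * sp.pi / (8 * sp.pi - beta)

integrand = 4 * sp.pi * r ** 2 * rho0 * ((8 * sp.pi - beta) * r) ** k                          # 4*pi*r^2*rho(r)
E = 4 * sp.pi * rho0 * r ** 2 * ((8 * sp.pi - beta) * r) ** (k + 1) / (56 * sp.pi - 3 * beta)  # Eq. (57)

print(sp.simplify(sp.powsimp(sp.diff(E, r) / integrand, force=True)))   # -> 1
```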
### Entropy
A single condensate region has zero entropy density, which is the stable configuration of the gravastar's innermost region. The entropy of the intermediate thin shell can be calculated using the formula from Mazur and Mottola's work [1; 2],
\[\mathcal{S}=\int_{R}^{R+\epsilon}4\pi r^{2}s(r)\sqrt{e^{\lambda}}dr, \tag{58}\]
here the entropy density at local temperature \(T(r)\) is given by the expression \(s(r)=\frac{\gamma k_{B}^{2}T(r)}{4\pi\hbar^{2}}=\gamma\sqrt{p/2\pi}\), where \(\gamma\) is a dimensionless parameter. In this work, we have adopted geometrical units, i.e. \(G=c=1\), as well as Planckian units \(k_{B}=1,\hbar=1\). Our estimates of
Figure 5: Variation of the proper length (\(l\)) with respect to thickness \(\epsilon\) (in km) for \(\alpha=-4.5\) and \(\beta=3.4\).
the entropy of the thin shell are limited to the second-order term of the thickness parameter, i.e. the order \(\epsilon^{2}\), using Taylor series approximation. Ultimately, we have calculated the intermediate thin shell's entropy as follows:
\[\mathcal{S}=2\sqrt{2\pi}\gamma r^{2}\epsilon\sqrt{\frac{(\beta-8 \pi)\rho_{0}(8\pi r-\beta r)^{\frac{32\pi}{8\pi-\beta}}}{(8\pi-\beta)C_{2}-2( \beta+8\pi)\log(r)}}\ + \tag{59}\] \[\frac{\epsilon^{2}\left(2\sqrt{2\pi}\gamma r((8\pi-\beta)(\beta- 2\beta C_{2}+8\pi(4C_{2}+1))-4(16\pi-\beta)(\beta+8\pi)\log(r))\sqrt{\frac{( \beta-8\pi)\rho_{0}((8\pi-\beta)r)^{-\frac{32\pi}{8\pi-\beta}}}{(8\pi-\beta)C_ {2}-2(\beta+8\pi)\log(r)}}\right)}{2\left((\beta-8\pi)^{2}C_{2}+2\left(\beta^{ 2}-64\pi^{2}\right)\log(r)\right)}.\]
Fig.(7) depicts the evolution of the shell entropy, which grows with the thickness (\(\epsilon\)). Another requirement for a stable gravastar configuration is that the entropy should reach its greatest value on the surface, which is demonstrated in our analysis.
## VII Stability of the stellar model
In this section, we investigate the stability of the thin-shell gravastar model by analyzing some physical parameters.
### Study of Herrera's cracking concept
Recent observational data appear to indicate that the cosmos is expanding more quickly than before [3; 4; 5]. If general relativity is taken to be the correct theory of gravity characterizing the behavior of the universe on large scales, then the energy density and pressure of the cosmos should violate the strong energy condition. The stable or unstable configuration of a gravastar can be analyzed through the behaviour of \(\eta\), an effective parameter that can be interpreted as the square of the speed of sound, i.e. \(\eta=v_{s}^{2}\) [61; 27]. For a stable system, \(\eta\) should satisfy \(0<\eta\leq 1\): clearly, the speed of sound should not exceed the speed of light. However, this restriction might not be met within the surface layer, which is what we test to probe the gravastar's stability. The square of the speed of sound is defined by,
\[\eta=v_{s}^{2}=\frac{P^{\prime}}{\sum^{\prime}}. \tag{60}\]
where the prime denotes the derivative with respect to the radial coordinate. Using Eqs. (51) and (52), we examine the sign of this parameter to determine the stability of the gravastar configuration. Since the analytical expression of \(\eta\) is lengthy, we rely on its graphical behavior.
From Fig.(8) it can be seen that the effective parameter \(\eta\) satisfies the inequality \(0<\eta\leq 1\) throughout the entire shell region. Here we have varied the model parameter \(\alpha\), and for each value of \(\alpha\) the model remains physically stable. One further observation is that as the value of \(\alpha\) increases, the parameter \(\eta\to 1\); thus, as the model parameter grows, the proposed gravastar model approaches the unstable regime.
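Since the analytic expression for \(\eta\) is lengthy, it can equally be evaluated numerically from tabulated profiles of \(\sum(r)\) and \(P(r)\); the sketch below uses simple centred finite differences, with hypothetical monotone profiles standing in for Eqs. (51)-(52).

```python
import numpy as np

def eta_sound_speed_sq(r, sigma, p):
    """eta = dP/dr divided by dSigma/dr, evaluated with centred finite differences (Eq. 60)."""
    return np.gradient(p, r) / np.gradient(sigma, r)

# Hypothetical shell profiles (placeholders for Eqs. (51)-(52)), r in km
r = np.linspace(10.0, 10.01, 200)
sigma = 1.0 + 4.0 * (r - 10.0)
p = 0.1 + 2.0 * (r - 10.0)
eta = eta_sound_speed_sq(r, sigma, p)
print(eta.min(), eta.max())        # stability requires 0 < eta <= 1 across the shell
```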
### Surface Redshift
The study of a gravastar's surface redshift is one of the most basic ways to understand the stability and detectability of the object. The formula \(Z_{s}=\frac{\Delta\lambda}{\lambda_{e}}=\frac{\lambda_{0}-\lambda_{e}}{\lambda_{e}}\) can be used to determine the gravitational surface redshift
Figure 7: Variation of the entropy (\(\mathcal{S}\)) with respect to thickness \(\epsilon\) (in km) for \(\beta=3.4\), \(\alpha=-4.5\).
of the gravastar, where \(\lambda_{0}\) and \(\lambda_{e}\) represent the wavelengths detected by the observer and emitted from the source, respectively. Buchdahl [62; 63] proposed that the value of the surface redshift should not exceed 2 for an isotropic, stable, perfect fluid distribution. However, Ivanov [64] claimed that for an anisotropic fluid distribution it might go as high as 3.84. In addition, Barraco and Hamity [65] showed that for an isotropic fluid distribution \(Z_{s}\leq 2\) holds when the cosmological constant is absent, while Bohmer and Harko [66] showed that in the presence of a cosmological constant for an anisotropic star, \(Z_{s}\leq 5\). In our case, we obtain the surface redshift from the following formula,
\[Z_{s}=-1+\frac{1}{\sqrt{g_{tt}}}=\frac{1}{\sqrt{C_{3}((\beta+8\pi)r)^{-\frac{ 32\pi}{\beta+8\pi}}}}-1. \tag{61}\]
The graphical analysis of \(Z_{s}\) is given in Fig.(9). We have varied the model parameters \(\alpha\) and \(\beta\) to probe the maximum attainable value of \(Z_{s}\), and in each case we find \(Z_{s}<1\). Consequently, we can assert that the current gravastar model is both physically stable and appropriate in the \(f(Q,T)\) framework.
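For reference, Eq. (61) is straightforward to evaluate numerically once the integration constant \(C_{3}\) from the junction conditions is specified; a small sketch, with a placeholder value of \(C_{3}\) chosen only so that \(g_{tt}\) is of order unity, is given below.

```python
import numpy as np

def surface_redshift(r, beta, C3):
    """Z_s = 1/sqrt(g_tt) - 1, with g_tt = C3*((beta + 8*pi)*r)**(-32*pi/(beta + 8*pi)), cf. Eq. (61)."""
    g_tt = C3 * ((beta + 8 * np.pi) * r) ** (-32 * np.pi / (beta + 8 * np.pi))
    return 1.0 / np.sqrt(g_tt) - 1.0

# Placeholder inputs: r = 10 km, beta = 3.4; in practice C3 is fixed by the matching conditions.
print(surface_redshift(r=10.0, beta=3.4, C3=2.0e8))
```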
### Entropy Maximization
Each quasi-black hole (QBH) candidate must be stable to constitute a physically feasible endpoint of gravitational collapse [67]. In order to verify the stability of the present gravastar solution in \(f(Q,T)\) gravity, we use the entropy maximization method recommended by Mazur and Mottola [1; 2]. Since the shell is the only non-vacuum region, filled with stiff fluid and possessing a positive heat capacity, the solution should be thermodynamically stable there, and we therefore apply the entropy maximization technique in the shell region. To maximize the entropy function, its first variation must vanish at the boundaries of the shell, i.e. \(\delta\mathcal{S}=0\) at \(r=R_{1}\) and \(r=R_{2}\). We then check the sign of the second variation \(\delta^{2}\mathcal{S}\) for all variations of \(M(r)\). The entropy function is given by,
\[\mathcal{S}=\frac{\gamma k_{B}}{\hbar G}\int_{R_{1}}^{R_{2}}rdr\left(2\frac{dM }{dr}\right)^{\frac{1}{2}}\frac{1}{\sqrt{1-\frac{2M(r)}{r}}}. \tag{62}\]
A necessary and sufficient condition for the dynamic stability of a static, spherically symmetric solution of the field problem is thermodynamic stability in the context of a hydrodynamic treatment. The second derivative of the entropy function is given by,
\[\delta^{2}\mathcal{S}=\frac{\gamma k_{B}}{\hbar G}\int_{R_{1}}^{R_{2}}rdr \left(2\frac{dM}{dr}\right)^{-\frac{3}{2}}\left(1-\frac{2M}{r}\right)^{-\frac{ 1}{2}}\]
\[\left\{-\left[\frac{d(\delta M)}{dr}\right]^{2}+2\frac{(\delta M)^{2}}{r^{2} \left(1-\frac{2M}{r}\right)^{2}}\frac{dM}{dr}\left(1+2\frac{dM}{dr}\right) \right\}. \tag{63}\]
With the help of Eqs. (33), (37) and (43), we determine the functional form of \(M(r)\) as,
\[M(r)=\frac{r\left(3\alpha(\beta+8\pi)\ln(\frac{R_{1}}{r})+\left(\beta^{2}-12 \pi\beta+32\pi^{2}\right)\rho_{c}R_{1}^{2}\right)}{3\alpha(8\pi-\beta)}. \tag{64}\]
Figure 8: Variation of \(\eta\) with respect to thickness \(\epsilon\) (in km) for different values of \(\alpha\).
Figure 9: Variation of \(Z_{s}\) with respect to thickness \(\epsilon\) (in km) for different values of the model parameters \(\alpha\) and \(\beta\): red dot-dashed (\(\alpha=-4.1,\beta=3.9\)), blue dotted (\(\alpha=-4.4,\beta=3.6\)), green tan (\(\alpha=-4.7,\beta=3.3\)), black dashed (\(\alpha=-5.0,\beta=3.0\)), magenta thick (\(\alpha=-5.3,\beta=2.7\)).
Now, if we write the variation of \(M(r)\) as \(\delta M=\chi_{0}\psi\), where \(\psi\) vanishes at the boundaries \(R_{1}\) and \(R_{2}\), then integrating Eq.(63) by parts and using the vanishing of the variation \(\delta M\) at the endpoints, we get,
\[\delta^{2}\mathcal{S}=-\frac{\gamma k_{B}}{\hbar G}\int_{R_{1}}^{R_{2}}\frac{ rdr}{\sqrt{\left(1-\frac{2M}{r}\right)}}\left(2\frac{dM}{dr}\right)^{-\frac{3}{2}} \chi_{0}^{2}\left(\frac{d\psi}{dr}\right)^{2}<0. \tag{65}\]
It is evident from the above expression that, for any radial variation that vanishes at the endpoints of the shell, the entropy function in \(f(Q,T)\) gravity attains its maximum value. We may thus conclude that a perturbation of the fluid in the gravastar's intermediate shell leads to a decrease in the entropy of region II, which indicates that our solutions are stable against small perturbations with the specified endpoints. In essence, the stability of the gravastar is unaffected by the \(f(Q,T)\) gravity corrections.
## VIII Discussion and conclusion
Following the model put forward by Mazur and Mottola [1; 2] within the context of general relativity, we have developed a stellar model of a gravastar in the theory of \(f(Q,T)\) gravity. The model comprises three different regions, namely the interior region, the intermediate thin shell, and the exterior space-time, each with its own EoS. The interior region consists entirely of dark energy, as hypothesized by [1; 2]. The following are some of the gravastar's crucial characteristics:
* **Interior Region:** Using the EoS (25), we have derived two non-singular metric potentials (27,28) from the corresponding field equations in \(f(Q,T)\) gravity. The metric potentials are finite and remain positive throughout the entire interior region. This confirms that our proposed gravastar model in \(f(Q,T)\) gravity is free of the central singularity of a CBH.
* **Intermediate thin shell:** We have estimated the metric potentials in the region of the shell by using the thin-shell approximation. Eqs. (33) and (34) indicate that the two metric potentials remain finite as well as positive throughout the entire shell.
* **Pressure or Matter density:** Using the energy conservation equation (23), we have derived the pressure or matter density (35) in the shell. Fig.(2) represents the variation of the pressure or matter density with respect to the thickness parameter (\(\epsilon\)). One can see that the matter density of the shell grows monotonically towards the outer boundary of the shell. Since the shell is made of ultra-relativistic stiff fluid and its pressure or matter density increases monotonically towards the outer surface, we can physically interpret this as the amount of stiff matter rising towards the outer border rather than the inner region of the shell. That is why the shell's outer boundary becomes denser than its interior border.
* **Junction Condition and EoS:** The junction condition for the formation of a thin shell between the interior and exterior space-times is taken into account. We analyze the variation of the surface energy density with respect to the thickness parameter (\(\epsilon\)) using the Darmois-Israel junction condition, as shown in Fig.(4). The surface energy density increases towards the outer boundary of the shell. Besides that, in Fig.(3), we have verified that the NEC is satisfied over a range of model parameter values throughout the entire shell, which confirms the presence of ordinary or exotic matter in the shell. Apart from that, we obtain the limiting values of the radius (55) by requiring a real and positive shell mass.
* **Physical Features of the Model:** Using the geometrical quantities of the intermediate thin shell, we have analyzed some physical properties of the shell.
* **Proper length:** The variation of the proper length with respect to the thickness parameter \(\epsilon\) is given in Fig.(5) and in Eq.(56). The figure demonstrates that the proper length rises monotonically as the shell thickness increases. This monotonically increasing behavior of the proper length of the gravastar is similar to results obtained in modified gravity [20; 21].
* **Energy:** The variation of the shell energy is illustrated in Fig.(6). In this graph, it can be observed that the energy rises as the shell's thickness increases. The variation of the energy is comparable to the variation in matter density. It meets the requirement that the energy of the shell increases as the radial distance increases.
* **Entropy:** Fig.(7) depicts the evolution of the shell entropy, which grows with the thickness (\(\epsilon\)). Another requirement for a stable gravastar configuration is that the entropy should reach its greatest value on the surface, which is demonstrated in our analysis. Comparing the energy and entropy of our gravastar model with the previous work [34], both quantities should reach their maximum values at the boundary of the shell, which is established in our study.
* **Stability of the stellar model:** Finally, we have verified the stability of our proposed stellar model through the study of Herrera's cracking concept and through surface redshift analysis. In addition, we have used the entropy maximization technique to establish the stability of the gravastar.
* **Herrera's cracking concept:** We have analyzed the stability of the gravastar through the behaviour of the effective parameter \(\eta\). In Fig.(8) it is clear that for each value of \(\alpha\) the square of the speed of sound remains positive and does not exceed 1. Moreover, we can see that as the value of the \(\alpha\) parameter rises, the model approaches instability.
* **Surface Redshift:** Lastly, we used surface redshift analysis to check the stability of our suggested model. The surface redshift (\(Z_{s}\)) for any physically stable star arrangement should always be smaller than 2. By varying the model parameters \(\alpha\) and \(\beta\), we have plotted the surface redshift with respect to the thickness parameter (\(\epsilon\)) in Fig.(9), and in each case \(Z_{s}<1\). This demonstrates that our suggested model is stable under \(f(Q,T)\) gravity.
* **Entropy Maximization:** Here we have applied the entropy maximization technique to check the stability of the gravastar system. To maximize the entropy function, its first variation is set to zero at the boundaries of the shell, i.e. \(\delta\mathcal{S}=0\) at \(r=R_{1}\) and \(r=R_{2}\). After that, we checked the sign of the second variation \(\delta^{2}\mathcal{S}\) for all variations of \(M(r)\). Eq.(65) takes a negative value, which shows that the entropy attains its maximum value for all such variations. This further indicates the stability of our gravastar model in \(f(Q,T)\) gravity. The stability of gravastar models under the entropy maximization technique can also be found in [1; 34].
We can draw the conclusion that gravastars might exist within the framework of \(f(Q,T)\) gravity. In comparison to past work on gravastars, we have extended the thin-shell approximation up to second order, which provides a more accurate analytical solution for determining the physical parameters of the shell. In addition, we have applied Herrera's cracking concept as a new technique to check the stability of our proposed model in \(f(Q,T)\) gravity. These findings show that the \(f(Q,T)\) theory of gravity can be effectively applied to the study of gravastars. The problems of the black hole's event horizon and of the central singularity are resolved by a family of physically plausible, non-singular gravastar solutions.
**Data availability** There are no new data associated with this article.
## Acknowledgments
SP & PKS acknowledges the National Board for Higher Mathematics (NBHM) under the Department of Atomic Energy (DAE), Govt. of India for financial support to carry out the Research project No.: 02011/3/2022 NBHM(R.P.)/R & D II/2152 Dt.14.02.2022. PKS thanks Transilvania University of Brasov for Transilvania Fellowship for Visiting Professors. We are very grateful to the honorable referees and the editor for the illuminating suggestions that have significantly improved our research quality and presentation.
|
2309.08390 | Self-consistent simulation of photoelectrons in exoplanet winds: Faster
ionisation and weaker mass loss rates | Planetary mass loss is governed by several physical mechanisms, including
photoionisation that may impact the evolution of the atmosphere. Stellar
radiation energy deposited as heat depends strongly on the energy of the
primary electrons following photoionisation and on the local fractional
ionisation. All these factors affect the model-estimated atmospheric mass loss
rates and other characteristics of the outflow in ways that have not been
clearly elucidated. The shape of the XUV stellar spectra influences strongly
the photoionisation and heating deposition on the atmosphere. We elaborate on
the local and planet-wise effects, to clearly demonstrate the significance of
such interactions. Using the PLUTO code, we performed 1D hydrodynamics
simulations from Neptune to Jupiter size planets and stars from M dwarfs to
Sun-like. Our results indicate a significant decrease of the planetary mass
loss rate for all planetary systems when secondary ionisation is taken into
account. The mass loss rate is found to decrease by 43$\%$ for the more massive
exoplanet to 54$\%$ for the less massive exoplanet orbiting solar-like stars,
and up to 52$\%$ for a Jupiter-like planet orbiting a M type star. Our results
also indicate much faster ionisation of the atmosphere due to photoelectrons.
We built a self-consistent model including secondary ionisation by
photoelectron to evaluate its impact on mass loss rates. We find that
photoelectrons affect the mass loss rates by factors that are potentially
important for planetary evolution theories. We also find that enhanced
ionisation occurs at altitudes that are often probed with specific atomic lines
in transmission spectroscopy. Future modelling of these processes should
include the role of photoelectrons. Finally, we make available a simple yet
accurate parameterisation for atomic hydrogen atmospheres. | Alexande Gillet, Antonio Garcia Munoz, Antoine Strugarek | 2023-09-15T13:36:27Z | http://arxiv.org/abs/2309.08390v1 | Self-consistent simulation of photoelectrons in exoplanet winds: Faster ionisation and weaker mass loss rates.
###### Abstract
Context:Close-in exoplanets undergo extreme irradiation levels leading to hydrodynamic atmospheric escape and the formation of planetary winds. The planetary mass loss is governed by several physical mechanisms including photoionisation that may impact the evolution of the atmosphere. The stellar radiation energy deposited as heat depends strongly on the energy of the primary electrons following photoionisation and on the local fractional ionisation. All these factors affect the model-estimated atmospheric mass loss rates and other characteristics of the outflow in ways that have not been clearly elucidated. Moreover, the shape of the XUV stellar spectra strongly influences the photoionisation and heat deposition in the atmosphere. Substantial changes are therefore to be expected in the planetary mass loss rate.
Aims:We study the effect of secondary ionisation by photoelectrons on the ionisation and heating of the gas for different planet-star systems. We elaborate on the local and planet-wise effects, to clearly demonstrate the significance of such interactions.
Methods:Using the PLUTO code, we performed 1D hydrodynamics simulations for a variety of planets and stellar types. We include planets in the range from Neptune to Jupiter size, and stars from M dwarfs to Sun-like.
Results:Our results indicate a significant decrease of the planetary mass loss rate for all planetary systems when secondary ionisation is taken into account. The mass loss rate is found to decrease by 43% for the more massive exoplanet to 54% for the less massive exoplanet orbiting solar-like stars, and up to 52% for a Jupiter-like planet orbiting a M type star. Our results also indicate much faster ionisation of the atmosphere due to photoelectrons.
Conclusions:We built a self-consistent model including secondary ionisation by photoelectron to evaluate its impact on mass loss rates. We find that photoelectrons affect the mass loss rates by factors that are potentially important for planetary evolution theories. We also find that enhanced ionisation occurs at altitudes that are often probed with specific atomic lines in transmission spectroscopy. Future modeling of these processes should include the role of photoelectrons. To that end, we make available a simple yet accurate parameterisation for atomic hydrogen atmospheres.
## 1 Introduction
It has long been proposed that the atmospheres of exoplanets orbiting close-in to their host stars must be rapidly escaping (Lammer et al., 2003; Baraffe et al., 2004). Accurately predicting the mass loss rates remains however a difficult task due to the complexity of the star-planet interactions that participate in the mass loss process.
Planets can lose their atmospheres through multiple mechanisms, broadly separated into thermal and non-thermal (see Grondin et al., 2020, for a complete review on escape processes). Non-thermal processes such as ion sputtering, polar outflow or charge-exchange with the stellar wind dominate for planets orbiting at large distances from their star. In the context of planets on short-period orbits, thermal processes, which include Jeans and hydrodynamic escape, generally dominate. We will focus here on hydrodynamic escape, which occurs when intense stellar radiation deposits its energy in the planetary atmosphere (at various altitudes depending on the radiation wavelength), leading to the escape of the atmosphere into space.
The planetary wind interacts with the stellar wind and forms a number of hydrodynamic features such as a bow shock, a comet-like tail and Kelvin-Helmholtz instabilities (Tremblin & Chiang, 2013). Insight into these processes can be gained by measuring with in-transit spectroscopy the excess absorption that occurs at a number of atomic lines such as Hi Ly\(\alpha\) and H\(\alpha\), or the He i triplet at 10830 A. These features have been measured for hot Jupiters (e.g. Vidal-Madjar et al., 2003; Lecavelier Des Etangs et al., 2010; Jensen et al., 2012) and hot Neptunes (e.g. Kulow et al., 2014; Ehrenreich et al., 2015; Ben-Jaffel et al., 2022). A number of studies have been conducted to understand the global problem of planetary escape with hydrodynamical 1D models (e.g. Yelle, 2004; Garcia Munoz, 2007; Murray-Clay et al., 2009; Koskinen et al., 2013). Such models predict velocities for the planetary wind of a few km/s in the planet's proximity, which are consistent with the line broadening of H\(\alpha\) and the He i triplet measured with high-resolution spectroscopy (e.g. Salz et al., 2018; Garcia Munoz & Schneider, 2019). However, they fail to predict the velocities of \(\sim\)100 km/s that are found with Ly\(\alpha\) spectroscopy (e.g. Vidal-Madjar et al., 2003; Ben-Jaffel, 2007). Whether the velocities measured with Ly\(\alpha\) spectroscopy are representative of the planetary wind or are instead indicative of stellar wind protons that become neutralised by charge-exchange with the planetary wind remains an open question (Tremblin & Chiang, 2013).
Many authors have looked into this problem with 2D and 3D models that include self-consistent radiative transfer, with the goal of constraining the mass loss rates from the planets (e.g. Tripathi et al., 2015; Debrecht et al., 2019). Recently, Shaikhislamov et al. (2021) developed a global 3D multi-fluid model to investigate the He 10830 A line; Daley-Yates & Stevens (2019) explored the star-planet interaction of the wind in the presence of a magnetic field; and Carolan et al. (2021) looked at the effects of the stellar wind strength on a magnetised planetary wind, showing that the increased polar loss compensates for the decreased mass loss at the dead zones of the planet.
Accurately predicting the atmospheric mass loss rate is essential to answering fundamental questions about the evolution of planets. Indeed, some atmospheres, in particular those of low-mass planets, may be escaping so efficiently that they can be completely lost to space on timescales shorter than the planet's lifetime. In the case of hydrodynamic escape of strongly irradiated exoplanets, the aim of this work, calculating the mass loss rate involves determining how much of the stellar radiation energy is converted into heat, and how much goes otherwise to ionisation and excitation of the atmospheric gas.
We focus here on atmospheric gas made of hydrogen atoms H because although H\({}_{2}\) should be prevalent in primary atmospheres, the molecule will typically dissociate rapidly in the planet's upper atmosphere. Also, the physics of photoelectron interactions with H\({}_{2}\) is more complex than for H atoms because, being a molecule, H\({}_{2}\) offers additional channels for excitation, dissociation and ionisation that must be tracked individually (Hallett et al., 2005). We postpone such a study to future work and focus here on the essentials for an atomic atmosphere. Upon absorption by the atmospheric atomic gas, the stellar X-ray and Extreme Ultraviolet photons (jointly referred to as XUV, and covering wavelengths from a few A to the Lyman continuum threshold at 912 A) release high velocity electrons. These so-called photoelectrons have energies \(E_{0}=hc/\lambda-13.6\) eV, where \(h\) and \(c\) are Planck's constant and the speed of light, respectively, and \(\lambda\) is the photon wavelength. The photoelectrons can excite the gas and produce secondary electrons while slowing down. The gas heating corresponds to the fraction of the photoelectron's initial energy \(E_{0}\) that is not expended in excitation or ionisation but that goes instead into kinetic energy.
We emphasize the importance of two parameters that dictate the fraction of the photoelectron's initial energy that is deposited as heat, namely: the local fractional ionisation \(x_{e}\) and the energy \(E_{0}\) of the primary photoelectrons. In particular, if the local fractional ionisation is high, elastic collisions between the fast and thermal electrons ensure that most of \(E_{0}\) is transferred to the thermal electrons, thereby heating the gas. Also, if \(E_{0}\) is less than 10.2 eV, i.e. below the lowermost threshold for inelastic collisions in the H atom, all of \(E_{0}\) is transferred as heat to the background gas in elastic collisions. In these two limits, the prediction of the gas heating is relatively straightforward. Importantly, when the fractional ionisation is low or moderate and \(E_{0}\) is sufficiently large, additional electrons and ions are created during the secondary ionisation process, when the primary photoelectron interacts with other hydrogen atoms. For the most energetic primary electrons created after photoionisation, a cascade of multiple ionisation events follows.
The fundamental information for the treatment of photoelectrons, and for the assessment of how much of their energy goes into excitation, ionisation and heating of the gas has been available for decades. For example, Habing & Goldsmith (1971) and Shull (1979) have looked at the production of secondary electrons induced by soft X-rays and how they interact with a primordial gas using a Monte Carlo method. The outcome of such modelling efforts can be readily parameterised as a function of \(x_{e}\) and \(E_{0}\) to take into account the effect of photoelectrons in the net ionisation rate and heating rate induced by XUV photons impacting the upper atmosphere of short-period orbit exoplanets.
In this work, we revisit the problem by exploring self-consistently the simultaneous heating and ionisation that occurs as the photoelectrons slow down, taking advantage of recent advances in the characterisation of the XUV spectra of stars. This effect is often neglected in the studies dedicated to modelling hydrodynamic escape of planetary atmospheres (e.g. Lammer et al., 2003; Wu & Lithwick, 2013; Garcia Munoz et al., 2020, 2021). With the ultimate goal of better predicting the loss rate and the main features of planetary winds, in some cases an arbitrary pre-fixed fraction (typically, 2-30%) of the photoelectron energy \(E_{0}\) is assumed to transfer into heating (Linssen et al., 2022). Our work shows how to partly overcome such simplified treatments and elaborates on the importance of photoelectrons in the strongly-irradiated atmospheres of some exoplanets.
Guo & Ben-Jaffel (2016) have covered related ideas, with an emphasis on how the spectral energy distribution of the star affects the mass loss rate and ionisation of strongly-irradiated atmospheres. Following their initial study, in this work we put significant effort into elucidating how the photoelectrons affect the atmosphere both locally and globally, and provide simple yet accurate descriptions for the self-consistent implementation of such processes in the continuity and energy conservation equations of the gas. These prescriptions should be useful to other modelling efforts. In addition, we leverage recent developments in the reconstruction of the XUV spectra for cool stars by observations at X-ray and far ultraviolet wavelengths (France et al., 2016).
To examine the quantitative effect of photoelectrons and assess whether the planet's gravity plays a role, we performed 1D spherical hydrodynamic simulations with the PLUTO code (Mignone et al., 2007). We focused on the escaping atmospheres of four hypothesised planetary systems with masses ranging from 0.02 \(M_{J}\) to 0.69 \(M_{J}\) (similar to HD209458b) and assessed the effect of different XUV spectra taken from the MUSCLES survey (France et al., 2016). We defined the planet size \(R_{p}\) so that the planet density remains constant in all cases. This is a somewhat arbitrary choice, but useful because of its simplicity. In the energy-limited limit, the mass loss rate (Erkaev et al., 2007) is proportional to the inverse of the planet density, in which case assuming a constant bulk density serves as a valid basis to compare against that limit. Our simulations incorporate the relevant physics of photoelectrons in the continuity and energy equations.
The plan of the paper is the following: the model, including the PLUTO setup and our treatment of radiative transfer with photoelectrons, is described in section 2. In section 3, we describe the relevant physics of photoelectrons and its implementation in PLUTO. Section 4 is dedicated to the description of the results of the atmospheric escape of a Neptune-like planet. Section 5 describes the dependence of our results on the planet mass, and finally in section 6 we study the impact of the shape of the stellar spectra of M and K type stars on secondary ionisation.
## 2 Model description
### Physical model
The model is constructed with the hydrodynamics code PLUTO (Mignone et al. 2007), which solves the Euler equations in a rotating reference frame. The 1D equations for conservation of mass, momentum and energy solved in PLUTO are:
\[\frac{\partial\rho}{\partial t}+\mathbf{\nabla}\cdot(\rho\mathbf{u})=0\,, \tag{1}\]
\[\frac{\partial(\rho\mathbf{u})}{\partial t}+\mathbf{\nabla}\cdot[\rho \mathbf{u}\mathbf{u}+P_{T}\mathbf{I}]=-\rho\nabla\phi+\rho\mathbf{F}_{\rm cont }\,, \tag{2}\]
\[\frac{\partial(E+\rho\phi)}{\partial t}+\mathbf{\nabla}\cdot[(E+P_{T} +\rho\phi)\mathbf{u}]=\rho\mathbf{F}_{\rm cont}\cdot\mathbf{u}+H-C\,, \tag{3}\]
where \(r\) is distance as measured from the planet's centre and \(t\) is time, and where \(\mathbf{u}\) is the fluid velocity, \(\rho\) the density, \(P_{T}\) the thermal pressure, \(E=P_{T}/(\gamma-1)+\rho\mathbf{u}^{2}/2\) the total energy with \(\gamma=5/3\), the adiabatic index for a mono-atomic gas, and \(C\) and \(H\) are the cooling and heating terms to be defined in SS2.3. The joint gravitational potential of the planet plus the star is defined as:
\[\phi=-\frac{GM_{p}}{r}-\frac{GM_{\star}}{R_{\rm orbit}-r}\,, \tag{4}\]
where \(G\) is the gravitational constant, and \(M_{\star}\) and \(M_{p}\) are the mass of the star and the planet, respectively. Finally, the centrifugal force is written as:
\[\mathbf{F}_{\rm cont}=-\frac{GM_{\star}}{R_{\rm orbit}^{3}}\left(R_{\rm orbit }-r\right)\mathbf{e}_{r}\,, \tag{5}\]
where \(R_{\rm orbit}\) is the distance from the planet centre to the stellar centre, \(\mathbf{e}_{r}\) is the unit vector in the outwards radial direction.
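For orientation, the gravitational potential of Eq. (4) and the centrifugal term of Eq. (5) are simple to evaluate; the sketch below does so in cgs units for an illustrative M0.69-like configuration (the numbers are rough placeholders, not the values used by the PLUTO setup).

```python
import numpy as np

G = 6.674e-8                                 # gravitational constant [cm^3 g^-1 s^-2]

def potential(r, M_p, M_star, R_orbit):
    """Joint planet + star potential along the planet-star line, cf. Eq. (4)."""
    return -G * M_p / r - G * M_star / (R_orbit - r)

def centrifugal(r, M_star, R_orbit):
    """Radial centrifugal term per unit mass, cf. Eq. (5)."""
    return -G * M_star / R_orbit**3 * (R_orbit - r)

# Rough placeholders: a 0.69 M_J planet at 0.045 AU from a 1 M_sun star, evaluated at r = 2 R_p
M_J, M_sun, AU = 1.898e30, 1.989e33, 1.496e13
r = 2.0 * 9.24e9                             # 2 R_p for the M0.69 case [cm]
print(potential(r, 0.69 * M_J, M_sun, 0.045 * AU), centrifugal(r, M_sun, 0.045 * AU))
```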
We consider here an atmosphere composed of atomic hydrogen in neutral Hi and ionised H\({}^{+}\) forms, plus thermal electrons. The density in the equations above corresponds to the total mass density \(\rho=\rho_{\rm Hh}+\rho_{\rm H^{+}}\), which neglects the contribution of thermal electrons. We define the neutral fraction \(x_{\rm Hh}=\rho_{\rm H}/\rho\) and the fractional ionisation as \(x_{e}=1-x_{\rm Hh}\), both taking values between 0 and 1. On the basis of charge neutrality, the number density of ions and electrons is the same. To determine the partitioning between Hi, H\({}^{+}\) and electrons, we consider the processes for:
collisional ionisation: H + e\({}^{-}\)\(\rightarrow\) H\({}^{+}\) + 2 e\({}^{-}\),
radiative recombination: H\({}^{+}\) + e\({}^{-}\)\(\rightarrow\) H + h\(\nu\),
photoionisation: H + h\(\nu\)\(\rightarrow\) H\({}^{+}\) + e\({}^{-}\).
In our hydrodynamic simulations, we have adapted the Simplified Non-Equilibrium Cooling (SNe) module from PLUTO (Tesileanu et al. 2008) in the optically thin limit to track the fraction of neutrals and ions of atomic hydrogen in the chemical reaction network equation. In that module, PLUTO solves the equation for the neutral fraction evolution:
\[\frac{\partial x_{\rm Hh}}{\partial t}=n_{e}\left[-(c_{r}+c_{i})x_{\rm Hh}+c_{ r}\right]-Jx_{\rm Hh}\,, \tag{6}\]
where \(c_{r}=2.6\cdot 10^{-11}\times T^{-0.5}\) and \(c_{i}=5.83\cdot 10^{-11}\sqrt{T}\exp(-157890/T)\) are the recombination and ionisation rate coefficients in cm\({}^{3}\)/s, both dependent on the temperature \(T\) [K] of the gas. Eq. 6 also depends on the electron number density \(n_{e}=(\rho/m_{p})x_{e}\), where \(m_{p}\) is the proton mass [g/particle]. In this work, we added the extra term in Eq. 6 to account for the photoionisation of the gas by the stellar XUV photons impacting the planetary atmosphere. The formulation of the photoionisation rate coefficient \(J\) [s\({}^{-1}\)] is given in section 2.3.
Our model considers a single thermal temperature to describe the kinetic energy of the neutral, ion and electron components of the gas. This tacitly assumes that the transfer of kinetic energy between them occurs very rapidly, which should be a valid approximation for the relevant fractional ionisations.
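To illustrate the interplay of the three processes entering Eq. (6), the following standalone sketch integrates the neutral fraction for a single gas parcel at fixed temperature, density and photoionisation rate (all values are placeholders chosen for illustration only).

```python
import numpy as np
from scipy.integrate import solve_ivp

m_p = 1.6726e-24                              # proton mass [g]

def rate_coefficients(T):
    """Recombination and collisional-ionisation rate coefficients [cm^3/s]."""
    c_r = 2.6e-11 * T**-0.5
    c_i = 5.83e-11 * np.sqrt(T) * np.exp(-157890.0 / T)
    return c_r, c_i

def dxHI_dt(t, x, rho, T, J):
    """Right-hand side of Eq. (6) for the neutral fraction."""
    c_r, c_i = rate_coefficients(T)
    n_e = (rho / m_p) * (1.0 - x[0])          # electron density from charge neutrality
    return [n_e * (-(c_r + c_i) * x[0] + c_r) - J * x[0]]

# Placeholder parcel: rho = 1e-17 g/cm^3, T = 1e4 K, J = 1e-4 s^-1, initially fully neutral
sol = solve_ivp(dxHI_dt, (0.0, 1e6), [1.0], args=(1e-17, 1e4, 1e-4), rtol=1e-8)
print(sol.y[0, -1])                           # quasi-steady neutral fraction
```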
### Radiative transfer
Photoionisation is the only heating source of the hydrogen gas included in our model. It is caused by XUV photons coming from the star (see fig 1). In principle, the XUV stellar spectrum should consider all possible wavelengths below 912 A, which is the threshold at which ground-state H atoms can absorb. In practice, at sufficiently short wavelengths the atmosphere of a planet becomes thin, and these wavelengths can safely be neglected in the radiative transfer. In this work, we consider a solar spectrum which starts at 15 A (see below), and we take that limit also as the shortest-wavelength in our implementation of other stellar fluxes and the H atom cross-sections.
In our simulations, we compute the local stellar flux \(F_{\star}(r,\lambda)\) from a reference stellar flux spectrum at 1 AU, which is scaled by \((1\mathrm{AU}/R_{\rm orbit})^{2}\) to produce the top-of-the-atmosphere spectrum \(F_{\star}(r=TOA,\lambda)\), and is attenuated with an optical depth \(\tau_{\lambda}\) using Beer-Lambert's law. Namely:
\[F_{\star}(r,\lambda)=F_{\star}(r=TOA,\lambda)e^{-\tau_{\lambda}(r)}\,. \tag{7}\]
The wavelength-dependent optical depth \(\tau_{\lambda}(r)\) in the direction towards the star is calculated through:
\[\tau_{\lambda}(r)=\sigma_{\lambda}\int_{r}^{\infty}n_{\rm Hh}dr, \tag{8}\]
where the integral is performed along the planet-star line of sight. Here, \(n_{\rm Hh}=(\rho/m_{p})x_{\rm Hh}\) is the neutral number density [cm\({}^{-3}\)] and \(\sigma_{\lambda}\) [cm\({}^{2}\)] the wavelength-dependent photoionisation cross-section of hydrogen, given by:
\[\sigma_{\lambda}=\sigma_{0}\left(\frac{\lambda}{\lambda_{0}}\right)^{3}\,, \tag{9}\]
where \(\sigma_{0}=6.3\times 10^{-18}\) cm\({}^{2}\) is the cross-section at the threshold wavelength \(\lambda_{0}=912\) A.
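On a discrete radial grid, Eqs. (7)-(9) amount to accumulating the neutral column above each point and applying Beer-Lambert's law per wavelength; a compact sketch (with a placeholder density profile, not the simulated one) is given below.

```python
import numpy as np

sigma0, lam0 = 6.3e-18, 912.0                            # cm^2 and Angstrom, cf. Eq. (9)

def attenuated_flux(F_toa, lam, r, n_HI):
    """Attenuate the top-of-atmosphere flux F_toa(lam) at every radius r, using
    tau_lambda(r) = sigma_lambda * N_HI(r), with N_HI the neutral column above r (Eqs. 7-8)."""
    sigma = sigma0 * (lam / lam0) ** 3                   # (n_lam,)
    seg = 0.5 * (n_HI[1:] + n_HI[:-1]) * np.diff(r)      # trapezoidal column segments
    col = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))   # column above each r, (n_r,)
    tau = col[:, None] * sigma[None, :]                  # (n_r, n_lam)
    return F_toa[None, :] * np.exp(-tau)

# Placeholder inputs: exponential neutral density between 1 and 10 R_p, two wavelength bins
r = np.linspace(9.24e9, 9.24e10, 400)                    # cm
n_HI = 1e12 * np.exp(-5.0 * (r / r[0] - 1.0))            # cm^-3
lam = np.array([100.0, 900.0])                           # Angstrom
F_toa = np.array([10.0, 100.0])                          # erg cm^-2 s^-1 A^-1
print(attenuated_flux(F_toa, lam, r, n_HI)[0])           # attenuated flux at the grid base
```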
Figure 1: Illustration of the model of photoionisation and heating of an atmosphere. The atmosphere is irradiated by XUV stellar photons with an incident flux Fxuv. The gas particles are under the influence of stellar gravity \(F_{\rm star}\), planet gravity \(F_{\rm planet}\) and centrifugal forces \(F_{\rm cont}\).
For the stellar irradiation, we adopted a solar spectrum downloaded from the SOLID1 project, as observed on December 13th 2021. We degraded the spectrum in 20 bins of equal size in the range from \(\lambda_{\rm min}=\)15 A to \(\lambda_{0}=\)912 A. We verified that the choice of 20 bins does not impact the results presented in what follows by performing a few simulations with 40 equal-size bins, which did not show any significant difference. Figure 2 displays the original solar spectrum in black along with the degraded spectrum, the latter represented by the blue bars. The XUV-integrated flux at 0.045 AU is 2174 erg/cm\({}^{2}\)/s.
Footnote 1: European comprehensive solar irradiance data exploitation; [https://projects.pmodwrc.ch/solid](https://projects.pmodwrc.ch/solid)
### Cooling, heating and ionisation rates
The cooling term \(C\) [erg s\({}^{-1}\) cm\({}^{-3}\)] in Eq.3 represents the sum of the thermal energy of the electrons lost by collisional ionisation and radiative recombination:
\[C=n_{e}n_{H}[c_{i}2.17\cdot 10^{-11}x_{\rm Ht}+c_{r}1.07\cdot 10^{-12}(1-x_ {\rm Ht})\frac{T}{11590}], \tag{10}\]
Here, \(n_{H}=n_{\rm Ht}+n_{\rm H^{+}}\) is the total number density of hydrogen nuclei, and \(c_{r}\) and \(c_{i}\) are the rate coefficients from Eq. 6. Our model does not include molecular chemistry and therefore it neglects for example cooling by H\({}_{3}^{+}\) emission in the infra-red. In this expression, \(2.17\cdot 10^{-11}\) erg is equivalent to the energy of 13.6 eV required to ionise the neutral atom. Correspondingly, \(1.07\cdot 10^{-12}(T/11590)\) erg represents the \(\sim\)0.67 kT that are extracted from the kinetic energy of the thermal electrons and lost to radiation if the environment is optically thin. In future work, we intend to move away from the optically thin approximation and treat the recombination into the different bound levels separately, as well as keeping track of the radiated energy.
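Eq. (10) translates directly into code; the short sketch below evaluates the two loss channels for an illustrative (placeholder) parcel.

```python
import numpy as np

def cooling_rate(n_e, n_H, x_HI, T):
    """Losses to collisional ionisation and radiative recombination, Eq. (10) [erg s^-1 cm^-3]."""
    c_r = 2.6e-11 * T**-0.5
    c_i = 5.83e-11 * np.sqrt(T) * np.exp(-157890.0 / T)
    return n_e * n_H * (c_i * 2.17e-11 * x_HI
                        + c_r * 1.07e-12 * (1.0 - x_HI) * T / 11590.0)

# Placeholder parcel: n_e = 1e5 cm^-3, n_H = 1e6 cm^-3, 90% neutral, T = 1.2e4 K
print(cooling_rate(n_e=1e5, n_H=1e6, x_HI=0.9, T=1.2e4))
```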
When the stellar radiation hits the planetary atmosphere, it ionises the gas and contributes to its heating. The photoionisation rate coefficient \(J\) [s\({}^{-1}\)] is given by:
\[J=\int_{\lambda_{\rm min}}^{\lambda_{0}}\sigma_{\lambda}F_{\star}\left(\frac{\lambda }{hc}\right)\left(1+\Phi_{\lambda,x_{e}}\right)\mathrm{d}\lambda\,, \tag{11}\]
where \(F_{\star}\) [erg cm\({}^{-2}\)s\({}^{-1}\)A\({}^{-1}\)] is the attenuated stellar flux described in Eq. 7. \(\Phi_{\lambda,x_{e}}\) is the number of secondary ions created per photoionisation, to be described in section 3.
The heating deposition rate \(H\) [erg s\({}^{-1}\) cm\({}^{-3}\)] in the energy equation can be expressed as:
\[H=n_{\rm Ht}\int\sigma_{\lambda}F_{\star}\left(1-\frac{\lambda}{\lambda_{0}}\right) \eta_{\lambda,x_{e}}\mathrm{d}\lambda \tag{12}\]
with \(\eta_{\lambda,x_{e}}\) being the heating efficiency to be described in section 3. Omitting the negative term in the parenthesis of the above equation, and assuming \(\eta_{\lambda,x_{e}}=1\), would provide the incident energy that is deposited in the gas. The negative term in the parenthesis subtracts from it the energy that is expended to produce the primary photoelectrons during photoionisation.
Models that adopt a number of secondary ions \(\Phi_{\lambda,x_{e}}=0\) and a heating efficiency \(\eta_{\lambda,x_{e}}=1\) assume that the surplus energy of the photoelectrons after photoionisation goes entirely into heating the gas. However, this is not generally true, since the photoelectrons also cause excitation and ionisation of the surrounding neutral atoms. Ideally, one has to determine precisely the heating efficiency by estimating which fraction of the energy goes into heating and which fraction is being used for the other processes.
Here, we aim to properly calculate the effect of photoelectrons on the chemistry and heating of the atmospheric gas, taking into account the dependence of these effects on the photoelectron energy and fractional ionisation. In section 3 we detail the physical processes in play and how to determine and parameterise \(\eta_{\lambda,x_{e}}\) and \(\Phi_{\lambda,x_{e}}\).
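For a binned, attenuated stellar spectrum, Eqs. (11)-(12) reduce to sums over wavelength bins once the factors \(\eta_{\lambda,x_{e}}\) and \(\Phi_{\lambda,x_{e}}\) are known (e.g. from the parameterisation of section 3); a minimal sketch with placeholder bin values follows.

```python
import numpy as np

h, c = 6.626e-27, 2.998e10             # erg s, cm/s
sigma0, lam0 = 6.3e-18, 912.0          # cm^2, Angstrom

def photo_rates(lam, F_lam, dlam, n_HI, eta, Phi):
    """Photoionisation rate coefficient J [s^-1] (Eq. 11) and heating rate H [erg s^-1 cm^-3]
    (Eq. 12), from an attenuated flux F_lam [erg cm^-2 s^-1 A^-1] binned at wavelengths lam [A]."""
    sigma = sigma0 * (lam / lam0) ** 3
    lam_cm = lam * 1e-8                                     # Angstrom -> cm for the photon energy
    J = np.sum(sigma * F_lam * (lam_cm / (h * c)) * (1.0 + Phi) * dlam)
    H = n_HI * np.sum(sigma * F_lam * (1.0 - lam / lam0) * eta * dlam)
    return J, H

# Two placeholder bins: a short-wavelength bin with strong secondary ionisation, a long one without
lam = np.array([100.0, 800.0]); dlam = np.array([45.0, 45.0])
F_lam = np.array([5.0, 50.0])
eta = np.array([0.3, 1.0]); Phi = np.array([6.0, 0.0])
print(photo_rates(lam, F_lam, dlam, n_HI=1e8, eta=eta, Phi=Phi))
```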
### Numerical Method and initial set-up
To solve Eqs. 1-3 we use the Harten-Lax-Van Leer approximate Riemann Solver that solves exactly stationary contact discontinuities between the cells (HLLC, see Toro 2009). We use linear reconstruction for the spatial order of integration and a third order Runge-Kutta scheme (RK3) for the time evolution. According to the stiffness of equations 3 (including cooling 10 and heating 12) and 6, a dynamically-adaptive integration strategy is adopted (see Tesileanu et al. 2008, for more details).
PLUTO uses dimensionless units rather than for example cgs units so that flow quantities can be properly scaled to "reasonable" numbers (Mignone et al. 2007). We work with three fundamental units: \(\rho_{0}\), \(L_{0}\) and \(V_{0}\). The reference density at the base of the atmosphere is fixed for all planetary systems at \(\rho_{0}=1.326\)\(\times\)\(10^{-10}\) g/cm\({}^{3}\), corresponding to a pressure of 12 \(\mu\)bar (a value adopted by most models to define their boundary condition) for a temperature of 1100 K. The reference length \(L_{0}\) is equal to the planetary radius \(R_{p}\) and the reference velocity \(V_{0}\) is calculated as \(V_{0}=(GM_{p}/L_{0})^{0.5}\). Their values are given in Table 1 for the models considered in this study. The computational domain is composed of a stretched grid of 500 cells between 1 and 30 \(R_{p}\) with a stretch factor of 1.017. The grid is extended below 1 \(R_{p}\) by adding 10 cells of uniform size between 0.999 and 1 \(R_{p}\). This extension is used by PLUTO to establish the boundary conditions at the bottom of our model.
In our simulations, the atmosphere is initialised with the density profile:
\[\rho(r)=\rho_{0}exp\left[\alpha_{p}\left(\frac{R_{p}}{r}-1\right)\right]\,, \tag{13}\]
Figure 2: Solar XUV spectrum. Black: Original solar spectrum downloaded from SOLID at 1 AU. Blue bars: Binned spectrum used in our calculations.
where \(\alpha_{p}\) sets the initial density scale height in the atmosphere. For the simulations presented in the work, \(\alpha_{p}\) is set to 20. The atmosphere is initialised everywhere at the equilibrium temperature \(T_{\rm eq}\), which is equal to 1100 K in all our models in which the planet orbits a Sun-like star. The ideal gas law is used to initialise the pressure, and the neutral hydrogen fraction profile is initially set as:
\[x_{\rm Ht}=\exp\left(\frac{R_{p}}{r}-1\right)\,, \tag{14}\]
and the velocity is set to zero.
In sections 4 and 5, all planetary systems are considered to orbit a solar-type star with a separation of \(R_{\rm orbit}\) = 0.045 AU. Table 1 lists the physical parameters used for all simulations of each planetary system. We choose four different planetary masses: 0.02, 0.05, 0.1 and 0.69 \(M_{J}\). This ranges from a sub-Neptune planet to a Jupiter-like planet similar to HD209458b. In section 6, we consider different orbital distances for the planets around different stars to ensure having the same integrated flux.
At the inner boundary, we set the density and the pressure (12 \(\mu\)bar) to their initial value, the velocity to zero, and assume that the gas is entirely neutral. At the outer boundary, we use a free outflow boundary condition, with the gradients of all variables (\(P\), \(\rho\), \(v_{r}\) and \(x_{\rm Ht}\)) equal to 0.
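The initial condition just described (Eqs. 13-14 on a stretched grid) is easy to reproduce outside PLUTO; the sketch below builds an approximate version of the grid and profiles in code (the exact PLUTO grid generation may differ in detail).

```python
import numpy as np

def stretched_grid(r_min=1.0, r_max=30.0, n_cells=500, stretch=1.017):
    """Cell edges of a geometrically stretched grid, in units of R_p."""
    widths = stretch ** np.arange(n_cells)
    widths *= (r_max - r_min) / widths.sum()           # rescale so the edges span [r_min, r_max]
    return np.concatenate(([r_min], r_min + np.cumsum(widths)))

def initial_state(r, rho0=1.326e-10, alpha_p=20.0, T_eq=1100.0):
    """Initial density (Eq. 13), neutral fraction (Eq. 14) and pressure (ideal gas, atomic H)."""
    k_B, m_p = 1.3807e-16, 1.6726e-24                  # erg/K, g
    rho = rho0 * np.exp(alpha_p * (1.0 / r - 1.0))     # r in units of R_p
    x_HI = np.exp(1.0 / r - 1.0)
    P = rho / m_p * k_B * T_eq                         # ~12 dyn/cm^2 = 12 microbar at the base
    return rho, x_HI, P

r_edges = stretched_grid()
r_centres = 0.5 * (r_edges[1:] + r_edges[:-1])
rho, x_HI, P = initial_state(r_centres)
print(r_edges[0], r_edges[-1], P[0])
```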
## 3 Secondary Ionisation by Photoelectrons
### Theoretical baseline
Before thermalising and depositing their kinetic energy into heat, photoelectrons interact with the neutral hydrogen atoms in the gas multiple times, losing at each collision a part of their energy. This interaction has been investigated by a number of authors for a variety of astrophysical applications (Habing & Goldsmith 1971; Shull 1979; Furlanetto & Stoever 2010). In the investigation of exoplanetary atmospheres, Cecchi-Pestellini et al. (2006) and Shenatovich et al. (2014) studied the effect of stellar X-ray irradiation on the heating by including the effect of photoelectrons and found that the heating efficiency is notably less than 1 if the atmosphere is mostly neutral when photoelectrons are included. Their simulations are not self-consistent, though, as those studies do not let the modified hydrodynamic solution alter the conditions that the photoelectrons experience. Guo & Ben-Jaffel (2016) investigated the production of secondary ions in H\({}_{2}\)/H atmospheres, and found an increase of the total ionisation rate by a factor of 10 in the region r \(<\) 1.05 \(R_{p}\) that is reached only by the shortest-wavelength photons. They also report a drop in the mass loss rate by less than a factor of 2 when photoelectrons are included, although the connection with the local heating efficiency is not established. Garcia Munoz (2023) has looked into the photoelectron-driven processes that affect the population of the first excited level of hydrogen that is sensed in transmission spectroscopy of the H\(\alpha\) line.
Based on the physics of the excitation and ionisation of the H atom, several ranges of energies may be considered. For \(E_{0}\)\(<\)10.2 eV, the threshold for the first excitation level, all the energy of photoelectrons is deposited as heat (Dalgarno & McCray 1972; Cravens et al. 1975). For energies 10.2 eV \(<\)\(E_{0}\)\(<\)13.6 eV, the surplus of energy goes either into heating or into excitation. Finally, for energies \(E_{0}\)\(>\) 13.6 eV, which defines the ionisation threshold, it can additionally be used to ionise further the gas (see Fig. 3). The energies 10.2 eV and 13.6 eV are the thresholds for \(E_{0}\) that enable excitation and ionisation of the atom after photoionisation, corresponding to the wavelengths 520 and 455 A respectively. For even shorter-wavelength photons, the ejected photoelectron will be able to excite/ionise the hydrogen atoms multiple times. It is reasonable to assume that the excited H atom (resulting from either collisions of ground state H with photoelectrons or with thermal electrons or from the recombination of protons) will radiate away its excitation energy. Indeed, al
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameters & M0.69 & M0.1 & M0.05 & M0.02 \\ \hline \(M_{p}\) (\(M_{J}\)) & 0.69 & 0.10 & 0.05 & 0.02 \\ \(R_{p}\) (\(R_{J}\)) & 1.32 & 0.69 & 0.55 & 0.40 \\ \hline \(R_{\rm orbit}\) (\(R_{p}\)) & 73 & 139 & 174 & 239 \\ \(M_{*}/M_{p}\) & 1593 & 10997 & 21995 & 52394 \\ \(L_{0}\) (10\({}^{9}\) cm) & 9.24 & 4.83 & 3.85 & 2.80 \\ \(V_{0}\) (10\({}^{6}\) cm/s) & 3.075 & 1.618 & 1.282 & 0.950 \\ \hline \end{tabular}
\end{table}
Table 1: Planet parameters and PLUTO parameters for each planetary system M0.69-M0.02 considered here.
Figure 3: Schematic of the energy channelling from an impacting photon into a photoelectron that can lead to the heating of the gas, its excitation, and secondary ionisation.
though the Lyman-\(\alpha\) line (and other lines in the Lyman series) is not optically thin, it turns out that even after accounting for opacity its effective radiative timescale is shorter than the collisional de-excitation timescale at the relevant atmospheric pressures (see Garcia Munoz 2023).
We calculated the heating efficiency and ionisation yield with the model described in Garcia Munoz (2023). The calculations were conducted over a grid in energy \(E_{0}\) that resolves well the details in the heating efficiency and the ionisation yield at five values of the fractional ionisation, namely, \(x_{e}\)=\(10^{-4}\), \(10^{-3}\), \(10^{-2}\), \(10^{-1}\) and \(1\). Figure 4 shows the heating \(E_{h}\) as a function of the primary electron energy \(E_{0}\) and \(x_{e}\) (different colours). The black solid line for \(x_{e}=1\) coincides with \(E_{h}\)=\(E_{0}\). We define the heating efficiency, \(\eta_{\lambda,x_{e}}=E_{h}/E_{0}\) which corresponds to the ratio between the energy that goes into heating \(E_{h}\) and the initial energy of the photoelectron \(E_{0}\). As noted earlier, for any prescribed \(x_{e}\), the heating efficiency depends on \(E_{0}\) or, equivalently, on the energy of the incident photon \(hc/\lambda\), and takes values from \(0\) to \(1\).
When \(\eta_{\lambda,x_{e}}=1\) all energy is deposited as heat while when \(\eta_{\lambda,x_{e}}<1\) a fraction of the energy is diverted to excite or ionise other atoms. To facilitate its use in our hydrodynamical model and in other similar models, we fitted the MC calculations of \(\eta_{\lambda,x_{e}}\) to \(4^{\rm th}\) order polynomials:
\[\eta_{\lambda,x_{e}}=\exp\left(\sum_{n=1}^{4}a_{n,\lambda}\times(\log_{10}x_{e}) ^{n}\right). \tag{15}\]
The polynomial coefficients \(a_{n,\lambda}\) can be found in Appendix A. When the fractional ionisation falls below \(10^{-4}\), \(\eta_{\lambda,x_{e}}\) becomes essentially independent of \(x_{e}\). In those conditions, we fix \(x_{e}\) to \(10^{-4}\) to estimate \(\eta_{\lambda,x_{e}}\).
During the secondary ionisation process, a number \(\Phi_{\lambda,x_{e}}\) of secondary ions and electrons are produced from the primary electrons of energy \(E_{0}\). Figure 5 reports Monte Carlo model calculations of the number of secondary ions created for a primary electron of energy \(E_{0}\), and for a range of conditions of fractional ionisation \(x_{e}\). For large photoelectron energies, numerous secondary ions can be created. For instance, for \(E_{0}=300\) eV (\(\lambda=39.5\) A) up to 5 secondary ions can be created if \(x_{e}=10^{-1}\) (green curve) and up to 10 if \(x_{e}=10^{-2}\) (orange curve). Simon Wedlund et al. (2011) have shown that about 8.5 secondary ions are expected to be created at \(E_{0}=300\) eV in a non-ionised atmosphere. Here, we consider equally-spaced bins to sample the wavelength of the impacting photons, which translates into a first wavelength bin covering 15 A (826.5 eV) to 59.85 A (207.15 eV) (see Fig. 2). In this bin, we predict the average production of about 14 secondary ions, which is significantly larger because the considered bin encompasses a range of \(E_{0}\) that is on average larger than 300 eV. When considering smaller wavelength bins, we recover the production of 8.5 secondary ions at \(E_{0}=300\) eV predicted by Simon Wedlund et al. (2011). We reiterate nonetheless that our results are not significantly affected when considering smaller wavelength bins.
Conversely, for low photoelectron energies, no additional ions are created (as expected, see Fig. 3).
Similarly, we performed a 4th order polynomial fit:
\[\Phi_{\lambda,x_{e}}=\sum_{n=1}^{4}b_{n,\lambda}\times(\log_{10}x_{e})^{n}\,, \tag{16}\]
with \(b_{n,\lambda}\) the polynomial coefficients found in Appendix A.
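Since the coefficients \(a_{n,\lambda}\) and \(b_{n,\lambda}\) are tabulated per wavelength bin in Appendix A (not reproduced here), Eqs. (15)-(16) amount to evaluating two quartic polynomials in \(\log_{10}x_{e}\), with \(x_{e}\) floored at \(10^{-4}\) as discussed above; a sketch with placeholder coefficients follows.

```python
import numpy as np

def eta_phi(x_e, a_coeffs, b_coeffs):
    """Heating efficiency (Eq. 15) and secondary-ion yield (Eq. 16) for one wavelength bin.
    a_coeffs and b_coeffs are the (a_1..a_4) and (b_1..b_4) of Appendix A (placeholders below)."""
    u = np.log10(np.maximum(x_e, 1e-4))          # eta and Phi become ~constant below x_e = 1e-4
    powers = np.array([u, u**2, u**3, u**4])
    eta = np.exp(np.dot(a_coeffs, powers))
    phi = np.dot(b_coeffs, powers)
    return eta, phi

# Placeholder coefficients for illustration only; the real values are bin dependent (Appendix A)
a = np.array([0.5, 0.05, 0.002, 0.0001])
b = np.array([-2.0, -0.4, -0.03, -0.001])
print(eta_phi(1e-2, a, b))
```

Note that both fitted forms reduce to \(\eta_{\lambda,x_{e}}=1\) and \(\Phi_{\lambda,x_{e}}=0\) at \(x_{e}=1\), consistent with the fully ionised limit discussed above.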
Because for sufficiently large energies \(E_{0}\), \(\eta_{\lambda,x_{e}}\) is small and \(\Phi_{\lambda,x_{e}}\) is large when \(x_{e}\) is small, significant effects on the heating and ionisation of the atmosphere are expected in the high-pressure region where the gas remains neutral and which only very energetic photons can reach. Conversely, we expect that further out, where the local fractional ionisation is high and the newly created photoelectrons have small energies, the effect of photoelectrons will be weak.
A note is due on the terminology used throughout this paper. We will say that a calculation incorporates secondary ionisation when it is done with the parameterised forms of \(\eta_{\lambda,x_{e}}\) and \(\Phi_{\lambda,x_{e}}\) described above, which obviously includes the effect of both excitation and ionisation by the primary and secondary electrons.
Figure 4: Energy \(E_{h}\) deposited as heat by primary electrons of energy \(E_{0}\). Solid curves represent different fractional ionisations. The heating efficiency is given by \(\eta_{\lambda,x_{e}}=E_{h}/E_{0}\). Adapted from García Munoz (2023).
Figure 5: Secondary ions \(\Phi_{\lambda,x_{e}}\) produced by primary electron of energy \(E_{0}\). Solid curves represent different fractional ionisations. We note that these yields refer to energy bins of finite size rather than to the specific energies quoted there. This distinction makes some difference at the higher energies because the energy bins are larger for them. We have nevertheless confirmed by running a few tests with a refined spectral grid of 40 bins that this has no significant impact on the overall gas properties and mass loss rates. Adapted from García Munoz (2023).
Otherwise, we will say that the calculation does not incorporate secondary ionisation when it is assumed that \(\eta_{\lambda,x_{e}}\)=1 and \(\Phi_{\lambda,x_{e}}\)=0.
### Photoionisation and heating rates
A main goal of this paper is to investigate the feedbacks between photoelectron-driven processes and the hydrodynamic outflow, which we detail in section 4. Before that, and to demonstrate the effects of photoelectrons, we will illustrate how they affect the energy deposition and ionisation for a prescribed atmospheric profile. For this demonstration, we assume the atmospheric profile to be fixed, meaning that we do not consider the feedbacks driven by the photoelectron deposition in the gas. Through this exercise, we want in particular to assess the intricate link between the shape of the solar spectrum (fig. 1) and the deposition of the associated energy into the atmosphere. The shape of the stellar XUV spectrum partly dictates where in the atmosphere and at which wavelengths the photoionisation and heat deposition are more prominent.
For this exercise, we consider a reference profile taken from one of our numerical simulations: the steady-state atmosphere of a Jupiter-like planet with a mass of 0.69 \(M_{J}\) (M0.69 in Table 1). The top panel of Figure 6 shows the fractional ionisation \(x_{e}\) as a function of the neutral density \(n_{\rm H}\). In addition, we show in the bottom panels the wavelength contributions of photoionisation (left) and heating (right) at the five representative altitudes depicted in the top panel.
Each left panel of Fig.6 shows two curves. The solid curve corresponds to \(J^{\prime}x_{\rm Hi}=\sigma_{\lambda}F_{\star}\left(\frac{\lambda}{hc}\right)x_{\rm Hi}\) and shows the contribution to ionisation of each wavelength bin, excluding the effect of secondary ionisation by the photoelectrons. In turn, the dotted curve corresponds to \(J^{\prime}\left(1+\Phi_{\lambda,x_{e}}\right)x_{\rm Hi}\) and shows the total ionisation, including the production of secondary ions. The integral of the latter over wavelengths corresponds to the net ionisation rate \(J\)\(x_{\rm Hi}\) of Eq. 6.
When the unattenuated stellar flux hits the upper atmosphere of the planet (see panel for 3.71 \(R_{p}\), in magenta), only the longest wavelengths contribute to photoionisation, producing a large peak observed at the threshold of 912 A. This is largely because of the strong modulating effect of the photoionisation cross section, that prioritises the longer wavelengths. As we go deeper into the atmosphere (1.58 \(R_{p}\), green), the number density increases by two orders of magnitude, the atmosphere becomes optically thicker, especially at the longer wavelengths and, a broader range of wavelengths contribute to photoionisation with the presence of multiple peaks (at 300 A, 600 A and 912 A). At 1.29 \(R_{p}\) (in red), photoionisation is dominated by a peak around 300 A. The atmosphere is optically thick at the longest wavelengths, and only the ones lower than 400 A will reach this altitude, as the photoionisation cross-sections are smaller at short wavelengths. For the same reason, only the most energetic photons penetrate below 1.05 \(R_{p}\) (in blue). Photoionisation occurs at a slower rate at this altitude than in the upper atmosphere, yet the number of electrons produced per volume unit is much larger, because it partly follows the hydrogen density.
Interestingly, much faster ionisation occurs when photoelectrons are taken into account (dotted lines). The enhancement of the ionisation rate by photoelectrons strengthens at the lowest altitudes. This is clearly seen at 1.05 \(R_{p}\) (in blue) and 1.02 \(R_{p}\) (in cyan), owing to the multiplicative term \(\left(1+\Phi_{\lambda,\kappa}\right)\) in the integral of \(J\,x_{\rm HI}\). In these deep layers, the photoelectrons enhance the double-peak structure between 15 and 200 Å, and ionisation is increased by a factor of two. The existence of multiple peaks results from the shape of the stellar spectrum at short wavelengths and the extra modulating effect of \(\Phi_{\lambda,\kappa}\) at very short wavelengths.
Similarly, the right panels of Fig. 6 elaborate on the energy term of Eq. 12, and we can define three contributions. \(H^{\prime}=n_{\rm H}\sigma_{\lambda}F_{\star}\) depicts the energy available before photoionisation (solid lines); \(H^{\prime}\left(1-\frac{13.6\,{\rm eV}\,\lambda}{hc}\right)\) represents how much energy can potentially be deposited as heat after photoionisation (dashed lines); lastly, \(H^{\prime}\left(1-\frac{13.6\,{\rm eV}\,\lambda}{hc}\right)\eta_{\lambda,\kappa}\) shows the total heating rate per wavelength bin, taking into account the effect of photoelectrons (dotted lines).
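The same quantities can be combined into the three heating curves. The sketch below reuses the placeholder arrays from the previous snippet (`lam`, `F_star`, `sigma_lam`, `h`, `c`); the density and heating efficiency values are again illustrative assumptions.

```python
# Continuing from the previous sketch; n_H and eta are placeholder values.
n_H = 1e8                        # neutral hydrogen number density [cm^-3] (placeholder)
E_ion = 13.6 * 1.602e-12         # hydrogen ionisation potential, 13.6 eV in erg
eta = np.full_like(lam, 0.5)     # heating efficiency per bin (placeholder)

H_prime = n_H * sigma_lam * F_star            # energy available before photoionisation (solid)
frac_left = 1.0 - E_ion * lam / (h * c)       # fraction of the photon energy left after ionisation
H_after = H_prime * frac_left                 # potential heat deposition after photoionisation (dashed)
H_total = H_after * eta                       # heating rate per bin including photoelectrons (dotted)

heating_rate = H_total.sum()                  # net heating rate at this altitude
```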
The energy deposition at different altitudes in the atmosphere is depicted in the right panels of Fig. 6. In the upper atmosphere, at the highest altitudes of 3.71 and 1.58 \(R_{p}\), the energy available before (solid line) and after (dashed line) photoionisation originates from a wide range of wavelengths. However, the heat deposition rate is 100 times larger at 1.58 \(R_{p}\) than at 3.71 \(R_{p}\), as a result of the higher density. For low-energy photons, the term \(\left(1-\frac{13.6\,{\rm eV}\,\lambda}{hc}\right)\) is close to 0. Those photons spend most of their energy in the primary ionisation event, and very limited energy remains for heating (or for further ionisation or excitation). As altitude decreases, the peak of energy deposition shifts towards shorter wavelengths, and most of the net energy extracted (dashed line) from the available energy (solid lines) originates from \(\lambda<400\) Å. The peak observed at the photoionisation threshold of 912 Å at the two highest altitudes disappears because most of the energy available at these wavelengths has been used for primary ionisation at higher altitudes. For the region closest to the surface (in cyan), we observe that the energy available before, \(H^{\prime}\), and after, \(H^{\prime}\left(1-\frac{13.6\,{\rm eV}\,\lambda}{hc}\right)\), photoionisation overlap. Indeed, for wavelengths shorter than 200 Å, this term is close to 1, and \(E_{0}\) is marginally impacted after removing 13.6 eV.
An additional modulation occurs when secondary ionisation is considered, described by the dotted lines in the right panels. We observe a significant attenuation of the peaks for altitudes lower than 1.29 \(R_{p}\) at wavelengths shorter than 400 Å. At very short wavelengths, \(\eta_{\lambda,\kappa}\) is rather small (\(\sim\)0.12) and acts as a significant modulation factor, substantially reducing the heating rate arising from those wavelengths. In other words, when secondary ionisation is taken into account, the contribution of X-rays to heating is very weak at all altitudes in our investigation.
In summary, we observe that taking into account photoelectrons leads to a significantly higher ionisation rate deep in the atmosphere at short wavelengths. Concomitantly, this increase implies that less energy is deposited into heat at these low altitudes, which will naturally change the steady state of the planetary atmosphere. We now turn to the investigation of possible feedbacks in the atmosphere, which we approach through numerical simulations.
## 4 Secondary ionisation of an escaping atmosphere of a Neptune-size planet
### Characteristics of the escaping atmosphere
We now focus on the self-consistent simulations based on the PLUTO code, which include the hydrodynamics and photo-chemistry of the gas and, optionally, a parameterised description of secondary ionisation. The implementation of this new model was validated by comparing our results in the limit \(\eta_{\lambda,\kappa}\)=1 and \(\Phi_{\lambda,\kappa}\)=0 of no secondary ionisation with a version of the model |
2309.16145 | The Confidence-Competence Gap in Large Language Models: A Cognitive
Study | Large Language Models (LLMs) have acquired ubiquitous attention for their
performances across diverse domains. Our study here searches through LLMs'
cognitive abilities and confidence dynamics. We dive deep into understanding
the alignment between their self-assessed confidence and actual performance. We
exploit these models with diverse sets of questionnaires and real-world
scenarios and extract how LLMs exhibit confidence in their responses. Our
findings reveal intriguing instances where models demonstrate high confidence
even when they answer incorrectly. This is reminiscent of the Dunning-Kruger
effect observed in human psychology. In contrast, there are cases where models
exhibit low confidence with correct answers revealing potential underestimation
biases. Our results underscore the need for a deeper understanding of their
cognitive processes. By examining the nuances of LLMs' self-assessment
mechanism, this investigation provides noteworthy revelations that serve to
advance the functionalities and broaden the potential applications of these
formidable language models. | Aniket Kumar Singh, Suman Devkota, Bishal Lamichhane, Uttam Dhakal, Chandra Dhakal | 2023-09-28T03:50:09Z | http://arxiv.org/abs/2309.16145v1 | # The Confidence-Competence Gap in Large Language Models: A Cognitive Study
###### Abstract
Large Language Models (LLMs) have acquired ubiquitous attention for their performances across diverse domains. Our study here searches through LLMs' cognitive abilities and confidence dynamics. We dive deep into understanding the alignment between their self-assessed confidence and actual performance. We exploit these models with diverse sets of questionnaires and real-world scenarios and extract how LLMs exhibit confidence in their responses. Our findings reveal intriguing instances where models demonstrate high confidence even when they answer incorrectly. This is reminiscent of the Dunning-Kruger effect observed in human psychology. In contrast, there are cases where models exhibit low confidence with correct answers revealing potential underestimation biases. Our results underscore the need for a deeper understanding of their cognitive processes. By examining the nuances of LLMs' self-assessment mechanism, this investigation provides noteworthy revelations that serve to advance the functionalities and broaden the potential applications of these formidable language models.
Natural Language Processing · Large Language Models · Dunning-Kruger in LLMs · Simulation · Cognitive Biases · Machine Learning · AI Evaluation · Meta-cognition · Artificial Intelligence
## 1 Introduction
Ever since the Transformer [1] model was introduced in 2017, we have seen remarkable advancements in the field of Natural Language Processing (NLP), culminating in the recent advent of Large Language Models (LLMs). LLMs have progressed from generating short responses to producing long, erudite essays, and they are increasingly capable of improving and learning on their own [2]. In a short time, large language models have affected a wide array of fields. Even domains with a high barrier to entry, such as medicine and health, have proven to be within their reach [3]. These models now perform many activities traditionally carried out by humans, such as teaching, organizing businesses, advertising, acting as agents, and writing content. As these models improve and evolve, their behavior becomes increasingly compelling, but it is also necessary to assess that behavior from different angles. In recent years, we have seen these models exhibit emerging capabilities that approach human-like intelligence [4]. Hence, understanding the cognitive abilities of these models is a crucial aspect of their responsible and beneficial deployment in real-world scenarios.
Our study draws inspiration from cognitive science and psychology to investigate the intricacies of LLM behavior and to uncover the mechanisms underlying their successes and occasional failures [5][6]. Even though these models have showcased their capabilities in generating human-like text, solving complex problems, and reasoning about the world, the mechanisms governing their decision-making remain opaque. As these models are deployed in search engines, writing tools, and other commercial applications, it is essential to understand how they behave: how they reason, the mistakes they make, and how they make decisions [6]. Adopting innovative evaluation approaches like adaptive testing [5] and investigating their capacity for empathy [7], our study seeks to shed light on the cognitive aspects of LLMs. While these models do not understand things the way humans do, their skills could change how we think about intelligence, and this insight could help these models better match what we expect from them in the future. In addition, our study examines whether there is a similarity between LLM behavior and a cognitive phenomenon known as the Dunning-Kruger effect. The Dunning-Kruger effect observed in humans occurs when people overestimate or underestimate their own abilities [8]. We carefully inspect the confidence levels reported by LLMs while they respond to diverse sets of problems. Even though LLMs do not possess the human capacity for self-awareness, studying their responses and relating them to their reported confidence might offer valuable insight into how their self-assessment aligns with correctness. The motivation for this study arises from the fact that as these models get better, it is essential to understand how confident they are in what they do, which will eventually help these models work well in real-life situations.
David Dunning and Justin Kruger conducted several experiments in 1999 and performed the initial research on this phenomenon [8][9]. Their findings were compelling: they highlighted the disconnect between an individual's competence and their perception of that competence. Our study quantifies self-perceived ability, measured through absolute and relative confidence levels, and examines whether higher confidence correlates with higher accuracy. The novelty of our work lies in assessing the extent of the Dunning-Kruger effect in different LLMs. We rigorously investigate whether the models overestimate or underestimate their abilities in specific contexts. Our study reveals intriguing aspects of LLM behavior, including situations where models like GPT-4 exhibit high confidence even when their responses are incorrect, implying a subtle misalignment between self-confidence and competence. Likewise, we observed cases where models provided correct answers with very low confidence, raising questions about underestimation biases. These findings invite comparison with the Dunning-Kruger effect, the well-known cognitive phenomenon in which individuals tend to overestimate their abilities in certain domains, and they clarify the intricate relationship between cognitive capabilities and confidence levels in LLMs. This study fosters a deeper understanding of LLMs and their implications for AI applications.
## 2 Related Works
There is a substantial body of research on large language models. From new approaches to information retrieval to language models replacing human participants in studies [10], the capabilities of language models have improved at an exponential pace. A significant example of this advancement in natural language processing is ChatGPT [11]. Ouyang et al. aligned language models by fine-tuning them with a wide range of feedback [12]. Liang and colleagues presented a holistic evaluation of these models in which they validated 25 findings across different situations [13]. Schick et al. showed how language models are capable of teaching themselves [14]. Kraus and colleagues discuss how language models need to be accurate and integrate their resources to deliver more precise responses [15]. Yogatama et al. analyzed the state of the art in natural language understanding and investigated how to evaluate the task-independence of the acquired knowledge [16]; they also assessed a metric based on test data to determine how quickly an existing model can learn new tasks. The study conducted by Acerbi and Stubbersfield examines whether LLMs show biases and concludes that biases are widespread in model training data [17]. Our study focuses on designing test categories with different levels depending on the questions' complexity. Seven different language models were tested, and their responses were evaluated.
Drawing inspiration from human cognitive biases, Erik Jones and J. Steinhardt [18] study the failures of LLMs, focusing on the need to detect inaccurate behaviors. Hongbin Ye et al.'s study on hallucinations in LLMs [19] aligns with our skepticism toward LLM-generated outputs, although our work focuses primarily on confidence calibration; they discuss methods for detecting and mitigating hallucinations and provide a taxonomy of them. Furthermore, [20] investigated empathy in LLMs, highlighting the significance of social skills. In our paper, we examine the confidence scores (self-assessment scores) before and after the LLMs answer the questions, which aligns with Jiaxin Huang et al.'s work [21] demonstrating the self-improving capabilities of LLMs. Finally, Zhen Lin, Shubhendu Trivedi, and Jimeng Sun [22] study uncertainty quantification and the trustworthiness of models, which relates to our work through confidence estimation. These works highlight the necessity of a thorough understanding of LLM behavior, ranging from cognitive biases and self-improvement to the aspect our paper focuses on: the self-assessment and confidence of LLMs.
## 3 Methodology
In this section, we outline our experimental design and procedures for model selection, categorization of test scenarios, and the framework for model interaction. Our goal is to provide a comprehensive overview of the methodology behind our study. For our investigation, we carefully selected a diverse set of large language models (LLMs) to participate in our experiment. These LLMs represent a spectrum of language generation capabilities and are essential for assessing how different models perceive their competence. The selected models include:
* GPT-4, GPT-3.5
* BARD, GooglePaLM 2
* LLaMA-2, with three configurations:
  * 7 billion parameters
  * 13 billion parameters
  * 70 billion parameters
* Claude-instant, Claude-2
These models were chosen to ensure a comprehensive evaluation of self-assessment abilities across different language generation systems. We employed the native chat interfaces for each model, optimized for their memory and context window capabilities. For open-source models, we leveraged POE.com, a platform by Quora offering access to various LLMs.
### 3.1 Test Categories
Our experiment encompasses a range of distinct test categories, each containing questions of varying complexity. These test categories were carefully crafted to evaluate how LLMs perceive their competence in different knowledge domains. Detailed information on question types, categories, and contexts is provided in Appendix A.
The experiment included the following test categories:
1. **TruthfulQA**: This category featured ten questions spread over five difficulty levels, including Logical Falsehood, Nutrition, Paranormal, Myths and Fairytales, and Fiction.
2. **TruthfulQA Extended**: Comprising ten questions spread over five difficulty levels, this category included Proverbs, Superstitions, Misquotations, Misconception, and Conspiracies.
3. **Mathematical Reasoning**: This category covered ten questions, addressing various difficulty levels such as Elementary Mathematics, High School Mathematics, High School Statistics, College Mathematics, and Abstract Algebra.
4. **LSAT Reasoning**: Consisting of ten questions based on five distinct contexts, each with two associated questions, difficulty escalated from levels 1 to 5.
The dataset we utilized for this purpose was created with a combination of Benchmarking datasets for LLMs and LSAT Reasoning tests [23][24][25][26]. For a comprehensive understanding of the question types, levels, and contexts, please refer to the Appendix A.1. By structuring our methodology in this way, we aim to provide a detailed and organized account of our experimental procedures, ensuring transparency and rigor in our study.
#### 3.1.1 Prompt Construction
In constructing our prompts, we placed a strong emphasis on maintaining data uniformity and ensuring a consistent input structure for each model. To accomplish this, we adopted a three-tiered prompting approach. The core of this effort is to formulate questions in a manner conducive to the language model's comprehension, thereby mitigating the likelihood of errors resulting from misinterpretation of the posed questions. Our foundational method was the "Simple Prompting" technique, a direct and uncomplicated approach that catered to the basic needs of our research. However, in cases where a more nuanced prompting strategy was necessary for a particular model or question, we employed the "Chain of Thought" (CoT) [27] technique, which carefully sequences related prompts to foster deeper model engagement and understanding. For instances where models required even more elaborate and diverse perspectives, we employed the "Tree of Thoughts" (ToT) [27] approach; with this technique, we branched out prompts, enabling models to better comprehend and respond to a broader spectrum of related concepts. While our primary goal was to deliver uniform prompts across all models, the integration of the CoT and ToT methods ensured that the distinct needs of specific models were met without undermining the overall consistency of our data.
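As an illustration of the three-tiered approach, the sketch below shows how a question might be wrapped into the three prompt styles. The template strings and the helper function are hypothetical and are not the exact prompts used in the study.

```python
def build_prompt(question: str, style: str = "simple") -> str:
    """Wrap a question in one of three prompting styles (illustrative templates only)."""
    if style == "simple":
        # Simple prompting: ask directly, plus the confidence ratings collected in the study.
        return (f"{question}\n"
                "Before answering, rate your absolute and relative confidence (1-10). "
                "Then answer, and rate both confidences again.")
    if style == "cot":
        # Chain of Thought: request explicit step-by-step reasoning.
        return f"{question}\nLet's think step by step before giving the final answer."
    if style == "tot":
        # Tree of Thoughts: request several reasoning branches before committing.
        return (f"{question}\n"
                "Propose three different lines of reasoning, evaluate each briefly, "
                "and then give the answer supported by the strongest line.")
    raise ValueError(f"unknown prompting style: {style}")

# Example usage with a hypothetical level-1 TruthfulQA-style question.
print(build_prompt("Are all even numbers divisible by two?", style="cot"))
```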
#### 3.1.2 Prompt Response Observations
During our comprehensive evaluation, we presented a standardized set of questions to each language model and meticulously monitored fluctuations in their confidence levels, while refraining from providing any explicit cues about the complexity of the questions posed. Below we report the salient findings and performance characteristics of each model. GPT-4 consistently manifested commendable stability in its confidence levels; notably, it displayed an excellent aptitude for processing and responding to simple prompts. GPT-3.5 demonstrated adequate prompt comprehension, required minimal prompting, and exhibited increased confidence during the study. Bard maintained a stable confidence level and showed an impressive facility for generating coherent responses to simple prompts without necessitating advanced prompting techniques. Google PaLM 2 initially performed well with simple prompts but then began generating its own questions and self-assessing its confidence. LLaMA-7B exceeded performance expectations, showed better prompt comprehension than anticipated, and rated confidence separately for AC (absolute confidence) and RC (relative confidence) on individual problems. LLaMA-13B exhibited impressive comprehension speed but struggled with real-number questions and showed hesitancy with certain topics; however, it demonstrated perceptible enhancements in response quality when presented with Chain of Thought (CoT) prompts, along with intermittent references to prior topics. LLaMA-70B consistently demonstrated high proficiency in prompt comprehension and, on average, displayed higher levels of confidence in its generated responses. Claude-Instant began with lower confidence but gained assurance over time, emphasizing its reliance on training data. Claude-2 responded confidently to simple prompts but struggled with advanced mathematical and LSAT Reasoning questions, displaying lower confidence and expressing a lack of training for such challenges.
### 3.2 Creation of Survey Dataset
To rigorously evaluate the performance of large language models across various categories and difficulty levels, we curated an extensive dataset. This dataset records not only the responses generated by the LLMs but also their self-assessed confidence levels, both before and after answering, offering a clear picture of each model's intrinsic capabilities and self-awareness. The evaluation of the LLMs is based on an examination of their answers to the posed questions. Within our dataset, we incorporated distinct variables that capture the confidence levels of the LLMs both prior to responding to the questions and subsequent to providing their responses. This approach enables us to assess the changes in their confidence levels before and after generating a response.
Table 3 in Appendix A.2 provides a detailed description of the variables used in this study. The variables represent the problem's category, its difficulty level, the model's confidence before and after answering the question, and the correctness of the LLM's response. For advanced machine learning models, particularly LLMs, evaluating proficiency goes beyond checking how accurate their output is: it also involves understanding how well these models gauge their own abilities, which they express through their confidence levels, and comparing this self-assessment with their actual performance. When we apply these ideas to LLMs, we encounter interesting questions. Do LLMs, despite their computational prowess, exhibit similarities to human cognitive biases like the Dunning-Kruger effect? Can we identify situations where a model is overly confident or lacks confidence in its abilities based on its confidence scores? Our subsequent analyses explore these questions by examining how well the models' self-assessments align with their real-world performance. The calibration of confidence levels and their relationship with the accuracy of the LLMs are two significant aspects of our study, and both are examined in the context of the Dunning-Kruger effect in the following sections.
### 3.3 Introducing Confidence Calibration Metrics
To determine the calibration of different LLMs based on their reported confidence levels, we segment our data, notably _A1_ and _A2_. The following scenarios can be considered:
1. **High Confidence, Correct Answers:** LLMs with a high _A1_ score (e.g., \(A1>7\)) and a correct answer.
2. **High Confidence, Incorrect Answers:** LLMs with a high _A1_ score and an incorrect answer.
3. **Low Confidence, Correct Answers:** LLMs with a low _A1_ score (e.g., \(A1<5\)) and a correct answer.
4. **Low Confidence, Incorrect Answers:** LLMs with a low _A1_ score and an incorrect answer.
This segmented analysis indicates how well the confidence levels of the LLMs are calibrated and how this relates to the Dunning-Kruger effect. We also add a new variable to our dataset that measures the closeness between the pre- and post-question confidence scores (_A1_ and _A2_, and _R1_ and _R2_). Our new variable _Closeness_ is defined as:
\[\text{Closeness}=\begin{cases}1&\text{if }|A1-A2|\leq 1\\ 0&\text{otherwise}\end{cases}\]
We will compare Closeness with _IsCorrect_ to assess if there's any relationship between the LLM's self-assessment accuracy and its performance.
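A minimal sketch of how the four calibration counts and the Closeness flag can be derived from the survey records is given below. The column names (`LLM`, `A1`, `A2`, `IsCorrect`) are assumed stand-ins for the variables described in Table 3 of Appendix A.2, and the thresholds 7 and 5 are those quoted above.

```python
import pandas as pd

def calibration_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Count the four confidence-calibration scenarios per LLM, using the A1 score."""
    out = df.copy()
    out["High_Conf_Correct"]   = (out["A1"] > 7) & (out["IsCorrect"] == 1)
    out["High_Conf_Incorrect"] = (out["A1"] > 7) & (out["IsCorrect"] == 0)
    out["Low_Conf_Correct"]    = (out["A1"] < 5) & (out["IsCorrect"] == 1)
    out["Low_Conf_Incorrect"]  = (out["A1"] < 5) & (out["IsCorrect"] == 0)
    # Closeness: pre- and post-answer confidence differ by at most one point.
    out["Closeness"] = ((out["A1"] - out["A2"]).abs() <= 1).astype(int)
    cols = ["High_Conf_Correct", "High_Conf_Incorrect",
            "Low_Conf_Correct", "Low_Conf_Incorrect", "Closeness"]
    return out.groupby("LLM")[cols].sum()

# Example with two hypothetical records.
records = pd.DataFrame({
    "LLM": ["GPT-4", "Claude-2"],
    "A1": [9, 4], "A2": [9, 6], "IsCorrect": [1, 0],
})
print(calibration_summary(records))
```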
## 4 Results
The data collection process revealed a great deal about how LLMs behave. In this section, we discuss the self-assessment abilities of the LLMs. Based on the four scenarios defined in Section 3.3, we counted the number of instances of each scenario for each LLM and confidence score (A1, R1, etc.). Table 1 shows the results.
From Table 1, we can see that models like GPT-4 show a high number of correct answers when confident (High_Conf_Correct_A1 = 25). But if we look at the High_Conf_Incorrect scores, the value is 15. While this score is not the highest compared to other models, it is high, which means GPT-4 was almost always highly confident while answering the questions we provided, regardless of correctness. LLaMA-13B also shows a discrepancy between high confidence and actual performance, with High_Conf_Incorrect_A1 at 23 instances. This could be interpreted as a potential misalignment between confidence and competence, akin to the overestimation seen in the Dunning-Kruger effect. Claude-Instant has a High_Conf_Incorrect_A2 count of 21, meaning that more than half of the time it was highly confident after answering a question that it got wrong. Google-PaLM, with a Low_Conf_Correct_A1 of 3, shows cases where the model is correct despite low confidence. While not conclusive, this could be a starting point for investigating underestimation biases. Google-Bard shows similar High_Conf_Correct and High_Conf_Incorrect scores before (A1) and after answering (A2), suggesting a more stable confidence calibration similar to GPT-4. In fact, Google-Bard is also overconfident (high High_Conf_Incorrect scores), similar to GPT-3.5 and GPT-4.
Our results suggest a strong inclination toward cognitive biases like the Dunning-Kruger effect in LLMs. While we must exercise caution before jumping to conclusions, our data contain scenarios where high confidence does not correlate with correct answers and vice versa. However, this is hands-on evidence rather than definitive proof of such psychological phenomena in these models. Our study is a practical test of how these models behave: it helps us understand why they sometimes act overconfident or provide incorrect information, whether because of the way they process information or because of the data they were exposed to.
### 4.1 Confidence Closeness
In the section above, we looked at how the correctness of the LLMs compares to their confidence. To take this one step further, we now examine their correctness based on the variable created in Section 3.3. The relation between
\begin{table}
\begin{tabular}{l|c|rrrrrrrrr}
\hline
Metrics & & Claude-2 & Claude-Instant & GoogleBard & GooglePaLM & GPT-3.5 & GPT-4 & LLaMA-13B & LLaMA-70B & LLaMA-7B \\
\hline
\multirow{4}{*}{High\_Conf\_Correct} & A1 & 3 & 6 & 12 & 1 & 14 & 25 & 5 & 8 & 9 \\
 & A2 & 14 & 13 & 18 & 8 & 21 & 25 & 8 & 14 & 8 \\
 & R1 & 3 & 0 & 21 & 0 & 12 & 25 & 5 & 8 & 5 \\
 & R2 & 13 & 3 & 21 & 4 & 21 & 25 & 5 & 13 & 9 \\
\hline
\multirow{4}{*}{High\_Conf\_Incorrect} & A1 & 3 & 2 & 6 & 1 & 5 & 15 & 23 & 18 & 6 \\
 & A2 & 4 & 21 & 14 & 6 & 16 & 15 & 25 & 22 & 21 \\
 & R1 & 3 & 0 & 17 & 2 & 5 & 15 & 13 & 18 & 6 \\
 & R2 & 4 & 7 & 15 & 2 & 16 & 15 & 16 & 22 & 14 \\
\hline
\multirow{4}{*}{Low\_Conf\_Correct} & A1 & 2 & 0 & 0 & 3 & 0 & 0 & 0 & 0 & 0 \\
 & A2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 & R1 & 2 & 1 & 0 & 6 & 0 & 1 & 0 & 2 \\
 & R2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
\multirow{4}{*}{Low\_Conf\_Incorrect} & A1 & 12 & 2 & 0 & 5 & 0 & 0 & 2 & 0 & 2 \\
 & A2 & 14 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 \\
 & R1 & 12 & 5 & 0 & 16 & 0 & 0 & 3 & 0 & 0 \\
 & R2 & 14 & 0 & 0 & 6 & 0 & 0 & 2 & 0 & 0 \\
\hline
\end{tabular}
\end{table}
Table 1: Calibration of Confidence to Competence Across Various Large Language Models (LLMs) for Different Confidence Metrics (A1, A2, R1, R2)
'A1' and 'A2', namely how close they are pre-task and post-task, serves as an indicator of how consistent an LLM's self-assessment is. A high value in the "Close_Correct" category implies that the model is generally correct while being confident, and it also indicates that the model maintains a consistent level of confidence before and after answering the question. On the other hand, a high count in the "Close_Incorrect" category suggests that the model's confidence is stable even when its answers are incorrect. The results are summarized in Table 4 in Appendix B.1.
As we saw above, GPT-4 was very confident in its responses regardless of the correctness of the answer, and we see a similar pattern here. Claude-2 shows a lower "Close_Correct" count but higher "Close_Incorrect" and "Far_Correct" counts. This is evidence that Claude-2 is not able to evaluate itself well: when its confidence scores were close to each other, it had 14 incorrect responses out of 40, yet when the confidence scores were far from each other, it had 15 correct out of 40. This suggests two possibilities: 1) Claude-2 initially had a low A1, increased its confidence score A2 after answering the question, and then got it correct; or 2) it initially had a high A1 but later lowered its confidence and still got it right. The first possibility tells us that Claude-2 was able to change and update its evaluation correctly. Figure 1 illustrates Claude-2's confidence scores and reflects this evaluating behavior. The four red dots on the x-axis tell us that Claude-2 successfully lowered its confidence score after answering the question when the answer was incorrect; for these four instances, Claude-2 assessed itself successfully after looking at the question. In most cases (shown by the green dots), when it increased its confidence after looking at the question, it got the answer correct. However, in some cases it increased its confidence and still got the answer wrong. A similar observation holds for LLaMA-13B, which has high counts in "Close_Incorrect." The zero counts for GoogleBard in Far_Correct and Far_Incorrect tell us that its evaluation is essentially the same before and after answering the question. Table 4 in Appendix B.1 shows the complete results for all LLMs.
### 4.2 Distribution of Confidence Scores
The facetted density plot in Figure 2, together with the summary statistics given in Table 2, presents distinct patterns in self-assessment across different LLMs. The mean confidence levels for A1 and R1 of Claude-2 are approximately 4.8 and 4.65, respectively, coupled with high standard deviations of 2.91 and 2.95. The high standard deviation of the confidence levels points toward a broad spectrum of self-perceived abilities. In addition, the post-task mean confidence levels for A2 and R2 are also higher, again with higher standard deviations, implying significant inconsistencies in self-assessment after completion of the task. By contrast, the mean confidence scores of A1 and R1 for Claude-Instant are 6.85 and 5.47, respectively, with lower standard deviations of 1.03 and 1.06. Its confidence after completing the task rose to 8.32 and 6.82 for A2 and R2, while maintaining low variability of around 0.83 and 0.93, respectively.
Figure 1: Comparison of A1 and A2 Scores for Claude-2.
Even though Google-Bard generally outperforms Google-PaLM across the board, both models maintained consistent confidence metrics. In addition, GPT-3.5 and GPT-4 also exhibit high mean confidence levels; GPT-4 shows a mean A1 confidence score of 8.9 with a standard deviation of 0.568. Among the LLaMA series, variability in confidence levels is more noticeable: LLaMA-13B has a standard deviation of 2.06 for A1, which is comparatively high, while LLaMA-70B and LLaMA-7B are in the range of 1.12 and 1.21, respectively. To summarize, these findings detail the self-assessed confidence levels of the various LLMs. The density plots below further illustrate the trends, with curves varying in width and height according to the observed mean and variability in confidence levels. These results underscore the fact that our analysis should consider both the central tendency and the dispersion of the LLMs' self-assessment mechanisms.
The density plots in Figure 3 show the distribution of confidence scores across different LLMs for both the A1 and A2 scores; a similar distribution plot for R1 and R2 is given in Appendix B.2, Figure 6. We can compare the distributions across different LLMs and observe how their confidence scores vary. For instance, the density plot for A1 in Figure 3 shows that GPT-4 is very confident in most cases. Figures 2, 3, and 6 give an initial picture of the variation of confidence scores in LLMs. We now incorporate correctness into the mix to study how well these LLMs actually do.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline LLM & \multicolumn{2}{c}{A1} & \multicolumn{2}{c}{R1} & \multicolumn{2}{c}{A2} & \multicolumn{2}{c}{R2} \\ & Mean & SD & Mean & SD & Mean & SD & Mean & SD \\ \hline Claude-2 & 4.800 & 2.911 & 4.650 & 2.957 & 5.400 & 4.241 & 5.400 & 4.235 \\ Claude-Instant & 6.850 & 1.027 & 5.475 & 1.062 & 8.325 & 0.829 & 6.825 & 0.931 \\ GoogleBard & 7.400 & 0.591 & 8.400 & 0.591 & 7.700 & 0.648 & 8.700 & 0.648 \\ GooglePaLM & 5.500 & 1.485 & 4.600 & 1.646 & 7.050 & 1.260 & 6.050 & 1.260 \\ GPT-3.5 & 7.525 & 0.877 & 7.475 & 0.877 & 8.900 & 0.672 & 8.900 & 0.672 \\ GPT-4 & 8.900 & 0.568 & 9.200 & 0.372 & 8.925 & 0.594 & 9.225 & 0.375 \\ LLaMA-13B & 7.550 & 2.062 & 6.950 & 2.136 & 7.725 & 1.921 & 7.400 & 1.892 \\ LLaMA-70B & 7.350 & 1.122 & 7.950 & 1.339 & 8.600 & 0.672 & 8.475 & 0.847 \\ LLaMA-7B & 7.250 & 1.214 & 6.600 & 1.297 & 8.025 & 1.187 & 7.525 & 0.877 \\ \hline \end{tabular}
\end{table}
Table 2: Summary Statistics of Confidence Scores by LLM Type
Figure 2: Facetted Density Plot of Confidence Levels by LLM Type. The plot reveals varying patterns of confidence distribution across different LLM types, suggesting nuanced self-perceptions in these models.
Figure 3: Density Plots of Correctness for Different Confidence Scores (A1 and A2)
### 4.3 Category vs Confidence Scores
Understanding how LLMs self-assess their performance via confidence can provide valuable insight into their limitations and capabilities. Our dataset shows significant variation in LLM performance across several categories, namely LSAT Reasoning, Mathematical Reasoning, and Truthful Q&A, and it records confidence scores both before (A1 and R1) and after (A2 and R2) answering the questions in these categories.
With regard to confidence, GPT-4 set itself apart from the others with consistently high absolute and relative pre-task and post-task confidence levels across all tested categories. Significantly, it exhibited unparalleled confidence in the LSAT Reasoning task, which suggests stronger abilities in logical and analytical reasoning. In contrast, Claude-2 and Claude-Instant presented less consistent confidence profiles. Even though Claude-2 demonstrated diminished pre-task and post-task confidence levels in LSAT Reasoning, its confidence tends to improve in the Truthful Q&A category. This variation suggests that Claude-2 and similar models may be optimized for specific types of tasks, which in turn influences their self-assessed confidence. For a detailed review, readers are encouraged to refer to the table in Appendix B.4. The apparent differences in confidence among the models can provide valuable insight into how well they can be applied to various types of problems.
In addition, models like LLaMA-70B show high confidence scores for LSAT Reasoning and Mathematical Reasoning but lower confidence scores in the Truthful Q&A category. Such within-model variability across categories suggests that individual models may have nuanced areas of expertise, and that a model should therefore be chosen with the problem category in mind.
It is noteworthy to mention the anomaly observed with Claude-2 in LSAT Reasoning, where it recorded an extremely low confidence level, particularly for the post-task metrics (A2 and R2). While the reason for this remains elusive, it raises questions about the model's internal evaluation mechanisms or possible computational errors that warrant further careful observation. Our analysis uncovers the complex landscape of confidence across different LLMs and the problem categories they are prompted with. A model like GPT-4 appears to be a generalist, maintaining high confidence across the various tasks, whereas other LLMs seem to be specialized in a specific domain or not yet tuned for generalization. Our findings convey the need to consider model-specific confidence evaluation when selecting a particular model
Figure 4: Average Confidence Levels by Category and LLM
for a specific task. This also opens new research interest in how LLMs understand problem difficulty and the correctness of their answers (IsCorrect), offering more perspective on the performance and self-assessment of the models. As seen in Figure 4, there is a noticeable pattern in the confidence levels across different problem categories and LLMs.
Our dataset is based on comprehensive performance metrics, and the plots are generated from it. Conspicuously, models like GPT-4 and LLaMA-70B hold higher post-task confidence levels across all examined categories. Mathematical Reasoning stands out with consistently high confidence levels, suggesting that the models are more secure in their performance on mathematical tasks than on other types of tasks. Our experimental data for the 'Truthful Q&A' category display variable performance, suggesting that the nature of a task can affect LLMs' confidence in distinct ways. These variations in confidence levels have practical implications for the development of LLMs specialized in particular tasks.
### 4.4 Problem Level vs. Confidence Scores
Table 6 in Appendix B.5 reports the average confidence scores (both absolute and relative) expressed by the different LLMs at problem levels 1 to 5; the corresponding visualization is shown in Figure 5. Predominantly, as the problem level increases, the confidence scores of the LLMs decrease. The pattern is most noticeable for the absolute confidence score: the LLMs felt less sure about their answers as the level of the problem increased. This result suggests that LLMs may struggle to sustain high confidence when prompted with convoluted tasks.
In contrast, the relative confidence score does not follow this trend as strongly. Even though there is a slight reduction in relative confidence as the problem level increases, it is not as steep as the drop in absolute confidence. This implies that LLMs perceive their performance relative to others as fairly stable across different problem levels.
In addition, it is worth acknowledging that each LLM differs in its response to problem level. To illustrate, GPT-4 maintained a high confidence score across all problem levels, indicating consistency in the self-assessment of its performance. Models like Claude-2 and Claude-Instant, however, showed higher variability in their confidence scores as the problem level changed, another indication that some models may adapt differently to task difficulty. For example, Claude-2 shows a notable improvement in confidence levels for certain problems.
It is imperative to underscore that our analysis provides an overview of the average trends in confidence scores across different problem levels. Individual variations in model behavior are nevertheless evident, and this observation showcases
Figure 5: Average Confidence Scores by Problem Level
the need for a nuanced understanding of how different language models respond to diverse problem complexities. Further investigation incorporating additional factors, such as problem content and the correctness of the answer, could offer valuable insight into LLM performance and confidence assessment in particular scenarios.
## 5 Discussion & Conclusion
In this study, we analyzed the self-assessment behavior of large language models such as GPT, Claude, LLaMA, and others through their confidence scores, investigating potential parallels with the Dunning-Kruger effect. The results depicted in the table in Appendix B.4 and in Figure 4 provide intriguing insights into how LLMs assess their performance across different categories. Even though our study did not establish a solid presence of the Dunning-Kruger effect, it provided valuable observations aligning with its conceptual framework.
GPT-4 stands out for its consistently high confidence scores across all tested categories, especially in the LSAT Reasoning tasks. Such a pattern of high confidence suggests a strong ability to gauge its competence accurately; nevertheless, it is essential not to jump to conclusions, as other factors might contribute to this trend. On the other hand, models like Claude-2 and Claude-Instant displayed higher variability in their confidence scores across categories. Claude-2 showed a relatively low confidence score for LSAT Reasoning but performed better in Truthful Q&A. This difference mirrors the way individuals with varying abilities show inconsistency in their self-assessments; for now, this observation serves as a parallel rather than conclusive proof of the Dunning-Kruger effect's applicability in this context. LLaMA-70B performed better, with a higher confidence score in the LSAT Reasoning and Mathematical categories, but had lower confidence in Truthful Q&A. This subtle variation aligns with the idea that individual LLMs might possess specialized domains of competence, akin to the Dunning-Kruger effect's recognition of skill variations among individuals.
Referencing Table 6 and Figure 5, we explored the relationship between problem-level complexity and LLM confidence scores. The observed confidence patterns offer interesting connections to the Dunning-Kruger effect, even if they do not provide solid evidence of it. LLMs were observed to start with high confidence scores at level 1, and a decrease in confidence was observed with increasing complexity. This overconfidence phase relates to the overestimation part of the Dunning-Kruger effect, wherein individuals with lower abilities often overrate their competence. Different LLMs exhibited varying confidence-score patterns across the problem levels, reflecting the notion that individuals with different abilities experience varying degrees of the Dunning-Kruger effect. Also, models like GPT-4 maintained their confidence, similar to individuals with high abilities who make accurate self-assessments.
In a nutshell, the patterns in the LLMs' confidence scores provide intriguing parallels with the Dunning-Kruger effect, but they do not provide solid evidence of its presence in the behavior of LLMs. Establishing a sturdier connection will require further research with statistical analysis and a broader set of variables. Our findings nevertheless pave the way for a deeper exploration of LLMs in relation to the Dunning-Kruger effect by showing the relationship between self-assessment and competence in artificial intelligence. The underlying intricacies of LLM behavior, its biases, and its confidence framework demand a more in-depth, comprehensive exploration, and they open the door to a myriad of questions that deserve attention. Delving deeper into this convergence of psychology and artificial intelligence offers a promising frontier, potentially unlocking novel insights into AI behavior and ethics. The observations from this study beckon a broader exploration, suggesting that the mysteries of AI cognition, akin to human nuances, are both vast and awaiting discovery.
2302.14420 | Estimation-of-Distribution Algorithms for Multi-Valued Decision
Variables | The majority of research on estimation-of-distribution algorithms (EDAs)
concentrates on pseudo-Boolean optimization and permutation problems, leaving
the domain of EDAs for problems in which the decision variables can take more
than two values, but which are not permutation problems, mostly unexplored. To
render this domain more accessible, we propose a natural way to extend the
known univariate EDAs to this setting. Different from a naive reduction to the
binary case, our approach avoids additional constraints.
Since understanding genetic drift is crucial for an optimal parameter choice,
we extend the known quantitative analysis of genetic drift to EDAs for
multi-valued variables. Roughly speaking, when the variables take $r$ different
values, the time for genetic drift to become significant is $r$ times shorter
than in the binary case. Consequently, the update strength of the probabilistic
model has to be chosen $r$ times lower now.
To investigate how desired model updates take place in this framework, we
undertake a mathematical runtime analysis on the $r$-valued LeadingOnes
problem. We prove that with the right parameters, the multi-valued UMDA solves
this problem efficiently in $O(r\ln(r)^2 n^2 \ln(n))$ function evaluations.
This bound is nearly tight as our lower bound $\Omega(r\ln(r) n^2 \ln(n))$
shows.
Overall, our work shows that our good understanding of binary EDAs naturally
extends to the multi-valued setting, and it gives advice on how to set the main
parameters of multi-values EDAs. | Firas Ben Jedidia, Benjamin Doerr, Martin S. Krejca | 2023-02-28T08:52:40Z | http://arxiv.org/abs/2302.14420v2 | # Estimation-of-Distribution Algorithms for Multi-Valued Decision Variables
###### Abstract
With apparently all research on estimation-of-distribution algorithms (EDAs) concentrated on pseudo-Boolean optimization and permutation problems, we undertake the first steps towards using EDAs for problems in which the decision variables can take more than two values, but which are not permutation problems. To this aim, we propose a natural way to extend the known univariate EDAs to such variables. Different from a naive reduction to the binary case, it avoids additional constraints.
Since understanding genetic drift is crucial for an optimal parameter choice, we extend the known quantitative analysis of genetic drift to EDAs for multi-valued variables. Roughly speaking, when the variables take \(r\) different values, the time for genetic drift to become significant is \(r\) times shorter than in the binary case. Consequently, the update strength of the probabilistic model has to be chosen \(r\) times lower now.
To investigate how desired model updates take place in this framework, we undertake a mathematical runtime analysis on the \(r\)-valued LeadingOnes problem. We prove that with the right parameters, the multi-valued UMDA solves this problem efficiently in \(O(r\log(r)^{2}n^{2}\log(n))\) function evaluations.
Overall, our work shows that EDAs can be adjusted to multi-valued problems, and it gives advice on how to set the main parameters.
_Keywords:_ Estimation-of-distribution algorithms, univariate marginal distribution algorithm, evolutionary algorithms, genetic drift, LeadingOnes benchmark.
## 1 Introduction
Estimation-of-distribution algorithms (EDAs [27]) are randomized search heuristics that evolve a probabilistic model of the search space (that is, a probability distribution over the search space). In contrast to solution-based algorithms such as classic evolutionary algorithms, which only have the choice between the two extreme decisions of keeping or discarding a solution, EDAs can take into account the information gained from a function evaluation also to a smaller degree. This less short-sighted way of reacting to new insights leads to several proven advantages, e.g., that EDAs can be very robust to noise [17, 24]. Since the evolved distributions often have a larger variance, EDAs can also be faster in exploring the search space, in particular, when it comes to leaving local optima, where they have been shown to significantly outperform simple evolutionary algorithms [19, 8, 33].
In contrast to classic evolutionary algorithms, which have been extensively used for various types of search spaces, to the best of our knowledge, EDAs so far have been only used for problems with binary decision variables and for permutation problems.
Since this might be a lost opportunity, we undertake the first steps towards also using EDAs for problems with decision variables taking more than two values (but different from permutation problems). We first note that the strong dependencies that distinguish a permutation problem from just a problem defined on \(\{1,\ldots,n\}^{n}\) have led to very particular EDAs for permutation problems. We therefore did not see how to gain any insights from these results for general multi-valued problems.
We therefore define EDAs for multi-valued decision variables from scratch, that is, without building on any related existing work. We note that, in principle, one could transform a multi-valued problem into a binary one by having, for each variable taking \(r\) different values, \(r\) binary variables, each indicating that the variable has the corresponding value. This would lead to a constrained optimization problem with the additional constraints that exactly one of these variables can take the value \(1\). This might be a feasible approach, but since such constraints generally impose additional difficulties, we propose a way that does not need an additional treatment of constraints (in other words, we set up our EDAs in a way that these constraints are satisfied automatically).
We defer the details to Section 4.2 and only sketch the rough idea of our approach here. For each variable taking \(r\) values, without loss of generality the values \(\{0,\ldots,r-1\}\), we have \(r\) sampling frequencies \(p_{0},p_{1},\ldots,p_{r-1}\) that always add up to \(1\). When sampling a value for the variable, we do this mutually exclusively, that is, the variable takes the value \(i\) with probability \(p_{i}\)
This mutual exclusion in the sampling immediately gives that the frequency update does not violate the property that the frequencies add up to 1. Consequently, this appears to be a convenient (and in fact very natural) set-up for a multi-valued EDA. We note that there are some non-trivial technical questions to be discussed when working with frequency borders, such as \(\left[\frac{1}{n},1-\frac{1}{n}\right]\) in the classical binary case, but we also come up with a simple and natural solution for this aspect.
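A minimal sketch of this sampling step is given below, assuming the frequencies are stored as an \(n \times r\) matrix whose rows sum to one; the function and variable names are ours, chosen for illustration, and are not notation from this paper.

```python
import numpy as np

rng = np.random.default_rng()

def sample_individual(freq: np.ndarray) -> np.ndarray:
    """Sample one individual from an n x r frequency matrix.

    Row i holds the probabilities p_0, ..., p_{r-1} of variable i; the values
    are drawn mutually exclusively, so each row must sum to 1.
    """
    n, r = freq.shape
    return np.array([rng.choice(r, p=freq[i]) for i in range(n)])

# Example: n = 4 variables, r = 3 values, uniform initial model (every frequency 1/r).
n, r = 4, 3
freq = np.full((n, r), 1.0 / r)
print(sample_individual(freq))  # e.g. [2 0 1 1]
```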
As a first step towards understanding this multi-valued EDA framework, we study how prone it is to genetic drift. Genetic drift in EDAs means that sampling frequencies move not only because of a clear signal induced by the objective function but also due to random fluctuations in the sampling process. This has the negative effect that even in the complete absence of a fitness signal, the EDA develops a preference for a particular value of this decision variable. From a long sequence of works, see Section 5 for the details, it is well understood how the time for this genetic-drift effect to become relevant depends on the parameters of the EDAs [13]. Consequently, if one plans to run the EDA for a certain number of iterations, then this quantification tells the user how to set the parameters so as to avoid genetic drift within this time period.
Since such a quantification is apparently helpful in the application of EDAs, we first extend this quantification to multi-valued EDAs. When looking at the relatively general tools used by Doerr and Zheng [13], this appears straightforward, but it turns out that such a direct approach does not give the best possible result. The reason is that for multi-valued decision variables, the martingale describing a frequency of a neutral variable over time has a lower variance (in the relevant initial time interval). To profit from this, we use a fairly technical martingale concentration result of McDiarmid [25], which, to the best of our knowledge, has not been used before in the analysis of randomized search heuristics. Thanks to this result, we show that the time for genetic drift to become relevant is (only) by a factor of \(r\) lower than in the case of binary decision variables (Theorem 3).
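The phenomenon can be illustrated by simulating a single neutral variable under a UMDA-style update, where, by neutrality, the values of the \(\mu\) selected individuals at that position are simply \(\mu\) independent samples from the current frequencies. The following is an illustrative sketch of the effect, not the analysis tool used in the paper; the threshold defining "relevant" drift is our own choice for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

def drift_time(r: int, mu: int, max_iters: int = 10_000) -> int:
    """Iterations until some frequency of a neutral r-valued variable drifts
    outside [0.5/r, 1.5/r], under the UMDA-style update p_i = (count of value i)/mu."""
    p = np.full(r, 1.0 / r)
    for t in range(1, max_iters + 1):
        counts = rng.multinomial(mu, p)   # values of the mu selected individuals
        p = counts / mu
        if np.any(p < 0.5 / r) or np.any(p > 1.5 / r):
            return t
    return max_iters

# Drift becomes relevant sooner for larger r (the paper quantifies this as
# roughly a factor of r), and later for larger selection size mu.
for r in (2, 4, 8):
    times = [drift_time(r, mu=100) for _ in range(200)]
    print(r, np.mean(times))
```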
We use this result to conduct a mathematical runtime analysis of the multi-valued univariate marginal distribution algorithm (\(r\)-UMDA) on the \(r\)-valued LeadingOnes problem in the regime with low genetic drift. This problem is interesting since a typical optimization process optimizes the variable sequentially in a fixed order. Consequently, in a run of an EDA on LeadingOnes, there is typically always one variable with undecided sampling frequency that has a strong influence on the fitness. Hence, this problem is suitable to study how fast an EDA reacts to a strong fitness signal.
Our runtime analysis shows that also in the multi-valued setting, EDAs can react fast to a strong fitness signal. Since now the frequencies start at the
value \(\frac{1}{r}\), the time to move a frequency is a little longer, namely \(\Theta(r\log(r))\) instead of constant when the sample size \(\lambda\) is by a sufficient constant factor larger than the selection size \(\mu\). This still appears to be a small price for having to deal with \(r\) decision alternatives. This larger time also requires that the model update has to be chosen more conservatively so as to prevent genetic drift (for this, we profit from our analysis of genetic drift), leading to another \(\log(r)\) factor in the runtime. In summary, we prove that the UMDA can optimize the \(r\)-valued LeadingOnes problem in time \(O(r(\log(r))^{2}n^{2}\log(n))\) (Theorem 6), a bound that agrees with the one shown by Doerr and Krejca [10] for the classical case \(r=2\).
Overall, our work shows that \(r\)-valued EDAs can be effective problem solvers, and it detects no reason for the up-to-now hesitation to use such EDAs in practice.
This work is organized as follows. We describe previous works in the following section and set the notation in the subsequent section. In Section 4, we propose our multi-valued EDA framework. Our main technical results, the analysis of genetic drift and the runtime analysis for the LeadingOnes problem, can be found in Sections 5 and 6. The paper ends with a short conclusion.
## 2 Related Work
Since the technical sections of this work contain three relatively independent topics--the definition of multi-valued EDAs, genetic drift, and a runtime analysis on the LeadingOnes benchmark--we present the previous works relevant to these topics in the respective sections. We hope that this eases the reading of this paper.
This being a theoretical work, we do not discuss in detail how EDAs have been successfully used to solve real-worlds optimization problems and refer to the surveys [22, 27].
Theoretically oriented works have accompanied the development and use of EDAs for a long time, see, e.g., the early works on genetic drift described in Section 5. The first mathematical runtime analysis of an EDA was conducted by Droste [14]. This seminal work, showing an asymptotically tight bound for the runtime of the compact genetic algorithm on the OneMax benchmark, already contains many ideas that are now frequently used in the runtime analysis of EDAs. It also observed that EDAs optimize problems in a very different manner, visible from the different runtimes shown on two linear functions, which contrasts the famous analysis of how the \((1+1)\) EA optimizes linear functions by Droste, Jansen, and Wegener [15]. Interestingly,
apart from the works of one research group [4, 3, 5], Droste's ground-breaking work [14] was not followed up by other runtime analyses for around ten years. Since then, starting with works like [6, 16, 32, 21], the runtime analysis of EDAs has become very active and has, despite the technical challenges in analyzing such complex algorithms, produced many fundamental results and a good understanding of some of the working principles of EDAs. We refer to the recent survey [20] for more details.
## 3 Preliminaries
We denote by \(\mathbb{N}\) the set of all natural numbers, including \(0\), and by \(\mathbb{R}\) the set of all real numbers. Additionally, for \(a,b\in\mathbb{N}\), let \([a..b]=[a,b]\cap\mathbb{N}\), and let \([a]=[1..a]\). When we say that a random process is a martingale and do not specify a filtration, then we mean that the process is a martingale with respect to its natural filtration. Further, for all \(n\in\mathbb{N}_{\geq 1}\) and \(p\in\mathbb{R}_{\geq 0}^{n}\), we denote the \(1\)-norm of \(p\), that is, the sum of the entries of \(p\), by \(\|p\|_{1}\).
Let \(n\in\mathbb{N}_{\geq 1}\) and \(r\in\mathbb{N}_{\geq 2}\). We consider the maximization of functions of the form \(f\colon\left[0..r-1\right]^{n}\to\mathbb{R}\), which we call _r-valued fitness functions_. Whenever we mention an \(r\)-valued fitness function, we implicitly assume that its dimension \(n\) and the cardinality \(r\) of its domain are given. We call each \(x\in\left[0..r-1\right]^{n}\) an _individual_, and we call \(f(x)\) the _fitness_ of \(x\).
We say that a random variable \(Y\)_stochastically dominates_ another random variable \(X\), not necessarily defined on the same probability space, denoted by \(X\preceq Y\), if and only if for all \(\lambda\in\mathbb{R}\), we have \(\Pr[X\geq\lambda]\leq\Pr[Y\geq\lambda]\).
## 4 Multi-Valued EDAs
In this section, we generalize the three common univariate EDAs from binary to multi-valued decision variables. We call these variants _multi-valued EDAs_. To this end, we briefly discuss the binary case in Section 4.1 before presenting our framework in Section 4.2. In our presentation, we concentrate on the UMDA [26] and then briefly present the generalizations of the other two common univariate EDAs.
### Binary EDAs
Binary EDAs refer to EDAs for _pseudo-Boolean_ optimization, that is, the optimization of functions \(f\colon\{0,1\}^{n}\to\mathbb{R}\). This setting is a special case of optimizing \(r\)-valued fitness functions, for \(r=2\). The probabilistic model of univariate EDAs in this domain is a length-\(n\) vector \(p\) of probabilities (the
_frequency vector_), where the probability (the _frequency_) at position \(i\in[n]\) denotes the probability that a sample has a \(1\) at position \(i\), independent of the other positions. Formally, for all \(x,y\in\{0,1\}^{n}\), it holds that \(\Pr[x=y]=\prod_{i\in[n]}\bigl(p_{i}^{y_{i}}\cdot(1-p_{i})^{1-y_{i}}\bigr)\), where we assume that \(0^{0}=1\).
Binary EDAs commonly take at least a parameter \(\lambda\in\mathbb{N}_{\geq 1}\) (the _population size_) as well as a pseudo-Boolean fitness function \(f\) as input and optimize \(f\) as follows: Initially, the frequency vector \(p\) models the uniform distribution, that is, each frequency is \(1/2\). Then, in an iterative manner, the algorithm produces \(\lambda\) samples (the _population_) independently via \(p\), and it updates \(p\) based on these samples and their fitness. This process is repeated until a user-defined termination criterion is met.
In order to prevent frequencies from only producing a single value (which is the case if a frequency is \(0\) or \(1\)), after the frequency vector is updated, it is typically restricted to the interval \([1/n,1-1/n]\). That is, if the frequency is less than \(1/n\), it is set to \(1/n\), and if it is greater than \(1-1/n\), it is set to \(1-1/n\). The extreme values of this interval are referred to as the _borders_, and the value \(1/n\) is called the _margin_ of the algorithm.
**UMDA.** Algorithm 1 shows the _univariate marginal distribution algorithm (UMDA)_[26], which is a well established binary EDA, both in the empirical [27] and the theoretical [11] domain. Next to the population size \(\lambda\in\mathbb{N}_{\geq 1}\) and a fitness function, the UMDA also utilizes a parameter \(\mu\in[\lambda]\), called the _selection size_. In each iteration, the UMDA selects \(\mu\) out of the \(\lambda\) samples that have the best fitness (breaking ties uniformly at random). Each frequency is then set to the relative frequency of \(1\)s at the respective position (line \(6\)). Afterwards, the frequencies are restricted to lie within the frequency borders.
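To make this update rule concrete, the following minimal Python sketch performs one UMDA iteration. It is our own illustration, not the pseudocode of Algorithm 1, and the function and parameter names (`f`, `lam`, `mu`) are assumptions.

```python
import random

def umda_iteration(p, f, lam, mu):
    """One iteration of the binary UMDA (illustrative sketch, maximizing f).

    p   -- list of n frequencies, p[i] = Pr[bit i is sampled as 1]
    f   -- pseudo-Boolean fitness function on lists of 0/1
    lam -- population size, mu -- selection size (mu <= lam)
    """
    n = len(p)
    # Sample lambda individuals independently from the frequency vector.
    population = [[1 if random.random() < p[i] else 0 for i in range(n)]
                  for _ in range(lam)]
    # Select the mu fittest samples (ties broken by sort order, not uniformly at random).
    selected = sorted(population, key=f, reverse=True)[:mu]
    # Each frequency becomes the relative frequency of 1s among the selected samples.
    new_p = [sum(x[i] for x in selected) / mu for i in range(n)]
    # Restrict the frequencies to [1/n, 1 - 1/n].
    return [min(max(q, 1 / n), 1 - 1 / n) for q in new_p]
```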
### The Multi-Valued EDA Framework
We propose a framework for EDAs for optimizing \(r\)-valued fitness functions. We call the resulting EDAs \(r\)-valued EDAs. Our framework closely follows the one presented in Section 4.1. That is, an \(r\)-valued EDA starts with a probabilistic model initialized to represent the uniform distribution, and it then generates iteratively \(\lambda\in\mathbb{N}_{\geq 1}\) samples independently, based on its model. This model is then updated and afterwards restricted such that it does not contain the extreme probabilities \(0\) and \(1\).
The difference to the framework for binary EDAs lies in how the probabilistic model of \(r\)-valued EDAs is represented and how it is restricted from containing extreme probabilities.
**The probabilistic model.** The probabilistic model of an \(r\)-valued EDA is an \(n\times r\) matrix \((p_{i,j})_{(i,j)\in[n]\times[0..r-1]}\) (the _frequency matrix_), where each row
\(i\in[n]\) forms a vector \(p_{i}\coloneqq(p_{i,j})_{j\in[0..r-1]}\) (the _frequency vector at position \(i\)_) of probabilities (the _frequencies_) that sum to \(1\). As in the binary case, samples from \(p\) are created independently for each position. When creating an individual \(x\in[0..r-1]^{n}\), for all \(i\in[n]\) and all \(j\in[0..r-1]\), the probability that \(x_{i}\) has value \(j\) is \(p_{i,j}\). Formally, for all \(x,y\in[0..r-1]^{n}\), it holds that \(\Pr[x=y]=\prod_{i\in[n]}\prod_{j\in[0..r-1]}(p_{i,j})^{\mathbbm{1}_{y_{i}=j}}\), where we assume that \(0^{0}=1\).
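For illustration, the following Python sketch (our own, with assumed names) samples one individual from such a frequency matrix; `random.choices` performs the categorical draw per position.

```python
import random

def sample_individual(p):
    """Sample an individual from an n x r frequency matrix p (list of n rows).

    Row p[i] contains the frequencies p_{i,0}, ..., p_{i,r-1} and sums to 1;
    every position is sampled independently, as described in the text.
    """
    r = len(p[0])
    return [random.choices(range(r), weights=row, k=1)[0] for row in p]

# Example: n = 3 positions, r = 4 values, uniform initial model (all frequencies 1/4).
p0 = [[1 / 4] * 4 for _ in range(3)]
x = sample_individual(p0)  # e.g., [2, 0, 3]
```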
The frequency matrix \(p\) is initialized such that each frequency is \(1/r\), representing the uniform distribution. When performing an update to \(p\), it is important to make sure that each row sums to \(1\).
**Restricting the probabilistic model.** The aim of restricting the frequency matrix \(p\) is to clamp all frequencies, for some values \(a,b\in[0,1]\) (the _lower_ and _upper border_, respectively) with \(a\leq 1/r\leq b\), to \([a,b]\). That is, if a frequency \(q\) is less than \(a\), it should be \(a\) after the restriction, and if it is greater than \(b\), it should be \(b\) afterwards. For such a restriction, it is important for each row \(i\in[n]\) that the frequency vector \(p_{i}\) sums to \(1\) after the restriction. This process is not straightforward. If \(q\notin[a,b]\), and \(q\) is updated to \(q^{\prime}\in[a,b]\), then this creates a change in probability mass of \(q^{\prime}-q\). Hence, simply updating \(q\) to \(q^{\prime}\) can result in all frequencies of \(p_{i}\) summing to a value other than \(1\) after the restriction.
We address the problem above as follows. To this end, let \(a,b\in[0,1]\) be the lower and upper border, respectively, with \(a\leq 1/(r-1)-1/(r(r-1))\) and \(b=1-a(r-1)\). Further, let \(i\in[n]\) be a row of the frequency matrix we wish to restrict, let \(\overline{p}_{i}\in[0,1]^{r}\) be the frequency vector after the update but before the restriction (with \(\|\overline{p}_{i}\|_{1}=1\)), and let \(p_{i}^{+}\in[a,b]^{r}\) be the vector
after clamping it to \([a,b]\) but before taking care that the frequencies sum to \(1\). We define the _restriction of \(\,\overline{p}_{i}\) to \([a,b]\)_, denoted by \(p^{\prime}_{i}\), to be the vector where each frequency's share above \(a\) is reduced by the surplus of the probability relatively to the share above \(a\). Formally, for all \(j\in[0..r-1]\), it holds that
\[p^{\prime}_{i,j}=(p^{+}_{i,j}-a)\frac{1-ar}{\|p^{+}_{i}-(a)_{k\in[0..r-1]}\|_{1}}+a. \tag{1}\]
Note that \(1-ar=\|\overline{p}_{i}-(a)_{k\in[0..r-1]}\|_{1}\) denotes how much probability mass _should_ be in the frequency vector above \(a\). The resulting frequency vector \(p^{\prime}_{i}\) sums to \(1\), since
\[\sum\nolimits_{j\in[0..r-1]}p^{\prime}_{i,j} =\frac{1-ar}{\|p^{+}_{i}-(a)_{k\in[0..r-1]}\|_{1}}\sum\nolimits_{j\in[0..r-1]}(p^{+}_{i,j}-a)+\sum\nolimits_{j\in[0..r-1]}a\] \[=1-ar+ar=1.\]
Further, each frequency is at least \(a\), since this value is added at the end of eq. (1) and since \(p^{+}_{i,j}\geq a\) by definition of \(p^{+}_{i}\). Last, since each frequency is at least \(a\) after restricting, the largest a frequency can be is \(1-(r-1)a=b\).
In order to disallow the extreme frequencies \(0\) and \(1\) but to stay close to the binary case, we propose to choose the upper border as \(1-1/n\). Following our ideas above, this implies that the lower border is \(1/((r-1)n)\). This is consistent with the binary case but generalizes to the \(r\)-valued domain.
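As a concrete, non-authoritative illustration, the following Python sketch restricts a single frequency vector according to eq. (1), using the borders \(a=1/((r-1)n)\) and \(b=1-1/n\) proposed above; the function name and the example values are ours.

```python
def restrict_row(row, n):
    """Restrict one frequency vector (length r, summing to 1) to [a, b] as in eq. (1)."""
    r = len(row)
    a = 1 / ((r - 1) * n)                    # lower border
    b = 1 - (r - 1) * a                      # upper border, equals 1 - 1/n
    plus = [min(max(q, a), b) for q in row]  # clamp each frequency to [a, b]
    surplus = sum(q - a for q in plus)       # probability mass currently above a
    # Rescale the mass above a so that the row sums to 1 again (eq. (1)).
    return [(q - a) * (1 - a * r) / surplus + a for q in plus]

# Example with r = 4 and n = 10: one frequency lies below the lower border 1/30.
print(restrict_row([0.0, 0.5, 0.3, 0.2], 10))  # entries in [1/30, 0.9], summing to 1
```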
We say that an EDA is _without margins_ if and only if the lower border is \(0\) and the upper border is \(1\). That is, the restriction of the frequencies does not take place.
\(r\)**-UMDA.** We generalize the UMDA (Algorithm 1) to the \(r\)-UMDA (Algorithm 2), utilizing our framework. Like the UMDA, the \(r\)-UMDA has three parameters, namely the population size \(\lambda\in\mathbb{N}_{\geq 1}\), the selection size \(\mu\in[\lambda]\), and the \(r\)-valued fitness function \(f\). It also updates its frequencies analogously to the UMDA by choosing \(\mu\) best individuals from the population of size \(\lambda\) and then setting each frequency at position \(i\in[n]\) for value \(j\in[0..r-1]\) to the relative frequency of value \(j\) at position \(i\) among the \(\mu\) best individuals (line 7). We note that this results in a valid frequency vector for each row \(i\in[n]\), since
\[\sum_{j\in[0..r-1]}\frac{1}{\mu}\sum_{k\in[\mu]}\mathds{1}_{x^{(t,k)}_{i}=j}= \frac{1}{\mu}\sum_{k\in[\mu]}\sum_{j\in[0..r-1]}\mathds{1}_{x^{(t,k)}_{i}=j}= \frac{1}{\mu}\sum_{k\in[\mu]}1=1.\]
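The following sketch of one \(r\)-UMDA iteration reuses the hypothetical helpers `sample_individual` and `restrict_row` from above; it is an illustration of Algorithm 2, not its reference implementation.

```python
def r_umda_iteration(p, f, lam, mu):
    """One iteration of the r-UMDA (illustrative sketch, maximizing f)."""
    n, r = len(p), len(p[0])
    population = [sample_individual(p) for _ in range(lam)]
    # Select the mu fittest samples (ties broken by sort order for brevity).
    selected = sorted(population, key=f, reverse=True)[:mu]
    # Frequency of value j at position i = relative frequency among the selected samples.
    updated = [[sum(1 for x in selected if x[i] == j) / mu for j in range(r)]
               for i in range(n)]
    # Restrict every row to [1/((r-1)n), 1 - 1/n].
    return [restrict_row(row, n) for row in updated]
```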
\(r\)**-PBIL.** Another popular univariate EDA is _population-based incremental learning_ (PBIL [2]). It operates very similarly to the UMDA, with the only difference being in how it performs an update. In contrast to the UMDA, the
PBIL does not set a frequency to the relative frequency of respective values at a position but, instead, computes the convex combination of the relative frequency with the current frequency value in its frequency vector. To this end, it utilizes a parameter \(\rho\in[0,1]\), the _scaling factor_.
We generalize the PBIL to the \(r\)-PBIL (Algorithm 3). Each frequency vector of the \(r\)-PBIL sums to \(1\) (before the restriction) because it is a convex combination of the \(r\)-UMDA's update (which sums to \(1\)) and the current frequency vector (which also sums to \(1\)).
\(r\)**-cGA.** Another popular univariate EDA is the _compact genetic algorithm_ (cGA [18]). The cGA has only a single parameter \(K\in\mathbb{R}_{>0}\), the _hypothetical population size_, and it creates only two samples in each iteration. It ranks these two samples by fitness and then adjusts each frequency by \(\frac{1}{K}\) such that the frequency of the value of the better sample is increased and that of the worse sample is decreased.
We generalize the cGA to the \(r\)-cGA (Algorithm 4). Each frequency vector of the \(r\)-cGA sums to \(1\) after the update (before the restriction) because exactly one entry is increased by \(\frac{1}{K}\) and exactly one value is decreased by this amount (noting that this can be the same frequency, in which case no change is made overall).
```
1  \(t\gets 0\);
2  \(p^{(0)}\leftarrow(\frac{1}{r})_{(i,j)\in[n]\times[0..r-1]}\);
3  repeat // iteration \(t\)
4    \(P^{(t)}\leftarrow\) population of \(\lambda\) individuals, independently sampled from \(p^{(t)}\);
5    \(\{x^{(t,k)}\}_{k\in[\mu]}\leftarrow\) multiset of \(\mu\) individuals from \(P^{(t)}\) with the highest fitness (breaking ties uniformly at random);
6    for \((i,j)\in[n]\times[0..r-1]\) do
7      \(\overline{p}_{i,j}^{(t+1)}\leftarrow(1-\rho)p_{i,j}^{(t)}+\frac{\rho}{\mu}\sum_{k\in[\mu]}\mathds{1}_{x_{i}^{(t,k)}=j}\);
8    \(p^{(t+1)}\leftarrow\) restriction of \(\overline{p}^{(t+1)}\) to \(\left[\frac{1}{(r-1)n},1-\frac{1}{n}\right]\), as described in eq. (1);
9    \(t\gets t+1\);
10 until termination criterion met;
```
**Algorithm 3** The \(r\)-PBIL with parameters \(\lambda\in\mathbb{N}_{\geq 1}\), \(\mu\in[\lambda]\), and \(\rho\in[0,1]\), maximizing an \(r\)-valued fitness function \(f\)
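For comparison, the model updates of the \(r\)-PBIL and the \(r\)-cGA can be sketched as follows (again our own hedged illustrations; `updated` denotes the \(r\)-UMDA-style relative frequencies, and `better`/`worse` are the two ranked samples of the cGA). The subsequent restriction to the borders, e.g. via `restrict_row`, is applied afterwards and omitted here.

```python
def r_pbil_update(p, updated, rho):
    """r-PBIL: convex combination of the current model and the UMDA-style frequencies."""
    return [[(1 - rho) * p[i][j] + rho * updated[i][j]
             for j in range(len(p[i]))] for i in range(len(p))]

def r_cga_update(p, better, worse, K):
    """r-cGA: move 1/K probability mass from the worse sample's value to the better one's."""
    new_p = [row[:] for row in p]
    for i in range(len(p)):
        if better[i] != worse[i]:  # same value at position i: no change overall
            new_p[i][better[i]] += 1 / K
            new_p[i][worse[i]] -= 1 / K
    return new_p
```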
## 5 Genetic Drift
We prove an upper bound on the effect of genetic drift for \(r\)-valued EDAs (Theorem 3) in a similar fashion as Doerr and Zheng [13] for binary decision variables. This allows us to determine parameter values for EDAs that avoid the usually unwanted effect of genetic drift. The main novelty of our result over that by Doerr and Zheng [13] is that we use a slightly technical martingale concentration result due to McDiarmid [25] that allows one to profit from small variances. Such an approach is necessary. If one directly applies the methods presented by Doerr and Zheng [13], one obtains estimates for the genetic drift times that are by a factor of \(\Theta(r)\) lower than ours (that is, the genetic drift effect appears \(r\) times stronger).
In Sections 5.1 and 5.2, we first present a general introduction to the phenomenon of genetic drift. In Section 5.3, we then prove a concentration result on neutral positions (Theorem 3). Last, in Section 5.4, we consider the setting of weak preference.
### Introduction to Genetic Drift
In EDAs, _genetic drift_ means that a frequency reaches or approaches one of the extreme values 0 or 1 not because of a clear signal from the objective function but due to random fluctuations from the stochasticity of the process.
While there is no proof that genetic drift is always problematic, the general opinion is that this effect is better avoided. This is supported by the following observations and results: (i) When genetic drift is strong, many frequencies (in the binary case) approach the extreme values \(0\) and \(1\) and, consequently, the behavior of the EDA comes close to the one of a mutation-based EA, so the advantages of an EDA might be lost. (ii) The vast majority of the runtime results for EDAs, especially those for harder scenarios like noise [17] or multimodality [8], have only been shown in regimes with low genetic drift. (iii) For some particular situations, a drastic performance loss caused by genetic drift was proven. For example, the UMDA with standard selection pressure but small population size \(\lambda\in\Omega(\log(n))\cap o(n)\) has a runtime exponential in \(\lambda\) on the DeceptiveLeadingBlocks problem [23]. In contrast, when the population size is large enough to prevent genetic drift, here \(\lambda=\Omega(n\log(n))\), then the runtime drops to \(O(\lambda n)\) with high probability.
Genetic drift in EDAs has been studied explicitly since the ground-breaking works of Shapiro [29; 30; 31], and it appears implicitly in many runtime analyses (all that use sufficiently large population sizes to prevent frequencies from reaching a boundary value different from what the fitness indicates). The most conclusive answer to the genetic-drift problem for univariate EDAs, including clear suggestions on how to choose the parameters so as to avoid genetic drift, was given by Doerr and Zheng [13]. In the case of the UMDA (and binary decision variables, that is, the classic model), their work shows that a
neutral frequency (defined in Section 5.2) stays with high probability in the middle range \([0.25,0.75]\) for the first \(T\) iterations if \(\mu=\omega(T)\). This bound is tight. When regarding \(n\) frequencies together, a value of \(\mu=\Omega(T\log(n))\) with implicit constant computable from [13, Theorem 2] ensures with high probability that all frequencies stay in the middle range for at least \(T\) iterations. Hence these bounds give a clear indication of how to choose the selection size \(\mu\) when aiming to run the UMDA for a given number of iterations. We note that the quantification of genetic drift can also be used to design automated ways to choose parameters when no a-priori estimate on \(T\) is available, see the work by Doerr and Zheng [12].
Given the importance of a good understanding of genetic drift, we now analyze genetic drift for multi-valued EDAs, more specifically, for the \(r\)-UMDA. We are optimistic that, analogous to the work by Doerr and Zheng [13], very similar arguments can be applied for other main univariate EDAs.
### Martingale Property of Neutral Positions
Genetic drift is usually studied via _neutral_ positions of a fitness function. Let \(f\) be an \(r\)-valued fitness function. We call a position \(i\in[n]\) (as well as, for an individual \(x\in[0..r-1]^{n}\), its corresponding variable \(x_{i}\) and the associated frequencies of an EDA) _neutral_ (w.r.t. \(f\)) if and only if, for all \(x\in[0..r-1]^{n}\), the value \(x_{i}\) has no influence on the value of \(f\), that is, if and only if for all individuals \(x,x^{\prime}\in\left[0..r-1\right]^{n}\) such that for all \(j\in[n]\setminus\{i\}\) it holds that \(x_{j}=x^{\prime}_{j}\), we have \(f(x)=f(x^{\prime})\).
An important property of neutral variables that we capitalize on in our analysis of genetic drift is that their frequencies in typical EDAs without margins form martingales [13]. This observation extends the corresponding one for EDAs for binary representations. We make this statement precise for the \(r\)-UMDA.
**Lemma 1**.: _Let \(f\) be an \(r\)-valued fitness function, and let \(i\in[n]\) be a neutral position of \(f\). Consider the \(r\)-UMDA without margins optimizing \(f\). For each \(j\in[0..r-1]\), the sequence of frequencies \((p_{i,j}^{(t)})_{t\in\mathbb{N}}\) is a martingale._
Proof.: Let \(j\in[0..r-1]\). Since the algorithm has no margins, in each iteration \(t\in\mathbb{N}\), no restriction takes place, so it holds that \(p_{i,j}^{(t+1)}=\frac{1}{\mu}\sum_{k\in[\mu]}\mathds{1}_{x_{i}^{(t,k)}=j}\). Since \(i\) is neutral, the selection of the \(\mu\) best individuals is not affected by the values at position \(i\) of the \(\lambda\) samples. Consequently, for each \(k\in[\mu]\), the indicator \(\mathds{1}_{x_{i}^{(t,k)}=j}\) follows a Bernoulli distribution with success probability \(p_{i,j}^{(t)}\).
Hence, \(\mathds{E}[\mathds{1}_{x_{i}^{(t,k)}=j}\mid p_{i,j}^{(t)}]=p_{i,j}^{(t)}\). Further, by linearity of expectation, we get
\[\mathds{E}\big{[}p_{i,j}^{(t+1)}\mid p_{i,j}^{(t)}\big{]}=\frac{1}{\mu}\sum_{k \in[\mu]}\mathds{E}\Big{[}\mathds{1}_{x_{i}^{(t,k)}=j}\mid p_{i,j}^{(t)}\Big{]} =\frac{1}{\mu}\sum_{k\in[\mu]}p_{i,j}^{(t)}=p_{i,j}^{(t)},\]
proving the claim.
As in previous works on genetic drift, the martingale property of neutral frequencies allows us to use strong martingale concentration results. Since in our setting the frequencies start at a value of \(\frac{1}{r}\), we can only tolerate smaller deviations from this value, namely up to \(\frac{1}{2r}\) in either direction. With the methods of Doerr and Zheng [13], this would reduce the guaranteed time with low genetic drift by a factor of \(\Theta(r^{2})\) compared to the binary case. We therefore use a stronger martingale concentration result, namely [25, Theorem 3.15], which allows us to exploit the lower sampling variance present at frequencies in \(\Theta(\frac{1}{r})\). We note that we adjust the theorem by incorporating comments by McDiarmid, especially [25, eq. (41)], mentioning that the absolute value in eq. (41) should be around the sum, not around the maximum, as also observed by Doerr and Zheng [13].
**Theorem 2** (Martingale concentration result based on the variance [25, Theorem 3.15 and eq. (41)]).: _Let \((X_{t})_{t\in\mathbb{N}}\) be a martingale with respect to a filtration \((\mathcal{F}_{t})_{t\in\mathbb{N}}\). Further, for all \(t\in\mathbb{N}_{\geq 1}\), denote the deviation by \(\mathrm{dev}_{t}\coloneqq|X_{t}-X_{t-1}|\). In addition, let \(b=\sup_{t\in\mathbb{N}}\mathrm{dev}_{t}\), and assume that \(b\) is finite. Last, for all \(t\in\mathbb{N}\), let \(\hat{v}_{t}=\sup\sum_{s\in[t]}\mathrm{Var}[X_{s}-X_{s-1}\mid\mathcal{F}_{s-1}]\). Then for all \(t\in\mathbb{N}\) and all \(\varepsilon\in\mathbb{R}_{\geq 0}\), it holds that_
\[\Pr\Bigl{[}\max_{s\in[0..t]}|X_{s}-\mathds{E}[X_{0}]|\geq\varepsilon\Bigr{]} \leq 2\exp\biggl{(}-\frac{\varepsilon^{2}}{2\hat{v}_{t}+2b\varepsilon/3} \biggr{)}.\]
### Upper Bound on the Genetic-Drift Effect of a Neutral Position
By utilizing Theorem 2, we show for how long the frequencies of the \(r\)-UMDA at neutral positions stay concentrated around their initial value of \(\frac{1}{r}\).
**Theorem 3**.: _Let \(f\) be an \(r\)-valued fitness function, and let \(i\in[n]\) be a neutral position of \(f\). Consider the \(r\)-UMDA optimizing \(f\). Let \(T\in\mathbb{N}\) and \(j\in[0..r-1]\). Then_
\[\Pr\Bigl{[}\max_{s\in[0..T]}\,\Big{|}p_{i,j}^{(s)}-\frac{1}{r}\Big{|}\geq\frac {1}{2r}\Bigr{]}\leq 2\exp\biggl{(}-\frac{\mu}{12Tr+(4/3)r}\biggr{)}.\]
Proof.: We apply the same proof strategy as in the proof of [13, Theorem 1]. That is, we aim to apply Theorem 2. Naturally, one would apply the theorem to the sequence of frequencies \((p_{i,j}^{(t)})_{t\in\mathbb{N}}\). However, since the deviation of \(p_{i,j}\) is very large, namely \(1\), we consider instead a more fine-grained process \((Z_{t})_{t\in\mathbb{N}}\), which, roughly speaking, splits each iteration of the \(r\)-UMDA into \(\mu\) sections, each of which denotes that an additional sample is added to the update. Formally, for all \(t\in\mathbb{N}\) and \(a\in[0..\mu-1]\), let
\[Z_{t\mu+a}=p_{i,j}^{(t)}(\mu-a)+\sum\nolimits_{k\in[a]}\mathds{1}_{x_{i}^{(t,k)}=j}.\]
Note that, for all \(t\in\mathbb{N}_{\geq 1}\), it holds that \(Z_{t\mu}=\mu p_{i,j}^{(t)}\). Thus, the natural filtration \((\mathcal{F}_{t})_{t\in\mathbb{N}}\) of \(Z\) allows us to measure \(p_{i,j}\).
In order to apply Theorem 2, we check that its assumptions are met. To this end, we first show that \(Z\) is a martingale. Since \(i\) is neutral, the selection of the \(\mu\) best individuals is not affected by the values at position \(i\) of the \(\lambda\) samples. Consequently, for all \(k\in[\mu]\), the indicator \(\mathds{1}_{x_{i}^{(t,k)}=j}\) follows a Bernoulli distribution with success probability \(p_{i,j}^{(t)}\). Thus, we get for all \(t\in\mathbb{N}\) and \(a\in[0..\mu-2]\) that
\[\mathds{E}[Z_{t\mu+a+1}-Z_{t\mu+a}\mid\mathcal{F}_{t\mu+a}]=-p_{i,j}^{(t)}+ \mathds{E}[\mathds{1}_{x_{i}^{(t,a+1)}=j}\mid\mathcal{F}_{t\mu+a}]=0, \tag{2}\]
and further, by the definition of \(p_{i,j}^{(t+1)}\), that
\[\mathds{E} \big{[}Z_{(t+1)\mu}-Z_{t\mu+\mu-1}\mid\mathcal{F}_{t\mu+\mu-1} \big{]}\] \[=\mu\,\mathds{E}[p_{i,j}^{(t+1)}\mid\mathcal{F}_{t\mu+\mu-1}]-p_{ i,j}^{(t)}-\mathds{E}\big{[}\sum\nolimits_{k\in[\mu-1]}\mathds{1}_{x_{i}^{(t,k)}=j} \mid\mathcal{F}_{t\mu+\mu-1}\big{]}\] \[=\sum\nolimits_{k\in[\mu]}\mathds{E}[\mathds{1}_{x_{i}^{(t,k)}=j} \mid\mathcal{F}_{t\mu+\mu-1}]-p_{i,j}^{(t)}-\sum\nolimits_{k\in[\mu-1]} \mathds{E}[\mathds{1}_{x_{i}^{(t,k)}=j}\mid\mathcal{F}_{t\mu+\mu-1}]\] \[=\mathds{E}[\mathds{1}_{x_{i}^{(t,\mu)}=j}\mid\mathcal{F}_{t\mu+ \mu-1}]-p_{i,j}^{(t)}=0, \tag{3}\]
showing that \(Z\) is a martingale.
We take an alternative view of the event \(\{\max_{s\in[0..T]}\mid p_{i,j}^{(s)}-\frac{1}{r}\mid\geq\frac{1}{2r}\}\), whose probability we aim to bound. Note that this event is equivalent to \(\{\exists s\in[0..T]\colon|p_{i,j}^{(s)}-\frac{1}{r}|\geq\frac{1}{2r}\}\). A superset of this event is the event where we stop at the first iteration such that the inequality holds. To this end, let \(S=\inf\{t\in\mathbb{N}\mid Z_{t}\notin[\frac{\mu}{2r},\frac{3\mu}{2r}]\}\) be a stopping time (with respect to \(\mathcal{F}\)). From now on, we consider the stopped process \(\widetilde{Z}\) of \(Z\) with respect to \(S\). That is, for all \(t\in\mathbb{N}\), it holds that \(\widetilde{Z}_{t}=Z_{\min\{t,S\}}\). Since \(Z\) is a martingale, so is \(\widetilde{Z}\).
Let \(t\in\mathbb{N}\), and let \(Y_{t}\) be a Bernoulli random variable whose success probability \(p_{i,j}^{(\lfloor t/\mu\rfloor)}\) is \(\mathcal{F}_{t}\)-measurable. Note that, by eqs. (2) and (3) and disregarding the expected values, it holds that
\[\widetilde{Z}_{t+1}-\widetilde{Z}_{t}=(Y_{t}-p_{i,j}^{(\lfloor t/\mu\rfloor)}) \cdot\mathds{1}_{t<S}. \tag{4}\]
Thus, the maximum deviation \(b\) of \(\widetilde{Z}\) is \(1\). Further, let \(\hat{v}_{t}\) denote the sum of variances, as defined in Theorem 2. Then, since \(p_{i,j}^{(\lfloor t/\mu\rfloor)}\) and \(\mathds{1}_{t<S}\) are \(\mathcal{F}_{t}\)-measurable and since, due to \(\widetilde{Z}\) being stopped, it holds that \(p_{i,j}^{(\lfloor t/\mu\rfloor)}\cdot\mathds{1}_{t<S}\in[\frac{1}{2r},\frac{3 }{2r}]\), we get
\[\mathrm{Var}\Big{[}\widetilde{Z}_{t+1}-\widetilde{Z}_{t}\mid\mathcal{F}_{t} \Big{]}=\mathrm{Var}[Y_{t}\cdot\mathds{1}_{t<S}\mid\mathcal{F}_{t}]=p_{i,j}^{( \lfloor t/\mu\rfloor)}\Big{(}1-p_{i,j}^{(\lfloor t/\mu\rfloor)}\Big{)}\cdot \mathds{1}_{t<S}\leq\frac{3}{2r}.\]
Hence, \(\hat{v}_{t}\leq\frac{3t}{2r}\).
Let \(\widetilde{p}\) denote the stopped process of \(p_{i,j}\) with respect to \(S\). Applying Theorem 2 with \(t=\mu T\) and our estimates above, noting that \(\widetilde{Z}_{0}=\frac{\mu}{r}\), yields
\[\Pr\Bigl[\max_{s\in[0..T]}\Bigl|\widetilde{p}_{s}-\frac{1}{r}\Bigr|\geq\frac{1}{2r}\Bigr]=\Pr\Bigl[\max_{s\in[0..T]}\Bigl|\widetilde{p}_{s}-\mathds{E}[\widetilde{p}_{0}]\Bigr|\geq\frac{1}{2r}\Bigr]\] \[=\Pr\Bigl[\max_{s\in[0..T]}\frac{1}{\mu}\bigl|\widetilde{Z}_{s\mu}-\mathds{E}[\widetilde{Z}_{0}]\bigr|\geq\frac{1}{2r}\Bigr]\leq\Pr\Bigl[\max_{s\in[0..t]}\bigl|\widetilde{Z}_{s}-\mathds{E}[\widetilde{Z}_{0}]\bigr|\geq\frac{\mu}{2r}\Bigr]\] \[\leq 2\exp\biggl(-\frac{(\mu/(2r))^{2}}{2\cdot 3\mu T/(2r)+(2/3)\mu/(2r)}\biggr)=2\exp\biggl(-\frac{\mu}{12Tr+(4/3)r}\biggr).\]
Since we only need to consider the stopped process, as explained above, and since \(\widetilde{p}\) is identical to \(p_{i,j}\) until the process stops, the result follows.
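As a rough illustration of how the bound of Theorem 3 can guide parameter choices (a back-of-the-envelope computation of ours, not part of the theorem), the following snippet evaluates the tail bound and the smallest selection size \(\mu\) that keeps it below a target failure probability \(\delta\).

```python
import math

def drift_bound(mu, T, r):
    """Tail bound of Theorem 3 for a single frequency at a neutral position."""
    return 2 * math.exp(-mu / (12 * T * r + (4 / 3) * r))

def mu_for(T, r, delta):
    """Smallest mu for which drift_bound(mu, T, r) <= delta (solving the bound for mu)."""
    return math.ceil((12 * T * r + (4 / 3) * r) * math.log(2 / delta))

# Example: keep one neutral frequency concentrated for T = 1000 iterations with r = 5 values.
print(mu_for(1000, 5, 0.01))  # the required mu grows linearly in both T and r
```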
### Upper Bound for Positions with Weak Preference
A position is rarely neutral for a given fitness function. However, we prove that the results on neutral positions translate to positions where one value is better than all other values. This is referred to as _weak preference_. Formally, we say that an \(r\)-valued fitness function \(f\) has a _weak preference for a value \(j\in[0..r-1]\) at a position \(i\in[n]\)_ if and only if, for all \(x_{1},...,x_{n}\in[0..r-1]\), it holds that
\[f(x_{1},..,x_{i-1},x_{i},x_{i+1},...,x_{n})\leq f(x_{1},..,x_{i-1},j,x_{i+1},...,x_{n}).\]
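For small instances, weak preference can be verified by brute force; the following sketch (with hypothetical names and 0-based positions, in contrast to the 1-based positions of the text) checks the definition directly.

```python
from itertools import product

def weakly_prefers(f, n, r, i, j=0):
    """Brute-force check whether the r-valued function f weakly prefers value j at
    position i (0-based); only feasible for small n and r."""
    for x in product(range(r), repeat=n):
        y = list(x)
        y[i] = j
        if f(list(x)) > f(y):
            return False
    return True
```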
We now adapt Lemma 7 by Doerr and Zheng [13] to the \(r\)-UMDA.
**Theorem 4**.: _Consider two \(r\)-valued fitness functions \(f,g\) to be optimized by the \(r\)-UMDA such that, without loss of generality, \(f\) weakly prefers the value \(0\) at the first position and the first position of \(g\) is neutral._
_Let \(p\) correspond to the frequency matrix of \(f\) and \(q\) to the frequency matrix of \(g\), both defined by the \(r\)-UMDA. Then, for all \(t\in\mathbb{N}\), it holds that \(q_{1,0}^{(t)}\preceq p_{1,0}^{(t)}\)._
Proof.: We prove our claim by induction on the number of iterations \(t\). For the base case \(t=0\), all frequencies are \(1/r\). Hence, \(q_{1,0}^{(0)}\preceq p_{1,0}^{(0)}\).
For the induction step, let \(t\in\mathbb{N}_{\geq 1}\), and let \(Y\sim\operatorname{Bin}\bigl(\mu,q_{1,0}^{(t)}\bigr)\). Since the first position of \(g\) is neutral, the selection of the \(\mu\) best individuals is not affected by the values at the first position of the \(\lambda\) samples. Thus, \(q_{1,0}^{(t+1)}=\frac{1}{\mu}Y\). Further, since \(f\) weakly prefers \(0\) at the first position, defining \(Y^{\prime}\sim\operatorname{Bin}\bigl(\mu,p_{1,0}^{(t)}\bigr)\), it holds that \(p_{1,0}^{(t+1)}\succeq\frac{1}{\mu}Y^{\prime}\).
Analogously to Doerr and Zheng [13], we note that since \(p_{1,0}^{(t)}\) stochastically dominates \(q_{1,0}^{(t)}\) by the induction hypothesis, there exists a coupling of the two probability spaces that describe the states of the two algorithms at iteration \(t\) in such a way that \(p_{1,0}^{(t)}\geq q_{1,0}^{(t)}\) for any point \(\omega\) in the coupling probability space. For such an \(\omega\), it then follows that \(Y\preceq Y^{\prime}\), as the success probability of the former is bounded from above by that of the latter. Hence, \(q_{1,0}^{(t+1)}=\frac{1}{\mu}Y\preceq\frac{1}{\mu}Y^{\prime}\preceq p_{1,0}^{(t+1)}\), which proves the claim.
We now apply Theorem 4 and extend Theorem 3 to positions with weak preference.
**Theorem 5**.: _Let \(f\) be an \(r\)-valued fitness function with a weak preference for \(0\) at position \(i\in[n]\). Consider the \(r\)-UMDA optimizing \(f\). Let \(T\in\mathbb{N}\). Then_
\[\Pr\biggl{[}\min_{s\in[0..T]}p_{i,0}^{(s)}\leq\frac{1}{2r}\biggr{]}\leq 2\exp \biggl{(}-\frac{\mu}{12Tr+(4/3)r}\biggr{)}. \tag{5}\]
Proof.: Let \(g\) be an \(r\)-valued fitness function with neutral position \(i\). Let \(q\) be the frequency matrix of the \(r\)-UMDA optimizing \(g\). By Theorem 4, it follows for all \(s\in\mathbb{N}\) that \(p_{i,0}^{(s)}\) stochastically dominates \(q_{i,0}^{(s)}\). Applying Theorem 3 to \(g\) for position \(i\), we have
\[\Pr\biggl{[}\min_{s\in[0..T]}q_{i,0}^{(s)}\leq\frac{1}{2r}\biggr{]}\leq 2\exp \biggl{(}-\frac{\mu}{12Tr+(4/3)r}\biggr{)}.\]
Using the stochastic domination yields the tail bound for \(f\).
## 6 Runtime Analysis of the \(r\)-UMDA
We analyze the runtime of the \(r\)-UMDA (Algorithm 2) on an \(r\)-valued variant of LeadingOnes. We start by describing the previous runtime results of EDAs on LeadingOnes (Section 6.1), then define the \(r\)-LeadingOnes problem formally (Section 6.2), and finally state and prove our main result (Theorem 6, Section 6.3).
### Previous Runtime Analyses of EDAs on LeadingOnes
In contrast to OneMax (another popular theory benchmark function), LeadingOnes is not that extensively studied for EDAs. This is surprising, as LeadingOnes is interesting as a benchmark for univariate EDAs, since the function introduces dependencies among the different positions of a bit string, whereas the model of univariate EDAs assumes independence. However, since LeadingOnes has only a single local maximum, the known runtimes are rather small.
In an early mathematical runtime analysis of an EDA, however, using the unproven no-error assumption (which essentially states that there is no genetic drift), it was shown that the UMDA optimizes the LeadingOnes benchmark in expected time \(O(\lambda n)\). This was made rigorous by Chen et al. [5] with a proof that the UMDA with population size \(\Omega(n^{2+\varepsilon})\) optimizes LeadingOnes in time \(O(\lambda n)\) with high probability. Here, the relatively large required population size stems from the then incomplete understanding of genetic drift.
In a remarkable work [6], Dang and Lehre prove a runtime of \(O(n\lambda\log(\lambda)+n^{2})\), only assuming that the sample size \(\lambda\) is at least logarithmic. Hence this result applies both to regimes without and with genetic drift. In the regime with genetic drift, however, the dependence on \(\lambda\) is slightly worse than in the result by Chen et al. [5]. This was improved by Doerr and Krejca [10], where an \(O(n\lambda\log(\lambda))\) upper bound was shown for the whole regime \(\lambda=\Omega(n\log(n))\) of low genetic drift. More precisely, when \(\mu=\Omega(n\log(n))\) and \(\lambda=\Omega(\mu)\), both with sufficiently large implicit constants, then the runtime of the UMDA on LeadingOnes is \(O(n\lambda\log(\frac{\lambda}{\mu}))\) with high probability. We note that the analysis by Doerr and Krejca [10] is technically much simpler than the previous ones, in particular, it avoids the complicated level-based method used by Dang and Lehre [6]. We note that lower bounds [24, 10] and runtimes in the presence of noise have also been studied. Since we do not prove such results in this work, we refer to the original works.
Besides the UMDA, LeadingOnes was considered in the analysis of newly introduced univariate EDAs. Interestingly, each of these algorithms optimizes LeadingOnes in time \(O(n\log(n))\) with high probability. This runtime is faster by a factor of \(n/\log(n)\) when compared to classical EAs, and it suggests that LeadingOnes is a rather easy problem for EDAs. Friedrich, Kötzing, and Krejca [16] proved the first of these results for their _stable compact genetic algorithm_ (scGA), which introduces an artificial bias into its update process that is overcome by the LeadingOnes function. However, it was later proven that the scGA fails on the typically easy OneMax function [9], highlighting that the scGA is not a good EDA in general.
The next result was proven by Doerr and Krejca [9], who introduce the _significance-based compact genetic algorithm_ (sig-cGA). The sig-cGA saves a history of good individuals and only updates a frequency when the number of bits in the history of that position significantly deviates from its expectation. This algorithm also performs well on OneMax.
The last result was proven recently by Ajimakin and Devi [1], who introduce the _competing genes evolutionary algorithm_ (cgEA). The cgEA utilizes the Gauss-Southwell score as a quality metric for the positions of its samples. Iteratively, it picks the position \(i\) with the best score and creates a new population by letting each individual of the previous population compete against a copy of it where the bit at position \(i\) is flipped. Based on the best individuals created this way, the frequency at position \(i\) is immediately set to either 0 or 1, whichever value turns out to be better. This approach works very well for a variety of theory benchmarks, as proven by the authors.
### The \(r\)-LeadingOnes Benchmark
The \(r\)-LeadingOnes function (eq. (6)) is a generalization of the classical LeadingOnes benchmark [28] from the binary to the multi-valued domain. Before we define the generalization, we briefly present the LeadingOnes function.
**LeadingOnes.** LeadingOnes [28] is one of the most commonly mathematically analyzed benchmark functions, both in the general domain of evolutionary computation [11] as well as in the domain of EDAs [20]. For a bit string of length \(n\in\mathbb{N}_{\geq 1}\), it returns the number of consecutive 1s, starting from the leftmost position. Formally, LeadingOnes\(\colon\{0,1\}^{n}\to[0..n]\) is defined as \(x\mapsto\sum_{i\in[n]}\prod_{j\in[i]}x_{j}\). The function has a single local maximum at the all-1s string, which is also its global maximum.
\(r\)**-LeadingOnes.** Inspired by LeadingOnes from the binary domain, we define \(r\)-LeadingOnes\(\colon[0..r-1]^{n}\to[0..n]\) as the function that returns the number of consecutive 0s, starting from the leftmost position. Formally,
\[r\text{-LeadingOnes}\colon x\mapsto\sum_{i\in[n]}\prod_{j\in[i]}\mathds{1}_{ \{x_{j}=0\}}. \tag{6}\]
In contrast to the binary case, the single local optimum of \(r\)-LeadingOnes is the all-0s string, which is also its global optimum.
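A direct Python transcription of eq. (6) (our own sketch) reads as follows; combined with the hypothetical `weakly_prefers` checker from Section 5.4, it confirms for small \(n\) and \(r\) that \(r\)-LeadingOnes weakly prefers \(0\) at every position, which is the property used in the proof of Theorem 6.

```python
def r_leading_ones(x):
    """Number of leading 0s in the individual x, cf. eq. (6); the all-0s string is optimal."""
    count = 0
    for value in x:
        if value != 0:
            break
        count += 1
    return count

# Examples: r_leading_ones([0, 0, 2, 0, 1]) == 2 and r_leading_ones([0, 0, 0]) == 3.
# weakly_prefers(r_leading_ones, n=4, r=3, i=2) evaluates to True for every position i.
```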
### Runtime Result
We analyze the runtime of the \(r\)-UMDA (Algorithm 2) on the \(r\)-LeadingOnes benchmark (eq. (6)) in the regime with low genetic drift. Compared to the binary case, we get an extra factor of order \(r\log(r)^{2}\) in the runtime. The factor of \(r\) is a result of the increased waiting time to sample a certain value out of the \(r\) possible ones at a position. The factor of \(\log(r)^{2}\) stems from the choice to stay in the regime with low genetic drift as well as from the time it takes a frequency to get to the upper border. Our result is a generalization of the binary case.
**Theorem 6**.: _Let \(s\in\mathbb{R}_{\geq 1}\). Consider the \(r\)-UMDA optimizing \(r\)-LeadingOnes with \(\lambda\geq 3se\mu\), \(\mu\geq 24(n+1)r\ln(n)(1+\log_{2s}(r))\), and \(n\geq 4r\). Then with a probability of at least \(1-\frac{2}{n}-\log_{2s}(2r)n^{2-0.5n}\), the frequency vector corresponding to the value \(0\) converges to \((1-\frac{1}{n})_{i\in[n]}\) in \(n\log_{2s}(2r)\) iterations._
_This implies that after \(\lambda n\log_{2s}(2r)\) fitness function evaluations, the \(r\)-UMDA samples the optimum with the success probability above._
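To get a feeling for the parameter regime of Theorem 6 (purely an evaluation of the stated conditions, with arbitrarily chosen \(n\), \(r\), and \(s\)), one may compute:

```python
import math

def theorem6_parameters(n, r, s=1.0):
    """Evaluate the parameter requirements and iteration bound stated in Theorem 6."""
    assert n >= 4 * r and s >= 1
    def log2s(z):
        return math.log(z) / math.log(2 * s)
    mu = math.ceil(24 * (n + 1) * r * math.log(n) * (1 + log2s(r)))
    lam = math.ceil(3 * s * math.e * mu)
    iterations = math.ceil(n * log2s(2 * r))
    return mu, lam, iterations, lam * iterations  # last entry: fitness evaluations

print(theorem6_parameters(n=100, r=4))
```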
The basic premise for our proof is that for the entirety of the considered iterations, frequencies corresponding to the value \(0\) remain above a given threshold since \(r\)-LeadingOnes weakly prefers \(0\) at all positions. We define this threshold as \(\frac{1}{2r}\), and we show that in a sequential manner, position by position, the frequencies corresponding to \(0\) are brought to \(1-\frac{1}{n}\) within a given number of iterations until all positions are covered.
First, we provide a guarantee on the concentration of all frequencies during the entire runtime of the algorithm, ensuring that genetic drift is avoided and that all frequencies remain above a minimal threshold.
**Lemma 7**.: _Let \(s\in\mathbb{R}_{\geq 1}\). Consider the \(r\)-UMDA with \(\lambda\geq\mu\geq 24(n+1)r\ln(n)(1+\log_{2s}(r))\) optimizing a function that weakly prefers \(0\) at every position. Then with a probability of at least \(1-\frac{2}{n}\), for each \(i\in[n]\), the frequency \(p_{i,0}^{(t)}\) remains above \(\frac{1}{2r}\) for the first \(n(1+\log_{2s}(r))\) iterations._
Proof.: By Theorem 5 with \(T=n(1+\log_{2s}(r))\), we have for all \(i\in[n]\) that
\[\Pr\biggl{[}\min_{k=1,\ldots,T}p_{i,0}^{(k)}\leq\frac{1}{2r}\biggr{]}\leq 2 \exp\biggl{(}-\frac{\mu}{12n(1+\log_{2s}(r))r+\frac{4r}{3}}\biggr{)}.\]
Since \(\mu\geq 24(n+1)r\ln(n)(1+\log_{2s}(r))\), we get
\[\Pr\biggl{[}\min_{k=1,\ldots,T}p_{i,0}^{(k)}\leq\frac{1}{2r} \biggr{]} \leq 2\exp\biggl{(}-\frac{24(n+1)r\ln(n)(1+\log_{2s}(r))}{12n(1+ \log_{2s}(r))r+\frac{4r}{3}}\biggr{)}\] \[\leq 2\exp\biggl{(}-\frac{24(n+1)\ln(n)(1+\log_{2s}(r))}{12(n+1)(1 +\log_{2s}(r))}\biggr{)}\] \[\leq 2\exp(-2\ln(n)).\]
Hence, it follows that
\[\Pr\Bigl{[}\min_{k=1,\ldots,T}p_{i,0}^{(k)}\leq\frac{1}{2r}\Bigr{]}\leq\frac{2}{n ^{2}}.\]
Applying a union bound over all \(n\) positions yields the result.
In the proof of our next result, we apply the following Chernoff bound. We apply it in order to quantify the number of iterations necessary for every position \(i\in[n]\) to converge.
**Theorem 8** (Chernoff bound [7, Theorem 1.10.5]).: _Let \(k\in\mathbb{N}_{\geq 1},\delta\in[0,1]\), and let \(X\) be the sum of \(k\) independent random variables each taking values in \([0,1]\). Then_
\[\Pr[X\leq(1-\delta)\ \mathds{E}[X]]\leq\exp\biggl{(}-\frac{\delta^{2}\, \mathds{E}[X]}{2}\biggr{)}.\]
An important concept for our analysis, following the approach by Doerr and Krejca [10], is that a position is _critical_. Informally, a position is critical if and only if, at all smaller positions, the frequencies corresponding to value \(0\) are at the upper border, while the frequency at the position itself is not. Our runtime proof relies on showing that the \(r\)-UMDA quickly increases the frequency of a critical position to the upper border, thus making the next position critical. Formally, let \(t\in\mathbb{N}\). We call a position \(i\in[n]\)_critical_ for the \(r\)-UMDA on \(r\)-LeadingOnes in iteration \(t\), if and only if for all \(k\in[i-1]\), it holds that \(p_{k,0}^{(t)}=1-\frac{1}{n}\), and that \(p_{i,0}^{(t)}<1-\frac{1}{n}\).
We now show that once a position \(i\in[n]\) becomes critical, with high probability, with \(s\in\mathbb{R}_{\geq 1}\) being an appropriate value separating \(\lambda\) from \(\mu\) (that is, defining the selection pressure), it takes at most \(\log_{2s}(2r)\) iterations to bring the frequency of the value \(0\) at this position to the upper border \(1-\frac{1}{n}\). We also prove that it remains there for a sufficient number of iterations until the convergence of the frequency matrix.
**Lemma 9**.: _Let \(s,u\in\mathbb{R}_{\geq 1}\). Consider the \(r\)-UMDA optimizing \(r\)-LeadingOnes with \(\lambda\geq 3se\mu\) and \(\mu\in\mathbb{N}_{\geq 1}\). Consider an iteration \(t\in\mathbb{N}\) such that position \(i\in[n]\) is critical, and let \(b\in\mathbb{R}_{>0}\) such that \(p_{i,0}^{(t)}\geq b\geq\frac{2}{n}\). Then with a probability of at least \(1-u\log_{2s}(\frac{1}{b})\exp\Bigl{(}-\frac{s\mu b}{24}\Bigr{)}\), it holds for all \(\theta\in\Bigl{[}\log_{2s}(\frac{1}{b})..u\log_{2s}(\frac{1}{b})\Bigr{]}\) that \(p_{i,0}^{(t+\theta)}=1-\frac{1}{n}\)._
Proof.: We start by proving that, for all \(\theta\in[0..u\log_{2s}(\frac{1}{b})]\), the frequency \(p_{i,0}^{(t+\theta)}\) multiplies by at least \(2s\) during an update, with high probability (and is then restricted). To this end, let \(t^{\prime}\in[t..t+\theta]\), assume that \(p_{i,0}^{(t^{\prime})}\geq b\), and
that position \(i\) or a position greater than \(i\) is critical (where we assume, for convenience, that if all frequencies for value \(0\) are \(1-\frac{1}{n}\), then position \(n+1\) is critical). Furthermore, let \(X\) denote the number of sampled individuals in iteration \(t^{\prime}\) that have at least \(i\) leading \(0\)s. Note that \(p_{i,0}^{(t)}\geq b\) by assumption as well as that \(i\) is critical in iteration \(t\). We discuss later via induction why these assumptions also hold for iteration \(t^{\prime}\).
We consider the process of sampling a single individual. Since position at least \(i\) is critical, by definition, for all \(k\in[i-1]\), we have \(p_{k,0}^{(t^{\prime})}=1-\frac{1}{n}\). Hence, the probability that all these positions are sampled as \(0\) for this individual is \((1-\frac{1}{n})^{i-1}\geq(1-\frac{1}{n})^{n-1}\geq\frac{1}{e}\). This yields \(\mathds{E}[X]\geq\frac{\lambda p_{i,0}^{(t^{\prime})}}{e}\), and since \(\lambda\geq 3se\mu\), this yields \(\mathds{E}[X]\geq 3s\mu p_{i,0}^{(t^{\prime})}\).
By the Chernoff bound (Theorem 8) and by the assumption \(p_{i,0}^{(t^{\prime})}\geq b\), we get
\[\Pr\biggl{[}X\leq\frac{5}{2}s\mu p_{i,0}^{(t^{\prime})}\biggr{]} \leq\Pr\biggl{[}X\leq\frac{5}{6}\,\mathds{E}[X]\biggr{]}\leq\exp \biggl{(}-\frac{\mathds{E}[X]}{72}\biggr{)}\] \[\leq\exp\biggl{(}-\frac{s\mu p_{i,0}^{(t^{\prime})}}{24}\biggr{)} \leq\exp\biggl{(}-\frac{s\mu b}{24}\biggr{)}.\]
We consider \(\overline{p}_{i,0}^{(t^{\prime}+1)}\) as defined in Section 4.2, which is the updated frequency before being restricted to \(\left[\frac{1}{(r-1)n},1-\frac{1}{n}\right]\). Since \(\overline{p}_{i,0}^{(t^{\prime}+1)}\geq\min(\frac{X}{\mu},1)\) by the definition of the update of the \(r\)-UMDA, we have
\[\Pr\biggl{[}\overline{p}_{i,0}^{(t^{\prime}+1)}\leq\min\biggl{(}\frac{5}{2} sp_{i,0}^{(t^{\prime})},1\biggr{)}\biggr{]}\leq\Pr\biggl{[}X\leq\frac{5}{2}s\mu p _{i,0}^{(t^{\prime})}\biggr{]}\leq\exp\biggl{(}-\frac{s\mu b}{24}\biggr{)}.\]
In order to update \(p_{i,0}^{(t^{\prime})}\), the frequency vector \(\overline{p}_{i}^{(t^{\prime}+1)}\) is restricted to the interval \(\left[\frac{1}{(r-1)n},1-\frac{1}{n}\right]\), which entails that the updated frequency \(p_{i,0}^{(t^{\prime}+1)}\) may reduce when compared to \(\overline{p}_{i,0}^{(t^{\prime}+1)}\). However, since the restriction adds at most the lower border (that is, \(\frac{1}{(r-1)n}\)) to a frequency, _any_ restriction rule adds at most a probability mass of \(\frac{1}{n}\) to the frequency vector. We assume pessimistically that, in order for the frequencies to sum to \(1\), this mass is entirely subtracted from \(\overline{p}_{i,0}^{(t^{\prime}+1)}\) during the restriction (noting that this does not take place once \(\overline{p}_{i,0}^{(t^{\prime}+1)}\geq 1-\frac{1}{n}\), as this means that it is set to the upper border instead). Further, the assumption \(p_{i,0}^{(t^{\prime})}\geq b\geq\frac{2}{n}\) yields that \(\frac{5}{2}sp_{i,0}^{(t^{\prime})}-\frac{1}{n}\geq 2sp_{i,0}^{(t^{\prime})}\).
Hence, we get that
\[\Pr\biggl{[}p_{i,0}^{(t^{\prime}+1)}<\min\biggl{(}2sp_{i,0}^{(t^{ \prime})},1-\frac{1}{n}\biggr{)}\biggr{]}\] \[\leq\Pr\biggl{[}p_{i,0}^{(t^{\prime}+1)}<\min\biggl{(}\frac{5}{2} sp_{i,0}^{(t^{\prime})}-\frac{1}{n},1-\frac{1}{n}\biggr{)}\biggr{]}\leq\exp\biggl{(}- \frac{s\mu b}{24}\biggr{)}.\]
By induction on the iteration \(t^{\prime}\) (starting at \(t\)), it follows that, with an additional failure probability of at most \(\exp\Bigl{(}-\frac{s\mu b}{24}\Bigr{)}\) per iteration, the assumptions that \(p_{i,0}^{(t^{\prime})}\geq b\) and that position at least \(i\) is critical are satisfied.
Starting from iteration \(t\), a union bound over the next \(u\log_{2s}(\frac{1}{b})\) iterations yields that the frequency \(p_{i,0}\) continues growing exponentially with a factor of \(2s\) for the next \(u\log_{2s}(\frac{1}{b})\) iterations with probability at least \(1-u\log_{2s}(\frac{1}{b})\exp\Bigl{(}-\frac{s\mu b}{24}\Bigr{)}\). Since, by assumption, \(p_{i,0}^{(t)}\geq b\), it reaches \(1-\frac{1}{n}\) after at most \(\log_{2s}(\frac{1}{b})\) iterations during that time, concluding the proof.
We now prove our main result.
Proof of Theorem 6.: Since \(r\)-LeadingOnes weakly prefers \(0\)s at all positions \(i\in[n]\), by Lemma 7, with a probability of at least \(1-\frac{2}{n}\), for all \(i\in[n]\), the frequency \(p_{i,0}\) remains above \(\frac{1}{2r}\) for the first \(n(1+\log_{2s}(r))\) iterations.
For each position \(i\in[n]\), we apply Lemma 9 with \(b=\frac{1}{2r}\) and \(u=n\), noting that the assumption \(b\geq\frac{2}{n}\) is satisfied, since we assume \(n\geq 4r\). Hence, for each \(i\in[n]\), with a probability of at least \(1-\log_{2s}(2r)n^{1-0.5n}\), after at most \(\log_{2s}(2r)\) iterations, the frequency \(p_{i,0}\) is set to \(1-\frac{1}{n}\) and remains there for at least \((n-1)\log_{2s}(2r)\) iterations. Further, by a union bound over all \(n\) frequency vectors, the above holds for all frequency vectors, with probability at least \(1-\log_{2s}(2r)n^{2-0.5n}\).
Combining everything, with probability at least \(1-\frac{2}{n}-\log_{2s}(2r)n^{2-0.5n}\), it holds by induction on position \(i\) that once position \(i\) is critical, the frequency \(p_{i,0}\) reaches \(1-\frac{1}{n}\) in at most \(\log_{2s}(2r)\) iterations and remains there until at least iteration \(n\log_{2s}(2r)\). Since position \(0\) is critical in iteration \(0\), it follows that the frequencies for value \(0\) are set, in increasing order of their position, to \(1-\frac{1}{n}\). After at most \(n\log_{2s}(2r)\) iterations, all such frequencies are at the upper border, which proves the first part of the claim.
For the second part, note that once \(p_{n,0}=1-\frac{1}{n}\), the population of the \(r\)-UMDA in that iteration contains the optimum at least \((1-\frac{1}{n})\mu\) times. Further, each iteration accounts for \(\lambda\) fitness function evaluations. This proves the second claim.
## 7 Conclusion
We have proposed the first EDAs to optimize problems with multi-valued decision variables. Our analysis of the genetic-drift effect and our runtime analysis on the multi-valued version of LeadingOnes have shown that the increase in decision values does not result in significant difficulties. Although there may be a slightly stronger genetic drift (requiring a more conservative model update, that is, a higher selection size \(\mu\) for the UMDA) and slightly longer runtimes, these outcomes are to be expected given the increased complexity of the problem. We hope that our findings will inspire researchers and practitioners to embrace the benefits of EDAs for multi-valued decision problems, beyond the previously limited application to permutations and binary decision variables.
|